Issues with CAPlayThrough Example
Question
I am trying to learn Xcode Core Audio and stumbled upon this example:
https://developer.apple.com/library/mac/samplecode/CAPlayThrough/Introduction/Intro.html#//apple_ref/doc/uid/DTS10004443
My intention is to capture the raw audio. Every time I hit a breakpoint, I lose the audio, since the example is using CARingBuffer.
- How would you remove the time factor? I don't need real-time audio.
- Since it is using CARingBuffer, it should keep on writing to the same memory location, so why don't I hear the audio when I stop at a breakpoint?
I am reading the Learning Core Audio book, but so far I cannot figure out this part of the following code:
CARingBufferError CARingBuffer::Store(const AudioBufferList *abl, UInt32 framesToWrite, SampleTime startWrite)
{
    if (framesToWrite == 0)
        return kCARingBufferError_OK;

    if (framesToWrite > mCapacityFrames)
        return kCARingBufferError_TooMuch; // too big!

    SampleTime endWrite = startWrite + framesToWrite;

    if (startWrite < EndTime()) {
        // going backwards, throw everything out
        SetTimeBounds(startWrite, startWrite);
    } else if (endWrite - StartTime() <= mCapacityFrames) {
        // the buffer has not yet wrapped and will not need to
    } else {
        // advance the start time past the region we are about to overwrite
        SampleTime newStart = endWrite - mCapacityFrames; // one buffer of time behind where we're writing
        SampleTime newEnd = std::max(newStart, EndTime());
        SetTimeBounds(newStart, newEnd);
    }

    // write the new frames
    Byte **buffers = mBuffers;
    int nchannels = mNumberChannels;
    int offset0, offset1, nbytes;
    SampleTime curEnd = EndTime();

    if (startWrite > curEnd) {
        // we are skipping some samples, so zero the range we are skipping
        offset0 = FrameOffset(curEnd);
        offset1 = FrameOffset(startWrite);
        if (offset0 < offset1)
            ZeroRange(buffers, nchannels, offset0, offset1 - offset0);
        else {
            ZeroRange(buffers, nchannels, offset0, mCapacityBytes - offset0);
            ZeroRange(buffers, nchannels, 0, offset1);
        }
        offset0 = offset1;
    } else {
        offset0 = FrameOffset(startWrite);
    }

    offset1 = FrameOffset(endWrite);
    if (offset0 < offset1)
        StoreABL(buffers, offset0, abl, 0, offset1 - offset0);
    else {
        nbytes = mCapacityBytes - offset0;
        StoreABL(buffers, offset0, abl, 0, nbytes);
        StoreABL(buffers, 0, abl, nbytes, offset1);
    }

    // now update the end time
    SetTimeBounds(StartTime(), endWrite);

    return kCARingBufferError_OK; // success
}
Thanks!
Answer
If I understood the question well, the signal is lost while the input unit (the producer) is halted at a breakpoint. I presume this may be the expected behavior: Core Audio is a pull-model engine running off a real-time thread. This means that when your producer hits a breakpoint, the ring buffer empties; the output unit (the consumer) keeps running but gets nothing from the buffer while the playthrough chain is interrupted, hence the silence.
Perhaps this code from the example is not really the simplest one: as far as I can tell, it also zeroes the audio buffers when the ring buffer overruns or underruns. The term "raw audio" in the question is also not self-explanatory; I'm not sure what it means. I would suggest learning async I/O using simpler circular buffers. There are a few of them (without obligatory time values) on GitHub.
Please also be so kind as to format the source code for easier reading.