Synchronising with Core Audio Thread


Problem Description




I am using the render callback of the ioUnit to store the audio data into a circular buffer:

OSStatus ioUnitRenderCallback(
                          void *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp *inTimeStamp,
                          UInt32 inBusNumber,
                          UInt32 inNumberFrames,
                          AudioBufferList *ioData)
{
    OSStatus err = noErr;

    AMNAudioController *This = (__bridge AMNAudioController*)inRefCon;

    err = AudioUnitRender(This.encoderMixerNode->unit,
                          ioActionFlags,
                          inTimeStamp,
                          inBusNumber,
                          inNumberFrames,
                          ioData);

    // Copy the audio to the encoder buffer
    TPCircularBufferCopyAudioBufferList(&(This->encoderBuffer), ioData, inTimeStamp, kTPCircularBufferCopyAll, NULL);

    return err;
}

I then want to read the bytes out of the circular buffer, feed them to libLame, and pass the encoded audio on to libShout. I tried starting a thread and using NSCondition to make it wait until data is available, but this caused all sorts of issues, since taking locks in a Core Audio callback is unsafe.

What would be the recommended way to do this?

Thanks in advance.


More detail on how I implemented Adam's answer

I ended up taking Adam's advice and implemented it like so.

Producer

I use TPCircularBufferProduceBytes in the Core Audio render callback to add the bytes to the circular buffer. In my case I have non-interleaved audio data, so I ended up using two circular buffers (one per channel).
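As a rough illustration of the one-buffer-per-channel layout: the `ByteFIFO` type and function names below are invented for this sketch (TPCircularBufferProduceBytes is the real library call, but the library itself is not reproduced here, so a plain byte FIFO stands in):

```c
#include <stdint.h>
#include <string.h>

// Minimal stand-in for a per-channel byte FIFO; the real
// TPCircularBufferProduceBytes call is analogous but lock-free.
typedef struct { uint8_t data[4096]; int32_t fill; } ByteFIFO;

// Append len bytes; returns 0 on overflow (mirroring
// TPCircularBufferProduceBytes returning false when the buffer is full).
static int fifo_produce(ByteFIFO *f, const void *src, int32_t len) {
    if (f->fill + len > (int32_t)sizeof f->data) return 0;
    memcpy(f->data + f->fill, src, len);
    f->fill += len;
    return 1;
}

// With non-interleaved audio each AudioBuffer holds one channel, so the
// render callback produces into one FIFO per channel.
static int produce_stereo(ByteFIFO *left, ByteFIFO *right,
                          const float *chL, const float *chR, int32_t frames) {
    int ok = fifo_produce(left, chL, frames * (int32_t)sizeof(float));
    ok &= fifo_produce(right, chR, frames * (int32_t)sizeof(float));
    return ok;
}
```

The point is simply that with two buffers, each produce call copies one contiguous channel, so no interleaving work happens on the audio thread.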

Consumer

  1. I spawn a new thread using pthread_create
  2. Within the new thread I create a new CFRunLoopTimer and add it to the current CFRunLoop (an interval of 0.005 seconds appears to work well)
  3. I tell the current CFRunLoop to run
  4. Within my timer callback I encode the audio and send it to the server (returning quickly if no data is buffered)
  5. I also have a buffer size of 5MB which appears to work well (2MB was giving me overruns). This does seem a bit high :/
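The consumer steps above can be sketched in a platform-neutral way. On iOS/macOS the timer would be a CFRunLoopTimer on the spawned thread's run loop; here a plain 5 ms sleep loop stands in so the thread structure is visible, and the "encode and send" step is replaced by a counter (all names are illustrative, not the poster's actual code):

```c
#include <pthread.h>
#include <stdatomic.h>
#include <time.h>

// Shared state between the audio-producing side and the consumer thread.
typedef struct {
    atomic_int available;   // bytes the producer has published
    atomic_int consumed;    // bytes the consumer has drained
    atomic_int running;
} EncoderState;

// Consumer thread body: a periodic tick (CFRunLoopTimer stand-in) that
// drains whatever is buffered and returns quickly when there is no data.
static void *consumer_main(void *arg) {
    EncoderState *s = arg;
    struct timespec tick = {0, 5 * 1000 * 1000};     // ~0.005 s interval
    while (atomic_load(&s->running)) {
        int n = atomic_exchange(&s->available, 0);    // grab buffered bytes
        if (n > 0)
            atomic_fetch_add(&s->consumed, n);        // "encode and send" stand-in
        nanosleep(&tick, NULL);
    }
    return NULL;
}

// Spawn the consumer, publish some data as a fake producer, then shut down.
// Returns total bytes accounted for (consumed plus any leftover).
static int run_demo(void) {
    EncoderState s = {0};
    atomic_store(&s.running, 1);
    pthread_t tid;
    pthread_create(&tid, NULL, consumer_main, &s);
    for (int i = 0; i < 20; i++) {                    // fake render callback
        atomic_fetch_add(&s.available, 512);
        struct timespec t = {0, 2 * 1000 * 1000};
        nanosleep(&t, NULL);
    }
    struct timespec settle = {0, 50 * 1000 * 1000};
    nanosleep(&settle, NULL);                          // let the consumer drain
    atomic_store(&s.running, 0);
    pthread_join(tid, NULL);
    int leftover = atomic_exchange(&s.available, 0);
    return atomic_load(&s.consumed) + leftover;
}
```

Every published byte ends up either consumed or still pending, so the total is conserved regardless of how the two threads interleave; that conservation is what makes the lock-free handoff safe.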

Solution

You're on the right track, but you don't need NSCondition. You definitely don't want to block. The circular buffer implementation you're using is lock-free and should do the trick. In the audio render callback, put the data into the buffer by calling TPCircularBufferProduceBytes. Then in the reader context (a timer callback is good, as hotpaw suggests), call TPCircularBufferTail to get the tail pointer (read address) and the number of available bytes to read, and then call TPCircularBufferConsume to do the actual reading. Now you've done the transfer without taking any locks. Just make sure the buffer you allocate is large enough to handle the worst-case condition where your reader thread gets held off by the OS for whatever reason; otherwise you can hit a buffer overrun and lose data.
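The tail/consume contract described above can be sketched with a minimal single-producer/single-consumer ring (illustrative only; this is not the TPCircularBuffer source, whose real API is TPCircularBufferProduceBytes, TPCircularBufferTail, and TPCircularBufferConsume):

```c
#include <stdatomic.h>
#include <stdint.h>

// Minimal lock-free SPSC ring. head is written only by the producer and
// tail only by the consumer, which is why no lock is ever needed.
#define RING_SIZE 1024u   // must be a power of two
typedef struct {
    uint8_t data[RING_SIZE];
    atomic_uint head, tail;
} Ring;

// Producer side (analogue of TPCircularBufferProduceBytes).
static int ring_produce(Ring *r, const void *src, uint32_t len) {
    uint32_t head = atomic_load(&r->head), tail = atomic_load(&r->tail);
    if (RING_SIZE - (head - tail) < len) return 0;        // would overrun
    for (uint32_t i = 0; i < len; i++)
        r->data[(head + i) % RING_SIZE] = ((const uint8_t *)src)[i];
    atomic_store(&r->head, head + len);                    // publish
    return 1;
}

// Consumer side: inspect the readable region (analogue of
// TPCircularBufferTail), copy out, then advance the tail (analogue of
// TPCircularBufferConsume). Returns the number of bytes read.
static uint32_t ring_read(Ring *r, void *dst, uint32_t max) {
    uint32_t head = atomic_load(&r->head), tail = atomic_load(&r->tail);
    uint32_t avail = head - tail;
    uint32_t n = avail < max ? avail : max;
    for (uint32_t i = 0; i < n; i++)
        ((uint8_t *)dst)[i] = r->data[(tail + i) % RING_SIZE];
    atomic_store(&r->tail, tail + n);                      // consume
    return n;
}
```

As the answer notes, the one failure mode left is the producer finding the ring full (the `return 0` path), which is exactly why the buffer must be sized for the worst-case scheduling delay of the reader thread.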
