Triggering UI code from Audio Unit Render Proc on iOS

Question

I have a Multichannel Mixer audio unit playing back audio files in an iOS app, and I need to figure out how to update the app's UI and perform a reset when the render callback hits the end of the longest audio file (which is set up to run on bus 0). As my code below shows, I am trying to use KVO to achieve this (using the boolean variable tapesUnderway; the NSAutoreleasePool is necessary because this Objective-C code is running outside of its normal domain, see http://www.cocoabuilder.com/archive/cocoa/57412-nscfnumber-no-pool-in-place-just-leaking.html).

static OSStatus tapesRenderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    SoundBufferPtr sndbuf = (SoundBufferPtr)inRefCon;

    UInt32 bufferFrames = sndbuf[inBusNumber].numFrames;
    AudioUnitSampleType *in = sndbuf[inBusNumber].data; 

    // These mBuffers are the output buffers and are empty; these two lines just set references to them (via outA and outB)
    AudioUnitSampleType *outA = (AudioUnitSampleType *)ioData->mBuffers[0].mData;
    AudioUnitSampleType *outB = (AudioUnitSampleType *)ioData->mBuffers[1].mData;

    UInt32 sample = sndbuf[inBusNumber].sampleNum;


    // --------------------------------------------------------------
    // Set the start time here
    if(inBusNumber == 0 && !tapesFirstRenderPast)
    {
        printf("Tapes first render past\n");

        tapesStartSample = inTimeStamp->mSampleTime;
        tapesFirstRenderPast = YES;                     // MAKE SURE TO RESET THIS ON SONG RESTART
        firstPauseSample = tapesStartSample;
    }

    // --------------------------------------------------------------
    // Now process the samples
    for(UInt32 i = 0; i < inNumberFrames; ++i)
    {
         if(inBusNumber == 0)
         {
            // ------------------------------------------------------
            // Bus 0 is the backing track, and is always playing back

            outA[i] = in[sample++];
            outB[i] = in[sample++];     // For stereo set desc.SetAUCanonical to (2, true) and increment samples in both output calls

            lastSample = inTimeStamp->mSampleTime + (Float64)i;     // Set the last played sample in order to compensate for pauses


            // ------------------------------------------------------
            // Use this logic to mark end of tune
            if(sample >= (bufferFrames * 2) && !tapesEndPast)
            {
                // USE KVO TO NOTIFY METHOD OF VALUE CHANGE

                NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
                FuturesEPMedia *futuresMedia = [FuturesEPMedia sharedFuturesEPMedia];
                NSNumber *boolNo = [[NSNumber alloc] initWithBool: NO];
                [futuresMedia setValue: boolNo forKey: @"tapesUnderway"];
                [boolNo release];
                [pool release];

                tapesEndPast = YES;
            }
        }
        else
        {
            // ------------------------------------------------------
            // The other buses are the open sections, and are synched through the tapesSectionsTimes array

            Float64 sectionTime = tapesSectionTimes[inBusNumber] * kGraphSampleRate;        // Section time in samples
            Float64 currentSample = inTimeStamp->mSampleTime + (Float64)i;

            if(!isPaused && !playFirstRenderPast)
            {
                pauseGap += currentSample - firstPauseSample;
                playFirstRenderPast = YES;
                pauseFirstRenderPast = NO;
            }


            if(currentSample > (tapesStartSample + sectionTime + pauseGap) && sample < (bufferFrames * 2))
            {
                outA[i] = in[sample++];
                outB[i] = in[sample++];
            }
            else
            {
                outA[i] = 0;
                outB[i] = 0;
            }
        }
    }

    sndbuf[inBusNumber].sampleNum = sample;

    return noErr;
}

At the moment, when this variable is changed it triggers a method in self, but executing this from the render callback leads to an unacceptable delay (20-30 seconds); I suspect this is because it is Objective-C code running on the high-priority audio thread. How do I trigger such a change without the delay? (The trigger will change a pause button to a play button and call a reset method to prepare for the next play.)

Thanks

Answer

Yes. Don't run Objective-C code on the render thread, since it executes at real-time priority. Instead, store the state in plain memory (a pointer or a struct) and have a timer on the main thread poll (check) the value(s) in memory. The timer does not need to be anywhere near as fast as the render callback and will still be accurate enough for UI updates.
