Concatenating Audio Buffers in Objective-C


Problem Description

First of all, I am a newbie in C and Objective-C.

I am trying to run an FFT on a buffer of audio and plot the result. I use an Audio Unit render callback to get the audio buffer. The callback delivers 512 frames, but after frame 471 the samples are all 0. (I don't know whether this is normal. It used to deliver 471 frames full of numbers, but now somehow it delivers 512 frames with zeros after frame 471. Please let me know if this is normal.)

Anyway, I can get the buffer from the callback, apply the FFT, and draw it. This works perfectly, and the outcome is shown below: the graph is very smooth as long as I process the buffer in each callback.

But in my case I need 3 seconds of buffer in order to apply the FFT and draw. So I try to concatenate the buffers from two callbacks and then apply the FFT and draw. The result, however, is not what I expect. While the graph above stays very smooth and precise during recording (only the magnitude changes at 18 and 19 kHz), when I concatenate two buffers the simulator mainly shows two different views that swap between each other very fast (shown below). They do basically show 18 and 19 kHz, but I need a precise frequency reading so I can apply more algorithms in the app I am working on.
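If what is needed is a contiguous 3-second window rather than 1024-sample chunks, one option is to accumulate the callback samples into a single large buffer sized for 3 seconds at the session sample rate, and only run the FFT once that buffer is full. A minimal sketch, assuming 44.1 kHz mono Float32 input; the names kSampleRate, kWindowSeconds, threeSecondBuffer and writeIndex are illustrative, not part of the question's code:

// Sketch: accumulate render-callback samples into one 3-second window.
// Assumes 44.1 kHz mono Float32 input. kSampleRate, kWindowSeconds,
// threeSecondBuffer and writeIndex are illustrative names only.
#define kSampleRate     44100
#define kWindowSeconds  3
#define kWindowFrames   (kSampleRate * kWindowSeconds)

static Float32 threeSecondBuffer[kWindowFrames];
static UInt32  writeIndex = 0;

// Call this from the render callback with each chunk of captured samples.
static void AccumulateSamples(const Float32 *samples, UInt32 count)
{
    for (UInt32 i = 0; i < count; i++)
    {
        threeSecondBuffer[writeIndex++] = samples[i];
        if (writeIndex == kWindowFrames)
        {
            // 3 seconds collected: copy the window out and hand it to the
            // FFT/plot code (e.g. on the main queue), then start over.
            writeIndex = 0;
        }
    }
}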

And here is my code in the callback:

//FFTInputBufferLen and FFTInputBufferFrameIndex are global
//tempFilteredBuffer is also allocated globally

//by the way, FFTInputBufferLen = 1024

static OSStatus performRender (void                         *inRefCon,
                           AudioUnitRenderActionFlags   *ioActionFlags,
                           const AudioTimeStamp         *inTimeStamp,
                           UInt32                       inBusNumber,
                           UInt32                       inNumberFrames,
                           AudioBufferList              *ioData)
{
    UInt32 bus1 = 1;
    CheckError(AudioUnitRender(effectState.rioUnit,
                           ioActionFlags,
                           inTimeStamp,
                           bus1,
                           inNumberFrames,
                           ioData), "Couldn't render from RemoteIO unit");


    Float32 *renderBuff = ioData->mBuffers[0].mData;

    ViewController *vc = (__bridge ViewController *) inRefCon;

    // inNumberFrames comes 512 as I described above
    for (int i = 0; i < inNumberFrames ; i++)        
    {

        //I defined InputBuffers[5] in global. 
        //then added 5 Float32 InputBuffers and allocated in global

        InputBuffers[bufferCount][FFTInputBufferFrameIndex] = renderBuff[i];  
        FFTInputBufferFrameIndex ++;

        if(FFTInputBufferFrameIndex == FFTInputBufferLen)
        {
            int bufCount = bufferCount;

            dispatch_async( dispatch_get_main_queue(), ^{

                tempFilteredBuffer = [vc FilterData_rawSamples:InputBuffers[bufCount] numSamples:FFTInputBufferLen];
                [vc CalculateFFTwithPlotting_Data:tempFilteredBuffer NumberofSamples:FFTInputBufferLen ];

                free(InputBuffers[bufCount]);
                InputBuffers[bufCount] = (Float32*)malloc(sizeof(Float32) * FFTInputBufferLen);
            });

            FFTInputBufferFrameIndex = 0;
            bufferCount ++;
            if (bufferCount == 5)
            {
                bufferCount = 0;
            }
        }

    }

    return noErr;
}
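One thing worth noting in the callback above: the dispatched block captures InputBuffers[bufCount] while the render thread keeps filling the ring, and the block itself frees and reallocates that slot. If the main queue falls behind, both threads can touch the same memory, which can corrupt the data handed to the FFT. A safer pattern, sketched below against the same globals (the snapshot name is illustrative), is to copy the filled buffer on the render thread and let the block own the copy:

            // Sketch: copy-before-dispatch variant of the block above (same globals).
            // "snapshot" is an illustrative name; the render thread never shares its
            // ring slot with the main queue.
            Float32 *snapshot = (Float32 *)malloc(sizeof(Float32) * FFTInputBufferLen);
            memcpy(snapshot, InputBuffers[bufCount], sizeof(Float32) * FFTInputBufferLen);

            dispatch_async(dispatch_get_main_queue(), ^{
                tempFilteredBuffer = [vc FilterData_rawSamples:snapshot numSamples:FFTInputBufferLen];
                [vc CalculateFFTwithPlotting_Data:tempFilteredBuffer NumberofSamples:FFTInputBufferLen];
                free(snapshot);   // the block owns the copy and releases it when done
            });

Strictly speaking, calling malloc on the render thread is also discouraged for real-time safety; a preallocated pool of scratch buffers would avoid that as well.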

Here is my AudioUnit setup:

- (void)setupIOUnit
{

AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;

AudioComponent comp = AudioComponentFindNext(NULL, &desc);
CheckError(AudioComponentInstanceNew(comp, &_rioUnit), "couldn't create a new instance of AURemoteIO");


UInt32 one = 1;
CheckError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof(one)), "could not enable input on AURemoteIO");

// I removed this so the recorded audio is not played back through the speakers. Am I right?
//CheckError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &one, sizeof(one)), "could not enable output on AURemoteIO");


UInt32 maxFramesPerSlice = 4096;
CheckError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(UInt32)), "couldn't set max frames per slice on AURemoteIO");

UInt32 propSize = sizeof(UInt32);
CheckError(AudioUnitGetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, &propSize), "couldn't get max frames per slice on AURemoteIO");


AudioUnitElement bus1 = 1;

AudioStreamBasicDescription myASBD;

myASBD.mSampleRate = 44100;
myASBD.mChannelsPerFrame = 1;

myASBD.mFormatID = kAudioFormatLinearPCM;
myASBD.mBytesPerFrame = sizeof(Float32) * myASBD.mChannelsPerFrame ;
myASBD.mFramesPerPacket = 1;
myASBD.mBytesPerPacket = myASBD.mFramesPerPacket * myASBD.mBytesPerFrame;
myASBD.mBitsPerChannel = sizeof(Float32) * 8 ;
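// Note: 9 | 12 hard-codes the format flag bits; the usual constant for
// packed native-endian Float32 samples is kAudioFormatFlagsNativeFloatPacked.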
myASBD.mFormatFlags = 9 | 12 ;



 // I also removed this so the audio is not played back.

//    CheckError(AudioUnitSetProperty (_rioUnit,
//                                     kAudioUnitProperty_StreamFormat,
//                                     kAudioUnitScope_Input,
//                                     bus0,
//                                     &myASBD,
//                                     sizeof (myASBD)), "Couldn't set ASBD for RIO on input scope / bus 0");


CheckError(AudioUnitSetProperty (_rioUnit,
                                 kAudioUnitProperty_StreamFormat,
                                 kAudioUnitScope_Output,
                                 bus1,
                                 &myASBD,
                                 sizeof (myASBD)), "Couldn't set ASBD for RIO on output scope / bus 1");



effectState.rioUnit = _rioUnit;

AURenderCallbackStruct renderCallback;
renderCallback.inputProc = performRender;
renderCallback.inputProcRefCon = (__bridge void *)(self);
CheckError(AudioUnitSetProperty(_rioUnit,
                                kAudioUnitProperty_SetRenderCallback,
                                kAudioUnitScope_Input,
                                0,
                                &renderCallback,
                                sizeof(renderCallback)), "couldn't set render callback on AURemoteIO");

CheckError(AudioUnitInitialize(_rioUnit), "couldn't initialize AURemoteIO instance");

}
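For completeness: after setupIOUnit returns, the unit still has to be started before performRender begins firing. Presumably that is done elsewhere with something like the standard call below (using the same CheckError helper):

CheckError(AudioOutputUnitStart(_rioUnit), "couldn't start AURemoteIO");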

My questions are: why does this happen, and why are there two main different views in the output when I concatenate the two buffers? Is there another way to collect buffers and apply DSP? What am I doing wrong? If the way I concatenate is correct, is my logic incorrect? (Though I have checked it many times.)

What I am trying to ask is: how can I get 3 seconds of buffer in good condition?

I really need help. Best regards.

Recommended Answer

I have successfully concatenated the buffers without any unstable graphics. What I did was change the AVAudioSession category from Record to PlayAndRecord, and then comment out the two AudioUnitSetProperty lines. After that I started getting 470-471 frames per render, and I concatenated them just as in the code I posted, using the same buffers. Now it works, but the audio is played back through the speaker. To silence it I applied the code below:

for (UInt32 i=0; i<ioData->mNumberBuffers; ++i)
{
    memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
}

Then I started to get 3 seconds of buffers, and when I plot them on the screen I get a view similar to the first graph.
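For reference, the session category change described above would look roughly like this (a sketch using the standard AVAudioSession API, not the exact code from the project; it needs AVFoundation imported):

#import <AVFoundation/AVFoundation.h>

// Switch the session from Record to PlayAndRecord, as described above.
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setActive:YES error:&error];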
