How to interleave a non-interleaved AudioBufferList inside a render callback?

Question

I'm working on a project that involves streaming audio from an AVPlayer video player object into libpd using an MTAudioProcessingTap. For the tap's process loop, I used PdAudioUnit's render callback code as a guide; but I recently realized that the audio format libpd expects is not the same as the audio coming from the tap: the tap provides two buffers of non-interleaved audio data in the incoming AudioBufferList, whereas libpd expects interleaved samples. I don't think I can change the tap itself to provide interleaved samples.

Does anyone know of a way I can work around this?

I think that I need to somehow create a new AudioBufferList or float buffer and interleave the samples in place; but I'm not quite sure how to do this and it seems like it would be expensive. If anyone could give me some pointers I would greatly appreciate it!
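
For reference, interleaving two planar buffers into one stereo buffer is just an index transform, and it is cheap relative to the rest of the render path. Below is a minimal C sketch; the function name interleave_stereo and the preallocated interleaved array are illustrative assumptions, not part of the tap API:

// Minimal sketch: merge two planar float buffers into one interleaved stereo
// buffer. `interleaved` must hold 2 * numberFrames floats and is assumed to
// be preallocated (e.g. in the tap's prepare callback).
static void interleave_stereo(const float *left, const float *right,
                              float *interleaved, size_t numberFrames)
{
    for (size_t frame = 0; frame < numberFrames; frame++) {
        interleaved[2 * frame]     = left[frame];   // even slots: left channel
        interleaved[2 * frame + 1] = right[frame];  // odd slots: right channel
    }
}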

Here is my code for installing my tap:

- (void)installTapWithItem:(AVPlayerItem *)playerItem {
    
    MTAudioProcessingTapCallbacks callbacks;
    
    callbacks.version = kMTAudioProcessingTapCallbacksVersion_0;
    callbacks.clientInfo = (__bridge void *)self;
    callbacks.init = tap_InitCallback;
    callbacks.finalize = tap_FinalizeCallback;
    callbacks.prepare = tap_PrepareCallback;
    callbacks.unprepare = tap_UnprepareCallback;
    callbacks.process = tap_ProcessCallback;
    
    MTAudioProcessingTapRef audioProcessingTap;
    if (noErr == MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks, kMTAudioProcessingTapCreationFlag_PreEffects, &audioProcessingTap))
    {
        NSLog(@"Tap created!");
        
        AVAssetTrack *audioTrack = [playerItem.asset tracksWithMediaType:AVMediaTypeAudio].firstObject;
        AVMutableAudioMixInputParameters* inputParams = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];
        inputParams.audioTapProcessor = audioProcessingTap;
        
        AVMutableAudioMix* audioMix = [AVMutableAudioMix audioMix];
        audioMix.inputParameters = @[inputParams];
        playerItem.audioMix = audioMix;

        // MTAudioProcessingTapCreate returns a +1 reference and the audio mix
        // input parameters retain the tap, so release it here to avoid a leak.
        CFRelease(audioProcessingTap);
    }
}
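
For completeness: tracksWithMediaType: only returns tracks once the asset's tracks key has loaded, so the method above is assumed to run after that point. One way to guarantee this (a sketch; videoURL is a placeholder):

AVPlayerItem *item = [AVPlayerItem playerItemWithURL:videoURL];
// load the "tracks" key first, so tracksWithMediaType: has something to return
[item.asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{
    dispatch_async(dispatch_get_main_queue(), ^{
        [self installTapWithItem:item];
    });
}];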

And my tap_ProcessCallback:

static void tap_ProcessCallback(MTAudioProcessingTapRef tap, CMItemCount numberFrames, MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut, CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut)
{
    OSStatus status = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut, flagsOut, nil, numberFramesOut);
    if (noErr != status) {
        NSLog(@"Error: MTAudioProcessingTapGetSourceAudio: %d", (int)status);
        return;
    }
    
    TapProcessorContext *context = (TapProcessorContext *)MTAudioProcessingTapGetStorage(tap);
    
    // create (or re-create) the input and output ring buffers whenever the
    // incoming frame count changes
    if (context->frameSize != numberFrames) {
        NSLog(@"creating ring buffers with size: %ld", (long)numberFrames);
        createRingBuffers((UInt32)numberFrames, context);
    }
    
    // adapted from PdAudioUnit.m
    float *buffer = (float *)bufferListInOut->mBuffers[0].mData;
    
    if (context->inputRingBuffer || context->outputRingBuffer) {
        
        // output buffer info from ioData
        UInt32 outputBufferSize = bufferListInOut->mBuffers[0].mDataByteSize;
        UInt32 outputFrames = (UInt32)numberFrames;
        //        UInt32 outputChannels = bufferListInOut->mBuffers[0].mNumberChannels;
        
        // input buffer info from ioData *after* rendering input samples
        UInt32 inputBufferSize = outputBufferSize;
        UInt32 inputFrames = (UInt32)numberFrames;
        //        UInt32 inputChannels = 0;
        
        UInt32 framesAvailable = (UInt32)rb_available_to_read(context->inputRingBuffer) / context->inputFrameSize;
        while (inputFrames + framesAvailable < outputFrames) {
            // pad input buffer to make sure we have enough blocks to fill auBuffer,
            // this should hopefully only happen when the audio unit is started
            rb_write_value_to_buffer(context->inputRingBuffer, 0, context->inputBlockSize);
            framesAvailable += context->blockFrames;
        }
        rb_write_to_buffer(context->inputRingBuffer, 1, buffer, inputBufferSize);
        
        // input ring buffer -> context -> output ring buffer
        char *copy = (char *)buffer;
        while (rb_available_to_read(context->outputRingBuffer) < outputBufferSize) {
            rb_read_from_buffer(context->inputRingBuffer, copy, context->inputBlockSize);
            [PdBase processFloatWithInputBuffer:(float *)copy outputBuffer:(float *)copy ticks:1];
            rb_write_to_buffer(context->outputRingBuffer, 1, copy, context->outputBlockSize);
        }
        
        // output ring buffer -> audio unit
        rb_read_from_buffer(context->outputRingBuffer, (char *)buffer, outputBufferSize);
    }
}
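
The TapProcessorContext struct isn't shown in the question; the sketch below is a plausible minimal version, with fields inferred from how the callbacks use them (the interleaved buffer becomes relevant in the answer):

// Hypothetical sketch of the tap's per-instance storage; field names are
// inferred from the callbacks above, not taken from the original project.
typedef struct TapProcessorContext {
    CMItemCount frameSize;   // frames per process callback
    float *interleaved;      // scratch buffer for interleaved samples
    // ... ring buffer fields omitted ...
} TapProcessorContext;

static void tap_InitCallback(MTAudioProcessingTapRef tap, void *clientInfo, void **tapStorageOut)
{
    // allocate the storage that MTAudioProcessingTapGetStorage later returns
    *tapStorageOut = calloc(1, sizeof(TapProcessorContext));
}

static void tap_PrepareCallback(MTAudioProcessingTapRef tap, CMItemCount maxFrames, const AudioStreamBasicDescription *processingFormat)
{
    TapProcessorContext *context = (TapProcessorContext *)MTAudioProcessingTapGetStorage(tap);
    // room for a stereo interleaved copy of the largest possible slice
    context->interleaved = malloc(sizeof(float) * maxFrames * 2);
}

static void tap_FinalizeCallback(MTAudioProcessingTapRef tap)
{
    TapProcessorContext *context = (TapProcessorContext *)MTAudioProcessingTapGetStorage(tap);
    free(context->interleaved);
    free(context);
}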

Answer

Answering my own question...

I'm not sure exactly why this works, but it does. Apparently I didn't need to use ring buffers either, which is strange; my guess is that libpd can consume the whole slice in one call as long as the frame count lines up with pd's 64-frame block size, so no buffering between callbacks is needed. I also added a branch for when mNumberBuffers only has one buffer.

if (context->frameSize && outputBufferSize > 0) {
    if (bufferListInOut->mNumberBuffers > 1) {
        float *left = (float *)bufferListInOut->mBuffers[0].mData;
        float *right = (float *)bufferListInOut->mBuffers[1].mData;

        // manually interleave channels; the interleaved scratch buffer holds
        // two floats per frame, so index by numberFrames * 2 floats rather
        // than by outputBufferSize, which is a byte count
        for (int i = 0; i < numberFrames * 2; i += 2) {
            context->interleaved[i] = left[i / 2];
            context->interleaved[i + 1] = right[i / 2];
        }
        [PdBase processFloatWithInputBuffer:context->interleaved outputBuffer:context->interleaved ticks:64];
        // de-interleave back into the tap's planar buffers
        for (int i = 0; i < numberFrames * 2; i += 2) {
            left[i / 2] = context->interleaved[i];
            right[i / 2] = context->interleaved[i + 1];
        }
    } else {
        // single buffer: the samples can be handed to libpd directly
        context->interleaved = (float *)bufferListInOut->mBuffers[0].mData;
        [PdBase processFloatWithInputBuffer:context->interleaved outputBuffer:context->interleaved ticks:32];
    }
}
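
One caveat on the hardcoded tick counts: libpd processes audio in 64-frame blocks, and each tick of processFloatWithInputBuffer:outputBuffer:ticks: consumes one block, so 64 and 32 ticks only line up with particular slice lengths. A sketch of deriving the value instead (assuming numberFrames from the enclosing callback is a multiple of 64):

// derive the tick count from the slice length instead of hardcoding it;
// pd's block size is 64 frames, so ticks * 64 frames are processed per call
int ticks = (int)(numberFrames / 64);
[PdBase processFloatWithInputBuffer:context->interleaved
                       outputBuffer:context->interleaved
                              ticks:ticks];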
