How to decode AAC audio buffer to PCM buffer in iOS?


Question

I am trying to decode AAC audio to PCM audio on iOS. What is the best way to do this? Any sample code would be very helpful. Is there a simple API for this?

Answer

I have sample code to do it.

First, configure the input and output ASBDs (AudioStreamBasicDescription) and create the converter:

- (void)setupAudioConverter{
    // Output: 16-bit signed-integer mono linear PCM at 44.1 kHz.
    AudioStreamBasicDescription outFormat;
    memset(&outFormat, 0, sizeof(outFormat));
    outFormat.mSampleRate       = 44100;
    outFormat.mFormatID         = kAudioFormatLinearPCM;
    outFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger;
    outFormat.mBytesPerPacket   = 2;
    outFormat.mFramesPerPacket  = 1;
    outFormat.mBytesPerFrame    = 2;
    outFormat.mChannelsPerFrame = 1;
    outFormat.mBitsPerChannel   = 16;
    outFormat.mReserved         = 0;

    // Input: AAC-LC, mono. For a compressed format the per-packet and
    // per-frame byte counts are variable, so they stay at 0; one AAC
    // packet always decodes to 1024 PCM frames.
    AudioStreamBasicDescription inFormat;
    memset(&inFormat, 0, sizeof(inFormat));
    inFormat.mSampleRate        = 44100;
    inFormat.mFormatID          = kAudioFormatMPEG4AAC;
    inFormat.mFormatFlags       = kMPEG4Object_AAC_LC;
    inFormat.mBytesPerPacket    = 0;
    inFormat.mFramesPerPacket   = 1024;
    inFormat.mBytesPerFrame     = 0;
    inFormat.mChannelsPerFrame  = 1;
    inFormat.mBitsPerChannel    = 0;
    inFormat.mReserved          = 0;

    OSStatus status = AudioConverterNew(&inFormat, &outFormat, &_audioConverter);
    if (status != noErr) {
        printf("setup converter error, status: %i\n", (int)status);
    }
}

After that, implement the input data callback for the audio converter:

// kNoMoreDataErr is not a system constant -- it is a custom status code
// used to tell the converter that this frame's input is exhausted. Any
// value that is neither noErr nor a real CoreAudio error will do.
const OSStatus kNoMoreDataErr = -2222;

struct PassthroughUserData {
    UInt32 mChannels;
    UInt32 mDataSize;
    const void* mData;
    AudioStreamPacketDescription mPacket;
};

// Input callback: the converter pulls compressed packets from here.
// The const_cast below means this file must be compiled as Objective-C++ (.mm).
OSStatus inInputDataProc(AudioConverterRef aAudioConverter,
                         UInt32* aNumDataPackets /* in/out */,
                         AudioBufferList* aData /* in/out */,
                         AudioStreamPacketDescription** aPacketDesc,
                         void* aUserData)
{
    PassthroughUserData* userData = (PassthroughUserData*)aUserData;
    if (!userData->mDataSize) {
        // The packet was already handed out on a previous call.
        *aNumDataPackets = 0;
        return kNoMoreDataErr;
    }

    if (aPacketDesc) {
        userData->mPacket.mStartOffset = 0;
        userData->mPacket.mVariableFramesInPacket = 0;
        userData->mPacket.mDataByteSize = userData->mDataSize;
        *aPacketDesc = &userData->mPacket;
    }

    aData->mBuffers[0].mNumberChannels = userData->mChannels;
    aData->mBuffers[0].mDataByteSize = userData->mDataSize;
    aData->mBuffers[0].mData = const_cast<void*>(userData->mData);

    // No more data to provide following this run.
    userData->mDataSize = 0;

    return noErr;
}

The frame decoding method:

- (void)decodeAudioFrame:(NSData *)frame withPts:(NSInteger)pts{
    if(!_audioConverter){
        [self setupAudioConverter];
    }

    // One compressed AAC packet per call; the callback hands it out once.
    PassthroughUserData userData = { 1, (UInt32)frame.length, [frame bytes]};
    NSMutableData *decodedData = [NSMutableData new];

    const uint32_t MAX_AUDIO_FRAMES = 128;
    const uint32_t maxDecodedSamples = MAX_AUDIO_FRAMES * 1; // mono

    do{
        uint8_t *buffer = (uint8_t *)malloc(maxDecodedSamples * sizeof(short int));
        AudioBufferList decBuffer;
        decBuffer.mNumberBuffers = 1;
        decBuffer.mBuffers[0].mNumberChannels = 1;
        decBuffer.mBuffers[0].mDataByteSize = maxDecodedSamples * sizeof(short int);
        decBuffer.mBuffers[0].mData = buffer;

        UInt32 numFrames = MAX_AUDIO_FRAMES;

        AudioStreamPacketDescription outPacketDescription;
        memset(&outPacketDescription, 0, sizeof(AudioStreamPacketDescription));
        outPacketDescription.mDataByteSize = MAX_AUDIO_FRAMES;
        outPacketDescription.mStartOffset = 0;
        outPacketDescription.mVariableFramesInPacket = 0;

        OSStatus rv = AudioConverterFillComplexBuffer(_audioConverter,
                                                      inInputDataProc,
                                                      &userData,
                                                      &numFrames /* in/out */,
                                                      &decBuffer,
                                                      &outPacketDescription);

        if (rv && rv != kNoMoreDataErr) {
            NSLog(@"Error decoding audio stream: %d\n", (int)rv);
            free(buffer);
            break;
        }

        if (numFrames) {
            // mDataByteSize is updated by the converter to the bytes written.
            [decodedData appendBytes:decBuffer.mBuffers[0].mData
                              length:decBuffer.mBuffers[0].mDataByteSize];
        }

        // Release the per-pass scratch buffer.
        free(buffer);

        if (rv == kNoMoreDataErr) {
            break;
        }

    }while (true);

    //void *pData = (void *)[decodedData bytes];
    //audioRenderer->Render(&pData, decodedData.length, pts);
}
