AVAssetReader to AudioQueueBuffer


Question

Currently, I'm doing a little test project to see if I can get samples from an AVAssetReader to play back using an AudioQueue on iOS.

I've read this: ( Play raw uncompressed sound with AudioQueue, no sound ) and this: ( How to correctly read decoded PCM samples on iOS using AVAssetReader -- currently incorrect decoding ),

Which both actually did help. Before reading, I was getting no sound at all. Now, I'm getting sound, but the audio is playing SUPER fast. This is my first foray into audio programming, so any help is greatly appreciated.

I initialize the reader thusly:

NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                    [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                    [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                                    [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                                    [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                    [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                                    [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                    nil];

output = [[AVAssetReaderAudioMixOutput alloc] initWithAudioTracks:uasset.tracks audioSettings:outputSettings];
[reader addOutput:output];
...

And I grab the data like so:

CMSampleBufferRef ref = [output copyNextSampleBuffer];
if (ref == NULL)
    return;

AudioBufferList audioBufferList;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(ref, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

if (blockBuffer == NULL)
{
    CFRelease(ref);
    return;
}

// hand each buffer in the list to the delegate
for (int y = 0; y < audioBufferList.mNumberBuffers; y++)
{
    AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
    [self.delegate streamer:self didGetAudioBuffer:audioBuffer];
}

CFRelease(blockBuffer);
CFRelease(ref);

which eventually brings us to the audio queue, set up in this way:

// Apple's own code for canonical PCM
audioDesc.mSampleRate       = 44100.0;
audioDesc.mFormatID         = kAudioFormatLinearPCM;
audioDesc.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
audioDesc.mBytesPerPacket   = 2 * sizeof (AudioUnitSampleType);    // 8
audioDesc.mFramesPerPacket  = 1;
audioDesc.mBytesPerFrame    = 1 * sizeof (AudioUnitSampleType);    // 4
audioDesc.mChannelsPerFrame = 2;
audioDesc.mBitsPerChannel   = 8 * sizeof (AudioUnitSampleType);    // 32

err = AudioQueueNewOutput(&audioDesc, handler_OSStreamingAudio_queueOutput, self, NULL, NULL, 0, &audioQueue);
if (err)
{
    // TODO: handle error
    // never errs; am using a breakpoint to check
    return;
}

And we enqueue like so:

while (inNumberBytes)
{
    size_t bufSpaceRemaining = kAQDefaultBufSize - bytesFilled;
    if (bufSpaceRemaining < inNumberBytes)
    {
        // current buffer can't hold everything: enqueue what we have so far
        AudioQueueBufferRef fillBuf = audioQueueBuffer[fillBufferIndex];
        fillBuf->mAudioDataByteSize = bytesFilled;
        err = AudioQueueEnqueueBuffer(audioQueue, fillBuf, 0, NULL);
    }

    bufSpaceRemaining = kAQDefaultBufSize - bytesFilled;
    size_t copySize;
    if (bufSpaceRemaining < inNumberBytes)
    {
        copySize = bufSpaceRemaining;
    }
    else
    {
        copySize = inNumberBytes;
    }

    if (bytesFilled > packetBufferSize)
    {
        return;
    }

    AudioQueueBufferRef fillBuf = audioQueueBuffer[fillBufferIndex];
    memcpy((char *)fillBuf->mAudioData + bytesFilled, (const char *)(inInputData + offset), copySize);

    bytesFilled += copySize;
    packetsFilled = 0;
    inNumberBytes -= copySize;
    offset += copySize;
}

I tried to be as code inclusive as possible so as to make it easy for everyone to point out where I'm being a moron. That being said, I have a feeling my problem occurs either in the output settings declaration of the track reader or in the actual declaration of the AudioQueue (where I describe to the queue what kind of audio I'm going to be sending it). The fact of the matter is, I don't really know mathematically how to actually generate those numbers (bytes per packet, frames per packet, what have you). An explanation of that would be greatly appreciated, and thanks for the help in advance.

Answer

For some reason, even though every example I've seen of the audio queue using LPCM had

ASBD.mBitsPerChannel = 8* sizeof (AudioUnitSampleType);

for me, I actually needed

ASBD.mBitsPerChannel    = 2*bytesPerSample;

with a format description of:

ASBD.mFormatID          = kAudioFormatLinearPCM;
ASBD.mFormatFlags       = kAudioFormatFlagsAudioUnitCanonical;
ASBD.mBytesPerPacket    = bytesPerSample;
ASBD.mBytesPerFrame     = bytesPerSample;
ASBD.mFramesPerPacket   = 1;
ASBD.mBitsPerChannel    = 2*bytesPerSample;
ASBD.mChannelsPerFrame  = 2;           
ASBD.mSampleRate        = 48000;

I have no idea why this works, which bothers me a great deal... but hopefully I can figure it all out eventually.

If anyone can explain to me why this works, I'd be very thankful.
