Reading samples with AVAssetReader and timeRange in real time


Problem description


Previously I read audio samples from a complete audio file using CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer. Now I would like to do the same using ranges (i.e. I specify a range in time, read a small chunk of audio for that time, then go back and read again). The reason I want to use a time range is that I want to control the size of each read (so it fits in a packet with a max size).


For some reason, there is always a bump between each read. In my code you'll notice that I start the AVAssetReader and end it every time I set a time range, and that's because I cannot dynamically adjust the time range after the reader has started (see here for more details).


Could it be that starting and ending a reader is just too expensive to produce a continuous real-time experience? Or are there other ways of doing this that I'm not aware of?


Also note that this jitter or lag happens at whatever point I set the time interval to be, which makes me believe that starting and ending a reader the way I am is too expensive for real-time audio playback.

- (void) setupReader 
{
    NSURL *assetURL = [NSURL URLWithString:@"ipod-library://item/item.m4a?id=1053020204400037178"];   
    songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];

    track = [songAsset.tracks objectAtIndex:0];     
    nativeTrackASBD = [self getTrackNativeSettings:track];

    // set CM time parameters
    assetCMTime = songAsset.duration;
    CMTimeReadDurationInSeconds = CMTimeMakeWithSeconds(1, assetCMTime.timescale);
    currentCMTime = CMTimeMake(0,assetCMTime.timescale); 
}

-(void)readVBRPackets
{
    // make sure assetCMTime is greater than currentCMTime
    while (CMTimeCompare(assetCMTime,currentCMTime) == 1 )
    {
        NSError * error = nil;
        reader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error];
        readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                                  outputSettings:nil];

        [reader addOutput:readerOutput];
        reader.timeRange = CMTimeRangeMake(currentCMTime, CMTimeReadDurationInSeconds);

        [reader startReading];

        while ((sample = [readerOutput copyNextSampleBuffer])) {
            CMItemCount numSamples = CMSampleBufferGetNumSamples(sample);
            if (numSamples == 0) {
                continue;
            }

            NSLog(@"reading sample");               

            CMBlockBufferRef CMBuffer = CMSampleBufferGetDataBuffer( sample );                                                         
            AudioBufferList audioBufferList;  

            // deinterleave the sample's data into an AudioBufferList,
            // retaining the backing block buffer in CMBuffer
            OSStatus err = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
                                                                               sample,
                                                                               NULL,
                                                                               &audioBufferList,
                                                                               sizeof(audioBufferList),
                                                                               NULL,
                                                                               NULL,
                                                                               kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                                                                               &CMBuffer
                                                                                   );
            CheckError(err, "could not get audio buffer list from sample buffer");



            const AudioStreamPacketDescription   * inPacketDescriptions;
            size_t                               packetDescriptionsSizeOut;
            size_t inNumberPackets;

            CheckError(CMSampleBufferGetAudioStreamPacketDescriptionsPtr(sample, 
                                                                         &inPacketDescriptions,
                                                                         &packetDescriptionsSizeOut),
                       "could not read sample packet descriptions");

            inNumberPackets = packetDescriptionsSizeOut/sizeof(AudioStreamPacketDescription);

            AudioBuffer audioBuffer = audioBufferList.mBuffers[0];


            for (int i = 0; i < inNumberPackets; ++i)
            {

                SInt64 dataOffset = inPacketDescriptions[i].mStartOffset;
                UInt32 packetSize   = inPacketDescriptions[i].mDataByteSize;            

                size_t packetSpaceRemaining;
                packetSpaceRemaining = bufferByteSize - bytesFilled;

                // if the space remaining in the buffer is not
                // enough for the data in this packet, enqueue the
                // current buffer first
                if (packetSpaceRemaining < packetSize)
                {
                    [self enqueueBuffer];           
                }

                // copy data to the audio queue buffer
                AudioQueueBufferRef fillBuf = audioQueueBuffers[fillBufferIndex];
                memcpy((char *)fillBuf->mAudioData + bytesFilled, 
                       (const char *)audioBuffer.mData + dataOffset, packetSize);                                                                

                // fill out packet description
                packetDescs[packetsFilled] = inPacketDescriptions[i];
                packetDescs[packetsFilled].mStartOffset = bytesFilled;

                bytesFilled += packetSize;
                packetsFilled += 1;

                // if the packet-description array is full, ship the buffer
                size_t packetsDescsRemaining = kAQMaxPacketDescs - packetsFilled;
                if (packetsDescsRemaining == 0) {          
                    [self enqueueBuffer];              
                }                  
            }

            CFRelease(CMBuffer);
            CMSampleBufferInvalidate(sample);
            CFRelease(sample);
        }

        [reader cancelReading];
        reader = nil;
        readerOutput = nil;

        currentCMTime = CMTimeAdd(currentCMTime, CMTimeReadDurationInSeconds);
    }


}


Answer


I know what happens :-D It took me nearly a whole day to figure it out.


In fact, AVAssetReader fades in the first 1024 samples (maybe a little more). That's why you hear the jitter effect.


I fixed it by reading 1024 samples before the position I really want to read, and then skipping those 1024 samples.
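A rough sketch of that priming idea is below. Note that, unlike the question's packet-level read (outputSettings:nil), this sketch decodes to linear PCM so the primed frames can simply be counted off and dropped; it assumes nativeTrackASBD from the question is the track's AudioStreamBasicDescription, and it leaves the actual enqueue/playback step as a placeholder.

- (void)readChunkStartingAt:(CMTime)startTime duration:(CMTime)duration
{
    // how many leading frames the reader fades in and we therefore discard
    const SInt64 kPrimeFrames = 1024;

    // assumes nativeTrackASBD is the track's AudioStreamBasicDescription
    Float64 sampleRate = nativeTrackASBD.mSampleRate;

    // back the time range up by kPrimeFrames so the faded-in samples
    // land before the audio we actually want
    CMTime primeTime = CMTimeMakeWithSeconds(kPrimeFrames / sampleRate, startTime.timescale);
    CMTime primedStart = CMTimeSubtract(startTime, primeTime);
    if (CMTimeCompare(primedStart, kCMTimeZero) < 0) {
        primedStart = kCMTimeZero;   // can't prime before the start of the file
    }

    NSError *error = nil;
    AVAssetReader *chunkReader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error];
    AVAssetReaderTrackOutput *pcmOutput =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                   outputSettings:@{ AVFormatIDKey : @(kAudioFormatLinearPCM) }];
    [chunkReader addOutput:pcmOutput];
    chunkReader.timeRange = CMTimeRangeMake(primedStart, CMTimeAdd(primeTime, duration));
    [chunkReader startReading];

    SInt64 framesToSkip = kPrimeFrames;
    CMSampleBufferRef buf;
    while ((buf = [pcmOutput copyNextSampleBuffer])) {
        CMItemCount frames = CMSampleBufferGetNumSamples(buf);
        if (framesToSkip >= frames) {
            // the whole buffer is still inside the primed (faded-in) region
            framesToSkip -= frames;
        } else {
            // deliver the buffer, dropping its first framesToSkip frames
            // (offset into its AudioBufferList before enqueueing it)
            framesToSkip = 0;
        }
        CFRelease(buf);
    }
    [chunkReader cancelReading];
}

The same idea applies to the packet-based reader in the question: move timeRange.start back by 1024/sampleRate seconds and discard the data that falls before the position you actually asked for.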

I hope it works for you too.

