core audio: how can one packet = one byte when clearly one packet = 4 bytes


Question

I was going over Core Audio conversion services in Learning Core Audio (http://www.informit.com/store/learning-core-audio-a-hands-on-guide-to-audio-programming-9780321636843) and I was struck by this example in their sample code:

while(1)
{
    // wrap the destination buffer in an AudioBufferList
    AudioBufferList convertedData;
    convertedData.mNumberBuffers = 1;
    convertedData.mBuffers[0].mNumberChannels = mySettings->outputFormat.mChannelsPerFrame;
    convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
    convertedData.mBuffers[0].mData = outputBuffer;

    UInt32 frameCount = packetsPerBuffer;

    // read from the extaudiofile
    CheckResult(ExtAudioFileRead(mySettings->inputFile,
                                 &frameCount,
                                 &convertedData),
                "Couldn't read from input file");

    if (frameCount == 0) {
        printf ("done reading from file");
        return;
    }

    // write the converted data to the output file
    CheckResult (AudioFileWritePackets(mySettings->outputFile,
                                       FALSE,
                                       frameCount,
                                       NULL,
                                       outputFilePacketPosition / mySettings->outputFormat.mBytesPerPacket, 
                                       &frameCount,
                                       convertedData.mBuffers[0].mData),
                 "Couldn't write packets to file");

    // advance the output file write location
    outputFilePacketPosition += (frameCount * mySettings->outputFormat.mBytesPerPacket);
}
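For reference, ExtAudioFileRead counts in frames, not packets. Its declaration in ExtendedAudioFile.h looks roughly like this (paraphrased here as a sketch, not quoted verbatim from the header):

extern OSStatus ExtAudioFileRead(
    ExtAudioFileRef   inExtAudioFile,   // the source file being read through the converter
    UInt32            *ioNumberFrames,  // on input: frames requested; on output: frames actually read
    AudioBufferList   *ioData);         // buffer list to fill with the converted audio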

Notice how frameCount is defined as packetsPerBuffer. packetsPerBuffer is defined here:

UInt32 outputBufferSize = 32 * 1024; // 32 KB is a good starting point
UInt32 sizePerPacket = mySettings->outputFormat.mBytesPerPacket;    
UInt32 packetsPerBuffer = outputBufferSize / sizePerPacket;

The part that stumped me is how AudioFileWritePackets is called. In the documentation (https://developer.apple.com/library/mac/#documentation/MusicAudio/Reference/AudioFileConvertRef/Reference/reference.html#//apple_ref/c/func/AudioFileWritePackets), the third and fifth parameters of AudioFileWritePackets are defined as:

inNumBytes
The number of bytes of audio data being written.

ioNumPackets
On input, a pointer to the number of packets to write. On output, a pointer to the number of packets actually written.
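For context, the full declaration in AudioFile.h looks roughly like this (paraphrased as a sketch, not quoted verbatim from the header):

extern OSStatus AudioFileWritePackets(
    AudioFileID    inAudioFile,          // destination file
    Boolean        inUseCache,           // whether to cache the written data
    UInt32         inNumBytes,           // bytes of audio data being written
    const AudioStreamPacketDescription *inPacketDescriptions,  // NULL for constant-bit-rate data such as LPCM
    SInt64         inStartingPacket,     // packet index at which to start writing
    UInt32         *ioNumPackets,        // in: packets to write; out: packets actually written
    const void     *inBuffer);           // the audio data itself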

Yet in the code both parameters are given frameCount. How is this possible? I know that with PCM data 1 frame = 1 packet:

// define the output format. AudioConverter requires that one of the data formats be LPCM
audioConverterSettings.outputFormat.mSampleRate = 44100.0;
audioConverterSettings.outputFormat.mFormatID = kAudioFormatLinearPCM;
audioConverterSettings.outputFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioConverterSettings.outputFormat.mBytesPerPacket = 4;
audioConverterSettings.outputFormat.mFramesPerPacket = 1;
audioConverterSettings.outputFormat.mBytesPerFrame = 4;
audioConverterSettings.outputFormat.mChannelsPerFrame = 2;
audioConverterSettings.outputFormat.mBitsPerChannel = 16;

But the same LPCM format settings also clearly state that there are 4 bytes per packet (= 4 bytes per frame).
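As a quick sanity check on those numbers (a minimal standalone sketch, not part of the book's code), the frame, packet, and buffer sizes follow directly from the format fields above:

#include <stdio.h>

int main(void) {
    unsigned channelsPerFrame = 2;        // mChannelsPerFrame
    unsigned bitsPerChannel   = 16;       // mBitsPerChannel
    unsigned framesPerPacket  = 1;        // mFramesPerPacket (always 1 for LPCM)
    unsigned outputBufferSize = 32 * 1024;

    unsigned bytesPerFrame    = channelsPerFrame * (bitsPerChannel / 8);  // 2 * 2 = 4
    unsigned bytesPerPacket   = framesPerPacket * bytesPerFrame;          // 1 * 4 = 4
    unsigned packetsPerBuffer = outputBufferSize / bytesPerPacket;        // 32768 / 4 = 8192

    printf("bytes/frame=%u bytes/packet=%u packets/buffer=%u\n",
           bytesPerFrame, bytesPerPacket, packetsPerBuffer);
    return 0;
}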

So how does this work? (The same applies to the other example in the same chapter, which uses AudioConverterFillComplexBuffer instead of ExtAudioFileRead and uses packets instead of frames, but it's the same thing.)

Answer

I think you're right: according to the definition in the AudioFile.h header file, AudioFileWritePackets should take the number of bytes of audio data being written as its third parameter, yet in that Learning Core Audio example the frameCount variable holds the number of packets, not the number of bytes.
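To make that concrete, a call that follows the header documentation literally would pass a byte count as the third argument, along the lines of the sketch below (my own variant reusing the variables from the example above, not the book's code):

UInt32 bytesToWrite = frameCount * mySettings->outputFormat.mBytesPerPacket;  // packets -> bytes (4 bytes per packet here)
CheckResult(AudioFileWritePackets(mySettings->outputFile,
                                  FALSE,
                                  bytesToWrite,                      // inNumBytes, per the documentation
                                  NULL,                              // no packet descriptions needed for LPCM
                                  outputFilePacketPosition / mySettings->outputFormat.mBytesPerPacket,
                                  &frameCount,                       // ioNumPackets: packets in/out
                                  convertedData.mBuffers[0].mData),
            "Couldn't write packets to file");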

I tried the examples out and got exactly the same output with (frameCount * 4), 0, and even -1 as the third parameter of the AudioFileWritePackets call. So it seems to me that the function doesn't work exactly as defined in the header (it does not actually require the third parameter), and that the authors of the book didn't notice this inconsistency either - I might be wrong, though.

