kAudioDevicePropertyBufferFrameSize replacement for iOS


Question

I was trying to set up an audio unit to render music (instead of an Audio Queue, which was too opaque for my purposes). iOS doesn't have the property kAudioDevicePropertyBufferFrameSize. Any idea how I can derive this value to set up the buffer size of my IO unit?

I found this post interesting: it asks about the possibility of using a combination of the kAudioSessionProperty_CurrentHardwareIOBufferDuration and kAudioSessionProperty_CurrentHardwareOutputLatency audio session properties to determine that value, but there is no answer. Any ideas?

Answer

You can use the kAudioSessionProperty_CurrentHardwareIOBufferDuration property, which gives the buffer duration in seconds. Multiply this by the sample rate you get from kAudioSessionProperty_CurrentHardwareSampleRate to get the number of frames you should buffer.

The resulting buffer size should be a power of two. I believe either 512 or 4096 is what you're likely to get, but you should always base it on the value returned from AudioSessionGetProperty.

Example:

#import <AudioToolbox/AudioToolbox.h>

// The audio session must already be initialized (AudioSessionInitialize)
// before these calls will return meaningful values. Error checking omitted.
Float64 sampleRate;
UInt32 propSize = sizeof(Float64);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate,
                        &propSize,
                        &sampleRate);

Float32 bufferDuration;
propSize = sizeof(Float32);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
                        &propSize,
                        &bufferDuration);

// Frames per render cycle = sample rate (frames/sec) * buffer duration (sec).
UInt32 bufferLengthInFrames = sampleRate * bufferDuration;
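
As an aside (this isn't in the original answer), if you want to influence the buffer size rather than just read it, the same Audio Session API has a preferred-duration property. The hardware is free to round your request, so always re-read the current value afterwards, as above. A minimal sketch:

Float32 preferredDuration = 0.005f;  // request ~5 ms; the hardware may round this
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                        sizeof(preferredDuration),
                        &preferredDuration);
AudioSessionSetActive(true);
// Now query kAudioSessionProperty_CurrentHardwareIOBufferDuration again
// to see what you actually got.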

The next step is to find out the input stream format of the unit you're sending audio to. Based on your description, I'm assuming you're programmatically generating audio to send to the speakers. This code assumes unit is an AudioUnit you're sending audio to, whether that's the RemoteIO unit or something like an effect Audio Unit.

// Ask the unit what stream format it expects on its input scope
// (element/bus 0 here). Error checking omitted.
AudioStreamBasicDescription inputASBD;
UInt32 propSize = sizeof(AudioStreamBasicDescription);
AudioUnitGetProperty(unit,
                     kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input,
                     0,           // element (bus) 0
                     &inputASBD,
                     &propSize);

After this, inputASBD.mFormatFlags will be a bit field corresponding to the audio stream format that unit is expecting. The two most likely sets of flags are named kAudioFormatFlagsCanonical and kAudioFormatFlagsAudioUnitCanonical. These have corresponding sample types, AudioSampleType and AudioUnitSampleType, that you can base your size calculation on.
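
For example, here's a minimal sketch (mine, not from the original answer) of branching on those flags to size a buffer in bytes. It reuses inputASBD and bufferLengthInFrames from the snippets above:

UInt32 bytesPerSample;
if ((inputASBD.mFormatFlags & kAudioFormatFlagsAudioUnitCanonical)
        == kAudioFormatFlagsAudioUnitCanonical) {
    // 8.24 fixed-point samples in a SInt32 container.
    bytesPerSample = sizeof(AudioUnitSampleType);
} else {
    // Canonical I/O samples: SInt16 on iOS.
    bytesPerSample = sizeof(AudioSampleType);
}

// kAudioFormatFlagsAudioUnitCanonical implies non-interleaved data, so this
// is the size of one channel's buffer; for interleaved formats, multiply
// by inputASBD.mChannelsPerFrame as well.
UInt32 bufferSizeInBytes = bufferLengthInFrames * bytesPerSample;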

As an aside, AudioSampleType typically represents samples coming from the mic or destined for the speakers, whereas AudioUnitSampleType is usually for samples that are intended to be processed (by an audio unit, for example). At the moment on iOS, AudioSampleType is a SInt16, and AudioUnitSampleType is a fixed-point 8.24 number stored in a SInt32 container. Here's a post on the Core Audio mailing list explaining this design choice.
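
To make the 8.24 layout concrete, here's a hedged sketch (the helper names are mine, assuming the current iOS definitions above) of converting between the two canonical sample types. A SInt16 sample s represents the value s / 32768, and 8.24 stores value * 2^24, so the conversion reduces to a shift by 9 bits:

// Assumes AudioSampleType == SInt16 and AudioUnitSampleType == 8.24 fixed
// point in a SInt32, as on iOS at the time of writing.
static inline AudioUnitSampleType SampleTo824(AudioSampleType s) {
    return ((AudioUnitSampleType)s) << 9;  // (s / 32768) * 2^24
}

static inline AudioSampleType SampleFrom824(AudioUnitSampleType x) {
    return (AudioSampleType)(x >> 9);      // truncates; real code may round/clip
}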

The reason I hold back from saying something like "just use Float32, it'll work" is that the actual bit representation of the stream is subject to change whenever Apple feels like it.

