iOS7 robotic/garbled in speaker mode on iPhone 5s

Problem Description

We have a VOIP application that records and plays audio. As such, we are using the PlayAndRecord (kAudioSessionCategory_PlayAndRecord) audio session category. So far, we have used it successfully with the iPhone 4/4s/5 on both iOS 6 and iOS 7, where call audio and tones played clearly and were audible. However, with the iPhone 5s, we observed that both the call audio and the tones sound robotic/garbled in speaker mode. When using the earpiece, Bluetooth, or a headset, the sound is clear and audible. iOS version used with the iPhone 5s: 7.0.4.
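For reference, a minimal sketch of this kind of session setup using the deprecated C AudioSession API mentioned above (the function name is illustrative and error handling is omitted; this is not the questioner's actual code):

```c
#include <AudioToolbox/AudioToolbox.h>

// Minimal sketch: configure the session for VOIP-style play-and-record
// and force output to the loudspeaker, using the (deprecated) C API
// mentioned in the question.
static void SetupPlayAndRecordSession(void)
{
    AudioSessionInitialize(NULL, NULL, NULL, NULL);

    // PlayAndRecord so the microphone can be captured while call audio plays.
    UInt32 category = kAudioSessionCategory_PlayAndRecord;
    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                            sizeof(category), &category);

    // Route the output to the built-in speaker instead of the receiver.
    UInt32 route = kAudioSessionOverrideAudioRoute_Speaker;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                            sizeof(route), &route);

    AudioSessionSetActive(true);
}
```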

We are using audio units for recording/playing the call audio. When setting audio properties like the session category, audio route, session mode, etc., we tried both the older (deprecated) AudioSessionSetProperty() API and the AVAudioSession API. For playing tones, we are using AVAudioPlayer. Playing tones during a VOIP call, and also when pressing the keypad controller within the app, produces robotic sound. When instantiating the audio component using AudioComponentInstanceNew, we set componentSubType to kAudioUnitSubType_VoiceProcessingIO. When replacing kAudioUnitSubType_VoiceProcessingIO with kAudioUnitSubType_RemoteIO, we noticed that the call audio and tones no longer sounded robotic and were quite clear, but the volume level was very low when using speaker mode.
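A sketch of the audio-unit creation step described here, with the subtype switch as the only difference between the two configurations (the helper name and the Boolean flag are illustrative, not from the project):

```c
#include <AudioToolbox/AudioToolbox.h>
#include <AudioUnit/AudioUnit.h>

// Minimal sketch: create the remote I/O unit the question describes.
// Switching between the echo-cancelling unit and the plain one is a
// single change to componentSubType.
static AudioUnit CreateVoipIOUnit(Boolean useVoiceProcessing)
{
    AudioComponentDescription desc = {0};
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = useVoiceProcessing
                               ? kAudioUnitSubType_VoiceProcessingIO  // robotic in speaker mode on 5s
                               : kAudioUnitSubType_RemoteIO;          // clear but very quiet on speaker
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit unit = NULL;
    AudioComponentInstanceNew(comp, &unit);

    // Enable input on bus 1 (the microphone); output on bus 0 is on by default.
    UInt32 enable = 1;
    AudioUnitSetProperty(unit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &enable, sizeof(enable));
    return unit;
}
```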

In summary, keeping all the other audio APIs the same:

- kAudioUnitSubType_VoiceProcessingIO: volume is high (desirable), but tones and call audio sound robotic in speaker mode.
- kAudioUnitSubType_RemoteIO: tones and call audio sound clear, but the volume in speaker mode is so low that they are barely audible.

STEPS TO REPRODUCE

- Set the audio session category to playAndRecord.
- Set the audio route to speaker.
- Set all the other audio properties: start the audio unit, activate the audio session, instantiate the audio components.
- Set the input and render callbacks (see the sketch after this list).
- Try both options:
  1. Play tones using AVAudioPlayer.
  2. Play call audio.
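The callback step above might look roughly like the following sketch. The callback bodies are placeholders; in the real app they would pull microphone data with AudioUnitRender and feed decoded call audio into ioData:

```c
#include <AudioToolbox/AudioToolbox.h>
#include <AudioUnit/AudioUnit.h>

// Placeholder callbacks -- the real implementations would pull microphone
// data with AudioUnitRender and fill ioData with decoded call audio.
static OSStatus InputCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    return noErr;
}

static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    return noErr;
}

// Attach the callbacks, initialize the unit, and start it.
static void AttachCallbacksAndStart(AudioUnit unit, void *context)
{
    AURenderCallbackStruct input = { InputCallback, context };
    AudioUnitSetProperty(unit, kAudioOutputUnitProperty_SetInputCallback,
                         kAudioUnitScope_Global, 1, &input, sizeof(input));

    AURenderCallbackStruct render = { RenderCallback, context };
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &render, sizeof(render));

    AudioUnitInitialize(unit);
    AudioOutputUnitStart(unit);
}
```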

Any suggestions on how to get past this issue? We have raised it as an issue with Apple but have not received a response from them yet.

I have shared the code here: github link

Recommended Answer

The only difference between kAudioUnitSubType_VoiceProcessingIO and kAudioUnitSubType_RemoteIO is that the voice-processing variant includes code to cancel acoustic echo, i.e. it tunes out the sound coming from the speaker so the microphone doesn't pick it up. It's been a long time since I've worked with the audio framework, but I remember that for audio that sounds off like this there could be any number of causes:


  1. Are you doing any work in the audio callbacks that could be taking a long time?

The callbacks run on realtime threads. If your processing takes too long, you can miss data. It would be helpful to track the data over a fixed period of time to see whether you are capturing all of it. Use something like Wireshark to sniff the network, record the number of packets, and check whether the phone captured the same amount.
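As an illustration of the first point, and assuming you can add temporary instrumentation to your render callback, one rough way to spot overruns is to time each invocation against the duration of one hardware buffer (a 44.1 kHz sample rate is assumed below; the callback name is a placeholder):

```c
#include <mach/mach_time.h>
#include <stdio.h>
#include <AudioUnit/AudioUnit.h>

// Rough instrumentation sketch: log when the callback body takes longer
// than the time budget of one hardware buffer. (Printing from a realtime
// thread is itself unsafe -- use this only while debugging.)
static OSStatus TimedRenderCallback(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList *ioData)
{
    uint64_t start = mach_absolute_time();

    // ... fill ioData with the next chunk of decoded call audio ...

    uint64_t elapsed = mach_absolute_time() - start;
    static mach_timebase_info_data_t tb;
    if (tb.denom == 0) mach_timebase_info(&tb);
    double elapsedMs = (double)elapsed * tb.numer / tb.denom / 1e6;

    double budgetMs = (double)inNumberFrames / 44100.0 * 1000.0; // assumes 44.1 kHz
    if (elapsedMs > budgetMs)
        printf("render callback overran: %.2f ms for a %.2f ms buffer\n",
               elapsedMs, budgetMs);
    return noErr;
}
```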

I've had several issues doing this; one was using a third-party circular buffer that was described as low-latency and efficient... it wasn't. I answered my own question here and included my circular buffer implementation, which greatly improved my audio, because the problem was that I was skipping data.
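The implementation the answer refers to lives in the linked post; as a stand-in, here is a minimal single-producer/single-consumer ring buffer of the kind being described, where the network thread writes and the audio callback reads without locking:

```c
#include <stdatomic.h>
#include <stdint.h>

// Minimal lock-free SPSC ring buffer (power-of-two capacity). This is a
// stand-in sketch, not the implementation referenced in the linked answer.
#define RING_CAPACITY 32768  // must be a power of two

typedef struct {
    uint8_t data[RING_CAPACITY];
    _Atomic uint32_t head;   // advanced by the producer (network thread)
    _Atomic uint32_t tail;   // advanced by the consumer (audio callback)
} RingBuffer;

static uint32_t RingWrite(RingBuffer *rb, const uint8_t *src, uint32_t len)
{
    uint32_t head = atomic_load_explicit(&rb->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&rb->tail, memory_order_acquire);
    uint32_t space = RING_CAPACITY - (head - tail);
    if (len > space) len = space;                  // drop what does not fit
    for (uint32_t i = 0; i < len; i++)
        rb->data[(head + i) & (RING_CAPACITY - 1)] = src[i];
    atomic_store_explicit(&rb->head, head + len, memory_order_release);
    return len;
}

static uint32_t RingRead(RingBuffer *rb, uint8_t *dst, uint32_t len)
{
    uint32_t tail = atomic_load_explicit(&rb->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&rb->head, memory_order_acquire);
    uint32_t avail = head - tail;
    if (len > avail) len = avail;                  // return only what is there
    for (uint32_t i = 0; i < len; i++)
        dst[i] = rb->data[(tail + i) & (RING_CAPACITY - 1)];
    atomic_store_explicit(&rb->tail, tail + len, memory_order_release);
    return len;
}
```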

Give this a go and let me know: iOS UI are causing a glitch in my audio stream

Please be aware that some of this code is specific to the ALaw audio format: 0xD5 is a silence byte in ALaw. If you are using linear PCM or any other format, that byte will probably come out as noise of some kind.
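To illustrate the ALaw remark, padding an underrun in the output buffer might look like this (the helper name is made up; the silence byte must match your stream format):

```c
#include <string.h>
#include <stdint.h>
#include <AudioUnit/AudioUnit.h>

#define ALAW_SILENCE 0xD5   // silence in ALaw; use 0x00 for linear PCM

// Sketch: after copying `validBytes` of real audio into the buffer, pad
// the rest with format-appropriate silence instead of leaving stale data.
static void PadWithSilence(AudioBuffer *buf, UInt32 validBytes)
{
    if (validBytes < buf->mDataByteSize)
        memset((uint8_t *)buf->mData + validBytes, ALAW_SILENCE,
               buf->mDataByteSize - validBytes);
}
```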
