Short circuiting of audio in VOIP app with CallKit
Question
I'm using the SpeakerBox app as a basis for my VOIP app. I have managed to get everything working, but I can't seem to get rid of the "short-circuiting" of the audio from the mic to the speaker of the device.
In other words, when I make a call, I can hear myself in the speaker as well as the other person's voice. How can I change this?
AVAudioSession setup:
AVAudioSession *sessionInstance = [AVAudioSession sharedInstance];
NSError *error = nil;
[sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
XThrowIfError((OSStatus)error.code, "couldn't set session's audio category");
[sessionInstance setMode:AVAudioSessionModeVoiceChat error:&error];
XThrowIfError((OSStatus)error.code, "couldn't set session's audio mode");
NSTimeInterval bufferDuration = .005;
[sessionInstance setPreferredIOBufferDuration:bufferDuration error:&error];
XThrowIfError((OSStatus)error.code, "couldn't set session's I/O buffer duration");
[sessionInstance setPreferredSampleRate:44100 error:&error];
XThrowIfError((OSStatus)error.code, "couldn't set session's preferred sample rate");
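Note that the "preferred" setters above are requests, not guarantees; the system may grant a different buffer duration or sample rate. A minimal sketch (reusing the same `sessionInstance` as above) for reading back what was actually granted after the session is activated:

```objc
// The preferred* setters are hints. After activation, query the session
// to see the values the system actually granted, and log a mismatch.
if (sessionInstance.sampleRate != 44100.0) {
    NSLog(@"Session granted sample rate %f instead of 44100", sessionInstance.sampleRate);
}
NSLog(@"Granted I/O buffer duration: %f s", sessionInstance.IOBufferDuration);
```

If the granted sample rate differs from the 44100 Hz hard-coded into the IO unit's client format below, a sample-rate conversion is silently inserted, which is worth knowing when debugging audio-path issues.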
IO unit setup:
- (void)setupIOUnit
{
    try {
        // Create a new instance of Apple Voice Processing IO
        AudioComponentDescription desc;
        desc.componentType = kAudioUnitType_Output;
        desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;
        desc.componentFlags = 0;
        desc.componentFlagsMask = 0;

        AudioComponent comp = AudioComponentFindNext(NULL, &desc);
        XThrowIfError(AudioComponentInstanceNew(comp, &_rioUnit), "couldn't create a new instance of Apple Voice Processing IO");

        // Enable input and output on Apple Voice Processing IO
        // Input is enabled on the input scope of the input element
        // Output is enabled on the output scope of the output element
        UInt32 one = 1;
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof(one)), "could not enable input on Apple Voice Processing IO");
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &one, sizeof(one)), "could not enable output on Apple Voice Processing IO");

        // Explicitly set the input and output client formats
        // sample rate = 44100, num channels = 1, format = 32-bit floating point
        CAStreamBasicDescription ioFormat = CAStreamBasicDescription(44100, 1, CAStreamBasicDescription::kPCMFormatFloat32, false);
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &ioFormat, sizeof(ioFormat)), "couldn't set the input client format on Apple Voice Processing IO");
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &ioFormat, sizeof(ioFormat)), "couldn't set the output client format on Apple Voice Processing IO");

        // Set the MaximumFramesPerSlice property. This property is used to describe to an audio unit the maximum number
        // of samples it will be asked to produce on any single given call to AudioUnitRender
        UInt32 maxFramesPerSlice = 4096;
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(UInt32)), "couldn't set max frames per slice on Apple Voice Processing IO");

        // Get the property value back from Apple Voice Processing IO. We are going to use this value to allocate buffers accordingly
        UInt32 propSize = sizeof(UInt32);
        XThrowIfError(AudioUnitGetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, &propSize), "couldn't get max frames per slice on Apple Voice Processing IO");

        // We need references to certain data in the render callback
        // This simple struct is used to hold that information
        cd.rioUnit = _rioUnit;
        cd.muteAudio = &_muteAudio;
        cd.audioChainIsBeingReconstructed = &_audioChainIsBeingReconstructed;

        // Set the render callback on Apple Voice Processing IO
        AURenderCallbackStruct renderCallback;
        renderCallback.inputProc = performRender;
        renderCallback.inputProcRefCon = NULL; // performRender reads the file-scope cd struct, so no refCon is passed
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &renderCallback, sizeof(renderCallback)), "couldn't set render callback on Apple Voice Processing IO");

        // Initialize the Apple Voice Processing IO instance
        XThrowIfError(AudioUnitInitialize(_rioUnit), "couldn't initialize Apple Voice Processing IO instance");
    }
    catch (CAXException &e) {
        NSLog(@"Error returned from setupIOUnit: %d: %s", (int)e.mError, e.mOperation);
    }
    catch (...) {
        NSLog(@"Unknown error returned from setupIOUnit");
    }

    return;
}
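For context on why the "short circuit" happens at all: the SpeakerBox sample that this code is based on is a talkthrough demo, and its render callback pulls the latest microphone buffer and returns it as the output. A simplified sketch of that shape (this assumes your `performRender` matches the sample; check your own copy):

```objc
// Simplified shape of the sample's performRender callback. It renders the
// captured mic samples from the input element (bus 1) straight into ioData,
// which is exactly the mic-to-speaker monitoring path heard on the call.
static OSStatus performRender(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    // Pull the mic samples for this slice...
    OSStatus err = AudioUnitRender(cd.rioUnit, ioActionFlags, inTimeStamp,
                                   1, inNumberFrames, ioData);
    // ...and return them unchanged, so they play out of the speaker.
    return err;
}
```

If your SIP stack is already running its own IO unit for the call audio, this second unit does nothing but monitor the mic, which matches the symptom described in the question.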
To start the IO unit:
NSError *error = nil;
[[AVAudioSession sharedInstance] setActive:YES error:&error];
if (nil != error) NSLog(@"AVAudioSession set active (TRUE) failed with error: %@", error);
OSStatus err = AudioOutputUnitStart(_rioUnit);
if (err) NSLog(@"couldn't start Apple Voice Processing IO: %d", (int)err);
return err;
To stop the IO unit:
NSError *error = nil;
[[AVAudioSession sharedInstance] setActive:NO withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error];
if (nil != error) NSLog(@"AVAudioSession set active (FALSE) failed with error: %@", error);
OSStatus err = AudioOutputUnitStop(_rioUnit);
if (err) NSLog(@"couldn't stop Apple Voice Processing IO: %d", (int)err);
return err;
I'm using PJSIP as my SIP stack and have an Asterisk server. The issue has to be client-side, because we also have an Android-based PJSIP implementation that does not have this problem.
Answer
I came across the same issue using WebRTC. I finally came to the conclusion that you should not set up the IO unit in AudioController.mm, but leave that to PJSIP (WebRTC in my case).
A quick fix is the following: comment out [self setupIOUnit]; in setupAudioChain of AudioController.mm, as well as the call to startAudio() in didActivate audioSession of ProviderDelegate.swift.
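With those calls removed, the CallKit provider delegate only has to hand the activated audio session over to PJSIP instead of starting the app's own IO unit. A hedged sketch of what that delegate can look like; the pjsua_set_snd_dev / pjsua_set_no_snd_dev calls are from PJSIP's pjsua C API (reachable from Swift via a bridging header), so if you drive PJSIP through a wrapper, substitute its equivalents:

```swift
// Sketch of a CXProviderDelegate that leaves audio I/O entirely to PJSIP.
// Assumption: pjsua is initialized elsewhere and exposed through a bridging header.
extension ProviderDelegate {

    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // Do NOT call startAudio() here: once the sound device is set,
        // PJSIP opens its own voice-processing IO unit for the call.
        pjsua_set_snd_dev(0, 0)   // default capture and playback devices
    }

    func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
        // Release the sound device when CallKit takes the session away.
        pjsua_set_no_snd_dev()
    }
}
```

Because only one voice-processing IO unit now owns the hardware, the mic signal reaches the far end through PJSIP's media path without also being rendered locally.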