How do I apply effect to mic input? (iOS Core Audio & Audio Graph)


Problem description

I would like to develop an application on iOS 5 that captures the voice, applies an effect (high-pass filter, delay, and so on) to it, and outputs it from the speaker.

I tried RemoteIO (input) -> effect -> RemoteIO (output), but it didn't work.
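As an aside on what such an effect node actually does per sample: a delay effect can be sketched in plain C as a feedback delay line. This is an illustrative sketch with hypothetical helper names (`DelayLine`, `delay_process`), not the Audio Unit implementation; in a real graph this role is played by an effect Audio Unit, and the processing happens inside the render chain.

```c
#include <stddef.h>
#include <string.h>

/* Minimal feedback delay line: out = in + feedback * delayed(out).
 * All names here are hypothetical, for illustration only. */
typedef struct {
    float *buf;      /* circular buffer holding past output samples */
    size_t len;      /* delay length in samples */
    size_t pos;      /* current read/write position */
    float feedback;  /* 0..1, amount of delayed signal fed back */
} DelayLine;

void delay_init(DelayLine *d, float *storage, size_t delay_samples, float feedback) {
    d->buf = storage;
    d->len = delay_samples;
    d->pos = 0;
    d->feedback = feedback;
    memset(storage, 0, delay_samples * sizeof(float));
}

float delay_process(DelayLine *d, float in) {
    float delayed = d->buf[d->pos];        /* sample written len steps ago */
    float out = in + d->feedback * delayed;
    d->buf[d->pos] = out;                  /* store output so echoes repeat */
    d->pos = (d->pos + 1) % d->len;
    return out;
}
```

Feeding an impulse through a 4-sample delay with 0.5 feedback produces decaying echoes at samples 4, 8, 12, and so on.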

AudioComponentDescription   cd;
cd.componentType            = kAudioUnitType_Output;
cd.componentSubType         = kAudioUnitSubType_RemoteIO;
cd.componentManufacturer    = kAudioUnitManufacturer_Apple;
cd.componentFlags           = 0;
cd.componentFlagsMask       = 0;

AUGraphAddNode(self.auGraph, &cd, &remoteIONode);
AUGraphNodeInfo(self.auGraph, remoteIONode, NULL, &remoteIOUnit);

UInt32  flag = 1;
AudioUnitSetProperty(remoteIOUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &flag, sizeof(flag));

AudioStreamBasicDescription audioFormat = [self auCanonicalASBDSampleRate:44100.0 channel:1];
AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &audioFormat, sizeof(AudioStreamBasicDescription));
AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &audioFormat, sizeof(AudioStreamBasicDescription));  


AudioComponentDescription cd_e;
cd_e.componentType = kAudioUnitType_Effect;  // was missing in the original snippet
cd_e.componentSubType = kAudioUnitSubType_LowPassFilter;  // a second assignment to kAudioUnitSubType_Reverb2 overwrote this
cd_e.componentFlags = 0;
cd_e.componentFlagsMask = 0;
cd_e.componentManufacturer = kAudioUnitManufacturer_Apple;
AUGraphAddNode(self.auGraph, &cd_e, &effectNode);
AUGraphNodeInfo(self.auGraph, effectNode, NULL, &effectUnit);

AudioUnitSetProperty(effectUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Global, 0, &audioFormat, sizeof(AudioStreamBasicDescription));    


// Note: the parameter ID comes second, then scope, then element
AudioUnitSetParameter(effectUnit, kLowPassParam_CutoffFrequency, kAudioUnitScope_Global, 0, 10.f, 0);
AudioUnitSetParameter(effectUnit, kLowPassParam_Resonance, kAudioUnitScope_Global, 0, 10, 0);

AUGraphConnectNodeInput(self.auGraph, remoteIONode, 1, effectNode, 0);
AUGraphConnectNodeInput(self.auGraph, effectNode, 0, remoteIONode, 0);

AUGraphInitialize(self.auGraph);

But if AUGraphConnectNodeInput is set as below, I hear my voice from the speaker.

AUGraphConnectNodeInput(self.auGraph, remoteIONode, 1, remoteIONode, 0);

What should I do?

Recommended answer

I wrote a tutorial detailing how to do realtime processing and recording from the microphone on iOS, but since then I have discovered the joys of novocaine, which is a much easier way of doing effect processing on iOS. It is far simpler than dealing with AUGraph and RemoteIO directly.
