Swift: Sound-Output & Microphone-Input | using AudioKit |


Question

I'm using >Xcode Version 9.2<

I'm using >AudioKit Version 4.0.4<

I've written some code, which you can find below, that should be able to:


  • play a specific sound (frequency: 500.0 Hz)

  • listen to the microphone input and calculate its frequency in real time

If I'm calling playSound() or receiveSound() separately, everything looks fine and really works as I expected. But calling playSound() and then receiveSound() afterwards? That's exactly where I run into big issues.

This is how I'd like to get the code working:

SystemClass.playSound() // play sound
DispatchQueue.main.asyncAfter(deadline: (DispatchTime.now() + 3.0)) {
    SystemClass.receiveSound() // get microphone input 3 seconds later
}







import AudioKit

let SystemClass: System = System()
class System {
    public init() { }

    func playSound() {
        let sound = AKOscillator()
        AudioKit.output = sound
        AudioKit.start()
        sound.frequency = 500.0
        sound.amplitude = 0.5
        sound.start()
        DispatchQueue.main.asyncAfter(deadline: (DispatchTime.now() + 2.0)) {
            sound.stop()
        }
    }


    var tracker: AKFrequencyTracker!
    func receiveSound() {
        AudioKit.stop()
        AKSettings.audioInputEnabled = true
        let mic = AKMicrophone()
        tracker = AKFrequencyTracker(mic)
        let silence = AKBooster(tracker, gain: 0)
        AudioKit.output = silence
        AudioKit.start()
        Timer.scheduledTimer(timeInterval: 0.1, target: self, selector: #selector(SystemClass.outputFrequency), userInfo: nil, repeats: true)
    }

    @objc func outputFrequency() {
        print("Frequency: \(tracker.frequency)")
    }
}






These are some of the error messages I get every time I run the code (calling playSound(), then receiveSound() 3 seconds later):

AVAEInternal.h:103:_AVAE_CheckNoErr: [AVAudioEngineGraph.mm:1266:Initialize: (err = AUGraphParser::InitializeActiveNodesInOutputChain(ThisGraph, kOutputChainOptimizedTraversal, *GetOutputNode(), isOutputChainActive)): error -10875

AVAudioEngine.mm:149:-[AVAudioEngine prepare]: Engine@0x1c401bff0: could not initialize, error = -10875

[MediaRemote] [AVOutputContext] WARNING: AVF context unavailable for sharedSystemAudioContext

[AVAudioEngineGraph.mm:1266:Initialize: (err = AUGraphParser::InitializeActiveNodesInOutputChain(ThisGraph, kOutputChainOptimizedTraversal, *GetOutputNode(), isOutputChainActive)): error -10875

Fatal error: AudioKit: Could not start engine. error: Error Domain=com.apple.coreaudio.avfaudio Code=-10875 "(null)" UserInfo={failed call=err = AUGraphParser::InitializeActiveNodesInOutputChain(ThisGraph, kOutputChainOptimizedTraversal, *GetOutputNode(), isOutputChainActive)}.: file /Users/megastep/src/ak/AudioKit/AudioKit/Common/Internals/AudioKit.swift, line 243


Answer

I believe the lion's share of your problems is due to the local declaration of AKNodes within the functions that use them:

   let sound = AKOscillator()
   let mic = AKMicrophone()
   let silence = AKBooster(tracker, gain: 0)

Declare these as instance variables instead, as described here. Because the nodes are created locally, they can be deallocated as soon as the function returns, which tears down the engine's signal chain and produces the -10875 (kAudioUnitErr_FailedInitialization) error on the next AudioKit.start().
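As a rough illustration (not part of the original answer), here is a minimal sketch of the question's class with the nodes hoisted to instance scope, keeping the AudioKit 4.0.x API used above; treat it as a starting point rather than a verified drop-in fix:

import AudioKit

let SystemClass: System = System()

class System {
    // Nodes live as long as the object itself, so the underlying
    // AVAudioEngine graph is not torn down when a function returns.
    var sound: AKOscillator!
    var mic: AKMicrophone!
    var tracker: AKFrequencyTracker!
    var silence: AKBooster!

    public init() {
        AKSettings.audioInputEnabled = true // enable input before creating the mic
        sound = AKOscillator()
        mic = AKMicrophone()
        tracker = AKFrequencyTracker(mic)
        silence = AKBooster(tracker, gain: 0)
    }

    func playSound() {
        AudioKit.output = sound
        AudioKit.start()
        sound.frequency = 500.0
        sound.amplitude = 0.5
        sound.start()
        DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 2.0) {
            self.sound.stop()
        }
    }

    func receiveSound() {
        AudioKit.stop()
        AudioKit.output = silence
        AudioKit.start()
        Timer.scheduledTimer(timeInterval: 0.1, target: self,
                             selector: #selector(System.outputFrequency),
                             userInfo: nil, repeats: true)
    }

    @objc func outputFrequency() {
        print("Frequency: \(tracker.frequency)")
    }
}

Because nothing is deallocated between the two calls, the same oscillator, microphone, and tracker survive the playSound() → receiveSound() handoff; only AudioKit.output is swapped.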
