Connecting AVAudioSourceNode to AVAudioSinkNode does not work


Problem description

I am writing a signal interpreter using AVAudioEngine which will analyse microphone input. During development, I want to use a default input buffer so I don't have to make noises for the microphone to test my changes. I am developing using Catalyst.

I am using AVAudioSinkNode to get the sound buffer (the performance is allegedly better than using .installTap). I am using (a subclass of) AVAudioSourceNode to generate a sine wave. When I connect these two together, I expect the sink node's callback to be called, but it is not. Neither is the source node's render block called.

let engine = AVAudioEngine()

let output = engine.outputNode
let outputFormat = output.inputFormat(forBus: 0)
let sampleRate = Float(outputFormat.sampleRate)

let sineNode440 = AVSineWaveSourceNode(
    frequency: 440,
    amplitude: 1,
    sampleRate: sampleRate
)

let sink = AVAudioSinkNode { _, frameCount, audioBufferList -> OSStatus in
    print("[SINK] + \(frameCount) \(Date().timeIntervalSince1970)")
    return noErr
}

engine.attach(sineNode440)
engine.attach(sink)
engine.connect(sineNode440, to: sink, format: nil)

try engine.start()

Additional tests

If I connect engine.inputNode to the sink (i.e., engine.connect(engine.inputNode, to: sink, format: nil)), the sink callback is called as expected.

When I connect sineNode440 to engine.outputNode, I can hear the sound and the render block is called as expected. So both the source and the sink work individually when connected to device input/output, but not together.

Not important to the question but relevant: AVSineWaveSourceNode is based on Apple sample code. This node produces the correct sound when connected to engine.outputNode.

class AVSineWaveSourceNode: AVAudioSourceNode {

    /// We need this separate class to be able to inject the state in the render block.
    class State {
        let amplitude: Float
        let phaseIncrement: Float
        var phase: Float = 0

        init(frequency: Float, amplitude: Float, sampleRate: Float) {
            self.amplitude = amplitude
            phaseIncrement = (2 * .pi / sampleRate) * frequency
        }
    }

    let state: State

    init(frequency: Float, amplitude: Float, sampleRate: Float) {
        let state = State(
            frequency: frequency,
            amplitude: amplitude,
            sampleRate: sampleRate
        )
        self.state = state

        let format = AVAudioFormat(standardFormatWithSampleRate: Double(sampleRate), channels: 1)!

        super.init(format: format, renderBlock: { isSilence, _, frameCount, audioBufferList -> OSStatus in
            print("[SINE GENERATION \(frequency) - \(frameCount)]")
            let tau = 2 * Float.pi
            let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
            for frame in 0..<Int(frameCount) {
                // Get signal value for this frame at time.
                let value = sin(state.phase) * amplitude
                // Advance the phase for the next frame.
                state.phase += state.phaseIncrement
                if state.phase >= tau {
                    state.phase -= tau
                }
                if state.phase < 0.0 {
                    state.phase += tau
                }
                // Set the same value on all channels (due to the inputFormat we have only 1 channel though).
                for buffer in ablPointer {
                    let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
                    buf[frame] = value
                }
            }

            return noErr
        })

        for i in 0..<self.numberOfInputs {
            print("[SINEWAVE \(frequency)] BUS \(i) input format: \(self.inputFormat(forBus: i))")
        }

        for i in 0..<self.numberOfOutputs {
            print("[SINEWAVE \(frequency)] BUS \(i) output format: \(self.outputFormat(forBus: i))")
        }
    }
}

Recommended answer

outputNode drives the audio processing graph when AVAudioEngine is configured normally ("online"). outputNode pulls audio from its input node, which pulls audio from its input node(s), etc. When you connect sineNode and sink to each other without making a connection to outputNode, there is nothing attached to an output bus of sink or an input bus of outputNode, and therefore when the hardware asks for audio from outputNode it has nowhere to get it.

If I understand correctly I think you can accomplish what you'd like to do by getting rid of sink, connecting sineNode to outputNode, and running AVAudioEngine in manual rendering mode. In manual rendering mode you pass a manual render block to receive audio (similar to AVAudioSinkNode) and drive the graph manually by calling renderOffline(_:to:).
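A minimal sketch of that approach might look like the following. This is not the asker's exact code: it assumes the AVSineWaveSourceNode from the question is available, and the buffer sizes and loop count are arbitrary illustrative choices. Note that enableManualRenderingMode(_:format:maximumFrameCount:) must be called before engine.start().

import AVFoundation

let engine = AVAudioEngine()
let sampleRate = 44100.0
let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)!

// The sine source from the question, connected straight to outputNode
// instead of to a sink node.
let sineNode440 = AVSineWaveSourceNode(
    frequency: 440,
    amplitude: 1,
    sampleRate: Float(sampleRate)
)
engine.attach(sineNode440)
engine.connect(sineNode440, to: engine.outputNode, format: format)

// Switch the engine to offline manual rendering, then start it.
try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: 4096)
try engine.start()

let buffer = AVAudioPCMBuffer(
    pcmFormat: engine.manualRenderingFormat,
    frameCapacity: engine.manualRenderingMaximumFrameCount
)!

// Each renderOffline call pulls audio through the graph, playing the role
// the sink callback had: buffer.floatChannelData then holds the rendered
// sine samples, ready to be analysed.
for _ in 0..<10 {
    let status = try engine.renderOffline(engine.manualRenderingMaximumFrameCount, to: buffer)
    if status == .success {
        // Analyse the samples in buffer here.
    }
}
engine.stop()

Because the engine is offline, rendering runs as fast as the loop iterates rather than in real time, which also makes tests faster and deterministic.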
