Can't play audio recorded from voice using AVCaptureAudioDataOutputSampleDelegate

Problem Description

I have been googling and researching for days, but I can't seem to get this to work, and I can't find any solution to it on the internet.

I am trying to capture my voice with the microphone and then play it back through the speaker.

Here is my code:

import UIKit
import AVFoundation

class ViewController: UIViewController, AVAudioRecorderDelegate, AVCaptureAudioDataOutputSampleBufferDelegate {

var recordingSession: AVAudioSession!
var audioRecorder: AVAudioRecorder!
var captureSession: AVCaptureSession!
var microphone: AVCaptureDevice!
var inputDevice: AVCaptureDeviceInput!
var outputDevice: AVCaptureAudioDataOutput!

override func viewDidLoad() {
    super.viewDidLoad()

    recordingSession = AVAudioSession.sharedInstance()

    do{
        try recordingSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try recordingSession.setMode(AVAudioSessionModeVoiceChat)
        try recordingSession.setPreferredSampleRate(44000.00)
        try recordingSession.setPreferredIOBufferDuration(0.2)
        try recordingSession.setActive(true)

        recordingSession.requestRecordPermission() { [unowned self] (allowed: Bool) -> Void in
            DispatchQueue.main.async {
                if allowed {

                    do{
                        self.microphone = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio)
                        try self.inputDevice = AVCaptureDeviceInput.init(device: self.microphone)

                        self.outputDevice = AVCaptureAudioDataOutput()
                        self.outputDevice.setSampleBufferDelegate(self, queue: DispatchQueue.main)

                        self.captureSession = AVCaptureSession()
                        self.captureSession.addInput(self.inputDevice)
                        self.captureSession.addOutput(self.outputDevice)
                        self.captureSession.startRunning()
                    }
                    catch let error {
                        print(error.localizedDescription)
                    }
                }
            }
        }
    }catch let error{
        print(error.localizedDescription)
    }
}

The callback function:

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {

    var audioBufferList = AudioBufferList(
        mNumberBuffers: 1,
        mBuffers: AudioBuffer(mNumberChannels: 0,
        mDataByteSize: 0,
        mData: nil)
    )

    var blockBuffer: CMBlockBuffer?

    var osStatus = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(

        sampleBuffer,
        nil,
        &audioBufferList,
        MemoryLayout<AudioBufferList>.size,
        nil,
        nil,
        UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
        &blockBuffer
    )

    do {
        var data: NSMutableData = NSMutableData.init()
        for i in 0..<audioBufferList.mNumberBuffers {

            var audioBuffer = AudioBuffer(
                 mNumberChannels: audioBufferList.mBuffers.mNumberChannels,
                 mDataByteSize: audioBufferList.mBuffers.mDataByteSize,
                 mData: audioBufferList.mBuffers.mData
            )

            let frame = audioBuffer.mData?.load(as: Float32.self)
            data.append(audioBuffer.mData!, length: Int(audioBuffer.mDataByteSize))

        }

        var dataFromNsData = Data.init(referencing: data)
        var avAudioPlayer: AVAudioPlayer = try AVAudioPlayer.init(data: dataFromNsData)
        avAudioPlayer.prepareToPlay()
        avAudioPlayer.play()
    }
    catch let error {
        print(error.localizedDescription)
        // prints out: The operation couldn't be completed. (OSStatus error 1954115647.)
    }
}

Any help with this would be amazing, and it would probably help a lot of other people as well, since lots of incomplete Swift versions of this are out there.

Thanks.

Answer

You were very close! You were capturing audio in the didOutputSampleBuffer callback, but that's a high-frequency callback, so you were creating a lot of AVAudioPlayers and passing them raw LPCM data. AVAudioPlayer only knows how to parse Core Audio file types, and the players were going out of scope anyway.
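
For what it's worth, the OSStatus in the question's comment appears to confirm this: 1954115647 is the four-character code 'typ?' (kAudioFileUnsupportedFileTypeError). A quick sketch of decoding it, assuming nothing beyond Foundation:

import Foundation

// Decode an OSStatus as a four-character code. 1954115647 comes out as
// "typ?" (kAudioFileUnsupportedFileTypeError): AVAudioPlayer rejected the
// raw LPCM bytes because they aren't a recognized audio file type.
let status: OSStatus = 1954115647
let bytes = withUnsafeBytes(of: status.bigEndian, Array.init)
print(String(bytes: bytes, encoding: .ascii) ?? "\(status)")  // "typ?"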

You can very easily play the buffers you're capturing with AVCaptureSession using AVAudioEngine's AVAudioPlayerNode (a sketch of that conversion follows the code below), but at that point you may as well use AVAudioEngine to record from the microphone too:

import UIKit
import AVFoundation

class ViewController: UIViewController {
    var engine = AVAudioEngine()

    override func viewDidLoad() {
        super.viewDidLoad()

        // The engine's input node wraps the microphone.
        let input = engine.inputNode!
        let player = AVAudioPlayerNode()
        engine.attach(player)

        // Connect the player to the main mixer in the microphone's own format.
        let bus = 0
        let inputFormat = input.inputFormat(forBus: bus)
        engine.connect(player, to: engine.mainMixerNode, format: inputFormat)

        // Tap the microphone and schedule every captured buffer for playback.
        input.installTap(onBus: bus, bufferSize: 512, format: inputFormat) { (buffer, time) -> Void in
            player.scheduleBuffer(buffer)
        }

        try! engine.start()
        player.play()
    }
}
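
If you'd rather keep AVCaptureSession for capture, the missing piece is converting each CMSampleBuffer into an AVAudioPCMBuffer before handing it to an AVAudioPlayerNode. A rough sketch, assuming a recent AVFoundation/CoreMedia SDK; makePCMBuffer is a hypothetical helper, and the resulting format must match the one the player node was connected with (otherwise an AVAudioConverter is needed):

import AVFoundation
import CoreMedia

// Hypothetical helper: wrap the LPCM samples of a CMSampleBuffer coming from
// AVCaptureAudioDataOutput in an AVAudioPCMBuffer that AVAudioPlayerNode can
// schedule directly, instead of feeding raw bytes to AVAudioPlayer.
func makePCMBuffer(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
    guard let description = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbdPointer = CMAudioFormatDescriptionGetStreamBasicDescription(description) else {
        return nil
    }
    var asbd = asbdPointer.pointee
    guard let format = AVAudioFormat(streamDescription: &asbd) else { return nil }

    let frameCount = AVAudioFrameCount(CMSampleBufferGetNumSamples(sampleBuffer))
    guard let pcmBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else {
        return nil
    }
    pcmBuffer.frameLength = frameCount

    // Copy the sample buffer's PCM data straight into the AVAudioPCMBuffer's
    // underlying AudioBufferList.
    let status = CMSampleBufferCopyPCMDataIntoAudioBufferList(
        sampleBuffer,
        at: 0,
        frameCount: Int32(frameCount),
        into: pcmBuffer.mutableAudioBufferList
    )
    return status == noErr ? pcmBuffer : nil
}

With that, the didOutputSampleBuffer callback reduces to something like if let buffer = makePCMBuffer(from: sampleBuffer) { player.scheduleBuffer(buffer) }.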
