Generate AVAudioPCMBuffer with AVAudioRecorder


Problem description

Along with iOS 10, Apple released a new framework that recognizes speech. Data can be passed to this framework either by appending AVAudioPCMBuffers or by giving it the URL of an m4a file. Currently, speech recognition works using the latter (the m4a URL), but this is only possible after somebody has finished speaking, so it is not real time. Here is the code for that:

import AVFoundation
import Speech

let audioSession = AVAudioSession.sharedInstance()
var audioRecorder: AVAudioRecorder!
var soundURLGlobal: URL!

func setUp() {
    // Record a mono AAC (m4a) file at 44.1 kHz.
    let recordSettings = [AVSampleRateKey : NSNumber(value: Float(44100.0)),
                          AVFormatIDKey : NSNumber(value: Int32(kAudioFormatMPEG4AAC)),
                          AVNumberOfChannelsKey : NSNumber(value: 1),
                          AVEncoderAudioQualityKey : NSNumber(value: Int32(AVAudioQuality.medium.rawValue))]

    let fileManager = FileManager.default
    let urls = fileManager.urls(for: .documentDirectory, in: .userDomainMask)
    let documentDirectory = urls[0]
    let soundURL = documentDirectory.appendingPathComponent("sound.m4a")
    soundURLGlobal = soundURL

    do {
        try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
        audioRecorder = try AVAudioRecorder(url: soundURL, settings: recordSettings)
        audioRecorder.prepareToRecord()
    } catch {}
}

func start() {
    do {
        try audioSession.setActive(true)
        audioRecorder.record()
    } catch {}
}

func stop() {
    // Stop recording, then recognize the finished m4a file from its URL.
    audioRecorder.stop()
    let request = SFSpeechURLRecognitionRequest(url: soundURLGlobal)
    let recognizer = SFSpeechRecognizer()
    recognizer?.recognitionTask(with: request) { result, error in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}

I am trying to convert this, but I cannot find where to get an AVAudioPCMBuffer.

Thanks.

Recommended answer

Good topic.

Hi B Person,

Here is a topic with the solution: Tap Mic Input Using AVAudioEngine in Swift.

See the WWDC 2014 session 502, "AVAudioEngine in Practice": capturing the microphone is covered around the 20-minute mark, and the code that creates a buffer with a tap appears around 21:50.

Here is the Swift 3 code:

@IBAction func button01Pressed(_ sender: Any) {

    // audioEngine is a stored AVAudioEngine property of the class.
    // Install a tap on the microphone input node; the tap block receives AVAudioPCMBuffers.
    let inputNode = audioEngine.inputNode
    let bus = 0
    inputNode?.installTap(onBus: bus, bufferSize: 2048, format: inputNode?.inputFormat(forBus: bus)) {
        (buffer: AVAudioPCMBuffer, time: AVAudioTime) -> Void in

            let theLength = Int(buffer.frameLength)
            print("theLength = \(theLength)")

            // Copy the samples of the first channel out of the buffer as doubles.
            var samplesAsDoubles: [Double] = []
            for i in 0 ..< Int(buffer.frameLength) {
                let theSample = Double(buffer.floatChannelData!.pointee[i])
                samplesAsDoubles.append(theSample)
            }

            print("samplesAsDoubles.count = \(samplesAsDoubles.count)")
    }

    audioEngine.prepare()
    try! audioEngine.start()

}

Stop the audio:

func stopAudio() {
    // Remove the tap from the input node and stop the engine.
    let inputNode = audioEngine.inputNode
    let bus = 0
    inputNode?.removeTap(onBus: bus)
    self.audioEngine.stop()
}
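
To tie this back to the original question of real-time recognition, the AVAudioPCMBuffers delivered by the tap can be appended to an SFSpeechAudioBufferRecognitionRequest instead of being copied out as sample values. Below is a minimal sketch of that idea, written against a more recent SDK where audioEngine.inputNode is non-optional; it assumes microphone and speech-recognition permissions have already been requested, and the names startLiveRecognition and stopLiveRecognition are only illustrative:

import AVFoundation
import Speech

// Sketch: stream microphone buffers into a speech recognition request.
let audioEngine = AVAudioEngine()
let recognizer = SFSpeechRecognizer()
var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
var recognitionTask: SFSpeechRecognitionTask?

func startLiveRecognition() throws {
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.shouldReportPartialResults = true
    recognitionRequest = request

    // Feed every AVAudioPCMBuffer produced by the tap to the request.
    let inputNode = audioEngine.inputNode
    let format = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 2048, format: format) { buffer, _ in
        request.append(buffer)
    }

    // Partial results arrive while the user is still speaking.
    recognitionTask = recognizer?.recognitionTask(with: request) { result, _ in
        if let result = result {
            print(result.bestTranscription.formattedString)
        }
    }

    audioEngine.prepare()
    try audioEngine.start()
}

func stopLiveRecognition() {
    audioEngine.inputNode.removeTap(onBus: 0)
    audioEngine.stop()
    recognitionRequest?.endAudio()
    recognitionTask = nil
}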
