Play audio from AVAudioPCMBuffer with AVAudioEngine


Question

I have two classes, MicrophoneHandler and AudioPlayer. I have managed to use AVCaptureSession to tap microphone data using the approved answer here (http://stackoverflow.com/questions/33850392/get-iphone-mic-data-for-streaming-over-socket/33860632?noredirect=1), and converted the CMSampleBuffer to NSData using this function:

func sendDataToDelegate(buffer: CMSampleBuffer!)
{
    // Get the CMBlockBuffer that holds the raw audio bytes
    let block = CMSampleBufferGetDataBuffer(buffer)
    var length = 0
    var data: UnsafeMutablePointer<Int8> = nil

    // Obtain a pointer to the contiguous audio data and its length
    let status = CMBlockBufferGetDataPointer(block!, 0, nil, &length, &data)    // TODO: check status for errors

    // Wrap the bytes without copying; freeWhenDone: false because the block buffer owns them
    let result = NSData(bytesNoCopy: data, length: length, freeWhenDone: false)

    self.delegate.handleBuffer(result)
}

I would now like to play the audio over the speaker by converting the NSData produced above to an AVAudioPCMBuffer and playing it using AVAudioEngine. My AudioPlayer class is as follows:

var engine: AVAudioEngine!
var playerNode: AVAudioPlayerNode!
var mixer: AVAudioMixerNode!

override init()
{
    super.init()

    self.setup()
    self.start()
}

func handleBuffer(data: NSData)
{
    let newBuffer = self.toPCMBuffer(data)
    print(newBuffer)

    self.playerNode.scheduleBuffer(newBuffer, completionHandler: nil)
}

func setup()
{
    self.engine = AVAudioEngine()
    self.playerNode = AVAudioPlayerNode()

    self.engine.attachNode(self.playerNode)
    self.mixer = engine.mainMixerNode

    engine.connect(self.playerNode, to: self.mixer, format: self.mixer.outputFormatForBus(0))
}

func start()
{
    do {
        try self.engine.start()
    }
    catch {
        print("error couldn't start engine")
    }

    self.playerNode.play()
}

func toPCMBuffer(data: NSData) -> AVAudioPCMBuffer
{
    // NOTE: this format is hard-coded and must match the bytes actually in `data`
    let audioFormat = AVAudioFormat(commonFormat: AVAudioCommonFormat.PCMFormatFloat32, sampleRate: 8000, channels: 2, interleaved: false)  // given NSData audio format
    let PCMBuffer = AVAudioPCMBuffer(PCMFormat: audioFormat, frameCapacity: UInt32(data.length) / audioFormat.streamDescription.memory.mBytesPerFrame)

    PCMBuffer.frameLength = PCMBuffer.frameCapacity

    // Copy the raw bytes into the first channel of the (non-interleaved) buffer
    let channels = UnsafeBufferPointer(start: PCMBuffer.floatChannelData, count: Int(PCMBuffer.format.channelCount))

    data.getBytes(UnsafeMutablePointer<Void>(channels[0]), length: data.length)

    return PCMBuffer
}

The buffer reaches the handleBuffer: function when self.delegate.handleBuffer(result) is called in the first snippet above.

I am able to print(newBuffer) and see the memory locations of the converted buffers, but nothing comes out of the speakers. I can only imagine that something is not consistent between the conversions to and from NSData. Any ideas? Thanks in advance.
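
One way to check that hypothesis (a hedged diagnostic sketch, not part of the original question; the logFormat name is made up) is to print the AudioStreamBasicDescription the capture session actually delivers and compare it with the format hard-coded in toPCMBuffer(). An iPhone microphone typically delivers interleaved 16-bit integer samples, mono, at 44.1 kHz, not non-interleaved Float32 stereo at 8 kHz:

func logFormat(buffer: CMSampleBuffer!)
{
    // Read the stream description attached to the incoming sample buffer
    guard let description = CMSampleBufferGetFormatDescription(buffer) else { return }
    let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(description)
    if asbd != nil {
        // Compare mSampleRate, mChannelsPerFrame, mBitsPerChannel and mFormatFlags
        // against the AVAudioFormat used in toPCMBuffer()
        print(asbd.memory)
    }
}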

Answer

Skip the raw NSData format

Why not use AVAudioPlayer all the way? If you positively need NSData, you can always load such data from the soundURL below. In this example, the disk buffer is something like:

let soundURL = documentDirectory.URLByAppendingPathComponent("sound.m4a")

It makes sense to record directly into a file anyway, for optimal memory and resource management. You get NSData from your recording like this:

let data = NSFileManager.defaultManager().contentsAtPath(soundURL.path!)
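
If you keep that NSData in memory, AVAudioPlayer can also be built from it directly via its init(data:) initializer. A minimal sketch, assuming data was loaded as above and audioPlayer is the same property used in the Play snippet below:

if let data = data {
    do {
        // Assign to a strong audioPlayer property so the player
        // is not deallocated mid-playback
        try audioPlayer = AVAudioPlayer(data: data)
        audioPlayer.play()
    } catch {}
}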

The code below is all you need:

Record

if !audioRecorder.recording {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        // Activate the shared audio session before starting the recorder
        try audioSession.setActive(true)
        audioRecorder.record()
    } catch {}
}

Play

if !audioRecorder.recording {
    do {
        // Play back from the file the recorder wrote to
        try audioPlayer = AVAudioPlayer(contentsOfURL: audioRecorder.url)
        audioPlayer.play()
    } catch {}
}

Setup

let audioSession = AVAudioSession.sharedInstance()
do {
    // PlayAndRecord allows both mic capture and speaker playback
    try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
    try audioRecorder = AVAudioRecorder(URL: self.directoryURL()!,
        settings: recordSettings)
    audioRecorder.prepareToRecord()
} catch {}
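
The setup above calls a directoryURL() helper that the excerpt does not show. A plausible implementation is sketched below (an assumption, not the answerer's original code), pointing at the same sound.m4a file as soundURL earlier:

func directoryURL() -> NSURL?
{
    // Resolve the app's Documents directory and append the recording file name
    let urls = NSFileManager.defaultManager().URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)
    guard let documentDirectory = urls.first else { return nil }
    return documentDirectory.URLByAppendingPathComponent("sound.m4a")    // assumed file name
}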

Settings

let recordSettings = [AVSampleRateKey : NSNumber(float: Float(44100.0)),
    AVFormatIDKey : NSNumber(int: Int32(kAudioFormatMPEG4AAC)),
    AVNumberOfChannelsKey : NSNumber(int: 1),
    AVEncoderAudioQualityKey : NSNumber(int: Int32(AVAudioQuality.Medium.rawValue))]


Download the Xcode project:

You can find this very example here. Download the full project, which records and plays on both simulator and device, from Swift Recipes.
