Clipping sound with opus on Android, sent from iOS


Problem description


I am recording audio on iOS from an Audio Unit, encoding the bytes with Opus, and sending them via UDP to the Android side. The problem is that the sound plays back slightly clipped. I have also tested by sending the raw data from iOS to Android, and it plays perfectly.

My AudioSession code is

try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
try audioSession.setPreferredIOBufferDuration(0.02)
try audioSession.setActive(true)
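
Note that setPreferredIOBufferDuration is only a request; the system may grant a slightly different value. As a quick sanity check (a sketch using AVAudioSession's ioBufferDuration property, which reports what was actually granted):

// After activation, check what the system actually granted.
// With the ~341-frame callbacks reported below, this works out to
// roughly 341 / 16000 ≈ 0.0213 s rather than the requested 0.02 s.
print("granted IO buffer duration: \(audioSession.ioBufferDuration)")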

My recording callback code is:

func performRecording(
    _ ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBufNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>) -> OSStatus
{
    var err: OSStatus = noErr

    // Pull the microphone samples for this callback into ioData.
    err = AudioUnitRender(audioUnit!, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData)

    if let mData = ioData[0].mBuffers.mData {
        let ptrData = mData.bindMemory(to: Int16.self, capacity: Int(inNumberFrames))
        let bufferPtr = UnsafeBufferPointer(start: ptrData, count: Int(inNumberFrames))

        count += 1
        addedBuffer += Array(bufferPtr)

        if count == 2 {
            // Two callbacks' worth of samples collected; push them into the
            // ring buffer as bytes (2 bytes per Int16 sample).
            let _ = TPCircularBufferProduceBytes(&circularBuffer, addedBuffer, UInt32(addedBuffer.count * 2))

            count = 0
            addedBuffer = []

            // Read back exactly bytesToCopy bytes (640 samples) for encoding.
            let buffer = TPCircularBufferTail(&circularBuffer, &availableBytes)

            memcpy(&targetBuffer, buffer, min(bytesToCopy, Int(availableBytes)))

            TPCircularBufferConsume(&circularBuffer, UInt32(min(bytesToCopy, Int(availableBytes))))

            self.audioRecordingDelegate(inTimeStamp.pointee.mSampleTime / Double(16000), targetBuffer)
        }
    }
    return err
}

Here inNumberFrames comes in at almost 341, so I append two callbacks' worth of samples together to reach the bigger frame size Android needs (640), but I only encode 640 samples at a time with the help of TPCircularBuffer.
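
One plausible explanation for the odd 341 (a back-of-the-envelope sketch; the 48 kHz hardware rate and the 1024-frame rounding are assumptions, not confirmed by the post): iOS hardware typically runs at 48 kHz and rounds the requested 0.02 s buffer to a hardware-friendly size such as 1024 frames, which a 16 kHz stream format then sees as about a third of that.

let hardwareRate = 48_000.0   // assumed iOS hardware sample rate
let hardwareFrames = 1_024.0  // assumed rounding of the 0.02 s request
let streamRate = 16_000.0     // the 16 kHz format used for Opus here
print(hardwareFrames / hardwareRate)               // ≈ 0.0213 s per callback
print(hardwareFrames * streamRate / hardwareRate)  // ≈ 341.3 frames per callback

Either way, 640-sample packets will never line up with ~341-frame callbacks, which is why a circular buffer is needed.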

func gotSomeAudio(timeStamp: Double, samples: [Int16]) {

    // Start timing so we can report how long encoding and sending take.
    let start = CFAbsoluteTimeGetCurrent()

    // Opus-encode one 640-sample frame.
    // (An opus_encoder_ctl call with OPUS_SET_BITRATE_REQUEST configures the bitrate.)
    let encodedData = opusHelper?.encodeStream(of: samples)

    let myData = encodedData!.withUnsafeBufferPointer {
        Data(buffer: $0)
    }

    var protoModel = ProtoModel()
    seqNumber += 1
    protoModel.sequenceNumber = seqNumber
    protoModel.timeStamp = Date().currentTimeInMillis()
    protoModel.payload = myData

    DispatchQueue.global().async {
        do {
            try self.tcpClient?.send(data: protoModel)
        } catch {
            print(error.localizedDescription)
        }
    }

    let diff = CFAbsoluteTimeGetCurrent() - start
    print("Time diff is \(diff)")
}

In the above code I am Opus-encoding the 640-sample frame, adding it to the ProtoBuf payload, and sending it via UDP.

On the Android side I am parsing the Protobuf, decoding the 640-sample frames, and playing them with AudioTrack. There is no problem on the Android side: recording and playing sound using Android alone works fine. The problem appears only when I record via iOS and play back through the Android side.

Please don't suggest increasing the frame size by setting the preferred IO buffer duration; I want to solve this without changing it.

https://stackoverflow.com/a/57873492/12020007 It was helpful.

https://stackoverflow.com/a/58947295/12020007 I have updated my code according to your suggestion and removed the delegate and the array concatenation, but there is still clipping on the Android side. I have also measured the time it takes to encode the bytes: approximately 2-3 ms.

The updated callback code is:

var err: OSStatus = noErr

// We are calling AudioUnitRender on the input bus of AURemoteIO;
// this stores the audio data captured by the microphone in ioData.
err = AudioUnitRender(audioUnit!, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData)

if let mData = ioData[0].mBuffers.mData {

    // 2 bytes per Int16 sample.
    _ = TPCircularBufferProduceBytes(&circularBuffer, mData, inNumberFrames * 2)

    print("mDataByteSize: \(ioData[0].mBuffers.mDataByteSize)")
    count += 1

    if count == 2 {
        count = 0

        let buffer = TPCircularBufferTail(&circularBuffer, &availableBytes)

        memcpy(&targetBuffer, buffer, min(bytesToCopy, Int(availableBytes)))

        TPCircularBufferConsume(&circularBuffer, UInt32(min(bytesToCopy, Int(availableBytes))))

        // Encoding and sending still happen inside the callback here.
        let encodedData = opusHelper?.encodeStream(of: targetBuffer)

        let myData = encodedData!.withUnsafeBufferPointer {
            Data(buffer: $0)
        }

        var protoModel = ProtoModel()
        seqNumber += 1
        protoModel.sequenceNumber = seqNumber
        protoModel.timeStamp = Date().currentTimeInMillis()
        protoModel.payload = myData

        do {
            try self.udpClient?.send(data: protoModel)
        } catch {
            print(error.localizedDescription)
        }
    }
}
return err

Solution

Your code is doing Swift memory allocation (Array concatenation) and Swift method calls (your recording delegate) inside the audio callback. Apple (in a WWDC session on Audio) recommends not doing any memory allocation or method calls inside the real-time audio callback context (especially when requesting short Preferred IO Buffer Durations). Stick to C function calls, such as memcpy and TPCircularBuffer.
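
As a concrete illustration, here is a minimal sketch of the callback reduced to C calls only, reusing audioUnit and circularBuffer from the question's own code (the encode/Protobuf/UDP work moves out of the callback entirely; see the drain loop sketched after the next paragraph):

func performRecording(
    _ ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBufNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>) -> OSStatus
{
    let err = AudioUnitRender(audioUnit!, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData)
    if err == noErr, let mData = ioData[0].mBuffers.mData {
        // C calls only: copy the Int16 samples (2 bytes each) into the
        // ring buffer. No Array allocation, no Swift method calls here.
        _ = TPCircularBufferProduceBytes(&circularBuffer, mData, inNumberFrames * 2)
    }
    return err
}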

Added: Also, don't discard samples. If you get 680 samples but only need 640 for a packet, keep the 40 "left over" samples and prepend them to a later packet; the circular buffer will save them for you. Rinse and repeat: whenever you've accumulated enough samples for a packet, send it, and if you end up with 1280 (2×640) or more, send another.
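
A sketch of the matching drain loop, run on its own thread or timer rather than in the callback; packetSamples and encodeAndSend are illustrative names, not part of the original code:

let packetSamples = 640   // 40 ms at 16 kHz
let packetBytes = UInt32(packetSamples * MemoryLayout<Int16>.size)   // 1280 bytes

func drainCircularBuffer() {
    var availableBytes: UInt32 = 0
    // Consume only whole 640-sample packets; any remainder (e.g. the 40
    // "left over" samples) stays in the buffer for the next pass.
    while let tail = TPCircularBufferTail(&circularBuffer, &availableBytes),
          availableBytes >= packetBytes {
        var packet = [Int16](repeating: 0, count: packetSamples)
        memcpy(&packet, tail, Int(packetBytes))
        TPCircularBufferConsume(&circularBuffer, packetBytes)
        encodeAndSend(packet)   // hypothetical: Opus-encode, wrap in Protobuf, send via UDP
    }
}

If two packets' worth (1280+ samples) have piled up, the loop naturally sends both in one pass.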
