Seamless audio recording while flipping camera, using AVCaptureSession & AVAssetWriter

Problem description

I’m looking for a way to maintain a seamless audio track while flipping between the front and back cameras. Many apps on the market can do this; one example is SnapChat…

Solutions should use AVCaptureSession and AVAssetWriter. They should explicitly not use AVMutableComposition, since there is currently a bug between AVMutableComposition and AVCaptureSession. Also, I can't afford post-processing time.

Currently, when I change the video input, the audio recording skips and becomes out of sync.

I’m including the code that could be relevant.

Flip camera

-(void) updateCameraDirection:(CamDirection)vCameraDirection {
    if(session) {
        AVCaptureDeviceInput* currentInput;
        AVCaptureDeviceInput* newInput;
        BOOL videoMirrored = NO;
        switch (vCameraDirection) {
            case CamDirection_Front:
                currentInput = input_Back;
                newInput = input_Front;
                videoMirrored = NO;
                break;
            case CamDirection_Back:
                currentInput = input_Front;
                newInput = input_Back;
                videoMirrored = YES;
                break;
            default:
                break;
        }

        [session beginConfiguration];
        //disconnect old input
        [session removeInput:currentInput];
        //connect new input
        [session addInput:newInput];
        //get new data connection and config
        dataOutputVideoConnection = [dataOutputVideo connectionWithMediaType:AVMediaTypeVideo];
        dataOutputVideoConnection.videoOrientation = AVCaptureVideoOrientationPortrait;
        dataOutputVideoConnection.videoMirrored = videoMirrored;
        //finish
        [session commitConfiguration];
    }
}

Sample buffers

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    //not active
    if(!recordingVideo)
        return;

    //start session if not started
    if(!startedSession) {
        startedSession = YES;
        [assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
    }

    //Process sample buffers
    if (connection == dataOutputAudioConnection) {
        if([assetWriterInputAudio isReadyForMoreMediaData]) {
            BOOL success = [assetWriterInputAudio appendSampleBuffer:sampleBuffer];
            //…
        }

    } else if (connection == dataOutputVideoConnection) {
        if([assetWriterInputVideo isReadyForMoreMediaData]) {        
            BOOL success = [assetWriterInputVideo appendSampleBuffer:sampleBuffer];
            //…
        }
    }
}
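
For context, the assetWriter, assetWriterInputAudio and assetWriterInputVideo referenced above are assumed to be configured roughly as in the following Swift sketch. The output settings (H.264 at 1080x1920, mono AAC at 44.1 kHz) and the outputURL are assumptions for illustration, not the asker's actual configuration.

import AVFoundation

// Hypothetical writer setup; outputURL is a file URL for the movie being recorded.
func makeWriter(outputURL: URL) throws -> (writer: AVAssetWriter, videoInput: AVAssetWriterInput, audioInput: AVAssetWriterInput) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)

    // Video input fed by the capture session's video data output
    let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 1080,
        AVVideoHeightKey: 1920
    ])
    videoInput.expectsMediaDataInRealTime = true

    // Audio input fed by the capture session's audio data output
    let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVNumberOfChannelsKey: 1,
        AVSampleRateKey: 44_100
    ])
    audioInput.expectsMediaDataInRealTime = true

    if writer.canAdd(videoInput) { writer.add(videoInput) }
    if writer.canAdd(audioInput) { writer.add(audioInput) }
    writer.startWriting()
    // startSession(atSourceTime:) is then called with the first buffer's
    // presentation timestamp, exactly as the delegate method above does.
    return (writer, videoInput, audioInput)
}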

Perhaps adjust the audio sample timestamp?
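
For reference, re-stamping a sample buffer as suggested here would look roughly like the sketch below: it copies the buffer with shifted timing via CMSampleBufferCreateCopyWithNewTiming, where offset is a hypothetical gap accumulated while the camera is switching. (The answer below takes a different route and shifts the video frames instead.)

import CoreMedia

// Purely illustrative: copy a sample buffer with all of its timing entries
// shifted back by `offset` (e.g. the gap accumulated while switching cameras).
func retimed(_ sampleBuffer: CMSampleBuffer, by offset: CMTime) -> CMSampleBuffer? {
    var count: CMItemCount = 0
    CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: 0,
                                           arrayToFill: nil, entriesNeededOut: &count)
    guard count > 0 else { return sampleBuffer }

    var timingInfo = [CMSampleTimingInfo](repeating: CMSampleTimingInfo(), count: count)
    CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: count,
                                           arrayToFill: &timingInfo, entriesNeededOut: &count)
    for i in 0..<timingInfo.count {
        timingInfo[i].presentationTimeStamp = CMTimeSubtract(timingInfo[i].presentationTimeStamp, offset)
        if timingInfo[i].decodeTimeStamp.isValid {
            timingInfo[i].decodeTimeStamp = CMTimeSubtract(timingInfo[i].decodeTimeStamp, offset)
        }
    }

    var adjusted: CMSampleBuffer?
    CMSampleBufferCreateCopyWithNewTiming(allocator: kCFAllocatorDefault,
                                          sampleBuffer: sampleBuffer,
                                          sampleTimingEntryCount: count,
                                          sampleTimingArray: timingInfo,
                                          sampleBufferOut: &adjusted)
    return adjusted
}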

Answer

Hey, I was facing the same issue and discovered that after switching cameras the next frame was pushed far out of place. This seemed to shift every frame after that, causing the video and audio to be out of sync. My solution was to shift every misplaced frame to its correct position after switching cameras.

Sorry, my answer will be in Swift 4.2.

You'll have to use AVAssetWriterInputPixelBufferAdaptor in order to append the sample buffers at a specific presentation timestamp.

previousPresentationTimeStamp is the presentation timestamp of the previous frame, and currentPresentationTimestamp is, as you guessed, the presentation timestamp of the current frame. maxFrameDistance worked very well in testing, but you can change it to your liking.

let currentFramePosition = (Double(self.frameRate) * Double(currentPresentationTimestamp.value)) / Double(currentPresentationTimestamp.timescale)
let previousFramePosition = (Double(self.frameRate) * Double(previousPresentationTimeStamp.value)) / Double(previousPresentationTimeStamp.timescale)
var presentationTimeStamp = currentPresentationTimestamp
let maxFrameDistance = 1.1
let frameDistance = currentFramePosition - previousFramePosition
if frameDistance > maxFrameDistance {
    let expectedFramePosition = previousFramePosition + 1.0
    //print("[mwCamera]: Frame at incorrect position moving from \(currentFramePosition) to \(expectedFramePosition)")

    let newFramePosition = ((expectedFramePosition) * Double(currentPresentationTimestamp.timescale)) / Double(self.frameRate)

    let newPresentationTimeStamp = CMTime.init(value: CMTimeValue(newFramePosition), timescale: currentPresentationTimestamp.timescale)

    presentationTimeStamp = newPresentationTimeStamp
}

let success = assetWriterInputPixelBufferAdator.append(pixelBuffer, withPresentationTime: presentationTimeStamp)
if !success, let error = assetWriter.error {
    fatalError(error.localizedDescription)
}

Also please note: this worked because I kept the frame rate consistent, so make sure you have total control of the capture device's frame rate throughout this process.
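
For example, a minimal sketch of pinning the capture device to a fixed frame rate could look like this; the 30 fps default is an assumption and should match the frameRate used in the code above.

import AVFoundation

// Sketch: lock the capture device to a constant frame rate (30 fps assumed)
// so the presentation-timestamp math above stays valid. A production version
// would first check activeFormat.videoSupportedFrameRateRanges.
func lockFrameRate(of device: AVCaptureDevice, to fps: Int32 = 30) {
    do {
        try device.lockForConfiguration()
        let frameDuration = CMTime(value: 1, timescale: fps)   // 1/fps seconds per frame
        device.activeVideoMinFrameDuration = frameDuration
        device.activeVideoMaxFrameDuration = frameDuration
        device.unlockForConfiguration()
    } catch {
        print("Could not lock capture device for configuration: \(error)")
    }
}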

I have a repository here that uses this logic.
