Corrupt video capturing audio and video using AVAssetWriter
Question
I am using an AVCaptureSession to capture video and audio input and encode an H.264 video with AVAssetWriter.
If I don't write the audio, the video is encoded as expected. But if I write the audio, I get a corrupt video.
If I inspect the audio CMSampleBuffer being supplied to the AVAssetWriter, it shows this information:
invalid = NO
dataReady = YES
makeDataReadyCallback = 0x0
makeDataReadyRefcon = 0x0
formatDescription = <CMAudioFormatDescription 0x17410ba30 [0x1b3a70bb8]> {
    mediaType:'soun'
    mediaSubType:'lpcm'
    mediaSpecific: {
        ASBD: {
            mSampleRate: 44100.000000
            mFormatID: 'lpcm'
            mFormatFlags: 0xc
            mBytesPerPacket: 2
            mFramesPerPacket: 1
            mBytesPerFrame: 2
            mChannelsPerFrame: 1
            mBitsPerChannel: 16 }
        cookie: {(null)}
        ACL: {(null)}
        FormatList Array: {(null)}
    }
    extensions: {(null)}
Since it is supplying lpcm audio, I have configured the AVAssetWriterInput with this setting for sound (I have tried both one and two channels):
var channelLayout = AudioChannelLayout()
memset(&channelLayout, 0, MemoryLayout<AudioChannelLayout>.size)
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Mono

let audioOutputSettings: [String: Any] = [
    AVFormatIDKey as String: UInt(kAudioFormatLinearPCM),
    AVNumberOfChannelsKey as String: 1,
    AVSampleRateKey as String: 44100.0,
    AVLinearPCMIsBigEndianKey as String: false,
    AVLinearPCMIsFloatKey as String: false,
    AVLinearPCMBitDepthKey as String: 16,
    AVLinearPCMIsNonInterleaved as String: false,
    AVChannelLayoutKey: NSData(bytes: &channelLayout, length: MemoryLayout<AudioChannelLayout>.size)
]

self.assetWriterAudioInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioOutputSettings)
self.assetWriter.add(self.assetWriterAudioInput)
When I use the lpcm setting above, I cannot open the video with any application. I have tried using kAudioFormatMPEG4AAC and kAudioFormatAppleLossless and I still get a corrupt video, but I am able to view the video using QuickTime Player 8 (not QuickTime Player 7); it is confused about the duration of the video, though, and no sound is played.
When recording is complete I am calling:
func endRecording(_ completionHandler: @escaping () -> ()) {
    isRecording = false
    assetWriterVideoInput.markAsFinished()
    assetWriterAudioInput.markAsFinished()
    assetWriter.finishWriting(completionHandler: completionHandler)
}
This is how the AVCaptureSession is configured:
func setupCapture() {
    captureSession = AVCaptureSession()
    if (captureSession == nil) {
        fatalError("ERROR: Couldn't create a capture session")
    }
    captureSession?.beginConfiguration()
    captureSession?.sessionPreset = AVCaptureSessionPreset1280x720

    let frontDevices = AVCaptureDevice.devices().filter { ($0 as AnyObject).hasMediaType(AVMediaTypeVideo) && ($0 as AnyObject).position == AVCaptureDevicePosition.front }
    if let captureDevice = frontDevices.first as? AVCaptureDevice {
        do {
            let videoDeviceInput: AVCaptureDeviceInput
            do {
                videoDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
            }
            catch {
                fatalError("Could not create AVCaptureDeviceInput instance with error: \(error).")
            }
            guard (captureSession?.canAddInput(videoDeviceInput))! else {
                fatalError()
            }
            captureSession?.addInput(videoDeviceInput)
        }
    }

    do {
        let audioDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio)
        let audioDeviceInput: AVCaptureDeviceInput
        do {
            audioDeviceInput = try AVCaptureDeviceInput(device: audioDevice)
        }
        catch {
            fatalError("Could not create AVCaptureDeviceInput instance with error: \(error).")
        }
        guard (captureSession?.canAddInput(audioDeviceInput))! else {
            fatalError()
        }
        captureSession?.addInput(audioDeviceInput)
    }

    do {
        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        dataOutput.alwaysDiscardsLateVideoFrames = true
        let queue = DispatchQueue(label: "com.3DTOPO.videosamplequeue")
        dataOutput.setSampleBufferDelegate(self, queue: queue)
        guard (captureSession?.canAddOutput(dataOutput))! else {
            fatalError()
        }
        captureSession?.addOutput(dataOutput)
        videoConnection = dataOutput.connection(withMediaType: AVMediaTypeVideo)
    }

    do {
        let audioDataOutput = AVCaptureAudioDataOutput()
        let queue = DispatchQueue(label: "com.3DTOPO.audiosamplequeue")
        audioDataOutput.setSampleBufferDelegate(self, queue: queue)
        guard (captureSession?.canAddOutput(audioDataOutput))! else {
            fatalError()
        }
        captureSession?.addOutput(audioDataOutput)
        audioConnection = audioDataOutput.connection(withMediaType: AVMediaTypeAudio)
    }

    captureSession?.commitConfiguration()

    // this will trigger capture on its own queue
    captureSession?.startRunning()
}
The AVCaptureVideoDataOutput delegate method:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    if (connection == audioConnection) {
        delegate?.audioSampleUpdated(sampleBuffer: sampleBuffer)
        return
    }

    // ... Write video buffer ... //
}
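The video-writing branch is elided in the question; a common shape for it uses an AVAssetWriterInputPixelBufferAdaptor. The following is a hedged sketch only: `pixelBufferAdaptor` is an assumed property, not something shown in the question.

```swift
// Hypothetical sketch of the elided "Write video buffer" branch.
// `pixelBufferAdaptor` is an assumed AVAssetWriterInputPixelBufferAdaptor
// property wrapping assetWriterVideoInput; it is not code from the question.
if isRecording, assetWriterVideoInput.isReadyForMoreMediaData {
    // Reuse the capture timestamp so audio and video share one timeline.
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    if let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        if !pixelBufferAdaptor.append(imageBuffer, withPresentationTime: pts) {
            print("Unable to write video buffer")
        }
    }
}
```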
The delegate's audioSampleUpdated(sampleBuffer:) method is:
func audioSampleUpdated(sampleBuffer: CMSampleBuffer) {
    if (isRecording) {
        while !assetWriterAudioInput.isReadyForMoreMediaData {}
        if (!assetWriterAudioInput.append(sampleBuffer)) {
            print("Unable to write to audio input")
        }
    }
}
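As an aside, the busy-wait loop above burns CPU while the input catches up. AVAssetWriterInput also offers a pull model via requestMediaDataWhenReady(on:using:). A sketch of that alternative follows; `pendingAudioBuffers` is an assumed thread-safe FIFO of CMSampleBuffers, not part of the question's code.

```swift
// Alternative sketch: let the writer input pull samples instead of spinning.
// `pendingAudioBuffers` is a hypothetical thread-safe queue that the capture
// delegate enqueues into; the label string is also illustrative.
let writerQueue = DispatchQueue(label: "com.example.audiowriter")
assetWriterAudioInput.requestMediaDataWhenReady(on: writerQueue) {
    // Drain buffered samples while the input can accept more.
    while assetWriterAudioInput.isReadyForMoreMediaData,
          let buffer = pendingAudioBuffers.dequeue() {
        if !assetWriterAudioInput.append(buffer) {
            print("Unable to write to audio input")
        }
    }
}
```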
If I disable the assetWriterAudioInput.append() call above, then the video isn't corrupt, but of course I have no audio encoded. How can I get both video and audio encoding to work?
Answer
I figured it out. I was setting the assetWriter.startSession source time to 0, and then subtracting the start time from the current CACurrentMediaTime() for writing the pixel data.
I changed the assetWriter.startSession source time to CACurrentMediaTime(), and I don't subtract the current time when writing the video frame.
Old start session code:
assetWriter.startWriting()
assetWriter.startSession(atSourceTime: kCMTimeZero)
New code that works:
let presentationStartTime = CMTimeMakeWithSeconds(CACurrentMediaTime(), 240)
assetWriter.startWriting()
assetWriter.startSession(atSourceTime: presentationStartTime)
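Why this fixes the corruption can be seen with plain rational-time arithmetic. The sketch below is pure Swift; `RationalTime` and `makeTime` are illustrative stand-ins for CMTime and CMTimeMakeWithSeconds, not the CoreMedia types. The point: captured sample buffers carry absolute host-clock timestamps, so the session's source time must come from the same clock, or every sample lands far from time zero.

```swift
// Illustrative stand-in for CMTime: a value/timescale rational number.
struct RationalTime {
    let value: Int64
    let timescale: Int32
    var seconds: Double { Double(value) / Double(timescale) }
}

// Stand-in for CMTimeMakeWithSeconds(seconds, preferredTimescale).
func makeTime(seconds: Double, preferredTimescale: Int32) -> RationalTime {
    RationalTime(value: Int64((seconds * Double(preferredTimescale)).rounded()),
                 timescale: preferredTimescale)
}

// Suppose capture starts at an absolute host time of 1000.5 s.
let sessionStart = makeTime(seconds: 1000.5, preferredTimescale: 240)
// An audio buffer captured 0.25 s later keeps its absolute timestamp:
let audioPTS = makeTime(seconds: 1000.75, preferredTimescale: 240)

// With startSession(atSourceTime: sessionStart), the buffer lands 0.25 s
// into the movie, as intended:
let offsetInMovie = audioPTS.seconds - sessionStart.seconds

// With startSession(atSourceTime: zero), the same buffer would land
// 1000.75 s into the movie: a bogus duration and desynced tracks.
let zeroStart = makeTime(seconds: 0, preferredTimescale: 240)
let bogusOffset = audioPTS.seconds - zeroStart.seconds
```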