Record square video using AVFoundation and add watermark


Problem description

Illustration of what I'm trying to do

I'm trying to do the following:

  • Play music
  • Record a square video ( I have a container in the view which shows what you are recording)
  • Add a label at the top and the app's icon & name in the bottom left of the square video.

Up to this point I managed to play the music, show the AVCaptureVideoPreviewLayer in a square container in a different view and save the video to the camera roll.

The thing is, I can barely find more than a few vague tutorials about using AVFoundation, and this being my first app makes things quite hard.

I managed to do these things, but I still don't understand how AVFoundation works. The documentation is vague for a beginner, I haven't found a tutorial for exactly what I want, and piecing together multiple tutorials (written in Obj-C) is making this impossible. My problems are the following:

  1. The video doesn't get saved as square. (mentioning that the app doesn't support landscape orientation)
  2. The video has no audio. (I think that I should add some sort of audio input other than the video)
  3. How to add the watermarks to the video?
  4. I have a bug: I created a view (messageView; see in code) with a text & image letting the user know that the video was saved to camera roll. But if I start recording the second time, the view appears WHILE the video is recording, not AFTER it was recorded. I suspect it's related to naming every video the same.

So I make the preparations:

override func viewDidLoad() {
        super.viewDidLoad()

        // Preset For High Quality
        captureSession.sessionPreset = AVCaptureSessionPresetHigh

        // Get available devices capable of recording video
        let devices = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo) as! [AVCaptureDevice]

        // Get back camera
        for device in devices
        {
            if device.position == AVCaptureDevicePosition.Back
            {
                currentDevice = device
            }
        }

        // Set Input
        let captureDeviceInput: AVCaptureDeviceInput
        do
        {
            captureDeviceInput = try AVCaptureDeviceInput(device: currentDevice)
        }
        catch
        {
            print(error)
            return
        }

        // Set Output
        videoFileOutput = AVCaptureMovieFileOutput()

        // Configure Session w/ Input & Output Devices
        captureSession.addInput(captureDeviceInput)
        captureSession.addOutput(videoFileOutput)

        // Show Camera Preview
        cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        view.layer.addSublayer(cameraPreviewLayer!)
        cameraPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
        let width = view.bounds.width*0.85
        cameraPreviewLayer?.frame = CGRectMake(0, 0, width, width)

        // Bring Record Button To Front
        view.bringSubviewToFront(recordButton)
        captureSession.startRunning()

//        // Bring Message To Front
//        view.bringSubviewToFront(messageView)
//        view.bringSubviewToFront(messageText)
//        view.bringSubviewToFront(messageImage)
    }

Then when I press the record button:

@IBAction func capture(sender: AnyObject) {
    if !isRecording
    {
        isRecording = true

        UIView.animateWithDuration(0.5, delay: 0.0, options: [.Repeat, .Autoreverse, .AllowUserInteraction], animations: { () -> Void in
            self.recordButton.transform = CGAffineTransformMakeScale(0.5, 0.5)
            }, completion: nil)

        let outputPath = NSTemporaryDirectory() + "output.mov"
        let outputFileURL = NSURL(fileURLWithPath: outputPath)
        videoFileOutput?.startRecordingToOutputFileURL(outputFileURL, recordingDelegate: self)
    }
    else
    {
        isRecording = false

        UIView.animateWithDuration(0.5, delay: 0, options: [], animations: { () -> Void in
            self.recordButton.transform = CGAffineTransformMakeScale(1.0, 1.0)
            }, completion: nil)
        recordButton.layer.removeAllAnimations()
        videoFileOutput?.stopRecording()
    }
}

And after the video was recorded:

func captureOutput(captureOutput: AVCaptureFileOutput!, didFinishRecordingToOutputFileAtURL outputFileURL: NSURL!, fromConnections connections: [AnyObject]!, error: NSError!) {
    let outputPath = NSTemporaryDirectory() + "output.mov"
    if UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(outputPath)
    {
        UISaveVideoAtPathToSavedPhotosAlbum(outputPath, self, nil, nil)
        // Show Success Message
        UIView.animateWithDuration(0.4, delay: 0, options: [], animations: {
            self.messageView.alpha = 0.8
            }, completion: nil)
        UIView.animateWithDuration(0.4, delay: 0, options: [], animations: {
            self.messageText.alpha = 1.0
            }, completion: nil)
        UIView.animateWithDuration(0.4, delay: 0, options: [], animations: {
            self.messageImage.alpha = 1.0
            }, completion: nil)
        // Hide Message
        UIView.animateWithDuration(0.4, delay: 1, options: [], animations: {
            self.messageView.alpha = 0
            }, completion: nil)
        UIView.animateWithDuration(0.4, delay: 1, options: [], animations: {
            self.messageText.alpha = 0
            }, completion: nil)
        UIView.animateWithDuration(0.4, delay: 1, options: [], animations: {
            self.messageImage.alpha = 0
            }, completion: nil)
    }
}

So what do I need to do to fix this? I kept searching and looking over tutorials, but I can't figure it out... I read about adding watermarks, and I saw that it has something to do with adding CALayers on top of the video. But obviously I can't do that, since I don't even know how to make the video square and add audio.

Solution

A few things:

As far as Audio goes, you're adding a Video (camera) input, but no Audio input. So do that to get sound.

    let audioInputDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio)

    do {
        let input = try AVCaptureDeviceInput(device: audioInputDevice)

        if sourceAVFoundation.captureSession.canAddInput(input) {
            sourceAVFoundation.captureSession.addInput(input)
        } else {
            NSLog("ERROR: Can't add audio input")
        }
    } catch let error {
        NSLog("ERROR: Getting input device: \(error)")
    }

To make the video square, you're going to have to look at using AVAssetWriter instead of AVCaptureMovieFileOutput. This is more complex, but you get more "power". You've already created an AVCaptureSession, which is great; to hook up the AssetWriter, you'll need to do something like this:

    let fileManager = NSFileManager.defaultManager()
    let urls = fileManager.URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)
    guard let documentDirectory: NSURL = urls.first else {
        print("Video Controller: getAssetWriter: documentDir Error")
        return nil
    }

    let local_video_name = NSUUID().UUIDString + ".mp4"
    self.videoOutputURL = documentDirectory.URLByAppendingPathComponent(local_video_name)

    guard let url = self.videoOutputURL else {
        return nil
    }


    self.assetWriter = try? AVAssetWriter(URL: url, fileType: AVFileTypeMPEG4)

    guard let writer = self.assetWriter else {
        return nil
    }

    //TODO: Set your desired video size here! 
    let videoSettings: [String : AnyObject] = [
        AVVideoCodecKey  : AVVideoCodecH264,
        AVVideoWidthKey  : captureSize.width,
        AVVideoHeightKey : captureSize.height,
        AVVideoCompressionPropertiesKey : [
            AVVideoAverageBitRateKey : 200000,
            AVVideoProfileLevelKey : AVVideoProfileLevelH264Baseline41,
            AVVideoMaxKeyFrameIntervalKey : 90,
        ],
    ]

    assetWriterInputCamera = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
    assetWriterInputCamera?.expectsMediaDataInRealTime = true
    writer.addInput(assetWriterInputCamera!)

    let audioSettings : [String : AnyObject] = [
        AVFormatIDKey : NSInteger(kAudioFormatMPEG4AAC),
        AVNumberOfChannelsKey : 2,
        AVSampleRateKey : NSNumber(double: 44100.0)
    ]

    assetWriterInputAudio = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings)
    assetWriterInputAudio?.expectsMediaDataInRealTime = true
    writer.addInput(assetWriterInputAudio!)
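For the square output itself, one way to fill in the answer's `//TODO` without cropping buffers by hand is to let the writer input scale-and-crop for you: make width and height equal and set `AVVideoScalingModeKey`. A minimal sketch, assuming a 720×720 output is acceptable (the side length is an arbitrary choice, not from the original answer):

```swift
import AVFoundation

// Hypothetical square side length; pick whatever resolution you need
let side: CGFloat = 720

let squareVideoSettings: [String : AnyObject] = [
    AVVideoCodecKey  : AVVideoCodecH264,
    AVVideoWidthKey  : side,
    AVVideoHeightKey : side,
    // Aspect-fill scales the source frames and crops the overflow, so a
    // 16:9 camera buffer comes out as a centered square
    AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill,
]
```

If you go this route, pass `squareVideoSettings` (merged with the compression properties above) where the answer creates `assetWriterInputCamera`.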

Once you have the AssetWriter set up, hook up some outputs for the video and audio:

    let bufferAudioQueue = dispatch_queue_create("audio buffer delegate", DISPATCH_QUEUE_SERIAL)
    let audioOutput = AVCaptureAudioDataOutput()
    audioOutput.setSampleBufferDelegate(self, queue: bufferAudioQueue)
    captureSession.addOutput(audioOutput)

    // Always add video last...
    let bufferVideoQueue = dispatch_queue_create("video buffer delegate", DISPATCH_QUEUE_SERIAL)
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(self, queue: bufferVideoQueue)
    captureSession.addOutput(videoOutput)
    if let connection = videoOutput.connectionWithMediaType(AVMediaTypeVideo) {
        if connection.supportsVideoOrientation {
            // Force recording to portrait
            connection.videoOrientation = AVCaptureVideoOrientation.Portrait
        }

        self.outputConnection = connection
    }


    captureSession.startRunning()

Finally you need to capture the buffers and process that stuff... Make sure you make your class a delegate of AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate

//MARK: Implementation for AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

    if !self.isRecordingStarted {
        return
    }

    if let audio = self.assetWriterInputAudio where connection.audioChannels.count > 0 && audio.readyForMoreMediaData {

        dispatch_async(audioQueue!) {
            audio.appendSampleBuffer(sampleBuffer)
        }
        return
    }

    if let camera = self.assetWriterInputCamera where camera.readyForMoreMediaData {
        dispatch_async(videoQueue!) {
            camera.appendSampleBuffer(sampleBuffer)
        }
    }
}

There are a few missing bits and pieces, but hopefully this is enough for you to figure it out along with the documentation.
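One of those missing pieces is driving the writer's session: the writer must be started once (typically anchored to the first buffer's timestamp) and finished when the user stops. A rough sketch, assuming `isRecordingStarted`, `assetWriter`, and the two inputs from above are properties on your class:

```swift
import AVFoundation
import CoreMedia

// Call once, e.g. from the first sample buffer your delegate receives
func startWriting(firstBuffer: CMSampleBuffer) {
    guard let writer = self.assetWriter else { return }
    writer.startWriting()
    // Anchor the session to the first buffer so audio and video stay in sync
    writer.startSessionAtSourceTime(CMSampleBufferGetPresentationTimeStamp(firstBuffer))
    self.isRecordingStarted = true
}

// Call when the user taps stop
func stopWriting() {
    self.isRecordingStarted = false
    assetWriterInputCamera?.markAsFinished()
    assetWriterInputAudio?.markAsFinished()
    assetWriter?.finishWritingWithCompletionHandler {
        // The finished file is at self.videoOutputURL; save it to the
        // camera roll here (e.g. UISaveVideoAtPathToSavedPhotosAlbum)
    }
}
```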

Finally, if you want to add the watermark, there are many ways this can be done in real time, but one possible way is to modify the sampleBuffer and then write the watermark into the image. You'll find other questions on StackOverflow dealing with that.
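Since the question already mentions CALayers, another route is to skip the real-time path entirely and composite the label and icon over the finished file with AVVideoCompositionCoreAnimationTool in an export pass. A sketch under the assumption that the recording is already on disk; `sourceURL`, `watermarkedURL`, and the "My App" label are hypothetical placeholders:

```swift
import AVFoundation
import UIKit

func exportWithWatermark(sourceURL: NSURL, watermarkedURL: NSURL) {
    let asset = AVAsset(URL: sourceURL)
    let composition = AVMutableVideoComposition(propertiesOfAsset: asset)
    let renderSize = composition.renderSize

    // Layer tree: the parent holds the video layer plus the overlays
    let videoLayer = CALayer()
    videoLayer.frame = CGRect(origin: CGPointZero, size: renderSize)
    let parentLayer = CALayer()
    parentLayer.frame = videoLayer.frame
    parentLayer.addSublayer(videoLayer)

    // Watermark text in the bottom-left corner of the square video
    let textLayer = CATextLayer()
    textLayer.string = "My App"   // hypothetical label
    textLayer.fontSize = 24
    textLayer.foregroundColor = UIColor.whiteColor().CGColor
    textLayer.frame = CGRect(x: 16, y: 16, width: 200, height: 32)
    parentLayer.addSublayer(textLayer)

    // Tell the composition to render video frames into videoLayer,
    // then rasterize the whole parentLayer tree on top
    composition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)

    guard let exporter = AVAssetExportSession(asset: asset,
        presetName: AVAssetExportPresetHighestQuality) else { return }
    exporter.videoComposition = composition
    exporter.outputURL = watermarkedURL
    exporter.outputFileType = AVFileTypeMPEG4
    exporter.exportAsynchronouslyWithCompletionHandler {
        // Check exporter.status / exporter.error before saving to camera roll
    }
}
```

This trades a second encoding pass for much simpler code than stamping each sample buffer; for short square clips the extra export time is usually small.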
