How to apply a filter to video in real time using Swift


Problem Description

Is it possible to apply a filter to an AVLayer and add it to the view as a sublayer? I want to change the colors and add some noise to the video from the camera using Swift, and I don't know how.

I thought it might be possible to add a filterLayer and a previewLayer like this:

self.view.layer.addSublayer(previewLayer)
self.view.layer.addSublayer(filterLayer)

This might produce video with my custom filter, but I think it could be done more effectively using AVComposition.
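(For context on the AVComposition idea: for a file-backed asset, as opposed to the live camera feed, AVFoundation's `AVVideoComposition(asset:applyingCIFiltersWithHandler:)` initializer, available since iOS 9, runs a Core Image filter on every frame. A minimal sketch in Swift 2 style, assuming a hypothetical movie path:)

```swift
import AVFoundation
import CoreImage

// Hypothetical file-backed asset; substitute your own URL.
let asset = AVAsset(URL: NSURL(fileURLWithPath: "movie.mov"))

let filter = CIFilter(name: "CIGaussianBlur")!

let composition = AVVideoComposition(asset: asset,
    applyingCIFiltersWithHandler: { request in
        // Filter each source frame as it is rendered...
        filter.setValue(request.sourceImage, forKey: kCIInputImageKey)
        let output = filter.outputImage!
            .imageByCroppingToRect(request.sourceImage.extent)

        // ...and hand the result back to AVFoundation.
        request.finishWithImage(output, context: nil)
})
```

The resulting composition can then be assigned to an AVPlayerItem's `videoComposition` property (or an AVAssetExportSession) for filtered playback or export.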

So I need to know:


  1. What is the simplest way to apply a filter to the camera video output in real time?
  2. Is it possible to merge AVCaptureVideoPreviewLayer and CALayer?

Thanks for every suggestion.

Recommended Answer

There's another alternative: use an AVCaptureSession to create instances of CIImage to which you can apply CIFilters (of which there are loads, from blurs to color correction to VFX).

Here's an example using the Comic Book effect. In a nutshell, create an AVCaptureSession:

let captureSession = AVCaptureSession()
captureSession.sessionPreset = AVCaptureSessionPresetPhoto

Create an AVCaptureDevice to represent the camera; here I'm using the default (back) camera:

let backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

Then create a concrete input for the device and attach it to the session. In Swift 2, instantiating AVCaptureDeviceInput can throw an error, so we need to catch it:

do
{
    let input = try AVCaptureDeviceInput(device: backCamera)

    captureSession.addInput(input)
}
catch
{
    print("can't access camera")
    return
}

Now, here's a little gotcha: although we don't actually use the AVCaptureVideoPreviewLayer, it's required to get the sample buffer delegate working, so we create one:

// although we don't use this, it's required to get captureOutput invoked
let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)

view.layer.addSublayer(previewLayer)

Next, we create a video output, an AVCaptureVideoDataOutput, which we'll use to access the video feed:

let videoOutput = AVCaptureVideoDataOutput()

Ensuring that self implements AVCaptureVideoDataOutputSampleBufferDelegate, we can set the sample buffer delegate on the video output:

videoOutput.setSampleBufferDelegate(self,
    queue: dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL))

The video output is then attached to the capture session:

captureSession.addOutput(videoOutput)

...and, finally, we start the capture session:

captureSession.startRunning()

Because we've set the delegate, captureOutput will be invoked with each frame capture. captureOutput is passed a sample buffer of type CMSampleBuffer, and it takes just two lines of code to convert that data to a CIImage for Core Image to handle:

let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
let cameraImage = CIImage(CVPixelBuffer: pixelBuffer!)

...and that image data is passed to our Comic Book effect which, in turn, is used to populate an image view:

let comicEffect = CIFilter(name: "CIComicEffect")

comicEffect!.setValue(cameraImage, forKey: kCIInputImageKey)

let filteredImage = UIImage(CIImage: comicEffect!.valueForKey(kCIOutputImageKey) as! CIImage)

dispatch_async(dispatch_get_main_queue())
{
    self.imageView.image = filteredImage
}
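Putting those pieces together, the complete delegate method looks like this (Swift 2 signature; `imageView` is assumed to be an outlet on the view controller):

```swift
func captureOutput(captureOutput: AVCaptureOutput!,
    didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
    fromConnection connection: AVCaptureConnection!)
{
    // Convert the sample buffer to a CIImage for Core Image.
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(CVPixelBuffer: pixelBuffer!)

    // Apply the Comic Book effect.
    let comicEffect = CIFilter(name: "CIComicEffect")!
    comicEffect.setValue(cameraImage, forKey: kCIInputImageKey)

    let filteredImage = UIImage(CIImage:
        comicEffect.valueForKey(kCIOutputImageKey) as! CIImage)

    // The delegate runs on our serial queue; UIKit must be
    // touched on the main queue.
    dispatch_async(dispatch_get_main_queue())
    {
        self.imageView.image = filteredImage
    }
}
```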

I have the source code for this complete project available in my GitHub repo.
