Confusion About CIContext, OpenGL and Metal (SWIFT). Does CIContext use CPU or GPU by default?


Question

So I'm making an app where some of the main features revolve around applying CIFilters to images.

// The three alternatives I've tried (one at a time):
let context = CIContext()                                            // default context
let context = CIContext(eaglContext: EAGLContext(api: .openGLES3)!)  // OpenGL ES-backed
let context = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!)  // Metal-backed

All of these give me about the same CPU usage (70%) on my CameraViewController, where I apply filters to frames and update the image view. All of them seem to work in exactly the same way, which makes me think I'm missing some vital piece of information.

For example, using AVFoundation I get each frame from the camera, apply the filters, and update the image view with the new image.

let context = CIContext()

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    connection.videoOrientation = orientation
    connection.isVideoMirrored = !cameraModeIsBack
    // Note: this creates (and discards) a brand-new video output on every frame
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main)

    let sharpenFilter = CIFilter(name: "CISharpenLuminance")
    let saturationFilter = CIFilter(name: "CIColorControls")
    let contrastFilter = CIFilter(name: "CIColorControls")
    let pixellateFilter = CIFilter(name: "CIPixellate")

    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    var cameraImage = CIImage(cvImageBuffer: pixelBuffer!)

    // Each stage below renders its output to a CGImage and wraps it back
    // into a CIImage, so every filter costs a full render pass
    saturationFilter?.setValue(cameraImage, forKey: kCIInputImageKey)
    saturationFilter?.setValue(saturationValue, forKey: "inputSaturation")
    var cgImage = context.createCGImage((saturationFilter?.outputImage!)!, from: cameraImage.extent)!
    cameraImage = CIImage(cgImage: cgImage)

    sharpenFilter?.setValue(cameraImage, forKey: kCIInputImageKey)
    sharpenFilter?.setValue(sharpnessValue, forKey: kCIInputSharpnessKey)
    cgImage = context.createCGImage((sharpenFilter?.outputImage!)!, from: cameraImage.extent)!
    cameraImage = CIImage(cgImage: cgImage)

    contrastFilter?.setValue(cameraImage, forKey: "inputImage")
    contrastFilter?.setValue(contrastValue, forKey: "inputContrast")
    cgImage = context.createCGImage((contrastFilter?.outputImage!)!, from: cameraImage.extent)!
    cameraImage = CIImage(cgImage: cgImage)

    pixellateFilter?.setValue(cameraImage, forKey: kCIInputImageKey)
    pixellateFilter?.setValue(pixelateValue, forKey: kCIInputScaleKey)
    cgImage = context.createCGImage((pixellateFilter?.outputImage!)!, from: cameraImage.extent)!
    applyChanges(image: cgImage)
}
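
As an aside, the delegate above creates a new AVCaptureVideoDataOutput on every frame. A minimal sketch of the usual one-time setup, assuming an already-configured AVCaptureSession called session (an assumed name, not from the code above):

// One-time capture setup (e.g. during session configuration),
// not inside the per-frame callback. `session` is an assumed,
// already-configured AVCaptureSession.
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frame.queue"))
if session.canAddOutput(videoOutput) {
    session.addOutput(videoOutput)
}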

Another example is how I apply changes just to a normal image (I use sliders for all of this):

func imagePixelate(sliderValue: CGFloat) {
    // UIImage -> CGImage -> CIImage
    let cgImg = image?.cgImage
    let ciImg = CIImage(cgImage: cgImg!)

    // Apply the pixellate filter at the slider's scale
    let pixellateFilter = CIFilter(name: "CIPixellate")
    pixellateFilter?.setValue(ciImg, forKey: kCIInputImageKey)
    pixellateFilter?.setValue(sliderValue, forKey: kCIInputScaleKey)

    // Render back to a CGImage, then wrap as a UIImage for display
    let outputCIImg = pixellateFilter?.outputImage!
    let outputCGImg = context.createCGImage(outputCIImg!, from: (outputCIImg?.extent)!)
    let outputUIImg = UIImage(cgImage: outputCGImg!, scale: (originalImage?.scale)!, orientation: originalOrientation!)
    imageSource[0] = ImageSource(image: outputUIImg)
    slideshow.setImageInputs(imageSource)
    currentFilteredImage = outputUIImg
}
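
Not from the question, but a hedged sketch of one way to make the slider path cheaper: build the base CIImage once and keep a single shared context, then only re-render when the value changes. The names baseCIImage, sharedContext and updateView(with:) are hypothetical:

// A sketch, not the author's code: cache the CIImage instead of
// recreating it from the UIImage on every slider tick.
lazy var baseCIImage: CIImage? = image.flatMap { CIImage(image: $0) }
let sharedContext = CIContext()

func pixelate(to scale: CGFloat) {
    guard let input = baseCIImage else { return }
    let output = input.applyingFilter("CIPixellate", parameters: [kCIInputScaleKey: scale])
    guard let cg = sharedContext.createCGImage(output, from: input.extent) else { return }
    updateView(with: UIImage(cgImage: cg))  // `updateView(with:)` is a hypothetical helper
}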

So pretty much:

  1. Create a CGImage from the UIImage
  2. Create a CIImage from the CGImage
  3. Use the context to apply the filter and translate back to a UIImage
  4. Update whatever view with the new UIImage

This runs well on my iPhone X, and surprisingly well on my iPhone 6 too. Since my app is pretty much complete, I'm looking to optimize it as much as possible. I've looked through a lot of documentation on using OpenGL and Metal as well, but I can't seem to figure out how to start.

I always thought I was running these processes on the CPU, but creating the context with OpenGL or Metal provided no improvement. Do I need to be using a MetalKit view or a GLKit view (EAGLContext seems to be completely deprecated)? How do I translate this over? The Apple documentation seems to be lacklustre.

Solution

I started making this as a comment, but I think since WWDC'18 this works best as an answer. I'll edit as others more expert than I comment, and I'm willing to delete the entire answer if that's the proper thing to do.

You are on the right track - utilize the GPU when you can and when it's a good fit. CoreImage and Metal, while "low-level" technologies that "usually" use the GPU, can use the CPU if that is desired. CoreGraphics? It renders things using the CPU.
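
For example (my addition, not part of the original answer), CoreImage will render on the CPU if you ask for the software renderer when creating the context:

// A minimal sketch: opt in to CPU rendering explicitly.
// Omit the option (or pass false) to let CoreImage use the GPU.
let cpuContext = CIContext(options: [.useSoftwareRenderer: true])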

Images. A UIImage and a CGImage are actual images. A CIImage, however, isn't. The best way to think of it is as a "recipe" for an image.
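
That recipe idea is why chaining filters by passing the CIImage along, and rendering only once at the end, beats the per-filter createCGImage round trips in the question's code. A rough sketch of the question's chain rewritten that way (my sketch, reusing the question's variable names):

// Build one "recipe"; no pixels are processed until createCGImage at the end.
let input = CIImage(cvImageBuffer: pixelBuffer)
let recipe = input
    .applyingFilter("CIColorControls", parameters: [kCIInputSaturationKey: saturationValue,
                                                    kCIInputContrastKey: contrastValue])
    .applyingFilter("CISharpenLuminance", parameters: [kCIInputSharpnessKey: sharpnessValue])
    .applyingFilter("CIPixellate", parameters: [kCIInputScaleKey: pixelateValue])

// A single render pass for the whole chain.
let cgImage = context.createCGImage(recipe, from: input.extent)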

I typically - for now, I'll explain in a moment - stick to CoreImage, CIFilters, CIImages, and GLKViews when working with filters. Using a GLKView against a CIImage means using OpenGL, along with a single CIContext and EAGLContext. It offers almost as good performance as using MetalKit or MTKViews.
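
A condensed sketch of that GLKView setup, assuming a view subclass drives the drawing (error handling omitted; not a complete implementation):

import GLKit
import CoreImage

// One EAGLContext shared by the view and the CIContext.
class FilterGLKView: GLKView, GLKViewDelegate {
    let ciContext: CIContext
    var image: CIImage? { didSet { setNeedsDisplay() } }

    init(frame: CGRect) {
        let eagl = EAGLContext(api: .openGLES2)!
        ciContext = CIContext(eaglContext: eagl)
        super.init(frame: frame, context: eagl)
        delegate = self
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    func glkView(_ view: GLKView, drawIn rect: CGRect) {
        guard let image = image else { return }
        // drawableWidth/Height are in pixels, which is what CIContext.draw expects
        let dest = CGRect(x: 0, y: 0, width: drawableWidth, height: drawableHeight)
        ciContext.draw(image, in: dest, from: image.extent)
    }
}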

As for using UIKit and its UIImage and UIImageView, I only do that when needed - saving/sharing/uploading, whatever. Stick to the GPU until then.

....

Here's where it starts getting complicated.

Metal is an Apple proprietary API. Since they own the hardware - including the CPU and GPU - they've optimized it for them. Its "pipeline" is somewhat different from OpenGL's. Nothing major, just different.

Until WWDC'18, using GLKit, including GLKView, was fine. But all things OpenGL were deprecated, and Apple is moving things to Metal. While the performance gain (for now) isn't that great, you may be best off using MTKView, Metal, and a CIContext for anything new.

Look at the answer @matt gave here for a nice way to use MTKViews.
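
For reference, the core of that MTKView approach looks roughly like this - a sketch under the usual assumptions (the device supports Metal; framebufferOnly is disabled so CoreImage can write to the drawable's texture), not @matt's exact code:

import MetalKit
import CoreImage

class FilterMTKView: MTKView {
    let commandQueue: MTLCommandQueue
    let ciContext: CIContext
    var image: CIImage? { didSet { setNeedsDisplay() } }

    init(frame: CGRect) {
        let metalDevice = MTLCreateSystemDefaultDevice()!
        commandQueue = metalDevice.makeCommandQueue()!
        ciContext = CIContext(mtlDevice: metalDevice)
        super.init(frame: frame, device: metalDevice)
        framebufferOnly = false       // let CoreImage render into the drawable's texture
        enableSetNeedsDisplay = true  // draw on demand instead of every vsync
    }

    required init(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func draw(_ rect: CGRect) {
        guard let image = image,
              let drawable = currentDrawable,
              let buffer = commandQueue.makeCommandBuffer() else { return }
        // Render the CIImage recipe straight to the drawable's texture on the GPU
        ciContext.render(image,
                         to: drawable.texture,
                         commandBuffer: buffer,
                         bounds: CGRect(origin: .zero, size: drawableSize),
                         colorSpace: CGColorSpaceCreateDeviceRGB())
        buffer.present(drawable)
        buffer.commit()
    }
}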
