Core Image GPU performance too slow

Question

I was playing with Core Image filters and encountered a strange benchmark. With the following two functions, one processing heavy math on the CPU and the other on the GPU as the names suggest, CPU performance is about a hundred times faster than GPU performance. I tried the CILineOverlay and CIPhotoEffectProcess filters and measured the transform time with DispatchTime.now(). Am I doing something wrong? Or is it related to the deprecated OpenGL support?

private func apply_cpu(to image: UIImage?, appleFilterName: String) -> UIImage? {

    guard let image = image, let cgimg = image.cgImage else {
        return nil
    }

    let coreImage = CIImage(cgImage: cgimg)

    // Note: this path ignores appleFilterName and hard-codes CISepiaTone.
    let filter = CIFilter(name: "CISepiaTone")
    filter?.setValue(coreImage, forKey: kCIInputImageKey)
    filter?.setValue(0.5, forKey: kCIInputIntensityKey)

    // No CIContext is created here; UIImage(ciImage:) defers the actual
    // render until the image is drawn, so nothing heavy runs yet.
    if let output = filter?.outputImage {
        return UIImage(ciImage: output)
    }
    return nil
}


private func apply_gpu(to image: UIImage?, appleFilterName: String) -> UIImage? {

    guard let image = image, let cgimg = image.cgImage else {
        return nil
    }

    let coreImage = CIImage(cgImage: cgimg)

    let start = DispatchTime.now()

    // EAGLContext/OpenGL ES is deprecated; see the answer below. Note also
    // that context creation sits inside the timed region here.
    guard let openGLContext = EAGLContext(api: .openGLES3) else {
        return nil
    }
    let context = CIContext(eaglContext: openGLContext)

    guard let filter = CIFilter(name: appleFilterName) else {
        return nil
    }

    if filter.inputKeys.contains(kCIInputImageKey) {
        filter.setValue(coreImage, forKey: kCIInputImageKey)
    }

    if filter.inputKeys.contains(kCIInputIntensityKey) {
        // This branch was empty in the original; mirror the CPU path's value.
        filter.setValue(0.5, forKey: kCIInputIntensityKey)
    }

    // createCGImage forces the render here, unlike the CPU variant above.
    guard let output = filter.outputImage,
          let cgimgresult = context.createCGImage(output, from: output.extent) else {
        return nil
    }

    let ms = Double(DispatchTime.now().uptimeNanoseconds - start.uptimeNanoseconds) / 1_000_000
    print("apply_gpu took \(ms) ms")

    return UIImage(cgImage: cgimgresult)
}

Answer

From the comments, the issue was where the performance time tests were being done. I can't stress this enough when testing CoreImage filters:

Use a real device, not the simulator.

My experience is that a filter can take seconds to minutes in the simulator, while on any iPhone 5 or later device running iOS 9+ (maybe earlier too, on both counts) it runs in near real-time, down to milliseconds. If you aren't seeing this on a real device, there is something wrong in the code.
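For reference, here is a minimal sketch of how the measurement could be arranged so that only the render is timed, assuming a CIContext created once and reused (the helper name timedRender is hypothetical, not from the original code):

import UIKit

// Create the context once and reuse it; context creation is expensive
// and should not sit inside the timed region.
let sharedContext = CIContext()

func timedRender(_ input: CIImage, filterName: String) -> CGImage? {
    guard let filter = CIFilter(name: filterName) else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    guard let output = filter.outputImage else { return nil }

    let start = DispatchTime.now()
    // createCGImage forces the actual render; this is the part to measure.
    let result = sharedContext.createCGImage(output, from: output.extent)
    let ms = Double(DispatchTime.now().uptimeNanoseconds - start.uptimeNanoseconds) / 1_000_000
    print("\(filterName) rendered in \(ms) ms")
    return result
}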

I've not found any tutorials, any books, anything at all that stresses this single point. My best resource, Simon Gladman, who wrote the excellent Core Image for Swift (be careful, it's Swift 2), explains a lot of what I believe is going on, but never really stresses why it is the case.

An iOS device uses the GPU. A simulator does not.
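As an aside, Core Image can also be forced onto the CPU explicitly with the useSoftwareRenderer context option, which is one way to compare the two paths on the same real device:

import CoreImage

// By default Core Image prefers the GPU on a real device; this option
// forces the software (CPU) renderer instead.
let cpuContext = CIContext(options: [.useSoftwareRenderer: true])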

I'm sure it's more complex than that and involves optimization. But the thing is this: while you can use CoreImage in macOS, if you are using the simulator you are targeting iOS. So where a macOS project using CoreImage may perform well, an iOS project needs to run on a real device to give a real feel for performance.
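Since the question also mentions the deprecated OpenGL support: on current iOS versions the usual replacement for an EAGLContext-backed context is a Metal-backed one. A minimal sketch, assuming the target device supports Metal:

import CoreImage
import Metal

// Prefer a Metal-backed context over the deprecated EAGLContext path.
let context: CIContext
if let device = MTLCreateSystemDefaultDevice() {
    context = CIContext(mtlDevice: device)
} else {
    // Fall back to whatever renderer Core Image picks by default.
    context = CIContext()
}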
