Crop UIImage to alpha


Question

I have a rather large, almost full-screen image that I'm going to be displaying on an iPad. The image is about 80% transparent. I need to determine, on the client, the bounding box of the opaque pixels, and then crop to that bounding box.

Scanning other questions here on StackOverflow and reading some of the CoreGraphics docs, I think I could accomplish this by:

// 1. Render the image to a byte array
CGContextRef context = CGBitmapContextCreate(...);
CGContextDrawImage(context, imageRect, image);

// 2. Iterate through this byte array to find the bounding box
//    of the opaque pixels

// 3. Crop to that bounding box
CGImageRef cropped = CGImageCreateWithImageInRect(image, boundingRect);

That just seems very inefficient and clunky. Is there something clever I can do with CGImage masks or something which makes use of the device's graphics acceleration to do this?

Answer

There is no clever cheat to get around having the device do the work, but there are some ways to accelerate the task, or minimize the impact on the user interface.

First, consider whether the task needs accelerating at all. A simple iteration through the byte array may be fast enough. If the app only calculates this once per run, or in reaction to a user's choice that takes at least a few seconds between choices, there may be no need to invest in optimizing this task.
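
As a concrete illustration, here is a minimal sketch of that simple iteration (the opaqueBoundingBox name and the threshold parameter are mine, not the answer's). It renders just the alpha channel into an 8-bit buffer using kCGImageAlphaOnly, then walks the buffer:

#import <UIKit/UIKit.h>

// Hypothetical helper: returns the bounding box (in pixels, origin at the
// image's top-left) of all pixels with alpha above `threshold`, or
// CGRectNull if the image is fully transparent.
static CGRect opaqueBoundingBox(UIImage *image, uint8_t threshold) {
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // kCGImageAlphaOnly gives one byte per pixel holding just the alpha
    // channel; the colorspace must be NULL for this format.
    uint8_t *alpha = calloc(width * height, 1);
    CGContextRef ctx = CGBitmapContextCreate(alpha, width, height, 8, width,
                                             NULL, (CGBitmapInfo)kCGImageAlphaOnly);
    if (!ctx) { free(alpha); return CGRectNull; }
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(ctx);

    // The simple iteration: buffer row 0 is the image's top row.
    size_t minX = width, minY = height, maxX = 0, maxY = 0;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            if (alpha[y * width + x] > threshold) {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    free(alpha);

    if (maxX < minX) return CGRectNull; // no opaque pixels found
    return CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
}

The returned rect uses a top-left origin in pixel coordinates, which is what CGImageCreateWithImageInRect expects; if the image's scale factor is not 1, the rect would need to be scaled accordingly.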

If the bounding box is not needed for some time after the image becomes available, this iteration may be launched in a separate thread. That way the calculation doesn't block the main interface thread. Grand Central Dispatch may make using a separate thread for this task easier.
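
Here is a minimal sketch of that, assuming the hypothetical opaqueBoundingBox() helper above and an imageView that is likewise my own stand-in:

// Scan on a background queue, then hop back to the main queue, since
// UIKit should only be touched from the main thread.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    CGRect box = opaqueBoundingBox(image, 0);
    dispatch_async(dispatch_get_main_queue(), ^{
        CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, box);
        imageView.image = [UIImage imageWithCGImage:croppedRef];
        CGImageRelease(croppedRef);
    });
});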

If the task must be accelerated, perhaps because this is real-time processing of video images, then parallel processing of the data may help. The Accelerate framework may help in setting up SIMD calculations on the data. Or, to really get performance out of this iteration, ARM assembly language code using the NEON SIMD operations could get great results, at the cost of significant development effort.
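
As one hedged sketch of what Accelerate could contribute (this routine is my own illustration, not the answer's): vDSP has no 8-bit maximum routine, but a row of alpha bytes can be widened to float with vDSP_vfltu8 and reduced with vDSP_maxv, so fully transparent rows can be skipped in bulk before falling back to a per-pixel scan:

#import <Accelerate/Accelerate.h>

// Hypothetical helper: YES if any pixel in row `row` of the alpha-only
// buffer (laid out as in the earlier sketch) is non-transparent.
// `scratch` must hold at least `width` floats.
static BOOL rowHasOpaquePixels(const uint8_t *alpha, size_t width,
                               size_t row, float *scratch) {
    // Widen the row's alpha bytes to float, since vDSP reductions
    // operate on floating-point data.
    vDSP_vfltu8(alpha + row * width, 1, scratch, 1, width);
    float rowMax = 0.0f;
    vDSP_maxv(scratch, 1, &rowMax, width);
    return rowMax > 0.0f;
}

Whether this beats the plain loop depends on the image and is worth profiling; the float conversion costs memory bandwidth that hand-written NEON code could avoid.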

The last choice is to investigate a better algorithm. There's a huge body of work on detecting features in images. An edge detection algorithm may be faster than a simple iteration through the byte array. Maybe Apple will add edge detection capabilities to Core Graphics in the future that can be applied to this case. An Apple-implemented image processing capability may not be an exact match for this case, but Apple's implementation should be optimized to use the SIMD or GPU capabilities of the iPad, resulting in better overall performance.
