Crop UIImage to alpha

Question

I have a rather large, almost full-screen image that I'm going to be displaying on an iPad. The image is about 80% transparent. I need to determine, on the client, the bounding box of the opaque pixels, and then crop to that bounding box.

Scanning other questions here on StackOverflow and reading some of the CoreGraphics docs, I think I could accomplish this by:

CGBitmapContextCreate(...)   // use this to render the image into a byte array

// ... iterate through the byte array to find the bounding box ...

CGImageCreateWithImageInRect(image, boundingRect);

That just seems very inefficient and clunky. Is there something clever I can do with CGImage masks, or something that makes use of the device's graphics acceleration, to do this?

Answer

There is no clever cheat to get around having the device do the work, but there are some ways to accelerate the task or to minimize its impact on the user interface.

First, consider whether this task needs accelerating at all. A simple iteration through the byte array may be fast enough. There may be no need to invest in optimizing this task if the app only calculates the bounding box once per run, or in reaction to a user choice with at least a few seconds between choices.
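Here is a minimal sketch of that simple iteration in Swift. The helper name cropToOpaqueBounds(_:) is illustrative, not an Apple API; the CGContext initializer is the modern wrapper around the CGBitmapContextCreate call from the question, and rendering into an alpha-only context means the scan touches one byte per pixel instead of four.

import UIKit

// Illustrative sketch: render only the alpha channel into a byte buffer,
// make one full pass to find the bounding box of non-transparent pixels,
// then crop with CGImage's cropping(to:).
func cropToOpaqueBounds(_ image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height

    // 8-bit alpha-only is in Core Graphics' documented list of supported
    // pixel formats; the scan touches a quarter of the bytes an RGBA
    // buffer would need.
    guard let context = CGContext(data: nil,
                                  width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: 0,
                                  space: CGColorSpaceCreateDeviceGray(),
                                  bitmapInfo: CGImageAlphaInfo.alphaOnly.rawValue),
          let base = context.data
    else { return nil }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    let rowBytes = context.bytesPerRow            // Core Graphics may pad rows
    let alpha = base.assumingMemoryBound(to: UInt8.self)

    // The simple iteration: row 0 of the buffer is the top row of the
    // image, which matches the coordinates cropping(to:) expects.
    var minX = width, minY = height, maxX = -1, maxY = -1
    for y in 0..<height {
        for x in 0..<width where alpha[y * rowBytes + x] != 0 {
            if x < minX { minX = x }
            if x > maxX { maxX = x }
            if y < minY { minY = y }
            if y > maxY { maxY = y }
        }
    }
    guard maxX >= minX, maxY >= minY else { return nil }   // fully transparent

    let box = CGRect(x: minX, y: minY,
                     width: maxX - minX + 1, height: maxY - minY + 1)
    guard let cropped = cgImage.cropping(to: box) else { return nil }
    return UIImage(cgImage: cropped, scale: image.scale,
                   orientation: image.imageOrientation)
}

The alpha-only context is the main design choice here: since only the alpha channel matters for the bounding box, there is no reason to render or scan the color components at all.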

If the bounding box is not needed for some time after the image becomes available, this iteration can be launched on a separate thread, so the calculation doesn't block the main interface thread. Grand Central Dispatch can make running this task on a separate thread easier.
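For example, a sketch using GCD, assuming the cropToOpaqueBounds(_:) helper above (both function names are illustrative):

import UIKit

// Run the pixel scan on a background queue, then hop back to the main
// queue for the UIKit update.
func displayCropped(_ image: UIImage, in imageView: UIImageView) {
    DispatchQueue.global(qos: .userInitiated).async {
        let cropped = cropToOpaqueBounds(image)   // heavy work off the main thread
        DispatchQueue.main.async {
            imageView.image = cropped ?? image    // UIKit only on the main thread
        }
    }
}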

If the task must be accelerated, perhaps because this is real-time processing of video frames, then parallel processing of the data may help. The Accelerate framework can help in setting up SIMD calculations on the data. Or, to really get performance out of this iteration, ARM assembly language code using the NEON SIMD operations could get great results, though it takes significant development effort.
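As a rough sketch of what the Accelerate route might look like, the fragment below uses the vDSP wrappers to widen each row of alpha bytes to floats and take a vectorized maximum, deciding per row whether any opaque pixel exists. It assumes the same row-major 8-bit alpha buffer as the sketch above, and the names are illustrative:

import Accelerate

// Use vDSP to decide, row by row, whether any pixel is non-transparent.
// A production version would work on the raw buffer; the per-row Array
// copies here are for clarity only.
func opaqueRowRange(alpha: [UInt8], width: Int, height: Int) -> ClosedRange<Int>? {
    var rowAsFloat = [Float](repeating: 0, count: width)
    var first: Int?
    var last: Int?
    for y in 0..<height {
        let row = Array(alpha[y * width ..< (y + 1) * width])
        vDSP.convertElements(of: row, to: &rowAsFloat)   // vectorized UInt8 -> Float
        if vDSP.maximum(rowAsFloat) > 0 {                // vectorized max over the row
            if first == nil { first = y }
            last = y
        }
    }
    guard let f = first, let l = last else { return nil }   // fully transparent
    return f...l
}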

The last choice is to investigate a better algorithm. There's a huge body of work on detecting features in images. An edge detection algorithm may well be faster than a simple iteration through the byte array. Maybe Apple will add edge detection capabilities to Core Graphics in the future that can be applied to this case. An Apple-implemented image processing capability may not be an exact match for this case, but Apple's implementation should be optimized to use the SIMD or GPU capabilities of the iPad, resulting in better overall performance.
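Short of full edge detection, even a small algorithmic change illustrates the point: scanning inward from each border and stopping at the first hit finds each edge of the bounding box without visiting the interior at all. A sketch for the top edge, under the same assumed alpha buffer layout (the other three edges are symmetric):

// Scan rows from the top; the first row containing any non-transparent
// pixel is minY, and the loop exits without touching the rest of the image.
func firstOpaqueRow(alpha: [UInt8], width: Int, height: Int) -> Int? {
    for y in 0..<height {
        if alpha[y * width ..< (y + 1) * width].contains(where: { $0 != 0 }) {
            return y                              // early exit on the first hit
        }
    }
    return nil                                    // fully transparent image
}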
