Most efficient way to draw part of an image in iOS


Question

Given a UIImage and a CGRect, what is the most efficient way (in memory and time) to draw the part of the image corresponding to the CGRect (without scaling)?

For reference, this is how I currently do it:

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Offset the dirty rect by the frame origin to get the matching sub-rect of the image.
    CGRect frameRect = CGRectMake(frameOrigin.x + rect.origin.x,
                                  frameOrigin.y + rect.origin.y,
                                  rect.size.width,
                                  rect.size.height);
    CGImageRef imageRef = CGImageCreateWithImageInRect(image_.CGImage, frameRect);
    // Flip the context vertically: CGContextDrawImage uses a bottom-left origin.
    CGContextTranslateCTM(context, 0, rect.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, rect, imageRef);
    CGImageRelease(imageRef);
}

Unfortunately this seems extremely slow with medium-sized images and a high setNeedsDisplay frequency. Playing with UIImageView's frame and clipsToBounds produces better results (with less flexibility).

Answer

I guess you are doing this to display part of an image on screen, since you mentioned UIImageView. Optimization problems always need to be defined specifically.

Actually, a UIImageView with clipsToBounds is one of the fastest and simplest ways to achieve your goal, if your goal is just clipping a rectangular region of an image (not too big). Also, you don't need to send the setNeedsDisplay message.

Or you can try putting the UIImageView inside an empty UIView and setting the clipping on the container view. With this technique, you can transform the image freely in 2D (scaling, rotation, translation) by setting its transform property.
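A minimal sketch of that container-view technique. The names `image` and `cropRect` (the sub-rect of the image to show, in points) are illustrative assumptions, not part of the original answer:

    // Container does the clipping; the image view is just shifted inside it.
    UIView *container = [[UIView alloc] initWithFrame:CGRectMake(0, 0,
                                                                 cropRect.size.width,
                                                                 cropRect.size.height)];
    container.clipsToBounds = YES;

    UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
    // Negative origin slides the desired sub-rect into the container's bounds.
    imageView.frame = CGRectMake(-cropRect.origin.x, -cropRect.origin.y,
                                 image.size.width, image.size.height);
    [container addSubview:imageView];

    // The visible region can later be transformed freely in 2D, e.g.:
    imageView.transform = CGAffineTransformMakeRotation(M_PI_4);

No drawRect: override is involved, so all compositing stays on the GPU.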

If you need a 3D transformation, you can still use a CALayer with the masksToBounds property, but using CALayer directly will usually gain you little extra performance.

Anyway, you need to know all of the low-level details to use these properly for optimization.

UIView is just a thin layer on top of CALayer, which is implemented on top of OpenGL, which is a virtually direct interface to the GPU. This means UIKit is GPU-accelerated.

So if you use them properly (I mean, within the designed limitations), they will perform as well as a plain OpenGL implementation. If you display just a few images, you'll get acceptable performance with a UIView implementation, because it gets full acceleration from the underlying OpenGL (which means GPU acceleration).

If you need extreme optimization, for hundreds of animated sprites with finely tuned pixel shaders as in a game app, you should use OpenGL directly, because CALayer lacks many options for optimization at the lower levels. But at least for optimizing UI stuff, it's incredibly hard to beat Apple.

What you should know about is GPU acceleration. On all recent computers, fast graphics performance is achieved only with the GPU. The point, then, is whether the method you're using is implemented on top of the GPU or not.

IMO, the CGImage drawing methods are not implemented with the GPU. I think I read a mention of this in Apple's documentation, but I can't remember where, so I'm not sure about it. Anyway, I believe CGImage is implemented on the CPU, because:


  1. Its API looks like it was designed for the CPU, such as the bitmap-editing interface and text drawing. They don't fit a GPU interface very well.
  2. The bitmap context interface allows direct memory access. That means its backing storage is located in CPU memory. This may differ somewhat on unified memory architectures (and also with the Metal API), but in any case the initial design intention of CGImage would have been for the CPU.
  3. Many other, more recently released Apple APIs mention GPU acceleration explicitly. That implies their older APIs were not accelerated; without a special mention, work is usually done on the CPU by default.

So it seems to be done on the CPU. Graphics operations done on the CPU are a lot slower than on the GPU.

Simply clipping an image and compositing image layers are very simple and cheap operations for the GPU (compared to the CPU), so you can expect the UIKit library to take advantage of this, since the whole of UIKit is implemented on top of OpenGL.

  • Here's another thread about whether the CoreGraphics on iOS is using OpenGL or not: iOS: is Core Graphics implemented on top of OpenGL?

Because optimization is a kind of micro-management work, specific numbers and small facts are very important. What is "medium size"? OpenGL on iOS usually limits the maximum texture size to 1024x1024 pixels (maybe larger in recent releases). If your image is larger than this, it will not work, or performance will degrade greatly (I think UIImageView is optimized for images within the limits).
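Rather than assuming 1024x1024, the actual limit can be queried from the device itself. A sketch, assuming an OpenGL ES context is already current on the calling thread (GL_MAX_TEXTURE_SIZE is a standard query):

    #import <OpenGLES/ES2/gl.h>

    // Ask the GL driver for this device's real texture-size limit.
    GLint maxTextureSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
    NSLog(@"Max texture size: %d px", maxTextureSize);

Images whose dimensions exceed this value cannot be uploaded as a single texture, which is exactly the case where the tiling approach below becomes necessary.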

If you need to display huge images with clipping, you have to use a different optimization, such as CATiledLayer, and that's a totally different story.
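For reference, the CATiledLayer approach looks roughly like this. A minimal sketch; the class name and `image` property are illustrative, and a real implementation would draw only the tile passed in `rect` from a pre-tiled source rather than the whole image:

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    @interface TiledImageView : UIView
    @property (nonatomic, strong) UIImage *image;
    @end

    @implementation TiledImageView

    + (Class)layerClass {
        // Back this view with a tiled layer instead of a plain CALayer.
        return [CATiledLayer class];
    }

    - (void)drawRect:(CGRect)rect {
        // CATiledLayer invokes this once per visible tile, on background
        // threads, so only on-screen tiles of a huge image get rendered.
        // Core Graphics clips the drawing to the current tile's rect.
        [self.image drawInRect:self.bounds];
    }

    @end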

And don't go to OpenGL unless you want to learn every detail of OpenGL. It requires a full understanding of low-level graphics and at least 100 times more code.

Though it is not very likely to happen, CGImage stuff (or anything else) doesn't have to stay CPU-only forever. Don't forget to check the underlying technology of the API you're using. Still, GPU stuff is a very different beast from the CPU, so API authors usually mention it explicitly and clearly.
