Why does this code decompress a UIImage so much better than the naive approach?


Problem description

In my app I need to load large JPEG images and display them in a scroll view. In order to keep the UI responsive, I decided to load the images in the background, then display them on the main thread. In order to fully load them in the background, I force each image to be decompressed. I was using this code to decompress an image (note that my app is iOS 7 only, so I understand that using these methods on a background thread is OK):

+ (UIImage *)decompressedImageFromImage:(UIImage *)image {
    // Drawing into an offscreen image context forces the image data to be decoded.
    // A scale of 0 means the context uses the main screen's scale factor.
    UIGraphicsBeginImageContextWithOptions(image.size, YES, 0);
    [image drawAtPoint:CGPointZero];
    UIImage *decompressedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return decompressedImage;
}
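
For illustration, here is a minimal usage sketch of the pattern described above, decompressing on a background queue and then displaying on the main thread (the ImageLoader class name and the imageView variable are hypothetical, not part of the original question):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Force the expensive decode off the main thread.
    UIImage *decompressed = [ImageLoader decompressedImageFromImage:image];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Hand the ready-to-draw bitmap back to the main thread for display.
        imageView.image = decompressed;
    });
});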

However, I still had long load times, UI stutter, and a lot of memory pressure. I just found another solution:

+ (UIImage *)decodedImageWithImage:(UIImage *)image {
    CGImageRef imageRef = image.CGImage;
    // The system only supports RGB; set it explicitly to avoid a context
    // creation error if the downloaded image is not in a supported format.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 CGImageGetWidth(imageRef),
                                                 CGImageGetHeight(imageRef),
                                                 8,
                                                 // width * 4 bytes per row is enough because the context is ARGB
                                                 // (4 bytes per pixel); don't read bytes-per-row from the image.
                                                 CGImageGetWidth(imageRef) * 4,
                                                 colorSpace,
                                                 // kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little
                                                 // means the system doesn't need an extra conversion when the image is displayed.
                                                 kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
    CGColorSpaceRelease(colorSpace);

    if (!context) {
        return nil;
    }
    CGRect rect = (CGRect){CGPointZero, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef)};
    CGContextDrawImage(context, rect, imageRef);
    CGImageRef decompressedImageRef = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    UIImage *decompressedImage = [[UIImage alloc] initWithCGImage:decompressedImageRef];
    CGImageRelease(decompressedImageRef);
    return decompressedImage;
}

This code is orders of magnitude better. The image loads almost immediately, there is no UI stutter, and the memory usage has gone way down.

So my question is two-fold:


  1. Why is the second method so much better than the first?

  2. If the second method is only better because of parameters unique to this device, is there a way to make sure it stays just as effective for all iOS devices, now and in the future? I don't want the native bitmap format to change out from under me and reintroduce this problem.


Answer

I assume that you're running this on a Retina device. In UIGraphicsBeginImageContextWithOptions, you asked for the default scale, which is the scale of the main screen, which is 2. This means that it's generating a bitmap 4x as large. In the second function, you're drawing at 1x scale.
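
(As an illustrative figure, not from the original answer: a 1,000×1,000-point image drawn into a scale-2 context produces a 2,000×2,000-pixel bitmap, i.e. four times as many pixels, and at 4 bytes per pixel roughly 16 MB instead of 4 MB.)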

Try passing a scale of 1 to UIGraphicsBeginImageContextWithOptions and see if your performance is similar.
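
For reference, a minimal sketch of that suggested change, applied to the first method above with an explicit scale of 1 (an assumption about the intended fix, not code from the original answer):

+ (UIImage *)decompressedImageFromImage:(UIImage *)image {
    // An explicit scale of 1 keeps the bitmap at the image's pixel size
    // instead of rendering at the main screen's 2x scale on Retina devices.
    UIGraphicsBeginImageContextWithOptions(image.size, YES, 1);
    [image drawAtPoint:CGPointZero];
    UIImage *decompressedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return decompressedImage;
}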
