Non-lazy image loading in iOS
Question
I'm trying to load UIImages in a background thread and then display them on the iPad. However, there's a stutter when I set the imageViews' view property to the image. I soon figured out that image loading is lazy on iOS, and found a partial solution in this question:
Lazy loading of CGImage/UIImage on the UI thread causes stutter
This actually forces the image to be loaded in the thread, but there's still a stutter when displaying the image.
You can find my sample project here: http://www.jasamer.com/files/SwapTest.zip (edit: fixed version), check the SwapTestViewController. Try dragging the picture to see the stutter.
The test-code I created that stutters is this (the forceLoad method is the one taken from the stack overflow question I posted above):
NSArray* imagePaths = [NSArray arrayWithObjects:
    [[NSBundle mainBundle] pathForResource: @"a.png" ofType: nil],
    [[NSBundle mainBundle] pathForResource: @"b.png" ofType: nil], nil];

NSOperationQueue* queue = [[NSOperationQueue alloc] init];
[queue addOperationWithBlock: ^(void) {
    int imageIndex = 0;
    while (true) {
        UIImage* image = [[UIImage alloc] initWithContentsOfFile: [imagePaths objectAtIndex: imageIndex]];
        imageIndex = (imageIndex+1)%2;
        [image forceLoad];
        //What's missing here?
        [self performSelectorOnMainThread: @selector(setImage:) withObject: image waitUntilDone: YES];
        [image release];
    }
}];
There are two reasons why I know the stuttering can be avoided:
(1) Apple is able to load images without stuttering in the Photos app
(2) This code does not cause stutter after placeholder1 and placeholder2 have been displayed once in this modified version of the above code:
UIImage* placeholder1 = [[UIImage alloc] initWithContentsOfFile: [[NSBundle mainBundle] pathForResource: @"a.png" ofType: nil]];
[placeholder1 forceLoad];

UIImage* placeholder2 = [[UIImage alloc] initWithContentsOfFile: [[NSBundle mainBundle] pathForResource: @"b.png" ofType: nil]];
[placeholder2 forceLoad];

NSArray* imagePaths = [NSArray arrayWithObjects:
    [[NSBundle mainBundle] pathForResource: @"a.png" ofType: nil],
    [[NSBundle mainBundle] pathForResource: @"b.png" ofType: nil], nil];

NSOperationQueue* queue = [[NSOperationQueue alloc] init];
[queue addOperationWithBlock: ^(void) {
    int imageIndex = 0;
    while (true) {
        //The image is not actually used here - just to prove that the background thread isn't causing the stutter
        UIImage* image = [[UIImage alloc] initWithContentsOfFile: [imagePaths objectAtIndex: imageIndex]];
        imageIndex = (imageIndex+1)%2;
        [image forceLoad];
        if (self.imageView.image==placeholder1) {
            [self performSelectorOnMainThread: @selector(setImage:) withObject: placeholder2 waitUntilDone: YES];
        } else {
            [self performSelectorOnMainThread: @selector(setImage:) withObject: placeholder1 waitUntilDone: YES];
        }
        [image release];
    }
}];
However, I can't keep all my images in memory.
This implies that forceLoad doesn't do the complete job - there's something else going on before the images are actually displayed. Does anyone know what that is, and how I can put that into the background thread?
Thanks, Julian
Update
I used a few of Tommy's tips. What I figured out is that it's CGSConvertBGRA8888toRGBA8888 that's taking so much time, so it seems it's a color conversion that's causing the lag. Here's the (inverted) call stack of that method.
Running Symbol Name
6609.0ms CGSConvertBGRA8888toRGBA8888
6609.0ms ripl_Mark
6609.0ms ripl_BltImage
6609.0ms RIPLayerBltImage
6609.0ms ripc_RenderImage
6609.0ms ripc_DrawImage
6609.0ms CGContextDelegateDrawImage
6609.0ms CGContextDrawImage
6609.0ms CA::Render::create_image_by_rendering(CGImage*, CGColorSpace*, bool)
6609.0ms CA::Render::create_image(CGImage*, CGColorSpace*, bool)
6609.0ms CA::Render::copy_image(CGImage*, CGColorSpace*, bool)
6609.0ms CA::Render::prepare_image(CGImage*, CGColorSpace*, bool)
6609.0ms CALayerPrepareCommit_(CALayer*, CA::Transaction*)
6609.0ms CALayerPrepareCommit_(CALayer*, CA::Transaction*)
6609.0ms CALayerPrepareCommit_(CALayer*, CA::Transaction*)
6609.0ms CALayerPrepareCommit_(CALayer*, CA::Transaction*)
6609.0ms CALayerPrepareCommit
6609.0ms CA::Context::commit_transaction(CA::Transaction*)
6609.0ms CA::Transaction::commit()
6609.0ms CA::Transaction::observer_callback(__CFRunLoopObserver*, unsigned long, void*)
6609.0ms __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__
6609.0ms __CFRunLoopDoObservers
6609.0ms __CFRunLoopRun
6609.0ms CFRunLoopRunSpecific
6609.0ms CFRunLoopRunInMode
6609.0ms GSEventRunModal
6609.0ms GSEventRun
6609.0ms -[UIApplication _run]
6609.0ms UIApplicationMain
6609.0ms main
The last bit-mask changes he proposed didn't change anything, sadly.
Answer
UIKit may be used on the main thread only. Your code is therefore technically invalid, since you use UIImage from a thread other than the main thread. You should use CoreGraphics alone to load (and non-lazily decode) graphics on a background thread, post the CGImageRef to the main thread and turn it into a UIImage there. It may appear to work (albeit with the stutter you don't want) in your current implementation, but it isn't guaranteed to. There seems to be a lot of superstition and bad practice advocated around this area, so it's not surprising you've managed to find some bad advice...
Recommended to run on a background thread:
// get a data provider referencing the relevant file
CGDataProviderRef dataProvider = CGDataProviderCreateWithFilename(filename);

// use the data provider to get a CGImage; release the data provider
CGImageRef image = CGImageCreateWithPNGDataProvider(dataProvider, NULL, NO,
                                                    kCGRenderingIntentDefault);
CGDataProviderRelease(dataProvider);

// make a bitmap context of a suitable size to draw to, forcing decode
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);

unsigned char *imageBuffer = (unsigned char *)malloc(width*height*4);

CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();

CGContextRef imageContext =
    CGBitmapContextCreate(imageBuffer, width, height, 8, width*4, colourSpace,
                          kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);

CGColorSpaceRelease(colourSpace);

// draw the image to the context, release it
CGContextDrawImage(imageContext, CGRectMake(0, 0, width, height), image);
CGImageRelease(image);

// now get an image ref from the context
CGImageRef outputImage = CGBitmapContextCreateImage(imageContext);

// post that off to the main thread, where you might do something like
// [UIImage imageWithCGImage:outputImage]
[self performSelectorOnMainThread:@selector(haveThisImage:)
    withObject:[NSValue valueWithPointer:outputImage] waitUntilDone:YES];

// clean up
CGImageRelease(outputImage);
CGContextRelease(imageContext);
free(imageBuffer);
There's no need to do the malloc/free if you're on iOS 4 or later, you can just pass NULL as the relevant parameter of CGBitmapContextCreate, and let CoreGraphics sort out its own storage.
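As a sketch of that simplification (assuming the same `width`, `height`, and `image` variables as in the block above; untested, and note that passing NULL for the data pointer is only supported on iOS 4+):

```objc
// Same pipeline, but CoreGraphics allocates and owns the bitmap storage.
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext =
    CGBitmapContextCreate(NULL, width, height, 8, width*4, colourSpace,
                          kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
CGColorSpaceRelease(colourSpace);

// Drawing still forces the full decode into the context's backing store.
CGContextDrawImage(imageContext, CGRectMake(0, 0, width, height), image);
CGImageRelease(image);

CGImageRef outputImage = CGBitmapContextCreateImage(imageContext);
CGContextRelease(imageContext);
// ...post outputImage to the main thread as before; no free() is needed.
```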
This differs from the solution you posted because it:
- creates a CGImage from a PNG data source — lazy loading applies, so this isn't necessarily a fully loaded and decompressed image
- creates a bitmap context of the same size as the PNG
- draws the CGImage from the PNG data source onto the bitmap context — this should force full loading and decompression since the actual colour values have to be put somewhere we could access them from a C array. This step is as far as the forceLoad you link to goes.
- converts the bitmap context into an image
- posts that image off to the main thread, presumably to become a UIImage
So there's no continuity of object between the thing loaded and the thing displayed; pixel data goes through a C array (so, no opportunity for hidden shenanigans) and only if it was put into the array correctly is it possible to make the final image.
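On the main thread, the receiving method might look something like this (`haveThisImage:` is the name assumed by the answer's `performSelectorOnMainThread:` call; `self.imageView` is an assumed outlet):

```objc
- (void)haveThisImage:(NSValue *)value {
    // Unwrap the CGImageRef the background thread posted over.
    CGImageRef cgImage = (CGImageRef)[value pointerValue];

    // Wrap it in a UIImage here, on the main thread, where UIKit is safe to use.
    self.imageView.image = [UIImage imageWithCGImage:cgImage];

    // No CGImageRelease here: the background thread still owns its reference,
    // and waitUntilDone:YES keeps it alive for the duration of this call.
}
```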