How do I release a CGImageRef in iOS


Question



I am writing this method to calculate the average R,G,B values of an image. The following method takes a UIImage as an input and returns an array containing the R,G,B values of the input image. I have one question though: How/Where do I properly release the CGImageRef?

-(NSArray *)getAverageRGBValuesFromImage:(UIImage *)image
{
    CGImageRef rawImageRef = [image CGImage];

    //This function returns the raw pixel values
    const UInt8 *rawPixelData = CFDataGetBytePtr(CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef)));

    NSUInteger imageHeight = CGImageGetHeight(rawImageRef);
    NSUInteger imageWidth = CGImageGetWidth(rawImageRef);

    //Here I sort the R,G,B, values and get the average over the whole image
    int i = 0;
    unsigned int red = 0;
    unsigned int green = 0;
    unsigned int blue = 0;

    for (int column = 0; column < imageWidth; column++)
    {
        int r_temp = 0;
        int g_temp = 0;
        int b_temp = 0;

        for (int row = 0; row < imageHeight; row++) {
            i = (row * imageWidth + column)*4;
            r_temp += (unsigned int)rawPixelData[i];
            g_temp += (unsigned int)rawPixelData[i+1];
            b_temp += (unsigned int)rawPixelData[i+2];

        }

        red += r_temp;
        green += g_temp;
        blue += b_temp;

    }

    NSNumber *averageRed = [NSNumber numberWithFloat:(1.0*red)/(imageHeight*imageWidth)];
    NSNumber *averageGreen = [NSNumber numberWithFloat:(1.0*green)/(imageHeight*imageWidth)];
    NSNumber *averageBlue = [NSNumber numberWithFloat:(1.0*blue)/(imageHeight*imageWidth)];


    //Then I store the result in an array
    NSArray *result = [NSArray arrayWithObjects:averageRed,averageGreen,averageBlue, nil];


    return result;
}

I tried two things.

Option 1: I leave it as it is, but after a few cycles (5+) the program crashes and I get a "low memory warning" error.

Option 2: I add one line, CGImageRelease(rawImageRef), before the method returns. Now it crashes after the second cycle, and I get an EXC_BAD_ACCESS error for the UIImage that I pass to the method. When I Analyze (instead of Run) in Xcode, I get the following warning at that line: "Incorrect decrement of the reference count of an object that is not owned at this point by the caller".

Where and how should I release the CGImageRef?

Thanks!

Solution

Your memory issue results from the copied data, as others have stated. But here's another idea: Use Core Graphics's optimized pixel interpolation to calculate the average.

  1. Create a 1x1 bitmap context.
  2. Set the interpolation quality to medium (see later).
  2. Set the interpolation quality to medium (see below).
  3. Draw your image scaled down to exactly this one pixel.
  4. Read the RGB value from the context's buffer.
  5. (Release the context, of course.)

This might result in better performance because Core Graphics is highly optimized and might even use the GPU for the downscaling.

Testing showed that medium quality seems to interpolate pixels by taking the average of color values. That's what we want here.

Worth a try, at least.

Edit: OK, this idea was too interesting not to try. So here's an example project showing the difference. The measurements below were taken with the included 512x512 test image, but you can change the image if you want.

It takes about 12.2 ms to calculate the average by iterating over all pixels in the image data. The draw-to-one-pixel approach takes 3 ms, so it's roughly 4 times faster. It seems to produce the same results when using kCGInterpolationQualityMedium.

I assume that the huge performance gain comes from Quartz noticing that it does not have to decompress the JPEG fully, but can use only the lower-frequency parts of the DCT. That's an interesting optimization strategy when compositing JPEG-compressed images at a scale below 0.5. But I'm only guessing here.

Interestingly, when using your method, 70% of the time is spent in CGDataProviderCopyData and only 30% in the pixel data traversal. This hints at a lot of time being spent in JPEG decompression.

Note: Here's a late follow-up on the example image above.
