How do I release a CGImageRef in iOS


Question


I am writing this method to calculate the average R,G,B values of an image. The following method takes a UIImage as an input and returns an array containing the R,G,B values of the input image. I have one question though: How/Where do I properly release the CGImageRef?

-(NSArray *)getAverageRGBValuesFromImage:(UIImage *)image
{
    CGImageRef rawImageRef = [image CGImage];

    //This function returns the raw pixel values
    const UInt8 *rawPixelData = CFDataGetBytePtr(CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef)));

    NSUInteger imageHeight = CGImageGetHeight(rawImageRef);
    NSUInteger imageWidth = CGImageGetWidth(rawImageRef);

    //Here I sort the R,G,B, values and get the average over the whole image
    int i = 0;
    unsigned int red = 0;
    unsigned int green = 0;
    unsigned int blue = 0;

    for (int column = 0; column< imageWidth; column++)
    {
        int r_temp = 0;
        int g_temp = 0;
        int b_temp = 0;

        for (int row = 0; row < imageHeight; row++) {
            i = (row * imageWidth + column)*4;
            r_temp += (unsigned int)rawPixelData[i];
            g_temp += (unsigned int)rawPixelData[i+1];
            b_temp += (unsigned int)rawPixelData[i+2];

        }

        red += r_temp;
        green += g_temp;
        blue += b_temp;

    }

    NSNumber *averageRed = [NSNumber numberWithFloat:(1.0*red)/(imageHeight*imageWidth)];
    NSNumber *averageGreen = [NSNumber numberWithFloat:(1.0*green)/(imageHeight*imageWidth)];
    NSNumber *averageBlue = [NSNumber numberWithFloat:(1.0*blue)/(imageHeight*imageWidth)];


    //Then I store the result in an array
    NSArray *result = [NSArray arrayWithObjects:averageRed,averageGreen,averageBlue, nil];


    return result;
}

I tried two things:

Option 1: I leave it as it is, but then after a few cycles (5+) the program crashes and I get the "low memory warning" error.

Option 2: I add one line, CGImageRelease(rawImageRef), before the method returns. Now it crashes after the second cycle with an EXC_BAD_ACCESS error on the UIImage that I pass to the method. When I Analyze (instead of Run) in Xcode, I get the following warning at that line: "Incorrect decrement of the reference count of an object that is not owned at this point by the caller".


Where and how should I release the CGImageRef?

Thanks!

Answer


Your memory issue results from the copied data, as others have stated. But here's another idea: Use Core Graphics's optimized pixel interpolation to calculate the average.
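Concretely: CGDataProviderCopyData follows Core Foundation's Create/Copy ownership rule, so the caller owns the returned CFDataRef and must release it, whereas the CGImageRef obtained from [image CGImage] is not owned and must not be released. A minimal sketch of the corrected pattern, reusing the variable names from the question:

```objc
CGImageRef rawImageRef = [image CGImage]; // not owned -- do NOT CGImageRelease this

// Keep a reference to the copied data so it can be released later.
// CGDataProviderCopyData has "Copy" in its name, so the caller owns the result.
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(rawImageRef));
const UInt8 *rawPixelData = CFDataGetBytePtr(pixelData);

// ... iterate over rawPixelData exactly as before ...

CFRelease(pixelData); // release the copy we own; this fixes the memory growth
```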



  1. Create a 1x1 bitmap context.
  2. Set the interpolation quality to medium (see later).
  3. Draw your image scaled down to exactly this one pixel.
  4. Read the RGB value from the context's buffer.
  5. (Release the context, of course.)
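The five steps above can be sketched like this. This is only a sketch, not the answerer's exact project code; the method name and the premultiplied-RGBA 8-bit context configuration are illustrative assumptions:

```objc
// Sketch of the draw-to-one-pixel averaging technique described above.
- (NSArray *)averageRGBByScaling:(UIImage *)image
{
    unsigned char pixel[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // 1. A 1x1 bitmap context backed by a 4-byte RGBA buffer.
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4,
        colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    // 2. Medium quality makes Quartz average the source pixels.
    CGContextSetInterpolationQuality(context, kCGInterpolationMedium);

    // 3. Draw the whole image scaled down into that single pixel.
    CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), [image CGImage]);

    // 4. The buffer now holds the averaged color.
    NSArray *result = [NSArray arrayWithObjects:
                          [NSNumber numberWithUnsignedChar:pixel[0]],
                          [NSNumber numberWithUnsignedChar:pixel[1]],
                          [NSNumber numberWithUnsignedChar:pixel[2]], nil];

    // 5. Release the context, of course.
    CGContextRelease(context);
    return result;
}
```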


This might result in better performance because Core Graphics is highly optimized and might even use the GPU for the downscaling.


Testing showed that medium quality seems to interpolate pixels by taking the average of color values. That's what we want here.

Might be worth a try, at least.

Edit: OK, this idea seemed too interesting not to try. So here's an example project showing the difference. The measurements below were taken with the contained 512x512 test image, but you can change the image if you want.


It takes about 12.2 ms to calculate the average by iterating over all pixels in the image data. The draw-to-one-pixel approach takes 3 ms, so it's roughly 4 times faster. It seems to produce the same results when using kCGInterpolationQualityMedium.


I assume that the huge performance gain is a result from Quartz noticing that it does not have to decompress the JPEG fully but that it can use the lower frequency parts of the DCT only. That's an interesting optimization strategy when composing JPEG compressed pixels with a scale below 0.5. But I'm only guessing here.


Interestingly, when using your method, 70% of the time is spent in CGDataProviderCopyData and only 30% in the pixel data traversal. This hints at a lot of time spent in JPEG decompression.

