Memory leak in CoreImage/CoreVideo

Problem description

I'm building an iOS app that does some basic detection. I get the raw frames from AVCaptureVideoDataOutput, convert the CMSampleBufferRef to a UIImage, resize the UIImage, then convert it to a CVPixelBufferRef. As far as I can tell with Instruments, the leak is in the last part, where I convert the CGImage to a CVPixelBufferRef.

This is the code I use:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection 
{
    videof = [[ASMotionDetect alloc] initWithSampleImage:[self resizeSampleBuffer:sampleBuffer]];
    // ASMotionDetect is my class for detection and I use videof to calculate the movement
}

-(UIImage*)resizeSampleBuffer:(CMSampleBufferRef) sampleBuffer {
    UIImage *img;
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    CVPixelBufferLockBaseAddress(imageBuffer,0);        // Lock the image buffer 

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);   // Get information of the image 
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 

    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    CGImageRef newImage = CGBitmapContextCreateImage(newContext); 
    CGContextRelease(newContext); 

    CGColorSpaceRelease(colorSpace); 
    CVPixelBufferUnlockBaseAddress(imageBuffer,0); 
    /* CVBufferRelease(imageBuffer); */  // do not call this!

    img = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    newContext = nil;
    img = [self resizeImageToSquare:img];
    return img;
}

-(UIImage*)resizeImageToSquare:(UIImage*)_temp {
    UIImage *img;
    int w = _temp.size.width;
    int h = _temp.size.height;
    CGRect rect;
    if (w>h) {
        rect = CGRectMake((w-h)/2,0,h,h);
    } else {
        rect = CGRectMake(0, (h-w)/2, w, w);
    }
    //
    img = [self crop:_temp inRect:rect];
    return img;
}

-(UIImage*) crop:(UIImage*)image inRect:(CGRect)rect{
    UIImage *sourceImage = image;
    CGRect selectionRect = rect;
    CGRect transformedRect = TransformCGRectForUIImageOrientation(selectionRect, sourceImage.imageOrientation, sourceImage.size);
    CGImageRef resultImageRef = CGImageCreateWithImageInRect(sourceImage.CGImage, transformedRect);
    UIImage *resultImage = [[UIImage alloc] initWithCGImage:resultImageRef scale:1.0 orientation:image.imageOrientation];
    CGImageRelease(resultImageRef);
    return resultImage;
}

And in my detection class I have:

- (id)initWithSampleImage:(UIImage*)sampleImage {
  if ((self = [super init])) {
    _frame = new CVMatOpaque();
    _histograms = new CVMatNDOpaque[kGridSize *
                                    kGridSize];
    [self extractFrameFromImage:sampleImage];
  }
  return self;
}

- (void)extractFrameFromImage:(UIImage*)sampleImage {
    CGImageRef imageRef = [sampleImage CGImage];
    CVImageBufferRef imageBuffer = [self pixelBufferFromCGImage:imageRef];
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
  // Collect some information required to extract the frame.
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);

  // Extract the frame, convert it to grayscale, and shove it in _frame.
    cv::Mat frame(height, width, CV_8UC4, baseAddress, bytesPerRow);
    cv::cvtColor(frame, frame, CV_BGR2GRAY);
    _frame->matrix = frame;
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGImageRelease(imageRef);
}

- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
    CVPixelBufferRef pxbuffer = NULL;
    int width = CGImageGetWidth(image)*2;
    int height = CGImageGetHeight(image)*2;

    NSMutableDictionary *attributes = [NSMutableDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, [NSNumber numberWithInt:width], kCVPixelBufferWidthKey, [NSNumber numberWithInt:height], kCVPixelBufferHeightKey, nil];
    CVPixelBufferPoolRef pixelBufferPool; 
    CVReturn theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (__bridge CFDictionaryRef) attributes, &pixelBufferPool);
    NSParameterAssert(theError == kCVReturnSuccess);
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, width,
                                                 height, 8, width*4, rgbColorSpace, 
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
/* here is the problem: */
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

With Instruments I found out that the problem is with the CVPixelBufferRef allocations, but I don't understand why - can someone see the problem?

Thanks

Answer

In -pixelBufferFromCGImage:, neither pxbuffer nor pixelBufferPool is released. That makes sense for pxbuffer, since it is the return value, but not for pixelBufferPool – you create and leak one on every call of the method.

A quick fix would be to:

  1. Release pixelBufferPool in -pixelBufferFromCGImage:
  2. Release pxbuffer (the return value of -pixelBufferFromCGImage:) in -extractFrameFromImage:

You should also rename -pixelBufferFromCGImage: to -createPixelBufferFromCGImage: to make clear that it returns a retained object.
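
Putting the three suggestions together, a minimal (untested) sketch of the two methods might look like the following. It keeps the question's drawing and OpenCV code as posted, adds the two releases, and renames the method; it also drops the original CGImageRelease(imageRef) call, since [sampleImage CGImage] does not transfer ownership:

- (CVPixelBufferRef)createPixelBufferFromCGImage:(CGImageRef)image
{
    CVPixelBufferRef pxbuffer = NULL;
    int width = (int)CGImageGetWidth(image) * 2;
    int height = (int)CGImageGetHeight(image) * 2;

    NSMutableDictionary *attributes = [NSMutableDictionary dictionaryWithObjectsAndKeys:
                                       [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey,
                                       [NSNumber numberWithInt:width], kCVPixelBufferWidthKey,
                                       [NSNumber numberWithInt:height], kCVPixelBufferHeightKey, nil];
    CVPixelBufferPoolRef pixelBufferPool;
    CVReturn theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (__bridge CFDictionaryRef)attributes, &pixelBufferPool);
    NSParameterAssert(theError == kCVReturnSuccess);
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, width*4,
                                                 rgbColorSpace, kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    // Fix 1: release the pool created above. Without this, one pool
    // (and everything it retains) leaked on every call.
    CVPixelBufferPoolRelease(pixelBufferPool);

    return pxbuffer;   // retained – the caller must release it
}

- (void)extractFrameFromImage:(UIImage*)sampleImage {
    CGImageRef imageRef = [sampleImage CGImage];
    CVImageBufferRef imageBuffer = [self createPixelBufferFromCGImage:imageRef];
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);

    // cvtColor reallocates `frame` here (4 channels in, 1 channel out),
    // so _frame->matrix no longer aliases the pixel buffer's memory and
    // the buffer can be released safely below.
    cv::Mat frame(height, width, CV_8UC4, baseAddress, bytesPerRow);
    cv::cvtColor(frame, frame, CV_BGR2GRAY);
    _frame->matrix = frame;
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Fix 2: release the buffer returned by -createPixelBufferFromCGImage:.
    CVPixelBufferRelease(imageBuffer);
    // Note: no CGImageRelease(imageRef) here. [sampleImage CGImage] does not
    // transfer ownership, so releasing it would be an over-release.
}

As a design note, a pool only pays off when its buffers are reused; since this code creates one buffer per call anyway, you could also keep the pool in an ivar and create it once, or drop the pool entirely and call CVPixelBufferCreate directly.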
