How to crop an image from AVCapture to a rect seen on the display


Question

This is driving me crazy because I can't get it to work. I have the following scenario:

I'm using an AVCaptureSession and an AVCaptureVideoPreviewLayer to create my own camera interface. The interface shows a rectangle. Beneath it is the AVCaptureVideoPreviewLayer, which fills the whole screen.

I want the captured image to be cropped in such a way that the resulting image shows exactly the content seen in the rect on the display.

My setup looks like this:

_session = [[AVCaptureSession alloc] init];
AVCaptureSession *session = _session;
session.sessionPreset = AVCaptureSessionPresetPhoto;

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (camera == nil) {
    [self showImagePicker];
    _isSetup = YES;
    return;
}
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

captureVideoPreviewLayer.frame = self.liveCapturePlaceholderView.bounds;
[self.liveCapturePlaceholderView.layer addSublayer:captureVideoPreviewLayer];

NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (error) {
    HGAlertViewWrapper *av = [[HGAlertViewWrapper alloc] initWithTitle:kFailedConnectingToCameraAlertViewTitle message:kFailedConnectingToCameraAlertViewMessage cancelButtonTitle:kFailedConnectingToCameraAlertViewCancelButtonTitle otherButtonTitles:@[kFailedConnectingToCameraAlertViewRetryButtonTitle]];
    [av showWithBlock:^(NSString *buttonTitle){
        if ([buttonTitle isEqualToString:kFailedConnectingToCameraAlertViewCancelButtonTitle]) {
            [self.delegate gloameCameraViewControllerDidCancel:self];
        }
        else {
            [self setupAVSession];
        }
    }];
}
[session addInput:input];

NSDictionary *options = @{ AVVideoCodecKey : AVVideoCodecJPEG };
_stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[_stillImageOutput setOutputSettings:options];

[session addOutput:_stillImageOutput];

[session startRunning];
_isSetup = YES;

I'm capturing the image like this:

[_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
 {
     if (error) {
         MWLogDebug(@"Error capturing image from camera. %@, %@", error, [error userInfo]);
         _capturePreviewLayer.connection.enabled = YES;
     }
     else
     {
         NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
         UIImage *image = [[UIImage alloc] initWithData:imageData];

         CGRect cropRect = [self createCropRectForImage:image];
         UIImage *croppedImage;// = [self cropImage:image toRect:cropRect];
         UIGraphicsBeginImageContext(cropRect.size);
         [image drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)];
         croppedImage = UIGraphicsGetImageFromCurrentImageContext();
         UIGraphicsEndImageContext();
         self.capturedImage = croppedImage;
         [_session stopRunning];             
     }
 }];
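
The videoConnection used above isn't shown in this snippet. Assuming a standard AVCaptureStillImageOutput setup, it would typically be obtained like this (a minimal sketch, not part of the original code):

// Grab the first connection on the still image output that carries video.
// Returns nil if the output hasn't been added to a session yet.
AVCaptureConnection *videoConnection = [_stillImageOutput connectionWithMediaType:AVMediaTypeVideo];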

In the createCropRectForImage: method I've tried various ways to calculate the rect to cut out of the image, but with no success so far.

- (CGRect)createCropRectForImage:(UIImage *)image
{
    CGPoint maskTopLeftCorner = CGPointMake(self.maskRectView.frame.origin.x, self.maskRectView.frame.origin.y);
    CGPoint maskBottomRightCorner = CGPointMake(self.maskRectView.frame.origin.x + self.maskRectView.frame.size.width, self.maskRectView.frame.origin.y + self.maskRectView.frame.size.height);

    CGPoint maskTopLeftCornerInLayerCoords = [_capturePreviewLayer convertPoint:maskTopLeftCorner fromLayer:self.maskRectView.layer.superlayer];
    CGPoint maskBottomRightCornerInLayerCoords = [_capturePreviewLayer convertPoint:maskBottomRightCorner fromLayer:self.maskRectView.layer.superlayer];
    CGPoint maskTopLeftCornerInDeviceCoords = [_capturePreviewLayer captureDevicePointOfInterestForPoint:maskTopLeftCornerInLayerCoords];
    CGPoint maskBottomRightCornerInDeviceCoords = [_capturePreviewLayer captureDevicePointOfInterestForPoint:maskBottomRightCornerInLayerCoords];

    float x = maskTopLeftCornerInDeviceCoords.x * image.size.width;
    float y = (1 - maskTopLeftCornerInDeviceCoords.y) * image.size.height;
    float width = fabsf(maskTopLeftCornerInDeviceCoords.x - maskBottomRightCornerInDeviceCoords.x) * image.size.width;
    float height = fabsf(maskTopLeftCornerInDeviceCoords.y - maskBottomRightCornerInDeviceCoords.y) * image.size.height;

    return CGRectMake(x, y, width, height);
}

That is my current version, but it doesn't even get the proportions right. Could someone please help me!

I have also tried using this method to crop my image:

- (UIImage*)cropImage:(UIImage*)originalImage toRect:(CGRect)rect{

    CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage], rect);

    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
    CGContextRef bitmap = CGBitmapContextCreate(NULL, rect.size.width, rect.size.height, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);

    if (originalImage.imageOrientation == UIImageOrientationLeft) {
        CGContextRotateCTM (bitmap, radians(90));
        CGContextTranslateCTM (bitmap, 0, -rect.size.height);

    } else if (originalImage.imageOrientation == UIImageOrientationRight) {
        CGContextRotateCTM (bitmap, radians(-90));
        CGContextTranslateCTM (bitmap, -rect.size.width, 0);

    } else if (originalImage.imageOrientation == UIImageOrientationUp) {
        // NOTHING
    } else if (originalImage.imageOrientation == UIImageOrientationDown) {
        CGContextTranslateCTM (bitmap, rect.size.width, rect.size.height);
        CGContextRotateCTM (bitmap, radians(-180.));
    }

    CGContextDrawImage(bitmap, CGRectMake(0, 0, rect.size.width, rect.size.height), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);

    UIImage *resultImage=[UIImage imageWithCGImage:ref];
    CGImageRelease(imageRef);
    CGContextRelease(bitmap);
    CGImageRelease(ref);

    return resultImage;
}
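
Note that the radians() helper used in the rotation branches above isn't defined in the snippet; a typical definition (an assumption, not shown in the original post) would be:

// Converts degrees to radians for CGContextRotateCTM.
static inline double radians(double degrees) { return degrees * M_PI / 180.0; }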

Does anybody have the 'right combination' of methods to make this work? :)

Answer

I've solved this problem by using the metadataOutputRectOfInterestForRect method.

It works with any orientation.

[_stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                               completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
 {
     if (error)
     {
         [_delegate cameraView:self error:@"Take picture failed"];
     }
     else
     {

         NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
         UIImage *takenImage = [UIImage imageWithData:jpegData];

         CGRect outputRect = [_previewLayer metadataOutputRectOfInterestForRect:_previewLayer.bounds];
         CGImageRef takenCGImage = takenImage.CGImage;
         size_t width = CGImageGetWidth(takenCGImage);
         size_t height = CGImageGetHeight(takenCGImage);
         CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);

         CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
         takenImage = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
         CGImageRelease(cropCGImage);

     }
 }
 ];

The takenImage is still an imageOrientation-dependent image. You can strip the orientation information for further image processing:

UIGraphicsBeginImageContext(takenImage.size);
[takenImage drawAtPoint:CGPointZero];
takenImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
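
If you want to crop to just the on-screen mask rect rather than the whole preview, the same approach applies: convert the mask's frame into the preview layer's coordinate space and pass that to metadataOutputRectOfInterestForRect:. A minimal sketch, assuming a maskRectView overlaid on the preview layer (the view name is taken from the question and may differ in your setup):

// Convert the mask view's frame into the preview layer's coordinate space.
CGRect maskRectInPreview = [_previewLayer convertRect:self.maskRectView.frame
                                            fromLayer:self.maskRectView.superview.layer];
// Normalized (0..1) rect of interest within the captured image.
CGRect outputRect = [_previewLayer metadataOutputRectOfInterestForRect:maskRectInPreview];
// Scale outputRect by the captured image's pixel dimensions, exactly as above,
// before calling CGImageCreateWithImageInRect.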
