Cropping a captured image exactly to how it looks in AVCaptureVideoPreviewLayer

Problem description

I have a photo app that is using AV Foundation. I have setup a preview layer using AVCaptureVideoPreviewLayer that takes up the top half of the screen. So when the user is trying to take their photo, all they can see is what the top half of the screen sees.

This works great, but when the user actually takes the photo and I try to set the photo as the layer's contents, the image is distorted. I did research and realized that I would need to crop the image.

All I want to do is crop the full captured image so that all that is left is exactly what the user could originally see in the top half of the screen.

I have been able to sort-of accomplish this but I am doing this by entering in manual CGRect values and it still does not look perfect. There has to be an easier way to do this.

I have literally gone through every post on stack overflow for the past 2 days about cropping images and nothing has worked.

There has to be a way to programmatically crop the captured image so that the final image will be exactly what was originally seen in the preview layer.

Here is my viewDidLoad implementation:

- (void)viewDidLoad
{
    [super viewDidLoad];

    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    [session setSessionPreset:AVCaptureSessionPresetPhoto];

    AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];

    if([session canAddInput:deviceInput])
        [session addInput:deviceInput];

    CALayer *rootLayer = [[self view] layer];
    [rootLayer setMasksToBounds:YES];

    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    [_previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height / 2)];
    [_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];

    [rootLayer insertSublayer:_previewLayer atIndex:0];

    _stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    [session addOutput:_stillImageOutput];

    [session startRunning];
}

And here is the code that runs when the user presses the button to capture a photo:

-(IBAction)stillImageCapture {
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in _stillImageOutput.connections){
        for (AVCaptureInputPort *port in [connection inputPorts]){
            if ([[port mediaType] isEqual:AVMediaTypeVideo]){
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }

    NSLog(@"about to request a capture from: %@", _stillImageOutput);

    [_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if(imageDataSampleBuffer) {
            NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];

            UIImage *image = [[UIImage alloc]initWithData:imageData];
            CALayer *subLayer = [CALayer layer];
            subLayer.frame = _previewLayer.frame;
            image = [self rotate:image andOrientation:image.imageOrientation];

            //Below is the crop that is sort of working for me, but as you can see I am manually entering in values and just guessing and it still does not look perfect.
            CGRect cropRect = CGRectMake(0, 650, 3000, 2000);
            CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);

            // The layer retains its contents, so our reference can be released afterwards.
            subLayer.contents = (__bridge id)imageRef;
            CGImageRelease(imageRef);

            [_previewLayer addSublayer:subLayer];
        }
    }];
}


Recommended answer

Take a look at AVCaptureVideoPreviewLayer's

- (CGRect)metadataOutputRectOfInterestForRect:(CGRect)layerRect

This method lets you easily convert the visible CGRect of your layer to the actual camera output.

One caveat: The physical camera is not mounted "top side up", but rather rotated 90 degrees clockwise. (So if you hold your iPhone - Home Button right, the camera is actually top side up).

Keeping this in mind, you have to convert the CGRect the above method gives you, to crop the image to exactly what is on screen.

Example:

CGRect visibleLayerFrame = THE ACTUAL VISIBLE AREA IN THE LAYER FRAME
CGRect metaRect = [(AVCaptureVideoPreviewLayer *)self.previewView.layer metadataOutputRectOfInterestForRect:visibleLayerFrame];

CGSize originalSize = [originalImage size];

if (UIInterfaceOrientationIsPortrait(_snapInterfaceOrientation)) {
    // For portrait images, swap the size of the image, because
    // here the output image is actually rotated relative to what you see on screen.

    CGFloat temp = originalSize.width;
    originalSize.width = originalSize.height;
    originalSize.height = temp;
}


// metaRect is fractional, that's why we multiply here

CGRect cropRect;

cropRect.origin.x = metaRect.origin.x * originalSize.width;
cropRect.origin.y = metaRect.origin.y * originalSize.height;
cropRect.size.width = metaRect.size.width * originalSize.width;
cropRect.size.height = metaRect.size.height * originalSize.height;

cropRect = CGRectIntegral(cropRect);
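
To actually produce the cropped image from here, a minimal sketch (assuming originalImage is the UIImage created straight from the captured JPEG data, not manually rotated) could be:

// Crop the underlying CGImage, which is in the sensor's landscape orientation,
// then re-wrap it with the original scale and orientation so it displays upright.
CGImageRef croppedCGImage = CGImageCreateWithImageInRect(originalImage.CGImage, cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedCGImage
                                            scale:originalImage.scale
                                      orientation:originalImage.imageOrientation];
CGImageRelease(croppedCGImage);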

This may be a bit confusing, but what made me really understand it is this:

Hold your device "Home Button right" -> You'll see the x - axis actually lies along the "height" of your iPhone, while the y - axis lies along the "width" of your iPhone. That's why for portrait images, you have to swap the size ;)
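
Putting the pieces together, here is a rough sketch of how this could slot into the question's capture completion handler. It assumes the _previewLayer, _stillImageOutput, and videoConnection from the question's code, and that a croppedImageView (a hypothetical UIImageView covering the preview area) exists for display. Because the CGImage is already in the sensor's landscape orientation, scaling the fractional rect by the CGImage's own pixel size avoids the width/height swap:

[_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    if (!imageDataSampleBuffer) return;

    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
    UIImage *image = [[UIImage alloc] initWithData:imageData];

    // Fractional rect (0..1) of the capture output that is visible in the preview layer.
    CGRect metaRect = [_previewLayer metadataOutputRectOfInterestForRect:_previewLayer.bounds];

    // Scale the fractional rect by the CGImage's pixel dimensions (landscape orientation).
    CGFloat pixelWidth = CGImageGetWidth(image.CGImage);
    CGFloat pixelHeight = CGImageGetHeight(image.CGImage);
    CGRect cropRect = CGRectIntegral(CGRectMake(metaRect.origin.x * pixelWidth,
                                                metaRect.origin.y * pixelHeight,
                                                metaRect.size.width * pixelWidth,
                                                metaRect.size.height * pixelHeight));

    CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef
                                                scale:image.scale
                                          orientation:image.imageOrientation];
    CGImageRelease(croppedRef);

    // CALayer contents ignore UIImage orientation metadata, so show the result in a
    // UIImageView (or redraw it into a new context first) to keep it upright.
    dispatch_async(dispatch_get_main_queue(), ^{
        self.croppedImageView.image = croppedImage; // hypothetical UIImageView covering the preview area
    });
}];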
