Crop image to a square according to the size of a UIView/CGRect


Question


I have an implementation of AVCaptureSession and my goal is for the user to take a photo and only save the part of the image within the red square border, as shown below:

AVCaptureSession's previewLayer (the camera) spans from (0,0) (top left) to the bottom of my camera controls bar (the bar just above the view that contains the shutter). My navigation bar and controls bar are semi-transparent, so the camera can show through.

I'm using [captureSession setSessionPreset:AVCaptureSessionPresetPhoto]; to ensure that the original image being saved to the camera roll is like Apple's camera.

The user will be able to take the photo in portrait, landscape left and right, so the cropping method must take this into account.

So far, I've tried to crop the original image using this code:

DDLogVerbose(@"%@: Image crop rect: (%f, %f, %f, %f)", THIS_FILE, self.imageCropRect.origin.x, self.imageCropRect.origin.y, self.imageCropRect.size.width, self.imageCropRect.size.height);

// Create new image context (retina safe)
UIGraphicsBeginImageContextWithOptions(CGSizeMake(self.imageCropRect.size.width, self.imageCropRect.size.width), NO, 0.0);

// Create rect for image
CGRect rect = self.imageCropRect;

// Draw the image into the rect
[self.captureManager.stillImage drawInRect:rect];

// Save the image, then end the image context
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

However, when I look at the cropped image in the camera roll, it seems that it has just squashed the original image, and not discarded the top and bottom parts of the image like I'd like. It also results in 53 pixels of white space at the top of the "cropped" image, likely because of the y position of my CGRect.
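The squashing follows from how -drawInRect: works: it scales the entire image to fill the rect you pass, so the full 2448×3264 photo gets squeezed into the 320×322 crop rect. One context-based fix (a sketch only; it assumes the crop rect is in the same point coordinate space as the preview, and previewSize is a stand-in for your preview layer's size, not a name from the question's code) is to size the context to the crop rect but draw the whole image at the preview's size, offset by the crop origin:

```objc
// Context the size of the desired crop (retina safe)
UIGraphicsBeginImageContextWithOptions(self.imageCropRect.size, NO, 0.0);

// Draw the whole image at the preview's size, shifted up/left by the
// crop origin, so only the cropped region lands inside the context.
CGSize previewSize = self.previewLayer.bounds.size; // assumed property name
[self.captureManager.stillImage drawInRect:CGRectMake(-self.imageCropRect.origin.x,
                                                      -self.imageCropRect.origin.y,
                                                      previewSize.width,
                                                      previewSize.height)];

UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```

Everything outside the context's bounds is clipped rather than scaled, which avoids both the squashing and the white strip at the top.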

This is my logging output for the CGRect:

 Image crop rect: (0.000000, 53.000000, 320.000000, 322.000000)

This also describes the frame of the red bordered view in the superview.

Is there something crucial I'm overlooking?

P.S. The original image size (taken with a camera in portrait mode) is:

Original image size: (2448.000000, 3264.000000)

Solution

You can crop images with CGImageCreateWithImageInRect:

CGImageRef imageRef = CGImageCreateWithImageInRect([uncroppedImage CGImage], bounds);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
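One caveat worth adding: CGImageCreateWithImageInRect works in the pixel coordinate space of the underlying CGImage, so the on-screen rect has to be scaled up by the ratio of image pixels to preview points before cropping. A rough sketch, assuming the photo's imageOrientation is up and a 320-point-wide preview (previewWidth and viewCropRect are illustrative names, not from the question's code):

```objc
// Scale factor from preview points to image pixels, e.g. 2448 / 320
CGFloat scale = uncroppedImage.size.width / previewWidth;

// Map the on-screen crop rect into the image's pixel space
CGRect pixelRect = CGRectMake(viewCropRect.origin.x * scale,
                              viewCropRect.origin.y * scale,
                              viewCropRect.size.width * scale,
                              viewCropRect.size.height * scale);

CGImageRef imageRef = CGImageCreateWithImageInRect([uncroppedImage CGImage], pixelRect);

// Preserve the original scale and orientation so the crop displays upright
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef
                                            scale:uncroppedImage.scale
                                      orientation:uncroppedImage.imageOrientation];
CGImageRelease(imageRef);
```

For photos whose orientation is not up (e.g. a typical portrait capture), the CGImage's axes are rotated relative to what the preview shows, so the rect's x/y and width/height must be swapped or transformed accordingly before cropping.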
