How do I create/render a UIImage from a 3D transformed UIImageView?


Question


After applying a 3d transform to an UIImageView.layer, I need to save the resulting "view" as a new UIImage... Seemed like a simple task at first :-) but no luck so far, and searching hasn't turned up any clues :-( so I'm hoping someone will be kind enough to point me in the right direction.


A very simple iPhone project is available here.

Thanks.

- (void)transformImage {
    float degrees = 12.0;
    float zDistance = 250;
    CATransform3D transform3D = CATransform3DIdentity;
    transform3D.m34 = 1.0 / zDistance; // the m34 cell of the matrix controls perspective, and zDistance affects the "sharpness" of the transform
    transform3D = CATransform3DRotate(transform3D, DEGREES_TO_RADIANS(degrees), 1, 0, 0); // perspective rotation around the x-axis
    imageView.layer.transform = transform3D;
}

/* FAIL : capturing layer contents doesn't get the transformed image -- just the original

CGImageRef newImageRef = (CGImageRef)imageView.layer.contents;
UIImage *image = [UIImage imageWithCGImage:newImageRef];

*/

/* FAIL : docs for renderInContext state that it does not render 3D transforms

UIGraphicsBeginImageContext(imageView.image.size);
[imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

*/
//
// header
//
#import <QuartzCore/QuartzCore.h>
#define DEGREES_TO_RADIANS(x) ((x) * M_PI / 180.0)
UIImageView *imageView;
@property (nonatomic, retain) IBOutlet UIImageView *imageView;

//
// code
//
@synthesize imageView;

- (void)transformImage {
    float degrees = 12.0;
    float zDistance = 250;
    CATransform3D transform3D = CATransform3DIdentity;
    transform3D.m34 = 1.0 / zDistance; // the m34 cell of the matrix controls perspective, and zDistance affects the "sharpness" of the transform
    transform3D = CATransform3DRotate(transform3D, DEGREES_TO_RADIANS(degrees), 1, 0, 0); // perspective rotation around the x-axis
    imageView.layer.transform = transform3D;
}

- (UIImage *)captureView:(UIImageView *)view {
    UIGraphicsBeginImageContext(view.frame.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

- (void)imageSavedToPhotosAlbum:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo {
    NSString *title = @"Save to Photo Album";
    NSString *message = (error ? [error description] : @"Success!");
    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:title message:message delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
    [alert show];
    [alert release];
}

- (IBAction)saveButtonClicked:(id)sender {
    UIImage *newImage = [self captureView:imageView];
    UIImageWriteToSavedPhotosAlbum(newImage, self, @selector(imageSavedToPhotosAlbum:didFinishSavingWithError:contextInfo:), nil);
}


Answer


I ended up writing a per-pixel render method that runs on the CPU, using the inverse of the view transform.


Basically, it renders the original UIImageView into a UIImage; then, for every pixel of the output image, the inverse of the transform matrix is applied to find which source pixel to sample.

RenderUIImageView.h

#import <UIKit/UIKit.h>
#import <QuartzCore/CATransform3D.h>
#import <QuartzCore/CALayer.h>

@interface RenderUIImageView : UIImageView

- (UIImage *)generateImage;

@end

RenderUIImageView.m

#import "RenderUIImageView.h"

@interface RenderUIImageView()

@property (assign) CATransform3D transform;
@property (assign) CGRect rect;

@property (assign) float denominatorx;
@property (assign) float denominatory;
@property (assign) float denominatorw;

@property (assign) float factor;

@end

@implementation RenderUIImageView


- (UIImage *)generateImage
{

    _transform = self.layer.transform;

    _denominatorx = _transform.m12 * _transform.m21 - _transform.m11  * _transform.m22 + _transform.m14 * _transform.m22 * _transform.m41 - _transform.m12 * _transform.m24 * _transform.m41 - _transform.m14 * _transform.m21 * _transform.m42 +
    _transform.m11 * _transform.m24 * _transform.m42;

    _denominatory = -_transform.m12 *_transform.m21 + _transform.m11 *_transform.m22 - _transform.m14 *_transform.m22 *_transform.m41 + _transform.m12 *_transform.m24 *_transform.m41 + _transform.m14 *_transform.m21 *_transform.m42 -
    _transform.m11* _transform.m24 *_transform.m42;

    _denominatorw = _transform.m12 *_transform.m21 - _transform.m11 *_transform.m22 + _transform.m14 *_transform.m22 *_transform.m41 - _transform.m12 *_transform.m24 *_transform.m41 - _transform.m14 *_transform.m21 *_transform.m42 +
    _transform.m11 *_transform.m24 *_transform.m42;

    _rect = self.bounds;

    if (UIGraphicsBeginImageContextWithOptions != NULL) {

        UIGraphicsBeginImageContextWithOptions(_rect.size, NO, 0.0);
    } else {
        UIGraphicsBeginImageContext(_rect.size);
    }

    if ([[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)] &&
        ([UIScreen mainScreen].scale == 2.0)) {
        _factor = 2.0f;
    } else {
        _factor = 1.0f;
    }


    UIImageView *img = [[UIImageView alloc] initWithFrame:_rect];
    img.image = self.image;

    [img.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *source = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [img release]; // balance the alloc under manual reference counting

    CGContextRef ctx;
    CGImageRef imageRef = [source CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    unsigned char *inputData = malloc(height * width * 4);
    unsigned char *outputData = malloc(height * width * 4);

    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;

    CGContextRef context = CGBitmapContextCreate(inputData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    context = CGBitmapContextCreate(outputData, width, height,
                                    bitsPerComponent, bytesPerRow, colorSpace,
                                    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace); // release once, after both contexts have been created


    for (int ii = 0 ; ii < width * height ; ++ii)
    {
        int x = ii % width;
        int y = ii / width;
        int indexOutput = 4 * x + 4 * width * y;

        CGPoint p = [self modelToScreen:(x*2/_factor - _rect.size.width)/2.0 :(y*2/_factor - _rect.size.height)/2.0];

        p.x *= _factor;
        p.y *= _factor;

        int indexInput = 4*(int)p.x + (4*width*(int)p.y);

        if (p.x >= width || p.x < 0 || p.y >= height || p.y < 0 || indexInput >= width * height * 4)
        {
            outputData[indexOutput]     = 0;
            outputData[indexOutput + 1] = 0;
            outputData[indexOutput + 2] = 0;
            outputData[indexOutput + 3] = 0;
        }
        else
        {
            outputData[indexOutput]     = inputData[indexInput];
            outputData[indexOutput + 1] = inputData[indexInput + 1];
            outputData[indexOutput + 2] = inputData[indexInput + 2];
            outputData[indexOutput + 3] = 255; // fully opaque
        }
    }

    ctx = CGBitmapContextCreate(outputData, width, height, bitsPerComponent,
                                bytesPerRow, CGImageGetColorSpace(imageRef),
                                kCGImageAlphaPremultipliedLast);

    CGImageRef outputRef = CGBitmapContextCreateImage(ctx);
    UIImage *rawImage = [UIImage imageWithCGImage:outputRef];
    CGImageRelease(outputRef); // CGBitmapContextCreateImage returns a +1 reference
    CGContextRelease(ctx);
    free(inputData);
    free(outputData);
    return rawImage;
}

- (CGPoint)modelToScreen:(float)x :(float)y
{
    float xp = (_transform.m22*_transform.m41 - _transform.m21*_transform.m42 - _transform.m22*x + _transform.m24*_transform.m42*x + _transform.m21*y - _transform.m24*_transform.m41*y) / _denominatorx;
    float yp = (-_transform.m11*_transform.m42 + _transform.m12*(_transform.m41 - x) + _transform.m14*_transform.m42*x + _transform.m11*y - _transform.m14*_transform.m41*y) / _denominatory;
    float wp = (_transform.m12*_transform.m21 - _transform.m11*_transform.m22 + _transform.m14*_transform.m22*x - _transform.m12*_transform.m24*x - _transform.m14*_transform.m21*y + _transform.m11*_transform.m24*y) / _denominatorw;

    CGPoint result = CGPointMake(xp / wp, yp / wp); // homogeneous divide
    return result;
}

@end
