UIImagePNGRepresentation and masked images


Problem description


  1. I created a masked image using a function from an iPhone blog:


UIImage *imgToSave = [self maskImage:[UIImage imageNamed:@"pic.jpg"] withMask:[UIImage imageNamed:@"sd-face-mask.png"]];

It looked good in a UIImageView:

UIImageView *imgView = [[UIImageView alloc] initWithImage:imgToSave];
imgView.center = CGPointMake(160.0f, 140.0f);
[self.view addSubview:imgView];

  2. UIImagePNGRepresentation to save it to disk:


    [UIImagePNGRepresentation(imgToSave) writeToFile:[self findUniqueSavePath] atomically:YES];


    UIImagePNGRepresentation returns NSData of an image that looks different.


    The output is the inverse of the image mask: the area that was cut out in the app is visible in the file, and the area that was visible in the app is removed. The visibility is flipped.


    My mask is designed to remove everything but the face area in the picture. The UIImage looks right in the app, but after I save it to disk the file looks like the opposite: the face is removed but everything else is there.


    Please let me know if you can help!
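
    For reference, the maskImage:withMask: helper widely circulated on iPhone blogs at the time looks roughly like the sketch below. The exact version the question used isn't shown, so treat this as an assumption about its shape:

    // A common form of the maskImage:withMask: helper (a sketch; the exact
    // blog version the question used isn't shown).
    - (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage
    {
        CGImageRef maskRef = maskImage.CGImage;

        // Build a CGImage mask from the grayscale mask image's pixel data.
        CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                            CGImageGetHeight(maskRef),
                                            CGImageGetBitsPerComponent(maskRef),
                                            CGImageGetBitsPerPixel(maskRef),
                                            CGImageGetBytesPerRow(maskRef),
                                            CGImageGetDataProvider(maskRef),
                                            NULL, false);

        // Apply the mask to the source image.
        CGImageRef masked = CGImageCreateWithMask(image.CGImage, mask);
        CGImageRelease(mask);

        UIImage *result = [UIImage imageWithCGImage:masked];
        CGImageRelease(masked);
        return result;
    }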

    Answer


    I had the exact same issue: when I saved the file it came out one way, but the image returned in memory was the exact opposite.


    The culprit and the solution was UIImagePNGRepresentation(). It fixes up the in-app image before it is saved to disk, so I just inserted that function as the last step of creating the masked image and returned the result.
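
    In isolation, the fix is a round trip through PNG data before returning the image. A minimal sketch, where maskedImage stands for whatever UIImage the masking code produced:

    // Round-trip through PNG encoding so the UIImage kept in memory matches
    // what UIImagePNGRepresentation() later writes to disk.
    // ('maskedImage' is a placeholder for the masked UIImage you built.)
    UIImage *fixedImage = [UIImage imageWithData:UIImagePNGRepresentation(maskedImage)];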


    This may not be the most elegant solution, but it works. I copied some code from my app and condensed it; I'm not sure the code below works exactly as is, but if not, it's close... maybe just some typos.

    Enjoy. :)

    // MyImageHelperObj.h
    
    @interface MyImageHelperObj : NSObject
    
    + (UIImage *) createGrayScaleImage:(UIImage*)originalImage;
    + (UIImage *) createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage;
    
    @end
    
    
    
    
    
    // MyImageHelperObj.m
    
    #import <QuartzCore/QuartzCore.h>
    #import "MyImageHelperObj.h"
    
    
    @implementation MyImageHelperObj
    
    
    + (UIImage *) createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage
    {
        // create image size rect
        CGRect newRect = CGRectZero;
        newRect.size = newSize;
    
        // draw source image
        UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0f);
        [sourceImage drawInRect:newRect];
        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    
        // draw the mask image into the same context (over the source) and
        // capture it at the same size and scale
        [maskImage drawInRect:newRect blendMode:kCGBlendModeNormal alpha:1.0f];
        maskImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    
        // create grayscale version of mask image to make the "image mask"
        UIImage *grayScaleMaskImage = [MyImageHelperObj createGrayScaleImage:maskImage];
        size_t width = CGImageGetWidth(grayScaleMaskImage.CGImage);
        size_t height = CGImageGetHeight(grayScaleMaskImage.CGImage);
        size_t bitsPerPixel = CGImageGetBitsPerPixel(grayScaleMaskImage.CGImage);
        size_t bytesPerRow = CGImageGetBytesPerRow(grayScaleMaskImage.CGImage);
        CGDataProviderRef providerRef = CGImageGetDataProvider(grayScaleMaskImage.CGImage);
        // 8 bits per component matches the grayscale bitmap created above
        CGImageRef imageMask = CGImageMaskCreate(width, height, 8, bitsPerPixel, bytesPerRow, providerRef, NULL, false);
    
        CGImageRef maskedImage = CGImageCreateWithMask(newImage.CGImage, imageMask);
        CGImageRelease(imageMask);
        newImage = [UIImage imageWithCGImage:maskedImage];
        CGImageRelease(maskedImage);
        // round-trip through PNG data so the in-memory image matches what
        // UIImagePNGRepresentation() writes to disk (the fix described above)
        return [UIImage imageWithData:UIImagePNGRepresentation(newImage)];
    }
    
    + (UIImage *) createGrayScaleImage:(UIImage*)originalImage
    {
        //create gray device colorspace.
        CGColorSpaceRef space = CGColorSpaceCreateDeviceGray();
        //create 8-bit bitmap context without alpha channel.
        CGContextRef bitmapContext = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height, 8, 0, space, kCGImageAlphaNone);
        CGColorSpaceRelease(space);
        //Draw image.
        CGRect bounds = CGRectMake(0.0, 0.0, originalImage.size.width, originalImage.size.height);
        CGContextDrawImage(bitmapContext, bounds, originalImage.CGImage);
        //Get image from bitmap context.
        CGImageRef grayScaleImage = CGBitmapContextCreateImage(bitmapContext);
        CGContextRelease(bitmapContext);
        //image is inverted. UIImage inverts orientation while converting CGImage to UIImage.
        UIImage* image = [UIImage imageWithCGImage:grayScaleImage];
        CGImageRelease(grayScaleImage);
        return image;
    }
    
    @end
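
    A minimal usage sketch, assuming the pic.jpg / sd-face-mask.png assets and the findUniqueSavePath helper from the question:

    // Build the masked image with the helper and save it; the in-memory image
    // and the saved PNG now match. Assumes this runs in the same view
    // controller that defines findUniqueSavePath and imports MyImageHelperObj.h.
    UIImage *source = [UIImage imageNamed:@"pic.jpg"];
    UIImage *mask = [UIImage imageNamed:@"sd-face-mask.png"];
    UIImage *imgToSave = [MyImageHelperObj createMaskedImageWithSize:source.size
                                                         sourceImage:source
                                                           maskImage:mask];
    [UIImagePNGRepresentation(imgToSave) writeToFile:[self findUniqueSavePath] atomically:YES];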
    

