Remove background from Image & take only Image part for save in iOS


Problem Description

This is what I need to achieve:

  • Take an image from the camera or gallery
  • Remove the background from the image & save it
  • The background could be anything, black or white
  • Also remove the shadow along with the background

Result Example:

Original Image

Result Image

This is what I have tried:

CGFloat colorMasking[6]={222,255,222,255,222,255};
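// The six values are {min, max} pairs per channel: any pixel whose R, G and B components all fall in [222, 255] (i.e. near-white) is masked out, which makes it transparent.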
CGImageRef imageRef = CGImageCreateWithMaskingColors([IMG CGImage], colorMasking);
UIImage  *resultThumbImage = [UIImage imageWithCGImage:imageRef scale:ThumbImage.scale orientation:IMG.imageOrientation];

It only works on a white background, and even then it is not very effective. I need to achieve exactly the result shown in the images above.

I have also referred some references:

iOS how to mask the image background color

How to remove the background of image in iphone app?

Changing the background color of a captured image from camera to white

Can someone help me to achieve this?

Any references or suggestions will be highly appreciated.

Thanks in advance.

Solution

Generally, as a rule of thumb, the more the background color differs from all the other colors, the easier it is to split the image into fore- and background. In such a case, as @Chris already suggested, a simple chroma key implementation can be used. Below is my quick implementation of the keying described on Wikipedia (it is written in C++, but translating it to Objective-C should be easy):

#include <cassert>
#include <cstdint>

#include <opencv2/opencv.hpp>

/**
 * @brief Separate foreground from background using simple chroma keying.
 *
 * @param imageBGR   Image with monochrome background
 * @param chromaBGR  Color of the background (using channel order BGR and range [0, 255])
 * @param tInner     Inner threshold; color distances below this value are counted as background
 * @param tOuter     Outer threshold; color distances above this value are counted as foreground
 *
 * @return  Mask (0 - background, 255 - foreground, values in between - partially fore- and background)
 *
 * Details can be found on [Wikipedia][1].
 *
 * [1]: https://en.wikipedia.org/wiki/Chroma_key#Programming
 */
cv::Mat1b chromaKey( const cv::Mat3b & imageBGR, cv::Scalar chromaBGR, double tInner, double tOuter )
{
    // Basic outline:
    //
    // 1. Convert the image to YCrCb.
    // 2. Measure the Euclidean distance of each pixel's color in YCrCb to the chroma value.
    // 3. Categorize pixels:
    //   * color distances below the inner threshold count as background; mask value = 0
    //   * color distances above the outer threshold count as foreground; mask value = 255
    //   * color distances between the inner and outer threshold are linearly interpolated; mask value in (0, 255)

    assert( tInner <= tOuter );

    // Convert to YCrCb.
    assert( ! imageBGR.empty() );
    cv::Size imageSize = imageBGR.size();
    cv::Mat3b imageYCrCb;
    cv::cvtColor( imageBGR, imageYCrCb, cv::COLOR_BGR2YCrCb );
    cv::Scalar chromaYCrCb = bgr2ycrcb( chromaBGR ); // Convert a single BGR value to YCrCb.

    // Build the mask.
    cv::Mat1b mask = cv::Mat1b::zeros( imageSize );
    const cv::Vec3d key( chromaYCrCb[ 0 ], chromaYCrCb[ 1 ], chromaYCrCb[ 2 ] );

    for ( int y = 0; y < imageSize.height; ++y )
    {
        for ( int x = 0; x < imageSize.width; ++x )
        {
            const cv::Vec3d color( imageYCrCb( y, x )[ 0 ], imageYCrCb( y, x )[ 1 ], imageYCrCb( y, x )[ 2 ] );
            double distance = cv::norm( key - color );

            if ( distance < tInner )
            {
                // Current pixel is fully part of the background.
                mask( y, x ) = 0;
            }
            else if ( distance > tOuter )
            {
                // Current pixel is fully part of the foreground.
                mask( y, x ) = 255;
            }
            else
            {
                // Current pixel is partially part of both fore- and background; interpolate linearly.
                // Here tInner <= distance <= tOuter, so the factor d1/d2 lies in [0, 1] and the scaled value stays within [0, 255].
                double d1 = distance - tInner;
                double d2 = tOuter   - tInner;
                uint8_t alpha = static_cast< uint8_t >( 255. * ( d1 / d2 ) );

                mask( y, x ) = alpha;
            }
        }
    }

    return mask;
}
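
The bgr2ycrcb() helper called from chromaKey() is not shown above; it is part of the linked Gist. A minimal sketch of such a helper, assuming it simply wraps the single color in a 1x1 image and reuses cv::cvtColor, could look like this (in a single file it would have to be defined before chromaKey()):

// Sketch of a bgr2ycrcb() helper (an assumption, not code from the answer):
// convert one BGR color in the range [0, 255] to YCrCb by placing it in a
// 1x1 8-bit image and converting that image.
cv::Scalar bgr2ycrcb( cv::Scalar bgr )
{
    cv::Mat3b pixelBGR( 1, 1, cv::Vec3b( cv::saturate_cast< uchar >( bgr[ 0 ] ),
                                         cv::saturate_cast< uchar >( bgr[ 1 ] ),
                                         cv::saturate_cast< uchar >( bgr[ 2 ] ) ) );
    cv::Mat3b pixelYCrCb;
    cv::cvtColor( pixelBGR, pixelYCrCb, cv::COLOR_BGR2YCrCb );
    const cv::Vec3b & converted = pixelYCrCb( 0, 0 );
    return cv::Scalar( converted[ 0 ], converted[ 1 ], converted[ 2 ] );
}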

A fully working code example can be found in this GitHub Gist.
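
Note that chromaKey() only produces the mask; the answer itself does not show how to turn that mask into the white-background result the question asks for. A hedged usage sketch (the file names, the chroma color and the thresholds below are placeholder assumptions, not values from the answer) could look like this:

// Hypothetical usage: key out a light grey background and blend the foreground
// over a plain white canvas, using the mask as a per-pixel alpha value.
cv::Mat3b image = cv::imread( "input.jpg", cv::IMREAD_COLOR );                // assumed input file
cv::Mat1b mask  = chromaKey( image, cv::Scalar( 200, 200, 200 ), 50., 90. );  // assumed chroma and thresholds

cv::Mat3b result( image.size() );
for ( int y = 0; y < image.rows; ++y )
{
    for ( int x = 0; x < image.cols; ++x )
    {
        double alpha = mask( y, x ) / 255.;          // 0 = background, 1 = foreground
        cv::Vec3b white( 255, 255, 255 );
        result( y, x ) = image( y, x ) * alpha + white * ( 1. - alpha );
    }
}

cv::imwrite( "result.png", result );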

Unfortunately, your example does not stick to that rule of thumb. Since the foreground and background only vary in intensity, it is difficult (or even impossible) to find a single global set of parameters for a good separation:

  1. Black line around the object but no holes inside the object (tInner=50, tOuter=90)

  2. No black line around the object but holes inside the object (tInner=100, tOuter=170)

So, if you cannot change the background of your images, a more complicated approach is required. A quick and simple example implementation is a bit out of scope here, but you may want to look into the related areas of image segmentation and alpha matting.
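
As one concrete, purely illustrative example of such a segmentation approach (GrabCut is just one option and is not part of the original answer), OpenCV's cv::grabCut can be initialised with a rectangle that roughly encloses the object:

// Hypothetical sketch: let GrabCut estimate the foreground from a rough
// bounding rectangle (the rectangle below is an assumption, not a known value).
cv::Mat3b image = cv::imread( "input.jpg", cv::IMREAD_COLOR );
cv::Mat1b mask( image.size(), uchar( cv::GC_BGD ) );        // overwritten when GC_INIT_WITH_RECT is used
cv::Rect roi( 10, 10, image.cols - 20, image.rows - 20 );
cv::Mat backgroundModel, foregroundModel;
cv::grabCut( image, mask, roi, backgroundModel, foregroundModel, 5, cv::GC_INIT_WITH_RECT );

// Keep only the pixels classified as (probable) foreground.
cv::Mat1b foreground = ( mask == cv::GC_FGD ) | ( mask == cv::GC_PR_FGD );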
