Need to make a cartoon comic version of a picture with Python and OpenCV


Problem description


I'm trying to make a function which will make any image look like a cartoony comic strip. Here is my code so far:

import numpy
import cv2

__author__ = "Michael Beyeler"
__license__ = "GNU GPL 3.0 or later"

class Cartoonizer:

    def __init__(self):
        self.numDownSamples = 1
        self.numBilateralFilters = 7

    def render(self, img_rgb):

        # downsample image using Gaussian pyramid
        img_color = img_rgb
        for _ in range(self.numDownSamples):
            img_color = cv2.pyrDown(img_color)
        # repeatedly apply small bilateral filter instead of applying
        # one large filter
        for _ in range(self.numBilateralFilters):
            img_color = cv2.bilateralFilter(img_color, 9, 9, 7)
        # upsample image to original size
        for _ in range(self.numDownSamples):
            img_color = cv2.pyrUp(img_color)
        # convert to grayscale and repeatedly apply bilateral blur,
        # feeding the result back in on each iteration
        img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)
        for _ in range(self.numBilateralFilters):
            img_gray = cv2.bilateralFilter(img_gray, 9, 9, 7)
        # detect and enhance edges
        img_edge = cv2.adaptiveThreshold(img_gray, 255,
                                         cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                         cv2.THRESH_BINARY, 9, 5)
        # convert back to color so that it can be bit-ANDed with color image
        img_edge = cv2.cvtColor(img_edge, cv2.COLOR_GRAY2RGB)
        #Ensure that img_color and img_edge are the same size, otherwise bitwise_and will not work
        height = min(img_color.shape[0], img_edge.shape[0])
        width = min(img_color.shape[1], img_edge.shape[1])
        img_color = img_color[0:height, 0:width]
        img_edge = img_edge[0:height, 0:width]
        return cv2.bitwise_and(img_color, img_edge)


I have taken it from here, license preserved, and slightly modified it: http://www.askaswiss.com/2016/01/how-to-create-cartoon-effect-opencv-python.html

Here is what I originally had:


Here is what my script outputs:

And here is what I need:


What I've noticed so far is:

  1. The blurred image my code produces has too many colors; I need a less smooth transition from light colors to dark ones.
  2. The target image has smooth edges (clean lines), whereas my code produces a lot of noise ("lonely" black dots) and broken lines. I've tried changing some parameters and adding a couple of random filters, but I really have no clue what to do next.

Any help is greatly appreciated.

Recommended answer


I don't have Python code, it's written in MATLAB (using DIPimage 3). But I think you might get some ideas from it. Here is what it does:


1- s is a slightly smoothed version of the input image img, and will be used for creating the lines. For the smoothing I use a trivial non-linear diffusion. This preserves (even enhances) edges. It is similar to the bilateral filter.


2- Using s, I first apply the Laplacian operator (this one uses Gaussian derivatives; the parameter 1.5 is the sigma for the Gaussian). This is similar to a difference of Gaussians. Your cv2.adaptiveThreshold call does the equivalent of gaussf(img,2)-img. My Laplacian does something similar to gaussf(img,2)-gaussf(img,1) (a difference of Gaussians). That is, there is somewhat less detail in this output than in the one from cv2.adaptiveThreshold.


3- The Laplacian was applied to a color image, so it yields a color output. I convert this to grey-value by taking the max color element. Then I clip and stretch this, essentially doing the other half of what cv2.adaptiveThreshold does, except the output is not binary, but still grey-value. That is, there are darker and lighter lines. More importantly, the lines don't look pixelated because there is a gradual change from dark to light at the edges of each line. I had to tweak these parameters a bit to get a good result. l is now an image that is 1 where there will be no lines, and lower (darker) where there will be lines.
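A possible numpy sketch of this max-then-clip-and-stretch step (the 0.4/4 bounds mirror the answer's MATLAB call and would need re-tuning if your Laplacian is scaled differently):

```python
import numpy as np

def lines_from_laplacian(lap, lo=0.4, hi=4.0):
    """Collapse a per-channel Laplacian response to one channel by
    taking the max over channels, then clip to [lo, hi] and map
    linearly to [1, 0]: 1 = no line, lower (darker) = line."""
    m = lap.max(axis=2) if lap.ndim == 3 else lap
    c = np.clip(m, lo, hi)
    # lo -> 1 (no line), hi -> 0 (strongest line), grey values between
    return 1.0 - (c - lo) / (hi - lo)
```

Because the output stays grey-valued, line edges fade gradually from dark to light instead of looking pixelated.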


4- Now I apply a path closing to l. This is a rather specialized morphological operator, you might have to do some effort to find an implementation. It removes dark lines in l that are very short. This basically gets rid of the problem you had with the dots. I'm sure there are other ways to solve the dot problem.


5- To put color in between the lines we want to both smooth and quantize the original image. I overwrite s with a more strongly smoothed version of img, to which I apply color quantization using an algorithm I described in another answer. This quantization leaves only 10 distinct colors. I apply a little bit of smoothing to avoid the too-sharp transition between colors.


6- Finally, the color image s and the lines image l are multiplied together. Where l was 1, nothing changes. Where l had lower values, s will become darker. This effectively draws the lines on the image. It's a nicer effect than the bitwise AND operator that you use.
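The final multiplication is straightforward in numpy, assuming l holds floats in [0, 1] and the color image is 8-bit:

```python
import numpy as np

def paint_lines(color_img, l):
    """Darken the color image by the line image: where l == 1 the
    color is unchanged, lower values of l darken it proportionally."""
    out = color_img.astype(np.float32) * l[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because l is grey-valued rather than binary, the drawn lines blend smoothly into the colors instead of being hard black masks.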

img = readim('https://i.stack.imgur.com/Zq1f4.jpg');
% Simplify using non-linear diffusion
s = colordiffusion(img,2);
% Find lines -- the positive response of the Laplace operator
l = laplace(s,1.5);
l = tensorfun('immax',l);
l = stretch(clip(l,0.4,4),0,100,1,0);
% Remove short lines
l = pathopening(l,8,'closing','constrained');
% Simplify color image using diffusion and k-means clustering
s = colordiffusion(gaussf(img),5);
s = quantize(s,10,'minvariance');
s = gaussf(s);
% Paint lines on simplified image
out = s * l;

% Color diffusion:
function out = colordiffusion(out,iterations)
   sigma = 0.8;
   K = 10;
   for ii = 1:iterations
      grey = colorspace(out,'grey');
      nabla_out = gradientvector(grey,sigma);
      D = exp(-(norm(nabla_out)/K)^2);
      out = out + divergence(D * nabla_out);
   end
end

