How to replace a contour (rectangle) in an image with a new image using Python?


Problem Description


I'm currently using OpenCV (cv2) and the Python Pillow image library to try to take an image of an arbitrary phone and replace the screen with a new image. I've gotten to the point where I can take an image, identify the phone's screen, and get the coordinates of all its corners, but I'm having a really hard time replacing that area in the image with a new image.

The code I have so far:

import cv2
from PIL import Image

image = cv2.imread('mockup.png')
edged_image = cv2.Canny(image, 30, 200)

(contours, _) = cv2.findContours(edged_image.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key = cv2.contourArea, reverse = True)[:10]
screenCnt = None

for contour in contours:
    peri = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.02 * peri, True)

    # if our approximated contour has four points, then
    # we can assume that we have found our screen
    if len(approx) == 4:
        screenCnt = approx
        break

cv2.drawContours(image, [screenCnt], -1, (0, 255, 0), 3)
cv2.imshow("Screen Location", image)
cv2.waitKey(0)
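One portability note (mine, not from the original post): depending on the OpenCV version, cv2.findContours returns either two values (2.4.x and 4.x) or three (3.x), so the two-value unpacking above can raise a ValueError. A version-agnostic variant:

# Take the last two return values so this works on OpenCV 2.4, 3.x, and 4.x alike.
contours, hierarchy = cv2.findContours(
    edged_image.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE
)[-2:]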

This will give me an image that looks like this:

I can also get the coordinates of the screen corners using this line of code:

screenCoords = [x[0].tolist() for x in screenCnt] 
# [[398, 139], [245, 258], [474, 487], [628, 358]]
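One thing worth noting (my addition, not from the original question): approxPolyDP returns the four corners in whatever order they occur along the contour, while a perspective transform pairs source and destination points by index. A small helper to sort them into a canonical top-left, top-right, bottom-right, bottom-left order, assuming the quad is roughly axis-aligned:

import numpy as np

def order_corners(pts):
    # Order 4 points as top-left, top-right, bottom-right, bottom-left.
    pts = np.array(pts, dtype=np.float32)
    s = pts.sum(axis=1)               # x + y
    d = np.diff(pts, axis=1).ravel()  # y - x
    return np.float32([pts[np.argmin(s)],   # top-left: smallest x + y
                       pts[np.argmin(d)],   # top-right: smallest y - x
                       pts[np.argmax(s)],   # bottom-right: largest x + y
                       pts[np.argmax(d)]])  # bottom-left: largest y - x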

However, I can't figure out for the life of me how to take a new image, scale it into the shape of the coordinate space I've found, and overlay the image on top.

My guess is that I can do this with an image transform in Pillow, using this function that I adapted from another Stack Overflow question:

import numpy

def find_transform_coefficients(pa, pb):
    """Return the 8 coefficients of a perspective transform from pa to pb.

    args:
        pa -> list of 4 (x, y) start coordinates
        pb -> list of 4 (x, y) end coordinates
    """
    matrix = []
    for p1, p2 in zip(pa, pb):
        matrix.append([p1[0], p1[1], 1, 0, 0, 0, -p2[0]*p1[0], -p2[0]*p1[1]])
        matrix.append([0, 0, 0, p1[0], p1[1], 1, -p2[1]*p1[0], -p2[1]*p1[1]])

    A = numpy.matrix(matrix, dtype=numpy.float64)  # numpy.float is deprecated; use float64
    B = numpy.array(pb).reshape(8)

    # Least-squares solution of A * coeffs = B
    res = numpy.dot(numpy.linalg.inv(A.T * A) * A.T, B)
    return numpy.array(res).reshape(8)
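For context, here is roughly how those coefficients would be consumed (a sketch based on Pillow's documented Image.transform API; the overlay file name is a placeholder, not from the original question):

from PIL import Image

overlay = Image.open('overlay.png')      # hypothetical overlay file
background = Image.open('mockup.png')

# pa = where the overlay's corners should land (the screen corners),
# pb = the overlay's own corner rectangle.
coeffs = find_transform_coefficients(
    screenCoords,
    [(0, 0), (overlay.width, 0), (overlay.width, overlay.height), (0, overlay.height)],
)
warped = overlay.transform(background.size, Image.PERSPECTIVE, coeffs, Image.BICUBIC)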

However, I'm in over my head a bit and can't get the details right. Could someone give me some help?

EDIT

OK, now that I'm using the getPerspectiveTransform and warpPerspective functions, I've got the following additional code:

screenCoords = numpy.asarray(
    [numpy.asarray(x[0], dtype=numpy.float32) for x in screenCnt],
    dtype=numpy.float32
)

overlay_image = cv2.imread('123.png')
overlay_height, overlay_width = image.shape[:2]
# (Assumed definition, missing from the original post: the mockup background's dimensions.)
background_height, background_width = image.shape[:2]

input_coordinates = numpy.asarray(
    [
        numpy.asarray([0, 0], dtype=numpy.float32),
        numpy.asarray([overlay_width, 0], dtype=numpy.float32),
        numpy.asarray([overlay_width, overlay_height], dtype=numpy.float32),
        numpy.asarray([0, overlay_height], dtype=numpy.float32)
    ],
    dtype=numpy.float32,
)

transformation_matrix = cv2.getPerspectiveTransform(
    numpy.asarray(input_coordinates),
    numpy.asarray(screenCoords),
)

warped_image = cv2.warpPerspective(
    overlay_image,
    transformation_matrix,
    (background_width, background_height),
)
cv2.imshow("Overlay image", warped_image)
cv2.waitKey(0)
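One detail to watch with this approach (my note, not part of the original post): getPerspectiveTransform pairs the i-th source point with the i-th destination point, so screenCoords must list the corners in the same order as input_coordinates (top-left, top-right, bottom-right, bottom-left here), for example by running them through a sorter like the order_corners helper sketched earlier:

# Hypothetical reuse of the order_corners helper from above.
screenCoords = order_corners(screenCoords)
transformation_matrix = cv2.getPerspectiveTransform(input_coordinates, screenCoords)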

The image is getting rotated and skewed properly (I think), but it's not the same size as the screen. It's "shorter":

And if I use a different image that is very tall vertically, I end up with something that is too "long":

Do I need to apply an additional transformation to scale the image? I'm not sure what's going on here; I thought the perspective transform would make the image automatically scale out to the provided coordinates.

Solution

I downloaded your image data and reproduced the problem on my local machine to work out a solution. I also downloaded Lenna.png to fit inside the phone screen.

import cv2
import numpy as np

# Template image of iPhone
img1 = cv2.imread("/Users/anmoluppal/Downloads/46F1U.jpg")
# Sample image to be used for fitting into white cavity
img2 = cv2.imread("/Users/anmoluppal/Downloads/Lenna.png")

rows, cols, ch = img1.shape

# Hard-coded the 3 corner points of the white cavity labelled with the green rect.
pts1 = np.float32([[201, 561], [455, 279], [742, 985]])
# Hard-coded the same points on the reference image to be fitted.
pts2 = np.float32([[0, 0], [512, 0], [0, 512]])

# Getting the affine transformation from the sample image to the template.
M = cv2.getAffineTransform(pts2, pts1)

# Applying the transformation; mind the (cols, rows) passed, these define the final dimensions of the output after the transformation.
dst = cv2.warpAffine(img2, M, (cols, rows))

# Just for debugging the output.
final = cv2.addWeighted(dst, 0.5, img1, 0.5, 1)
cv2.imwrite("./garbage.png", final)
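The addWeighted call above only blends the two images for debugging. To actually replace the screen contents rather than ghost the images together, one option (a sketch I'm adding under the assumption that the warped region covers exactly the screen quad) is to warp a solid white mask with the same transformation and composite through it:

# Warp a white image of the sample's size to get a mask of the covered region.
mask = cv2.warpAffine(np.full(img2.shape[:2], 255, dtype=np.uint8), M, (cols, rows))
mask3 = cv2.merge([mask, mask, mask])

# Take the warped sample where the mask is set, the template elsewhere.
final = np.where(mask3 > 0, dst, img1)
cv2.imwrite("./result.png", final)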
