OpenCV: Understanding warpPerspective / perspective transform
Question
I made a small example for myself to play around with OpenCV's warpPerspective, but the output is not entirely what I expected.
My input is a bar at a 45° angle. I want to transform it so that it's vertically aligned / at a 90° angle. No problem with that. What I don't understand, however, is that everything around the actual destination points is black. The reason I don't understand this is that only the transformation matrix gets passed to the warpPerspective function, not the destination points themselves. So my expected output would be a bar at a 90° angle with most of the area around it yellow instead of black. Where is my error in reasoning?
import cv2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# helper function
def showImage(img, title):
    plt.figure()
    plt.suptitle(title)
    plt.imshow(img)

# read and show test image
img = mpimg.imread('test_transform.jpg')
showImage(img, "input image")

# source points (corners of the bar, read off the input image)
top_left = [194, 430]
top_right = [521, 103]
bottom_right = [549, 131]
bottom_left = [222, 458]
pts = np.array([bottom_left, bottom_right, top_right, top_left])

# target points: the same bar, vertical and shifted up by y_off
y_off = 400  # y offset
top_left_dst = [top_left[0], top_left[1] - y_off]
top_right_dst = [top_left_dst[0] + 39.6, top_left_dst[1]]
bottom_right_dst = [top_right_dst[0], top_right_dst[1] + 462.4]
bottom_left_dst = [top_left_dst[0], bottom_right_dst[1]]
dst_pts = np.array([bottom_left_dst, bottom_right_dst, top_right_dst, top_left_dst])

# generate a preview to show where the warped bar would end up
preview = np.copy(img)
cv2.polylines(preview, np.int32([dst_pts]), True, (0, 0, 255), 5)
cv2.polylines(preview, np.int32([pts]), True, (255, 0, 255), 1)
showImage(preview, "preview")

# calculate the transformation matrix
pts = np.float32(pts)
dst_pts = np.float32(dst_pts)
M = cv2.getPerspectiveTransform(pts, dst_pts)

# warp the image and show the result
image_size = (img.shape[1], img.shape[0])
warped = cv2.warpPerspective(img, M, dsize=image_size, flags=cv2.INTER_LINEAR)
showImage(warped, "warped")
The result of running this code is:
Here's my input image, test_transform.jpg: And here is the same image with coordinates added:
By request, here is the transformation matrix:
[[ 6.05504680e-02 -6.05504680e-02 2.08289910e+02]
[ 8.25714275e+00 8.25714275e+00 -5.12245707e+03]
[ 2.16840434e-18 3.03576608e-18 1.00000000e+00]]
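As a quick sanity check (a minimal sketch, not part of the original post, assuming the variables from the code above): a homography maps a point (x, y) to ((m11*x + m12*y + m13)/w, (m21*x + m22*y + m23)/w) with w = m31*x + m32*y + m33, and cv2.perspectiveTransform applies exactly that. Pushing the source corners through M should reproduce the destination corners:

# map the source corners through M; expected output: the destination corners
src = np.float32([top_left, top_right, bottom_right, bottom_left]).reshape(-1, 1, 2)
print(cv2.perspectiveTransform(src, M).reshape(-1, 2))
# e.g. [194, 430] -> [194, 30] and [521, 103] -> [233.6, 30], up to rounding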
Answer
The ordering of the points in your arrays, or their positions, might be at fault. Check this transformed image: the dst_pts array is np.array([[196,492],[233,494],[234,32],[196,34]]), which is more or less the blue rectangle in your preview image. (I made up the coordinates myself to make sure they are right.) NOTE: your source and destination points should be in the right (matching) order.
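A minimal sketch putting both points together (the dst_pts are the ones from this answer; the yellow borderValue is an assumption to match the background the questioner expected, not something from the original post): warpPerspective fills every output pixel whose inverse-mapped location falls outside the source image with borderValue, which defaults to black; that is why the surroundings came out black in the first place.

# corners listed in the same order in both arrays:
# bottom-left, bottom-right, top-right, top-left
src_pts = np.float32([[222, 458], [549, 131], [521, 103], [194, 430]])
dst_pts = np.float32([[196, 492], [233, 494], [234, 32], [196, 34]])
M = cv2.getPerspectiveTransform(src_pts, dst_pts)

# pixels that map from outside the source get borderValue instead of the
# default black (assumed RGB yellow, since the image was loaded via matplotlib)
warped = cv2.warpPerspective(
    img, M, (img.shape[1], img.shape[0]),
    flags=cv2.INTER_LINEAR,
    borderMode=cv2.BORDER_CONSTANT, borderValue=(255, 255, 0))
showImage(warped, "warped, consistent ordering, yellow border")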