How to align two images based on a common feature with matchTemplate

Question

I have two images which overlap. I'd like to align these two images. My current approach is to find a common feature (a marking) in both images. I'd then like to align these two images according to the place where the feature overlaps.

The images aren't perfect, so I'm looking for some way to align them based on the 'best' fit (most overlap). Originally I tried to align the images using feature matching through SIFT, but the feature matches were often incorrect or too few.

Here's the code I used to find the template:

import cv2
import numpy as np

# Edge-emphasize the template (a morphological gradient) so matching keys
# on the marking's outline rather than on flat intensity.
template = cv2.imread('template.png', 0)
template = template - cv2.erode(template, None)

image1 = cv2.imread('Image to align1.png')
image2 = cv2.imread('Image to align2.png')
image = image2
img2 = image[:, :, 2]                    # red channel only
img2 = img2 - cv2.erode(img2, None)      # same gradient trick

# Normalized cross-correlation; the best match is the global maximum.
ccnorm = cv2.matchTemplate(img2, template, cv2.TM_CCORR_NORMED)
print(ccnorm.max())
loc = np.where(ccnorm == ccnorm.max())
print(loc)
threshold = 0.1
th, tw = template.shape[:2]
for pt in zip(*loc[::-1]):               # pt is (x, y)
    if ccnorm[pt[::-1]] < threshold:
        continue
    cv2.rectangle(image, pt, (pt[0] + tw, pt[1] + th),
                  (0, 0, 255), 2)

Here are the matched features, 1 and 2. Thanks in advance.

Answer

Your choices with the OpenCV library are to use any number of methods to select a few points, and to create the transformation between those points with a function like getAffineTransform or getPerspectiveTransform. Note that functions like these take points as arguments, not luminosity values (images). You'll want to find points of interest in the first image (say, those marker spots), find those same points in the second image, and pass the pixel locations to getAffineTransform or getPerspectiveTransform. Then, once you have that transformation matrix, you can use warpAffine or warpPerspective to warp the second image into the coordinates of the first (or vice versa).

Affine transformations include translation, rotation, scaling, and shearing. Perspective transformations include everything affine transformations do, plus perspective distortion in the x and y directions. For getAffineTransform you need to send three pairs of points: three points from the first image and where those same pixels are located in the second. For getPerspectiveTransform, you send four pixel pairs. If you want to use all of your marker points, use findHomography instead, which accepts more than four points and computes an optimal homography between all of your matched points.

When you use feature detection and matching to align images, it uses these functions in the background; the difference is that it finds the features for you. But if that's not working, simply find features to your liking manually, and then use these methods on those feature points. E.g., you could find the template location as you already have and define that as a region of interest (ROI), then break the marker into smaller template pieces and find those locations inside your ROI. Then you have corresponding pairs of points from both images; you can input their locations into findHomography, or just pick three to use with getAffineTransform or four with getPerspectiveTransform, and you'll get your image transformation, which you can then apply.

Otherwise you'll need to use something like the Lucas-Kanade optical flow algorithm, which can do direct image matching if you don't want to use feature-based methods, but direct methods are incredibly slow compared to selecting a few feature points and finding a homography that way, since they use the whole image. However, if you only have to do this for a few images, it's not such a huge deal. To be more accurate and to converge much faster, it helps if you can provide a starting homography that at least translates the image roughly to the right position (e.g. you do your feature detection, see that the feature sits roughly at (x', y') in the second image relative to the first, and create a homography with that translation).

You can also likely find some Python routines online for homography estimation via the Lucas-Kanade inverse compositional algorithm or the like, if you want to try that. I have my own custom routine for that algorithm as well but can't share it; however, I could run it on your images if you share the originals without the bounding boxes, to provide you with some estimated homographies to compare against.
