How do I use the relationships between Flann matches to determine a sensible homography?


Problem Description

I have a panorama image, and a smaller image of buildings seen within that panorama image. What I want to do is recognise whether the buildings in that smaller image are in that panorama image, and how the two images line up.

For this first example, I'm using a cropped version of my panorama image, so the pixels are identical.

import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import math

# Load images
cwImage = cv2.imread('cw1.jpg',0)
panImage = cv2.imread('pan1.jpg',0)

# Prepare for SURF image analysis
surf = cv2.xfeatures2d.SURF_create(4000)

# Find keypoints and point descriptors for both images
cwKeypoints, cwDescriptors = surf.detectAndCompute(cwImage, None)
panKeypoints, panDescriptors = surf.detectAndCompute(panImage, None)

Then I use OpenCV's FlannBasedMatcher to find good matches between the two images:

FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

# Find matches between the descriptors
matches = flann.knnMatch(cwDescriptors, panDescriptors, k=2)

good = []

for m, n in matches:
  if m.distance < 0.7 * n.distance:
    good.append(m)

So you can see that in this example, it perfectly matches the points between images. So then I find the homography, and apply a perspective warp:

cwPoints = np.float32([cwKeypoints[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
panPoints = np.float32([panKeypoints[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
h, status = cv2.findHomography(cwPoints, panPoints)

warpImage = cv2.warpPerspective(cwImage, h, (panImage.shape[1], panImage.shape[0]))

The result is that it perfectly places the smaller image within the larger image.
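One quick way to sanity-check the alignment is to blend the warped image over the panorama. A minimal sketch, assuming the warpImage and panImage variables from the code above:

# Blend the warped crop over the panorama to sanity-check the alignment
overlay = cv2.addWeighted(panImage, 0.5, warpImage, 0.5, 0)
plt.imshow(overlay, cmap='gray')
plt.show()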

Now, I want to do this where the smaller image isn't a pixel-perfect version of the larger image.

For the new smaller image, the keypoints look like this:

You can see that in some cases, it matches correctly, and in some cases it doesn't.
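To see exactly which matches are wrong, it helps to draw them. A minimal sketch, assuming the images, keypoints, and good list from the code above:

# Draw the ratio-test survivors; bad matches show up as lines crossing the rest
matchImage = cv2.drawMatches(cwImage, cwKeypoints, panImage, panKeypoints,
                             good, None, flags=2)
plt.imshow(matchImage)
plt.show()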

If I call findHomography with these matches, it's going to take all of these data points into account and come up with a nonsensical perspective warp, because it bases the fit on the correct matches and the incorrect matches alike.

What I'm looking for is a missing step in between detecting the good matches and calling findHomography, where I can look at the relationships between the matches and determine which matches are actually correct.

I'm wondering if there's a function within OpenCV that I should be looking at for this step, or if this is something I'll need to work out on my own, and if so, how I should go about doing it?
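The missing step the question is after is RANSAC-based outlier rejection, which cv2.findHomography can perform itself, as the answer below shows. A minimal sketch, assuming the cwPoints, panPoints, and good variables from above:

# RANSAC fits the homography to the largest self-consistent subset of matches;
# mask[i] == 1 marks match i as an inlier of that model
h, mask = cv2.findHomography(cwPoints, panPoints, cv2.RANSAC, 5.0)

# Keep only the inlier matches for any further processing
inliers = [m for m, keep in zip(good, mask.ravel()) if keep]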

Solution

I wrote a blog post about finding an object in a scene last year (2017.11.11). Maybe it helps. Here is the link: https://zhuanlan.zhihu.com/p/30936804

Env: OpenCV 3.3 + Python 3.5


Found matches:

The found object in the scene:


The code:

#!/usr/bin/python3
# 2017.11.11 01:44:37 CST
# 2017.11.12 00:09:14 CST
"""
Use SIFT keypoint detection and matching to find a specific object in a scene.
"""

import cv2
import numpy as np
MIN_MATCH_COUNT = 4

imgname1 = "box.png"
imgname2 = "box_in_scene.png"

## (1) prepare data
img1 = cv2.imread(imgname1)
img2 = cv2.imread(imgname2)
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)


## (2) Create SIFT object
sift = cv2.xfeatures2d.SIFT_create()

## (3) Create flann matcher
matcher = cv2.FlannBasedMatcher(dict(algorithm = 1, trees = 5), {})

## (4) Detect keypoints and compute keypoint descriptors
kpts1, descs1 = sift.detectAndCompute(gray1,None)
kpts2, descs2 = sift.detectAndCompute(gray2,None)

## (5) knnMatch to get Top2
matches = matcher.knnMatch(descs1, descs2, 2)
# Sort by their distance.
matches = sorted(matches, key = lambda x:x[0].distance)

## (6) Ratio test, to get good matches.
good = [m1 for (m1, m2) in matches if m1.distance < 0.7 * m2.distance]

canvas = img2.copy()

## (7) find homography matrix
## When there are enough robust matched point pairs (at least 4)
if len(good)>MIN_MATCH_COUNT:
    ## Extract the corresponding point pairs from the good matches
    ## (queryIndex for the small object, trainIndex for the scene )
    src_pts = np.float32([ kpts1[m.queryIdx].pt for m in good ]).reshape(-1,1,2)
    dst_pts = np.float32([ kpts2[m.trainIdx].pt for m in good ]).reshape(-1,1,2)
    ## find homography matrix in cv2.RANSAC using good match points
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC,5.0)
    ## Inlier mask; marks the point pairs actually used to compute the homography
    #matchesMask2 = mask.ravel().tolist()
    ## Project the corners of image 1 into image 2, i.e. the object's position in the scene
    h,w = img1.shape[:2]
    pts = np.float32([ [0,0],[0,h-1],[w-1,h-1],[w-1,0] ]).reshape(-1,1,2)
    dst = cv2.perspectiveTransform(pts,M)
    ## Draw the bounding box
    cv2.polylines(canvas,[np.int32(dst)],True,(0,255,0),3, cv2.LINE_AA)
else:
    print( "Not enough matches are found - {}/{}".format(len(good),MIN_MATCH_COUNT))
    raise SystemExit  # the steps below need M, which only exists after a successful fit


## (8) drawMatches
matched = cv2.drawMatches(img1, kpts1, canvas, kpts2, good, None)

## (9) Crop the matched region from scene
h,w = img1.shape[:2]
pts = np.float32([ [0,0],[0,h-1],[w-1,h-1],[w-1,0] ]).reshape(-1,1,2)
dst = cv2.perspectiveTransform(pts,M)
perspectiveM = cv2.getPerspectiveTransform(np.float32(dst),pts)
found = cv2.warpPerspective(img2,perspectiveM,(w,h))

## (10) save and display
cv2.imwrite("matched.png", matched)
cv2.imwrite("found.png", found)
cv2.imshow("matched", matched);
cv2.imshow("found", found);
cv2.waitKey();cv2.destroyAllWindows()
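Regarding the commented-out matchesMask2 line above: the RANSAC mask can be fed straight back into cv2.drawMatches to visualise only the inliers. A small sketch, assuming the variables from the answer code:

## Draw only the RANSAC inliers; entries of 0 in matchesMask are skipped
matchesMask = mask.ravel().tolist()
inlier_matches = cv2.drawMatches(img1, kpts1, canvas, kpts2, good, None,
                                 matchColor=(0, 255, 0),
                                 matchesMask=matchesMask, flags=2)
cv2.imshow("inliers", inlier_matches)
cv2.waitKey()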
