Perspective transform not calculating proper lines on scene in Matlab
Question
So, I have been trying to write code equivalent to the homography example given in OpenCV. The code is fairly long but conceptually simple: first it computes the keypoints and descriptors of an object image and of a scene (via webcam). It then compares them using a BruteForce matcher, selects the best matches, and uses those to compute the homography between the object and the scene, followed by a perspective transform of the object corners. Now my problem is that the perspective transform is not giving me a good result. One of the coordinates obtained from the perspective transform seems to hang around (0,0). I have similar code running in pure OpenCV in Eclipse, where the first coordinate changes as I move the camera around; that is not happening here. Note also that the computed homography values are slightly different. As far as I can tell, there is nothing wrong with the logic of the code, but the rectangular region is not drawn correctly in the scene: I can see various lines drawn on the scene, but they do not fit the image as they should. Perhaps I need a different set of eyes. Thanks.
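For context on what cv.perspectiveTransform computes: each corner p is mapped through the homography in homogeneous coordinates, p' ~ H·p, then divided by the third component. A minimal numpy sketch of that math (hypothetical corner and H values, not taken from the code below):

```python
import numpy as np

def perspective_transform(points, H):
    """Apply a 3x3 homography to Nx2 points, as cv.perspectiveTransform does."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # divide by w

# Object corners for a 200x100 (width x height) template
corners = np.array([[0, 0], [200, 0], [200, 100], [0, 100]], dtype=float)

# A pure-translation homography shifts every corner by (dx, dy) = (30, 20)
H = np.array([[1, 0, 30],
              [0, 1, 20],
              [0, 0,  1]], dtype=float)
print(perspective_transform(corners, H))
```

If H is estimated from misaligned correspondences, the same division by w can collapse corners toward the origin, which matches the (0,0) symptom described above.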
function hello
    disp('Feature matching demo. Press any key when done.');
    % Set up camera
    camera = cv.VideoCapture;
    pause(3); % Necessary in some environments. See help cv.VideoCapture
    % Set up display window
    window = figure('KeyPressFcn',@(obj,evt)setappdata(obj,'flag',true));
    setappdata(window,'flag',false);
    object = imread('D:/match.jpg');
    % Conversion from color to gray
    object = cv.cvtColor(object,'RGB2GRAY');
    % Declaring detector and extractor
    detector = cv.FeatureDetector('SURF');
    extractor = cv.DescriptorExtractor('SURF');
    % Calculating object keypoints and descriptors (once, outside the loop)
    objKeypoints = detector.detect(object);
    objDescriptors = extractor.compute(object,objKeypoints);
    matcher = cv.DescriptorMatcher('BruteForce');
    % Start main loop
    while true
        % Grab and preprocess an image
        im = camera.read;
        scene = cv.cvtColor(im,'RGB2GRAY');
        sceneKeypoints = detector.detect(scene);
        % Skip frames with no keypoints
        if isempty(sceneKeypoints)
            continue
        end
        sceneDescriptors = extractor.compute(scene,sceneKeypoints);
        matches = matcher.match(objDescriptors,sceneDescriptors);
        % Keep only matches whose distance is below 3x the minimum
        dist_arr = [matches.distance];
        min_dist = min(dist_arr);
        good_matches = matches(dist_arr < 3*min_dist);
        im_matches = cv.drawMatches(object, objKeypoints, scene, sceneKeypoints, good_matches);
        % Build the matched point arrays. NOTE: mexopencv returns 0-based
        % queryIdx/trainIdx (OpenCV convention), so add 1 before indexing the
        % 1-based MATLAB keypoint arrays. The original code skipped index 0
        % and indexed without the offset, which misaligned every
        % correspondence and produced a degenerate homography.
        n = numel(good_matches);
        objPoints = zeros(n,2);
        scnPoints = zeros(n,2);
        for i = 1:n
            objPoints(i,:) = objKeypoints(good_matches(i).queryIdx + 1).pt;
            scnPoints(i,:) = sceneKeypoints(good_matches(i).trainIdx + 1).pt;
        end
        % findHomography needs at least 4 correspondences
        if size(scnPoints,1) < 4
            continue
        end
        % Finding homography between the two sets of points
        H = cv.findHomography(objPoints, scnPoints);
        % Object corners: top-left, top-right, bottom-right, bottom-left
        objectCorners = [0, 0;
                         size(object,2), 0;
                         size(object,2), size(object,1);
                         0, size(object,1)];
        % perspectiveTransform expects a 1xNx2 array, hence the shiftdim
        newObj = shiftdim(objectCorners,-1);
        % Calculating the perspective transform
        foo = cv.perspectiveTransform(newObj,H);
        sceneCorners = shiftdim(foo,1);
        % Shift by the object width: the scene sits to the right of the
        % object in the drawMatches composite image
        offset = [size(object,2), 0];
        outimg = cv.line(im_matches, sceneCorners(1,:)+offset, sceneCorners(2,:)+offset);
        outimg = cv.line(outimg, sceneCorners(2,:)+offset, sceneCorners(3,:)+offset);
        outimg = cv.line(outimg, sceneCorners(3,:)+offset, sceneCorners(4,:)+offset);
        outimg = cv.line(outimg, sceneCorners(4,:)+offset, sceneCorners(1,:)+offset);
        imshow(outimg);
        % Terminate on any user input
        flag = getappdata(window,'flag');
        if isempty(flag) || flag, break; end
        pause(0.01);
    end
    % Close
    close(window);
end
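The good-match selection in the loop above is just a threshold at three times the minimum descriptor distance. The same filter, with made-up distances, looks like this in numpy:

```python
import numpy as np

# Hypothetical descriptor distances for five candidate matches
distances = np.array([0.10, 0.12, 0.50, 0.90, 0.11])

# Keep matches closer than 3x the best (smallest) distance
good = distances < 3 * distances.min()
print(good)
```

Note this heuristic always keeps at least one match (the minimum itself), so it does not guarantee the four correspondences that findHomography needs; the explicit size check in the loop is still required.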
Answer
Do you need a full homography? For this application an affine or even a similarity transformation (dx, dy, scale, and rotation) may be sufficient. A more constrained transformation will work better in the presence of noise.
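A similarity transformation, as the answer suggests, has only four unknowns (a = s·cosθ, b = s·sinθ, tx, ty), so a least-squares fit tolerates noise far better than an eight-parameter homography. A minimal numpy sketch of such a fit (a hypothetical helper, not OpenCV's API; OpenCV provides comparable functionality, e.g. estimateRigidTransform):

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform from Nx2 src to Nx2 dst.
    Solves for [a, b, tx, ty] in  x' = a*x - b*y + tx,  y' = b*x + a*y + ty,
    where a = s*cos(theta), b = s*sin(theta)."""
    A, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, -y, 1, 0]); rhs.append(xp)
        A.append([y,  x, 0, 1]); rhs.append(yp)
    a, b, tx, ty = np.linalg.lstsq(np.array(A, float), np.array(rhs, float),
                                   rcond=None)[0]
    return np.array([[a, -b, tx],
                     [b,  a, ty],
                     [0,  0,  1]])

# Points mapped by scale 2, rotation 90 degrees, translation (5, 7)
src = np.array([[1, 0], [0, 1], [1, 1], [2, 3]], float)
dst = np.array([[5, 9], [3, 7], [3, 9], [-1, 11]], float)
print(fit_similarity(src, dst))
```

Two correspondences already determine a similarity exactly; every extra pair only averages out noise, which is why the constrained model is more stable here than a full homography.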