Error in calculating perspective transform for opencv in Matlab


Problem description



I am trying to recode feature matching and homography using mexopencv. mexopencv ports the OpenCV vision toolbox into MATLAB.

My MATLAB code using the OpenCV toolbox:

function hello
    close all; clear all;
    disp('Feature matching demo, press key when done');

    boxImage = imread('D:/pic/500_1.jpg');
    boxImage = rgb2gray(boxImage);
    [boxPoints,boxFeatures] = cv.ORB(boxImage);

    sceneImage = imread('D:/pic/100_1.jpg');
    sceneImage = rgb2gray(sceneImage);
    [scenePoints,sceneFeatures] = cv.ORB(sceneImage);

    if (isempty(scenePoints) || isempty(boxPoints))
        return;
    end;

    matcher = cv.DescriptorMatcher('BruteForce');
    matches = matcher.match(boxFeatures,sceneFeatures);

    %Box contains pixel coordinates where there are matches
    box = [boxPoints([matches(2:end).queryIdx]).pt];

    %Scene contains pixel coordinates where there are matches
    scene = [scenePoints([matches(2:end).trainIdx]).pt];

    %Please refer to http://stackoverflow.com/questions/4682927/matlab-using-mat2cell

    %Box contains coordinates of the form [(x1,y1), (x2,y2), ...]
    %after applying the mat2cell function
    [nRows, nCols] = size(box);
    nSubCols = 2;
    box = mat2cell(box,nRows,nSubCols.*ones(1,nCols/nSubCols));

    %Scene contains coordinates of the form [(x1,y1), (x2,y2), ...]
    %after applying the mat2cell function
    [nRows, nCols] = size(scene);
    nSubCols = 2;
    scene = mat2cell(scene,nRows,nSubCols.*ones(1,nCols/nSubCols));

    %Finding the homography between box and scene
    H = cv.findHomography(box,scene);

    boxCorners = [1, 1;...                           % top-left
        size(boxImage, 2), 1;...                     % top-right
        size(boxImage, 2), size(boxImage, 1);...     % bottom-right
        1, size(boxImage, 1)];                       % bottom-left

    %Fine until this point, the problem starts with perspectiveTransform
    sceneCorners = cv.perspectiveTransform(boxCorners,H);

end

The error:

    Error using cv.perspectiveTransform
Unexpected Standard exception from MEX file.
What()
is:C:\slave\builds\WinInstallerMegaPack\src\opencv\modules\core\src\matmul.cpp:1926:
error: (-215) scn + 1 == m.cols && (depth == CV_32F || depth == CV_64F)

..

Error in hello (line 58)
  sceneCorners= cv.perspectiveTransform(boxCorners,H);

The problem starts with perspectiveTransform(boxCorners, H); up to finding the homography it was fine. Also note that while calculating the matching coordinates from the sample and the scene, I indexed from 2:end, box = [boxPoints([matches(2:end).queryIdx]).pt], since accessing the queryIdx of the 1st element would yield the zeroth position, which could not be accessed. However, I think this would not be a problem. Anyhow, I am looking forward to an answer to my problem. Thanks.

PS: This is an edited version of my original post here. The solution I received below was not adequate, and the bug kept recurring.

2nd Update:

According to @Amro, I have updated my code below. The inliers give a good response; however, the coordinates for calculating the perspective transform somehow got twisted.

function hello
    close all; clear all; clc;

    disp('Feature matching with ORB');

    %Feature detector and extractor for object
    imgObj = imread('D:/pic/box.png');
    %boxImage = rgb2gray(boxImage);
    [keyObj,featObj] = cv.ORB(imgObj);

    %Feature detector and extractor for scene
    imgScene = imread('D:/pic/box_in_scene.png');
    %sceneImage = rgb2gray(sceneImage);
    [keyScene,featScene] = cv.ORB(imgScene);

    if (isempty(keyScene)|| isempty(keyObj)) 
        return;
    end;

    matcher = cv.DescriptorMatcher('BruteForce-HammingLUT');
    m = matcher.match(featObj,featScene);

    %im_matches = cv.drawMatches(boxImage, boxPoints, sceneImage, scenePoints,m);

    % extract keypoints from the filtered matches
    % (C zero-based vs. MATLAB one-based indexing)
    ptsObj = cat(1, keyObj([m.queryIdx]+1).pt);
    ptsObj = num2cell(ptsObj, 2);
    ptsScene = cat(1, keyScene([m.trainIdx]+1).pt);
    ptsScene = num2cell(ptsScene, 2);

    % compute homography
    [H,inliers] = cv.findHomography(ptsObj, ptsScene, 'Method','Ransac');

    % remove outliers reported by RANSAC
    inliers = logical(inliers);
    m = m(inliers);

    % show the final matches
    imgMatches = cv.drawMatches(imgObj, keyObj, imgScene, keyScene, m, ...
        'NotDrawSinglePoints',true);
    imshow(imgMatches);

    % apply the homography to the corner points of the box
    [h,w] = size(imgObj);
    corners = permute([0 0; w 0; w h; 0 h], [3 1 2]);
    p = cv.perspectiveTransform(corners, H)
    p = permute(p, [2 3 1])
    p = bsxfun(@plus, p, [size(imgObj,2) 0]);

    % draw lines between the transformed corners (the mapped object)
    opts = {'Color',[0 255 0], 'Thickness',4};
    imgMatches = cv.line(imgMatches, p(1,:), p(2,:), opts{:});
    imgMatches = cv.line(imgMatches, p(2,:), p(3,:), opts{:});
    imgMatches = cv.line(imgMatches, p(3,:), p(4,:), opts{:});
    imgMatches = cv.line(imgMatches, p(4,:), p(1,:), opts{:});
    imshow(imgMatches)
    title('Matches & Object detection')

end

The output is fine; however, perspectiveTransform is not giving the right coordinates for the problem. My output thus far:

3rd Update:

I have got all of the code running fine with the homography. However, a corner case is bugging me really hard. If I do imgObj = imread('D:/pic/box.png') and imgScene = imread('D:/pic/box_in_scene.png'), I get the homography rectangle good and fine; however, when I do imgScene = imread('D:/pic/box.png'), i.e. the object and the scene are the same, I get this error -

Error using cv.findHomography
Unexpected Standard exception from MEX file.
What()
is:C:\slave\builds\WinInstallerMegaPack\src\opencv\modules\calib3d\src\fundam.cpp:1074:
error: (-215) npoints >= 0 && points2.checkVector(2) == npoints && points1.type() ==
points2.type()

..

Error in hello (line 37)
    [H,inliers] = cv.findHomography(ptsObj, ptsScene, 'Method','Ransac');

Now, I have come across this error in the past; it happens when the number of ptsObj or ptsScene is low, e.g. when the scene is nothing but a white/black screen and has zero keypoints. In this particular problem there is an ample number of ptsObj and ptsScene. Where can the problem lie? I have tested this code using SURF and the same error resurfaces.

Solution

A couple of remarks:

  • the matcher returns zero-based indices (as do various other functions, on account of OpenCV being implemented in C++). So if you want to get the corresponding keypoints, you have to adjust by one (MATLAB arrays are one-based). mexopencv intentionally does not adjust for this automatically.

  • The cv.findHomography MEX-function accepts points either as a numeric array of size 1xNx2 (e.g. cat(3, [x1,x2,...], [y1,y2,...])) or as an N-sized cell array of two-element vectors (i.e. {[x1,y1], [x2,y2], ...}). In this case, I'm not sure your code is packing the points correctly; either way it could be made much simpler (see the sketch below).
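
For illustration, here is a minimal sketch of both points, with made-up values (the +1 shift is the same one used in the demo below):

% shift zero-based match indices to MATLAB's one-based indexing
idx = [m.queryIdx] + 1;

% the two point layouts cv.findHomography accepts (hypothetical coordinates)
xs = [10 20 30 40];  ys = [15 25 35 45];
ptsNumeric = cat(3, xs, ys);            % numeric array of size 1xNx2
ptsCell = num2cell([xs(:) ys(:)], 2);   % cell array {[10 15]; [20 25]; ...}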

Here is the complete demo translated from C++ to MATLAB:

% input images
imgObj = imread('box.png');
imgScene = imread('box_in_scene.png');

% detect keypoints and calculate descriptors using SURF
detector = cv.FeatureDetector('SURF');
keyObj = detector.detect(imgObj);
keyScene = detector.detect(imgScene);

extractor = cv.DescriptorExtractor('SURF');
featObj = extractor.compute(imgObj, keyObj);
featScene = extractor.compute(imgScene, keyScene);

% match descriptors using FLANN
matcher = cv.DescriptorMatcher('FlannBased');
m = matcher.match(featObj, featScene);

% keep only "good" matches (whose distance is less than k*min_dist)
dist = [m.distance];
m = m(dist < 3*min(dist));

% extract keypoints from the filtered matches
% (C zero-based vs. MATLAB one-based indexing)
ptsObj = cat(1, keyObj([m.queryIdx]+1).pt);
ptsObj = num2cell(ptsObj, 2);
ptsScene = cat(1, keyScene([m.trainIdx]+1).pt);
ptsScene = num2cell(ptsScene, 2);

% compute homography
[H,inliers] = cv.findHomography(ptsObj, ptsScene, 'Method','Ransac');

% remove outliers reported by RANSAC
inliers = logical(inliers);
m = m(inliers);

% show the final matches
imgMatches = cv.drawMatches(imgObj, keyObj, imgScene, keyScene, m, ...
    'NotDrawSinglePoints',true);
imshow(imgMatches)

% apply the homography to the corner points of the box
[h,w] = size(imgObj);
corners = permute([0 0; w 0; w h; 0 h], [3 1 2]);
p = cv.perspectiveTransform(corners, H);
p = permute(p, [2 3 1]);
p = bsxfun(@plus, p, [size(imgObj,2) 0]);

% draw lines between the transformed corners (the mapped object)
opts = {'Color',[0 255 0], 'Thickness',4};
imgMatches = cv.line(imgMatches, p(1,:), p(2,:), opts{:});
imgMatches = cv.line(imgMatches, p(2,:), p(3,:), opts{:});
imgMatches = cv.line(imgMatches, p(3,:), p(4,:), opts{:});
imgMatches = cv.line(imgMatches, p(4,:), p(1,:), opts{:});
imshow(imgMatches)
title('Matches & Object detection')

Now you can try one of the other algorithms for feature detection/extraction (ORB in your case). Just remember you might need to adjust some of the parameters above to get good results (for example the multiplier used to control how many of the keypoint matches to keep).
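
For instance, the multiplier in the filtering step could be treated as an explicit knob (a trivial sketch of the same rule used in the demo above):

% keep matches whose distance is below k times the minimum distance;
% smaller k means stricter filtering, at the cost of fewer matches
k = 3;
dist = [m.distance];
m = m(dist < k*min(dist));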


EDIT:

Like I said, there is no one-size-fits-all solution in computer vision. You need to experiment by adjusting the various algorithm parameters to get good results on your data. For instance, the ORB constructor accepts a bunch of options. Also, as the documentation suggests, the brute-force matcher with Hamming distance is the recommended matcher for ORB descriptors.
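
As a sketch only (I am assuming here that the mexopencv wrapper exposes OpenCV's ORB parameters under these option names; check the help of your mexopencv version for the exact spelling):

% hypothetical option names mirroring OpenCV's nfeatures/scaleFactor
[keyObj,featObj] = cv.ORB(imgObj, 'NFeatures',1000, 'ScaleFactor',1.2);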

Finally note that I specified the RANSAC robust algorithm as the method used for computing the homography matrix; looking at the screenshot you posted, you can see an outlier match incorrectly pointing towards the black computer vision book in the scene. The advantage of the RANSAC method is that it can accurately perform estimation even when there is a large number of outliers in the data. The default for findHomography is to use all the points available.
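
If you want to tighten or loosen the RANSAC inlier criterion, the reprojection threshold can be passed along with the method (a sketch, assuming your build exposes OpenCV's ransacReprojThreshold under this option name):

% threshold in pixels for deciding whether a point pair is an inlier
[H,inliers] = cv.findHomography(ptsObj, ptsScene, ...
    'Method','Ransac', 'RansacReprojThreshold',3);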

Furthermore, note that some of the control points used to estimate the homography in your case are almost collinear; this might badly affect the computation (kind of like how numerically inverting a near-singular matrix is a bad idea).
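
As a rough diagnostic (my own sketch, not part of the matching pipeline): near-collinear control points show up as a tiny second singular value of the centered point matrix, so you can warn yourself before trusting the estimate:

% if the matched object points are nearly collinear, the second singular
% value of the centered N-by-2 point matrix is close to zero and the
% homography estimate will be numerically unstable
P = cat(1, ptsObj{:});                  % N-by-2 matrix of [x y] points
s = svd(bsxfun(@minus, P, mean(P,1)));  % singular values of centered points
if s(2) < 1e-3 * s(1)
    warning('Control points are almost collinear; homography may be unstable');
end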

With the above said, I am highlighting below the relevant parts of the code which gave me good results using ORB descriptors (the rest is unchanged from what I previously posted):

% detect keypoints and calculate descriptors using ORB
[keyObj,featObj] = cv.ORB(imgObj);
[keyScene,featScene] = cv.ORB(imgScene);

% match descriptors using brute force with Hamming distances
matcher = cv.DescriptorMatcher('BruteForce-Hamming');
m = matcher.match(featObj, featScene);

% keep only "good" matches (whose distance is less than k*min_dist)
dist = [m.distance];
m = m(dist < 3*min(dist));

I noticed that you omitted the last part where I filtered the matches by dropping the bad ones. You could always look at the distribution of the "distances" of the matches found, and decide on an appropriate threshold. Here is what I had initially:

hist([m.distance])
title('Distribution of match distances')
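
As an alternative to the k*min_dist rule (an illustrative sketch; note that prctile lives in the Statistics Toolbox), you could keep only the matches below a chosen percentile of the distances:

% keep the best 25% of matches by distance
thr = prctile([m.distance], 25);
m = m([m.distance] <= thr);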

You could also apply a similar process to the raw keypoints based on their response values, and subsample the points accordingly:

subplot(121), hist([keyObj.response]); title('box')
subplot(122), hist([keyScene.response]); title('scene')
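
And one way to subsample (my illustration; it assumes the descriptors come back as one row per keypoint, and it must be done before matching so the match indices still line up):

% drop weak keypoints together with their descriptors
resp = [keyObj.response];
keep = resp > median(resp);    % keep keypoints above the median response
keyObj = keyObj(keep);
featObj = featObj(keep,:);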

HTH
