Error in calculating perspective transform for opencv in Matlab


Problem description


    I am trying to recode feature matching and homography using mexopencv. Mexopencv ports the OpenCV vision toolbox into MATLAB.

    My code in MATLAB using the OpenCV toolbox:

    function hello
    
        close all;clear all;
    
        disp('Feature matching demo, press key when done');
    
        boxImage = imread('D:/pic/500_1.jpg');
    
        boxImage = rgb2gray(boxImage);
    
        [boxPoints,boxFeatures] = cv.ORB(boxImage);
    
        sceneImage = imread('D:/pic/100_1.jpg');
    
        sceneImage = rgb2gray(sceneImage);
    
        [scenePoints,sceneFeatures] = cv.ORB(sceneImage);
    
        if (isempty(scenePoints)|| isempty(boxPoints)) 
            return;
        end;
    
    
        matcher = cv.DescriptorMatcher('BruteForce');
        matches = matcher.match(boxFeatures,sceneFeatures);
    
    
        %Box contains pixel coordinates where there are matches
        box = [boxPoints([matches(2:end).queryIdx]).pt];
    
        %Scene contains pixel coordinates where there are matches
        scene = [scenePoints([matches(2:end).trainIdx]).pt];
    
        %Please refer to http://stackoverflow.com/questions/4682927/matlab-using-mat2cell
    
        %Box array contains coordinates of the form [(x1,y1), (x2,y2), ...]
        %after applying the mat2cell function
        [nRows, nCols] = size(box);
        nSubCols = 2;
        box = mat2cell(box,nRows,nSubCols.*ones(1,nCols/nSubCols));
    
        %Scene array contains coordinates of the form [(x1,y1), (x2,y2), ...]
        %after applying the mat2cell function
    
        [nRows, nCols] = size(scene);
        nSubCols = 2;
        scene = mat2cell(scene,nRows,nSubCols.*ones(1,nCols/nSubCols));
    
        %Finding homography between box and scene
        H = cv.findHomography(box,scene);
    
        boxCorners = [1, 1;...                           % top-left
            size(boxImage, 2), 1;...                 % top-right
            size(boxImage, 2), size(boxImage, 1);... % bottom-right
            1, size(boxImage, 1)];
    
      %Fine until this point; the problem starts with perspectiveTransform
      sceneCorners= cv.perspectiveTransform(boxCorners,H); 
    
    end
    

    The error:

        Error using cv.perspectiveTransform
    Unexpected Standard exception from MEX file.
    What()
    is:C:\slave\builds\WinInstallerMegaPack\src\opencv\modules\core\src\matmul.cpp:1926:
    error: (-215) scn + 1 == m.cols && (depth == CV_32F || depth == CV_64F)
    
    ..
    
    Error in hello (line 58)
      sceneCorners= cv.perspectiveTransform(boxCorners,H);
    

    The problem starts from the call to perspectiveTransform(boxCorners, H); up to finding the homography everything was fine. Also note that while calculating the matching coordinates from the sample and the scene, I indexed from 2:end, box = [boxPoints([matches(2:end).queryIdx]).pt], since accessing the queryIdx of the 1st element would yield the zeroth position, which cannot be accessed. However, I think this is not the problem. Anyhow, I am looking forward to an answer. Thanks.

    PS: This is an edited version of my original post here. The solution I received below was not adequate, and the bug kept recurring.

    2nd Update:

    According to @Amro, I have updated my code below. The inliers give a good response; however, the coordinates for calculating the perspective transform somehow got twisted.

    function hello
        close all; clear all; clc;
    
        disp('Feature matching with ORB');
    
        %Feature detector and extractor for object
        imgObj = imread('D:/pic/box.png');
        %boxImage = rgb2gray(boxImage);
        [keyObj,featObj] = cv.ORB(imgObj);
    
        %Feature detector and extractor for scene
        imgScene = imread('D:/pic/box_in_scene.png');
        %sceneImage = rgb2gray(sceneImage);
        [keyScene,featScene] = cv.ORB(imgScene);
    
        if (isempty(keyScene)|| isempty(keyObj)) 
            return;
        end;
    
        matcher = cv.DescriptorMatcher('BruteForce-HammingLUT');
        m = matcher.match(featObj,featScene);
    
        %im_matches = cv.drawMatches(boxImage, boxPoints, sceneImage, scenePoints,m);
    
        % extract keypoints from the filtered matches
        % (C zero-based vs. MATLAB one-based indexing)
        ptsObj = cat(1, keyObj([m.queryIdx]+1).pt);
        ptsObj = num2cell(ptsObj, 2);
        ptsScene = cat(1, keyScene([m.trainIdx]+1).pt);
        ptsScene = num2cell(ptsScene, 2);
    
        % compute homography
        [H,inliers] = cv.findHomography(ptsObj, ptsScene, 'Method','Ransac');
    
        % remove outliers reported by RANSAC
        inliers = logical(inliers);
        m = m(inliers);
    
        % show the final matches
        imgMatches = cv.drawMatches(imgObj, keyObj, imgScene, keyScene, m, ...
        'NotDrawSinglePoints',true);
        imshow(imgMatches);
    
        % apply the homography to the corner points of the box
        [h,w] = size(imgObj);
        corners = permute([0 0; w 0; w h; 0 h], [3 1 2]);
        p = cv.perspectiveTransform(corners, H)
        p = permute(p, [2 3 1])
        p = bsxfun(@plus, p, [size(imgObj,2) 0]);
    
        % draw lines between the transformed corners (the mapped object)
        opts = {'Color',[0 255 0], 'Thickness',4};
        imgMatches = cv.line(imgMatches, p(1,:), p(2,:), opts{:});
        imgMatches = cv.line(imgMatches, p(2,:), p(3,:), opts{:});
        imgMatches = cv.line(imgMatches, p(3,:), p(4,:), opts{:});
        imgMatches = cv.line(imgMatches, p(4,:), p(1,:), opts{:});
        imshow(imgMatches)
        title('Matches & Object detection')
    
    end
    

    The output is fine; however, perspectiveTransform is not giving the right coordinates for the problem. My output thus far:

    3rd Update:

    I have got all of the code running, and the homography works fine. However, a corner case is bugging me really hard. If I do imgObj = imread('D:/pic/box.png') and imgScene = imread('D:/pic/box_in_scene.png'), I get the homography rectangle good and fine; however, when I do imgScene = imread('D:/pic/box.png'), i.e. the object and the scene are the same, I get this error -

    Error using cv.findHomography
    Unexpected Standard exception from MEX file.
    What()
    is:C:\slave\builds\WinInstallerMegaPack\src\opencv\modules\calib3d\src\fundam.cpp:1074:
    error: (-215) npoints >= 0 && points2.checkVector(2) == npoints && points1.type() ==
    points2.type()
    
    ..
    
    Error in hello (line 37)
        [H,inliers] = cv.findHomography(ptsObj, ptsScene, 'Method','Ransac');
    

    Now, I have come across this error in the past; it happens when the number of ptsObj or ptsScene is low, e.g. when the scene is nothing but a white/black screen, the keypoints of that scene are zero. In this particular problem there is an ample amount of ptsObj and ptsScene. Where can the problem lie? I have tested this code using SURF and the same error resurfaces.
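
    As a sanity check (a minimal sketch using the variable names from my code above), the conditions behind that (-215) assertion can be verified just before the cv.findHomography call:

    % sketch: cv.findHomography needs two non-empty, equal-length point sets
    % with at least 4 correspondences
    assert(~isempty(ptsObj) && ~isempty(ptsScene), 'empty point set');
    assert(numel(ptsObj) == numel(ptsScene), 'point counts must agree');
    assert(numel(ptsObj) >= 4, 'need at least 4 correspondences');
    % note: a filter like m = m(dist < 3*min(dist)) discards *every* match
    % when min(dist) == 0 (e.g. identical images), leaving empty point sets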

    Solution

    A couple of remarks:

    • the matcher returns zero-based indices (as do various other functions, on account of OpenCV being implemented in C++). So if you want to get the corresponding keypoints, you have to adjust by one (MATLAB arrays are one-based). mexopencv intentionally does not adjust this automatically.

    • The cv.findHomography MEX-function accepts points either as a numeric array of size 1xNx2 (e.g. cat(3, [x1,x2,...], [y1,y2,...])) or as an N-sized cell array of two-element vectors each (i.e. {[x1,y1], [x2,y2], ...}). In this case, I'm not sure your code is packing the points correctly; either way it could be made much simpler (see the sketch below).
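
    For illustration, a minimal sketch of the two accepted layouts for the same three points (the coordinate values are made up):

    % 1xNx2 numeric array: x coordinates in channel 1, y coordinates in channel 2
    ptsNumeric = cat(3, [10 200 35], [40 15 300]);
    % equivalent N-element cell array of [x,y] pairs
    ptsCell = {[10 40], [200 15], [35 300]};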

    Here is the complete demo translated from C++ to MATLAB:

    % input images
    imgObj = imread('box.png');
    imgScene = imread('box_in_scene.png');
    
    % detect keypoints and calculate descriptors using SURF
    detector = cv.FeatureDetector('SURF');
    keyObj = detector.detect(imgObj);
    keyScene = detector.detect(imgScene);
    
    extractor = cv.DescriptorExtractor('SURF');
    featObj = extractor.compute(imgObj, keyObj);
    featScene = extractor.compute(imgScene, keyScene);
    
    % match descriptors using FLANN
    matcher = cv.DescriptorMatcher('FlannBased');
    m = matcher.match(featObj, featScene);
    
    % keep only "good" matches (whose distance is less than k*min_dist)
    dist = [m.distance];
    m = m(dist < 3*min(dist));
    
    % extract keypoints from the filtered matches
    % (C zero-based vs. MATLAB one-based indexing)
    ptsObj = cat(1, keyObj([m.queryIdx]+1).pt);
    ptsObj = num2cell(ptsObj, 2);
    ptsScene = cat(1, keyScene([m.trainIdx]+1).pt);
    ptsScene = num2cell(ptsScene, 2);
    
    % compute homography
    [H,inliers] = cv.findHomography(ptsObj, ptsScene, 'Method','Ransac');
    
    % remove outliers reported by RANSAC
    inliers = logical(inliers);
    m = m(inliers);
    
    % show the final matches
    imgMatches = cv.drawMatches(imgObj, keyObj, imgScene, keyScene, m, ...
        'NotDrawSinglePoints',true);
    imshow(imgMatches)
    
    % apply the homography to the corner points of the box
    [h,w] = size(imgObj);
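    % pack the four corners as a 1x4x2 array (x/y along the 3rd dimension),
    % the point layout cv.perspectiveTransform expects for 2D points here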
    corners = permute([0 0; w 0; w h; 0 h], [3 1 2]);
    p = cv.perspectiveTransform(corners, H);
    p = permute(p, [2 3 1]);
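    % shift the x coordinates by the object image width, since drawMatches
    % places the scene image to the right of the object image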
    p = bsxfun(@plus, p, [size(imgObj,2) 0]);
    
    % draw lines between the transformed corners (the mapped object)
    opts = {'Color',[0 255 0], 'Thickness',4};
    imgMatches = cv.line(imgMatches, p(1,:), p(2,:), opts{:});
    imgMatches = cv.line(imgMatches, p(2,:), p(3,:), opts{:});
    imgMatches = cv.line(imgMatches, p(3,:), p(4,:), opts{:});
    imgMatches = cv.line(imgMatches, p(4,:), p(1,:), opts{:});
    imshow(imgMatches)
    title('Matches & Object detection')
    

    Now you can try one of the other algorithms for feature detection/extraction (ORB in your case). Just remember that you might need to adjust some of the parameters above to get good results (for example, the multiplier used to control how many of the keypoint matches to keep).
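
    For instance, a small tweak of that multiplier (a sketch reusing dist and m from the demo above; the value 2 is just an example):

    % a stricter multiplier keeps fewer, higher-quality matches
    k = 2;                        % the demo above uses k = 3
    m = m(dist < k*min(dist));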


    EDIT:

    Like I said, there is no one-size-fits-all solution in computer vision. You need to experiment by adjusting the various algorithm parameters to get good results on your data. For instance, the ORB constructor accepts a bunch of options. Also, as the documentation suggests, the brute-force matcher with Hamming distance is the recommended matcher for ORB descriptors.
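
    For example, a sketch of passing detector options (the option names here are assumptions based on the OpenCV 2.4-era wrapper; check help cv.ORB in your mexopencv installation for the exact names):

    % assumed option names; verify against your mexopencv version
    [keyObj, featObj] = cv.ORB(imgObj, 'NFeatures',1000, 'ScaleFactor',1.2, 'NLevels',8);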

    Finally, note that I specified the RANSAC robust algorithm as the method used for computing the homography matrix; looking at the screenshot you posted, you can see an outlier match incorrectly pointing towards the black computer vision book in the scene. The advantage of the RANSAC method is that it can perform the estimation accurately even when there is a large number of outliers in the data. The default for findHomography is to use all the available points.

    Furthermore, note that some of the control points used to estimate the homography in your case are almost collinear, which might badly affect the computation (somewhat like how numerically inverting a near-singular matrix is a bad idea).
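
    A rough way to spot this condition (a sketch; ptsScene is the cell array of matched points from the demo above):

    % if the matched points are nearly collinear, the second singular value of
    % the centered coordinates is near zero and the homography is ill-conditioned
    P = cat(1, ptsScene{:});                     % N-by-2 matrix of [x,y] points
    s = svd(bsxfun(@minus, P, mean(P,1)));
    if s(2)/s(1) < 1e-3
        warning('points are nearly collinear; homography may be unstable');
    end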

    With the above said, I am highlighting below the relevant parts of the code that gave me good results using ORB descriptors (the rest is unchanged from what I previously posted):

    % detect keypoints and calculate descriptors using ORB
    [keyObj,featObj] = cv.ORB(imgObj);
    [keyScene,featScene] = cv.ORB(imgScene);
    
    % match descriptors using brute force with Hamming distances
    matcher = cv.DescriptorMatcher('BruteForce-Hamming');
    m = matcher.match(featObj, featScene);
    
    % keep only "good" matches (whose distance is less than k*min_dist)
    dist = [m.distance];
    m = m(dist < 3*min(dist));
    

    I noticed that you omitted the last part, where I filtered the matches by dropping the bad ones. You can always look at the distribution of the "distances" of the matches found and decide on an appropriate threshold. Here is what I had initially:

    hist([m.distance])
    title('Distribution of match distances')
    

    You could also apply a similar process to the raw keypoints based on their response values, and subsample the points accordingly:

    subplot(121), hist([keyObj.response]); title('box')
    subplot(122), hist([keyScene.response]); title('scene')
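
    As a hedged illustration of such subsampling, assuming the ORB flow above where keyObj and featObj are returned together (rows of featObj correspond to entries of keyObj; the median cutoff is an arbitrary choice):

    % keep the stronger half of the keypoints by response value,
    % filtering the descriptor rows consistently
    keep = [keyObj.response] >= median([keyObj.response]);
    keyObj  = keyObj(keep);
    featObj = featObj(keep,:);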
    

    HTH
