Calculate offset/skew/rotation of similar images in C++


Problem Description


    I have multiple images taken simultaneously pointing at the same direction from the same starting location. However, there is still a slight offset because these cameras were not in the exact same place when the picture was taking. I'm looking for a way to calculate the optimal translation/shear/skew/rotation needed to apply to match one image to another so that they overlay (almost) perfectly.

    The images are in a .raw format which I am reading in 16 bits at a time.
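Since a headerless .raw frame carries no metadata, reading it just means pulling the raw 16-bit samples into a buffer. A minimal sketch of that (assuming little-endian samples and a width/height known in advance; both are assumptions, since the question doesn't specify the layout, and `readRaw16` is a name of my own):

```cpp
#include <cstdint>
#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

// Read a headerless .raw frame of 16-bit samples into a flat row-major buffer.
// width/height must be known in advance -- the file itself carries no metadata.
std::vector<uint16_t> readRaw16(const std::string& path, size_t width, size_t height) {
    std::ifstream in(path, std::ios::binary);
    if (!in) throw std::runtime_error("cannot open " + path);
    std::vector<uint16_t> pixels(width * height);
    in.read(reinterpret_cast<char*>(pixels.data()), pixels.size() * sizeof(uint16_t));
    if (static_cast<size_t>(in.gcount()) != pixels.size() * sizeof(uint16_t))
        throw std::runtime_error("short read: " + path);
    return pixels;
}
```

The buffer can then be wrapped in an OpenCV matrix without copying, e.g. `Mat img(height, width, CV_16UC1, pixels.data());`. Note that SURF detection expects an 8-bit image, so a conversion such as `img.convertTo(img8, CV_8U, 255.0 / 65535.0);` is needed first.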

    I have been suggested (by my employer who is not a programmer [I'm an intern btw]) to take a portion of the source image (not at the edges) and brute-force search for a same-sized portion with a high correlation in data values. I'm hoping there is a less-wasteful algorithm.
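For reference, the brute-force idea suggested here is essentially template matching by normalized cross-correlation (NCC). A self-contained sketch of just the scoring function (pure C++, no OpenCV; the function name and the flat row-major layout are my own choices for illustration):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Normalized cross-correlation of a w*h template against the equally sized
// window of the search image whose top-left corner is (ox, oy).
// Returns a score in [-1, 1]; 1 means a perfect (brightness-invariant) match.
double nccScore(const std::vector<double>& img, size_t imgW,
                const std::vector<double>& tpl, size_t w, size_t h,
                size_t ox, size_t oy) {
    double meanI = 0, meanT = 0;
    for (size_t y = 0; y < h; ++y)
        for (size_t x = 0; x < w; ++x) {
            meanI += img[(oy + y) * imgW + (ox + x)];
            meanT += tpl[y * w + x];
        }
    meanI /= w * h;
    meanT /= w * h;
    double num = 0, varI = 0, varT = 0;
    for (size_t y = 0; y < h; ++y)
        for (size_t x = 0; x < w; ++x) {
            double a = img[(oy + y) * imgW + (ox + x)] - meanI;
            double b = tpl[y * w + x] - meanT;
            num += a * b;
            varI += a * a;
            varT += b * b;
        }
    return num / std::sqrt(varI * varT);
}
```

Scanning this over every (ox, oy) is exactly the wasteful O(W·H·w·h) search the question worries about; OpenCV's `matchTemplate` with `CV_TM_CCOEFF_NORMED` computes the same score far faster, and the feature-based answer below avoids the exhaustive scan altogether.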

    Solution

    Here is a short code that does what you want (I use openCV 2.2):

    1. Suppose you have 2 images, srcImage and dstImage, that you want to align.
    2. The code is very simple. Use it as a basis for your algorithm.

    Code:

    // OpenCV 2.2 headers -- SURF still lived in features2d in that release
    #include <opencv2/opencv.hpp>
    using namespace cv;
    using namespace std;

    // Detect special points on each image that can be corresponded
    Ptr<FeatureDetector>  detector = new SurfFeatureDetector(2000);  // Detector for features
    
    vector<KeyPoint> srcFeatures;   // Detected key points on first image
    vector<KeyPoint> dstFeatures;
    detector->detect(srcImage,srcFeatures);
    detector->detect(dstImage,dstFeatures); 
    
    // Extract descriptors of the features
    SurfDescriptorExtractor extractor;  
    Mat srcDescriptors, dstDescriptors;
    extractor.compute(srcImage, srcFeatures, srcDescriptors);
    extractor.compute(dstImage, dstFeatures, dstDescriptors);
    
    // Match descriptors of 2 images (find pairs of corresponding points)
    BruteForceMatcher<L2<float> > matcher;      // Or use FlannBasedMatcher -- it scales better
    vector<DMatch> matches;
    matcher.match(srcDescriptors, dstDescriptors, matches);     
    
    // Extract pairs of points
    vector<int> pairOfsrcKP(matches.size()), pairOfdstKP(matches.size());
    for( size_t i = 0; i < matches.size(); i++ ){
        pairOfsrcKP[i] = matches[i].queryIdx;
        pairOfdstKP[i] = matches[i].trainIdx;
    }
    
    vector<Point2f> sPoints; KeyPoint::convert(srcFeatures, sPoints,pairOfsrcKP);
    vector<Point2f> dPoints; KeyPoint::convert(dstFeatures, dPoints,pairOfdstKP);
    
    // Matched pairs of 2D points. Those pairs will be used to calculate homography
    Mat src2Dfeatures;
    Mat dst2Dfeatures;
    Mat(sPoints).copyTo(src2Dfeatures);
    Mat(dPoints).copyTo(dst2Dfeatures);
    
    // Calculate homography
    vector<uchar> outlierMask;
    Mat H;
    H = findHomography( src2Dfeatures, dst2Dfeatures, outlierMask, RANSAC, 3);
    
    // Show the result (only for debug)
    if (debug){
       Mat outimg;
       drawMatches(srcImage, srcFeatures,dstImage, dstFeatures, matches, outimg, Scalar::all(-1), Scalar::all(-1),
                   reinterpret_cast<const vector<char>&> (outlierMask));
       imshow("Matches: Src image (left) to dst (right)", outimg);
       cvWaitKey(0);
    }
    
    // Now you have the resulting homography, i.e. H maps srcImage onto dstImage. Apply H using the code below
    Mat AlignedSrcImage;
    warpPerspective(srcImage, AlignedSrcImage, H, dstImage.size(), INTER_LINEAR, BORDER_CONSTANT);
    Mat AlignedDstImageToSrc;
    warpPerspective(dstImage, AlignedDstImageToSrc, H.inv(), srcImage.size(), INTER_LINEAR, BORDER_CONSTANT);
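For intuition about what the code above produces: findHomography returns a 3x3 matrix H that maps homogeneous source coordinates to destination coordinates, and warpPerspective simply applies that mapping per pixel. A plain-C++ sketch of the mapping for a single point (no OpenCV; the helper name is my own, and the translation-only H in the usage example is made up for illustration):

```cpp
#include <array>

// Apply a 3x3 homography H (row-major) to a 2D point (x, y).
// Homogeneous coordinates: (x, y, 1) -> (u, v, w); the result is (u/w, v/w).
std::array<double, 2> applyHomography(const std::array<double, 9>& H, double x, double y) {
    double u = H[0] * x + H[1] * y + H[2];
    double v = H[3] * x + H[4] * y + H[5];
    double w = H[6] * x + H[7] * y + H[8];
    return {u / w, v / w};
}
```

For cameras that differ only by a small shift and rotation, the perspective terms H[6] and H[7] should come out near zero. If you want to rule them out entirely, OpenCV's `estimateRigidTransform` fits a restricted (translation/rotation/scale) model instead of a full homography.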
    

