How can I optimize Multiple image stitching?

Question

I'm working on multiple image stitching in Visual Studio 2012, C++. I've modified stitching_detailed.cpp according to my requirements and it gives quality results. The problem is that it takes too much time to execute. For 10 images, it takes around 110 seconds.

Most of the time is taken by these steps:

1) Pairwise matching - Takes 55 seconds for 10 images! I'm using ORB to find feature points. Here's the code:

vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(false, 0.35);   // 0.35 = match confidence (nearest-neighbour ratio) threshold
matcher(features, pairwise_matches);          // matches every image against every other image
matcher.collectGarbage();

I tried using this code, as I already know the sequence of images:

vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(false, 0.35);

// Restrict matching to consecutive pairs (i, i+1), since the sequence is known
Mat matchMask(features.size(), features.size(), CV_8U, Scalar(0));
for (int i = 0; i < num_images - 1; ++i)
    matchMask.at<uchar>(i, i + 1) = 1;   // the mask is CV_8U, so index it as uchar
matcher(features, pairwise_matches, matchMask);

matcher.collectGarbage();

It definitely reduces the time (18 seconds), but does not produce the required results. Only 6 images get stitched (the last 4 are left out because the feature points of images 6 and 7 somehow don't match, and so the loop breaks).
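One possible workaround, sketched below under the assumption that the images really are in shooting order, is to widen the match mask from single consecutive pairs to a small band of neighbours, so that one weak pair (such as 6-7) does not break the chain. The band value is a hypothetical parameter to tune, and the snippet assumes the same variables and namespaces as the code above:

vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(false, 0.35);

// Allow each image to match its next `band` neighbours instead of only i+1,
// so the chain can survive a single weak consecutive pair.
const int band = 2;   // hypothetical value; increase it if gaps remain
Mat matchMask(num_images, num_images, CV_8U, Scalar(0));
for (int i = 0; i < num_images - 1; ++i)
    for (int j = i + 1; j <= min(num_images - 1, i + band); ++j)
        matchMask.at<uchar>(i, j) = 1;

matcher(features, pairwise_matches, matchMask);
matcher.collectGarbage();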

2) Compositing - Takes 38 seconds for 10 images! Here's the code:

for (int img_idx = 0; img_idx < num_images; ++img_idx)
{
    printf("Compositing image #%d\n",indices[img_idx]+1);

    // Read image and resize it if necessary
    full_img = imread(img_names[img_idx]);

    Mat K;
    cameras[img_idx].K().convertTo(K, CV_32F);

    // Warp the current image
    warper->warp(full_img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT, img_warped);

    // Warp the current image mask
    mask.create(full_img.size(), CV_8U);
    mask.setTo(Scalar::all(255));
    warper->warp(mask, K, cameras[img_idx].R, INTER_NEAREST, BORDER_CONSTANT, mask_warped);

    // Compensate exposure
    compensator->apply(img_idx, corners[img_idx], img_warped, mask_warped);

    img_warped.convertTo(img_warped_s, CV_16S);
    img_warped.release();
    full_img.release();
    mask.release();

    dilate(masks_warped[img_idx], dilated_mask, Mat());
    resize(dilated_mask, seam_mask, mask_warped.size());
    mask_warped = seam_mask & mask_warped;

    // Blend the current image
    blender->feed(img_warped_s, mask_warped, corners[img_idx]);
}

Mat result, result_mask;
blender->blend(result, result_mask);

The original image resolution is 4160*3120. I'm not using compression in compositing because it reduces quality; I've used compressed images in the rest of the code.

As you can see, I've modified the code and reduced the time, but I still want to reduce it as much as possible.

3) Finding feature points - with ORB. Takes 10 seconds for 10 images. Finds at most 1530 feature points per image.
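For reference, this step corresponds to the ORB feature-finding loop of stitching_detailed.cpp; a minimal sketch, assuming the OpenCV 2.4/3.x detail::OrbFeaturesFinder and that images[] already holds the working (compressed) copies:

OrbFeaturesFinder finder;                 // ORB-based finder with default parameters
vector<ImageFeatures> features(num_images);

for (int i = 0; i < num_images; ++i)
{
    finder(images[i], features[i]);       // detects ORB keypoints and descriptors
    features[i].img_idx = i;
}
finder.collectGarbage();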

55 + 38 + 10 = 103, plus 7 seconds for the rest of the code = 110 seconds.

When I run this code on Android, it takes almost the entire memory (RAM) of the smartphone to execute. How can I reduce time as well as memory consumption on an Android device? (The Android device I used has 2 GB of RAM.)

I've already optimized the rest of the code. Any help is much appreciated!

EDIT 1: I used image compression in the compositing step and the time got reduced from 38 seconds to 16 seconds. I also managed to reduce time in the rest of the code.
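For reference, a minimal sketch of that kind of compositing-time compression, modelled on stitching_detailed.cpp and assuming its variable names (work_scale, warped_image_scale, warper_creator, full_img_sizes, corners, sizes); compose_scale is a hypothetical factor, and the intrinsics, warper scale and projected corners/sizes all have to be rescaled consistently or the warped images won't line up:

// Hypothetical: composite at half resolution instead of the full 4160x3120
double compose_scale = 0.5;
double compose_work_aspect = compose_scale / work_scale;   // registration was done at work_scale

// Re-create the warper for the new compositing resolution
warper = warper_creator->create(float(warped_image_scale * compose_work_aspect));

// Update the intrinsics and recompute each image's projected corner and size
for (int i = 0; i < num_images; ++i)
{
    cameras[i].focal *= compose_work_aspect;
    cameras[i].ppx   *= compose_work_aspect;
    cameras[i].ppy   *= compose_work_aspect;

    Size sz(cvRound(full_img_sizes[i].width  * compose_scale),
            cvRound(full_img_sizes[i].height * compose_scale));

    Mat K;
    cameras[i].K().convertTo(K, CV_32F);
    Rect roi = warper->warpRoi(sz, K, cameras[i].R);
    corners[i] = roi.tl();
    sizes[i]   = roi.size();
}

// Inside the compositing loop, feed the downscaled image to warper->warp():
//     full_img = imread(img_names[img_idx]);
//     resize(full_img, img, Size(), compose_scale, compose_scale);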

So now, from 110 -> 85 seconds. Help me reduce the time for pairwise matching; I have no clue how to reduce it!

EDIT 2: I found the code for pairwise matching in matchers.cpp. I created my own function in the main code to optimize the time. For the compositing step, I used as much compression as possible without the final image losing clarity. For feature finding, I used image scaling to find features at a reduced image scale. Now I am able to stitch up to 50 images easily.
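A minimal sketch of that scaled feature finding, reusing the finder loop shown earlier; work_scale is a hypothetical factor, and the keypoints come back in downscaled coordinates, so the rest of the pipeline has to use (or compensate for) the same scale:

double work_scale = 0.4;                  // hypothetical downscale factor

// Inside the per-image feature-finding loop:
Mat full_img = imread(img_names[i]);
Mat img;
resize(full_img, img, Size(), work_scale, work_scale);
full_img.release();

finder(img, features[i]);                 // keypoints are in work_scale coordinates
features[i].img_idx = i;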

Answer

Since 55 to 18 seconds is a pretty good improvement, maybe you can control the matching process a little more. What I would suggest first, if you haven't already, is to learn to debug the process at every step so that you understand what goes wrong when an image isn't stitched. That way you will learn to control, for example, the number of ORB features you're detecting. Maybe there are cases where you can limit them and still get results, thus speeding up the process (this should speed up not only feature finding but also the matching).
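If a lower feature budget turns out to be enough, here is a sketch of capping it at construction time, assuming the OpenCV 2.4/3.x detail::OrbFeaturesFinder constructor (grid size, feature count, scale factor, pyramid levels); 500 is a hypothetical budget to experiment with:

// Roughly 500 ORB features per image instead of the default ~1500
OrbFeaturesFinder finder(Size(3, 1), 500, 1.3f, 5);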

Hopefully that will lead you to being able to detect the situation when, as you put it, the loop breaks, so that you can react to it accordingly. You would still match the sequence in a loop, saving time, but force the program to continue (or change the parameters and try to match the pair again) when you detect a problem with matching that particular pair.
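One way to detect such a break, sketched here on the assumption that pairwise_matches is laid out as the usual N x N table (entry (i, j) at index i * num_images + j) and that MatchesInfo exposes confidence and num_inliers; the thresholds are hypothetical values to tune:

const double conf_thresh = 1.0;   // hypothetical confidence threshold
const int    min_inliers = 6;     // hypothetical inlier-count threshold

for (int i = 0; i < num_images - 1; ++i)
{
    const MatchesInfo &m = pairwise_matches[i * num_images + (i + 1)];
    if (m.confidence < conf_thresh || m.num_inliers < min_inliers)
    {
        printf("Weak match between images %d and %d (confidence %f)\n", i, i + 1, m.confidence);
        // React here: re-match this pair with more features or a lower
        // match_conf, widen the mask band around it, or drop one image.
    }
}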

I don't think there is much room for improvement in the compositing process here, since you don't want to lose quality. What I would research, if I were you, is whether threading and parallel computing could help.
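As a starting point, here is a sketch of parallelising just the warping with cv::parallel_for_, assuming the warper_creator and warped_image_scale variables from stitching_detailed.cpp. Each iteration creates its own warper because a RotationWarper keeps internal projection state, and only the warping runs in parallel since the blender is fed sequentially; note that holding several warped full-resolution images in memory at once trades RAM for speed, which may be a problem on the 2 GB device:

class ParallelWarp : public ParallelLoopBody
{
public:
    ParallelWarp(const vector<Mat> &imgs, vector<Mat> &warped, vector<Point> &corners,
                 const vector<CameraParams> &cams, Ptr<WarperCreator> creator, float scale)
        : imgs_(imgs), warped_(warped), corners_(corners),
          cams_(cams), creator_(creator), scale_(scale) {}

    void operator()(const Range &range) const
    {
        for (int i = range.start; i < range.end; ++i)
        {
            Ptr<RotationWarper> warper = creator_->create(scale_);   // one warper per task
            Mat K;
            cams_[i].K().convertTo(K, CV_32F);
            corners_[i] = warper->warp(imgs_[i], K, cams_[i].R,
                                       INTER_LINEAR, BORDER_REFLECT, warped_[i]);
        }
    }

private:
    const vector<Mat> &imgs_;
    vector<Mat> &warped_;
    vector<Point> &corners_;
    const vector<CameraParams> &cams_;
    Ptr<WarperCreator> creator_;
    float scale_;
};

// Usage: warp everything in parallel, then run exposure compensation and
// blending sequentially over warped[] and warp_corners[] as before.
vector<Mat> warped(num_images);
vector<Point> warp_corners(num_images);
parallel_for_(Range(0, num_images),
              ParallelWarp(images, warped, warp_corners, cameras, warper_creator, warped_image_scale));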

This is an interesting and widespread issue - if you're able to speed it up without giving up quality, you should call LG or Google, since on my Nexus the algorithm is of really poor quality :) It's both slow and inaccurate.
