OpenCV extrinsic camera from feature points

Problem description

How do I retrieve the rotation matrix, the translation vector, and perhaps a scaling factor for each camera using OpenCV, given pictures of an object taken from each camera's viewpoint? For every picture I have the image coordinates of several feature points, but not all feature points are visible in every picture. I want to map the computed 3D coordinates of the object's feature points onto a slightly different object, in order to align the second object's shape with the first.

I heard it is possible using cv::calibrateCamera(...), but I can't quite get through it...
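For reference, here is a minimal sketch of a typical cv::calibrateCamera call (this snippet is illustrative, not part of the original question; the point containers and the function name are placeholders, and the call requires the 3D object coordinates of the features to be known for every view):

#include <opencv2/calib3d.hpp>
#include <vector>
using namespace cv;

void calibrateFromKnownPoints()
{
    // Placeholder inputs: one set of known 3D object points and the matching
    // 2D image points per picture (hypothetical, to be filled in).
    std::vector<std::vector<Point3f> > objectPoints;
    std::vector<std::vector<Point2f> > imagePoints;
    Size imageSize(640, 480);

    Mat cameraMatrix, distCoeffs;      // intrinsics, estimated by the call
    std::vector<Mat> rvecs, tvecs;     // one rotation/translation per picture
    double rms = calibrateCamera(objectPoints, imagePoints, imageSize,
                                 cameraMatrix, distCoeffs, rvecs, tvecs);
    (void)rms;                         // overall reprojection error of the fit
}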

Does anyone have experience with this kind of problem?

Recommended answer

I was confronted with the same problem as you in OpenCV. I had a stereo image pair and I wanted to compute the external parameters of the cameras and the world coordinates of all observed points. This problem has been treated here:

Berthold K. P. Horn. Relative orientation revisited. Artificial Intelligence Laboratory, Massachusetts Institute of Technology.

http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.4700

However, I wasn't able to find a suitable implementation for this problem (perhaps you will find one). Due to time constraints I couldn't work through all the maths in the paper and implement it myself, so I came up with a quick-and-dirty solution that works for me. I will explain what I did to solve it:

Assume we have two cameras, where the first camera has external parameters RT = Matx::eye(). Now make a guess about the rotation R of the second camera. For every pair of image points observed in both images, we compute the directions of their corresponding rays in world coordinates and store them in a 2D array dirs (the internal camera parameters are assumed to be known). We can do this since we assume we know the orientation of every camera. Now we build an overdetermined linear system AC = 0, where C is the centre of the second camera; each row of A expresses the constraint that C must be coplanar with the two rays of one point pair, i.e. that the two rays actually intersect. Here is the function to compute A:

// R: guessed rotation of the second camera; dirs(0, i) holds the ray of point
// i from the first camera, dirs(1, i) the ray from the second camera, which is
// rotated by R below (Array and toVec are the author's own helper types).
Mat buildA(Matx<double, 3, 3> &R, Array<Vec3d, 2> dirs)
{
    CV_Assert(dirs.size(0) == 2);
    int pointCount = dirs.size(1);
    Mat A(pointCount, 3, DataType<double>::type);
    Vec3d *a = (Vec3d *)A.data;
    for (int i = 0; i < pointCount; i++)
    {
        // Row i is the normal of the plane spanned by the two corresponding
        // rays; a[i] . C = 0 is the coplanarity constraint on the centre C.
        a[i] = dirs(0, i).cross(toVec(R*dirs(1, i)));
        double length = norm(a[i]);
        CV_Assert(length > 0.0);  // parallel rays would give a degenerate row
        a[i] *= (1.0/length);     // normalize the row
    }
    return A;
}
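The answer does not show how dirs is filled. As a minimal sketch of one way to do it (my own illustration, with a hypothetical helper name, assuming the pinhole intrinsic matrix K of each camera is known), a pixel can be back-projected into a unit ray direction in that camera's frame, which buildA then rotates by R for the second camera:

#include <opencv2/core.hpp>
using namespace cv;

// Back-project a pixel through the inverse intrinsics: d ~ K^-1 * (u, v, 1)^T.
// The result is a unit direction in the camera's own coordinate frame; for the
// first camera (extrinsics = identity) this is already a world direction.
Vec3d pixelToRay(const Matx33d &K, const Point2d &px)
{
    Vec3d d = K.inv() * Vec3d(px.x, px.y, 1.0);
    return d * (1.0 / norm(d)); // normalize to unit length
}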

Then calling cv::SVD::solveZ(A) will give you the least-squares solution of norm 1 to this system. This way, you obtain the rotation and translation of the second camera. However, since I only guessed the rotation of the second camera, I make several guesses about it (parameterized as a 3x1 vector omega, from which I compute the rotation matrix using cv::Rodrigues) and then refine each guess by solving the system AC = 0 repeatedly inside a Levenberg-Marquardt optimizer with a numeric Jacobian. It works for me, but it is a bit dirty, so if you have time, I encourage you to implement what is explained in the paper.

Here is the routine used inside the Levenberg-Marquardt optimizer to evaluate the residual vector:

void Stereo::eval(Mat &X, Mat &residues, Mat &weights)
{
    Matx<double, 3, 3> R2Ref = getRot(X); // Map the 3x1 rotation vector to a rotation matrix (cv::Rodrigues parameterization)
    Mat A = buildA(R2Ref, _dirs);         // Compute the A matrix that measures the distance between ray pairs
    Vec3d c;
    Mat cMat(c, false);
    SVD::solveZ(A, cMat);                 // Find the optimal centre of the second camera at distance 1 from the first camera
    residues = A*cMat;                    // Compute the output vector whose norm we are minimizing
    weights.setTo(1.0);
}
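To make the last step explicit, here is a small sketch of how the optimizer's result might be turned into the second camera's extrinsics (my own illustration with a hypothetical function name, not code from the answer; the world-to-camera convention below is the one used by OpenCV's calibrateCamera/solvePnP, and since solveZ returns C with unit norm, the overall scale of the reconstruction remains undetermined):

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
using namespace cv;

// omega: the 3x1 rotation vector the optimizer settled on (the rotation passed
// to buildA); A: the matrix returned by buildA for that rotation.
void extrinsicsFromOmega(const Vec3d &omega, const Mat &A,
                         Matx33d &Rwc, Vec3d &t)
{
    Mat Rmat;
    Rodrigues(omega, Rmat);              // rotation vector -> 3x3 rotation matrix
    Matx33d Rcw(Rmat.ptr<double>());     // rotation of camera 2 as used in buildA

    Vec3d C;                             // centre of camera 2, |C| = 1 (scale is arbitrary)
    Mat cMat(C, false);
    SVD::solveZ(A, cMat);                // least-squares solution of A*C = 0

    Rwc = Rcw.t();                       // world -> camera rotation (assuming R maps camera-2 rays into the world frame, as buildA suggests)
    t   = (Rwc * C) * -1.0;              // translation: t = -R * C
}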

By the way, I searched a little more on the internet and found some other code that could be useful for computing the relative orientation between cameras. I haven't tried it yet, but it looks promising:

http://www9.in.tum.de/praktika/ppbv.WS02/doc/html/reference/cpp/toc_tools_stereo.html

http://lear.inrialpes.fr/people/triggs/src/

http://www.maths.lth.se/vision/downloads/
