How to correctly calibrate my camera with a wide angle lens using openCV?

Problem description

I am trying to calibrate a camera with a fisheye lens. I therefore used the fisheye lens module, but keep getting strange results no matter what distortion parameters I fix. This is the input image I use: https://i.imgur.com/apBuAwF.png

where the red circles indicate the corners I use to calibrate my camera.

This is the best I could get, output: https://imgur.com/a/XeXk5

I currently don't know by heart what the camera sensor dimensions are, but based on the focal length in pixels that is being calculated in my intrinsic matrix, I deduce my sensor size is approximately 3.3mm (assuming my physical focal length is 1.8mm), which seems realistic to me. Yet, when undistorting my input image I get nonsense. Could someone tell me what I may be doing incorrectly?
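
For the record, this deduction rests on the simple pinhole relation between the focal length in pixels and the pixel pitch. The sketch below only illustrates that relation: the 263.73 value is the fx reported in the K matrix further down, the 1.8 mm focal length is the assumption stated above, and whether fx itself is trustworthy is another question.

#include <cstdio>

int main()
{
    // Pinhole relation: fx_px = focal_length_mm / pixel_pitch_mm
    double fx_px = 263.73;          // fx from the calibrated K matrix below
    double focal_length_mm = 1.8;   // assumed physical focal length
    double pixel_pitch_mm = focal_length_mm / fx_px;  // roughly 0.0068 mm per pixel
    std::printf("pixel pitch: %.4f mm\n", pixel_pitch_mm);
    // The sensor width then follows as pixel_pitch_mm * image_width_px,
    // using the pixel width of the image that was actually calibrated.
    return 0;
}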

The matrices and rms output by the calibration:

K:[263.7291703200009, 0, 395.1618975493187;
 0, 144.3800397321767, 188.9308218101271;
 0, 0, 1]

D:[0, 0, 0, 0]

rms: 9.27628

My code:

#include <opencv2/opencv.hpp>
#include "opencv2/core.hpp"
#include "opencv2/imgcodecs.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/ccalib/omnidir.hpp"

using namespace std;
using namespace cv;

vector<vector<Point2d> > points2D;
vector<vector<Point3d> > objectPoints;

Mat src;

//so that I don't have to select them manually every time
void initializePoints2D()
{
    points2D[0].push_back(Point2d(234, 128));
    points2D[0].push_back(Point2d(300, 124));
    points2D[0].push_back(Point2d(381, 126));
    points2D[0].push_back(Point2d(460, 127));
    points2D[0].push_back(Point2d(529, 137));
    points2D[0].push_back(Point2d(207, 147));
    points2D[0].push_back(Point2d(280, 147));
    points2D[0].push_back(Point2d(379, 146));
    points2D[0].push_back(Point2d(478, 153));
    points2D[0].push_back(Point2d(551, 165));
    points2D[0].push_back(Point2d(175, 180));
    points2D[0].push_back(Point2d(254, 182));
    points2D[0].push_back(Point2d(377, 185));
    points2D[0].push_back(Point2d(502, 191));
    points2D[0].push_back(Point2d(586, 191));
    points2D[0].push_back(Point2d(136, 223));
    points2D[0].push_back(Point2d(216, 239));
    points2D[0].push_back(Point2d(373, 253));
    points2D[0].push_back(Point2d(534, 248));
    points2D[0].push_back(Point2d(624, 239));
    points2D[0].push_back(Point2d(97, 281));
    points2D[0].push_back(Point2d(175, 322));
    points2D[0].push_back(Point2d(370, 371));
    points2D[0].push_back(Point2d(578, 339));
    points2D[0].push_back(Point2d(662, 298));


    for(int j=0; j<25;j++)
    {   
        circle(src, points2D[0].at(j), 5, Scalar(0, 0, 255), 1, 8, 0);
    }

    imshow("src with circles", src);
    waitKey(0);
}

int main(int argc, char** argv)
{
    Mat srcSaved;

    src = imread("images/frontCar.png");
    resize(src, src, Size(), 0.5, 0.5);
    src.copyTo(srcSaved);

    vector<Point3d> objectPointsRow;
    vector<Point2d> points2DRow;
    objectPoints.push_back(objectPointsRow);
    points2D.push_back(points2DRow);

    for(int i=0; i<5;i++)
    {

        for(int j=0; j<5;j++)
        {
            objectPoints[0].push_back(Point3d(5*j,5*i,1));        
        }
    }

    initializePoints2D();
    cv::Matx33d K;
    cv::Vec4d D;
    std::vector<cv::Vec3d> rvec;
    std::vector<cv::Vec3d> tvec;


    int flag = 0;
    flag |= cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC;
    flag |= cv::fisheye::CALIB_CHECK_COND;
    flag |= cv::fisheye::CALIB_FIX_SKEW; 
    flag |= cv::fisheye::CALIB_FIX_K1; 
    flag |= cv::fisheye::CALIB_FIX_K2; 
    flag |= cv::fisheye::CALIB_FIX_K3; 
    flag |= cv::fisheye::CALIB_FIX_K4; 


    double rms = cv::fisheye::calibrate(
        objectPoints, points2D, src.size(),
        K, D, rvec, tvec, flag, cv::TermCriteria(3, 20, 1e-6)
    );

    Mat output;
    cerr<<"K:"<<K<<endl;
    cerr<<"D:"<<D<<endl;
    cv::fisheye::undistortImage(srcSaved, output, K, D);
    cerr<<"rms: "<<rms<<endl;
    imshow("output", output);
    waitKey(0);

    cerr<<"image .size: "<<srcSaved.size()<<endl;

}

If anybody has an idea, feel free to share some code, either in Python or in C++. Whatever floats your boat.

As you may have noticed, I don't use a black and white checkerboard for the calibration, but the corners of the tiles constituting my carpet. At the end of the day the goal, I think, is to get corner coordinates that represent samples along the distortion radii. The carpet is to some extent the same as a checkerboard; the only difference, once again I think, is that those corners on the carpet have fewer high-frequency edges than the ones on a black and white checkerboard.

I know the number of pictures is very limited, i.e. only one. I expect the image to be undistorted to some extent, but I also expect the undistortion to be done very well. But in this case the image output looks like total nonsense.

I ended up using this image with a chessboard: https://imgur.com/a/WlLBR provided by this website: https://sites.google.com/site/scarabotix/ocamcalib-toolbox/ocamcalib-toolbox-download-page But results are still very poor: diagonal lines like the other output image I posted.

Thanks

Answer

Your first problem is that you are only using one image. Even if you had an ideal pinhole camera with no distortion, you would not be able to estimate the intrinsics from a single image of co-planar points. One image of co-planar points simply does not give you enough constraints to solve for the intrinsics.

You need at least two images at different 3D orientations, or a 3D calibration rig, where the points are not co-planar. Of course, in practice you need at least 20 images for accurate calibration.
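
For reference, a minimal sketch of the usual multi-view approach with cv::fisheye::calibrate follows. The 9x6 inner-corner board, the 25 mm square size and the calib_XX.png file names are assumptions made purely for illustration, not something from your setup:

#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    // Assumed pattern: a 9x6 inner-corner checkerboard with 25 mm squares,
    // photographed about 20 times under different 3D orientations.
    const Size boardSize(9, 6);
    const float squareSize = 25.0f; // mm

    // One planar reference point set (z = 0), reused for every view.
    vector<Point3f> board;
    for (int i = 0; i < boardSize.height; i++)
        for (int j = 0; j < boardSize.width; j++)
            board.push_back(Point3f(j * squareSize, i * squareSize, 0.0f));

    vector<vector<Point3f> > objectPoints;
    vector<vector<Point2f> > imagePoints;
    Size imageSize;

    for (int k = 0; k < 20; k++) // hypothetical file names calib_00.png ... calib_19.png
    {
        Mat img = imread(format("calib_%02d.png", k), IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();

        vector<Point2f> corners;
        if (!findChessboardCorners(img, boardSize, corners)) continue;

        // Sub-pixel refinement has a large influence on the final parameters.
        cornerSubPix(img, corners, Size(11, 11), Size(-1, -1),
                     TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30, 1e-6));

        objectPoints.push_back(board);
        imagePoints.push_back(corners);
    }

    if (imagePoints.size() < 2)
    {
        cerr << "not enough usable views" << endl;
        return 1;
    }

    Matx33d K;
    Vec4d D;
    vector<Vec3d> rvecs, tvecs;
    int flags = fisheye::CALIB_RECOMPUTE_EXTRINSIC | fisheye::CALIB_FIX_SKEW;

    double rms = fisheye::calibrate(objectPoints, imagePoints, imageSize,
                                    K, D, rvecs, tvecs, flags,
                                    TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 100, 1e-6));

    cout << "rms: " << rms << endl << "K: " << K << endl << "D: " << D << endl;
    return 0;
}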

Your second problem is that you are using a carpet as the checkerboard. You need to be able to detect the points in the image with sub-pixel accuracy. Small localization errors result in large errors in the estimated camera parameters. I seriously doubt that you can detect the corners of the squares of your carpet with any reasonable accuracy. In fact, you cannot even measure the actual point locations on the carpet very accurately, because it is fuzzy.
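
If you do keep hand-picked points like the ones in your initializePoints2D(), a sub-pixel refinement pass is the least you should do. This is only a sketch, assuming the clicked coordinates and the colour calibration image are available; how much it actually helps on a fuzzy carpet is doubtful:

#include <opencv2/opencv.hpp>
#include <vector>

// Refine roughly clicked corner coordinates to sub-pixel accuracy.
// `image` is the calibration photo, `clicked` the hand-picked points
// (for example the contents of points2D[0] from the question).
std::vector<cv::Point2f> refineClickedCorners(const cv::Mat& image,
                                              const std::vector<cv::Point2d>& clicked)
{
    cv::Mat gray;
    cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);

    std::vector<cv::Point2f> corners;
    for (size_t i = 0; i < clicked.size(); i++)
        corners.push_back(cv::Point2f((float)clicked[i].x, (float)clicked[i].y));

    // cornerSubPix moves each point towards the nearest gradient corner in a
    // small window; it only works as well as the underlying edges are sharp.
    cv::cornerSubPix(gray, corners, cv::Size(7, 7), cv::Size(-1, -1),
                     cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 30, 0.01));
    return corners;
}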

Good luck!
