Android & OpenCV: Homography to Camera Pose considering Camera Intrinsics and Backprojection


Question


Libs: OpenCV Target: Android (OpenCV4Android)


I try to compute the homography of a world plane (e.g. monitor screen) to get the camera pose, transform it, and reproject the points back for tracking tasks. I'm using OpenCV's findHomography() / getPerspectiveTransform() to get the homography. Reprojecting the points with perspectiveTransform() (as explained here: http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html) works pretty well. The "screenPoints" are the world coordinates of the monitor edges (using the aspect ratio and a z-value of 0) and the "imagePoints" are the x/y-coordinates of the screen edges in the image.

Mat homography = org.opencv.imgproc.Imgproc.getPerspectiveTransform(screenPoints, imagePoints);


I have the camera calibration matrix (I have used the matlab calibration toolbox) and I found a hint (in the comments @ https://dsp.stackexchange.com/questions/2736/step-by-step-camera-pose-estimation-for-visual-tracking-and-planar-markers) for considering the camera parameters in the homography.


H' = K^-1 * H


(H' - Homography-Matrix considering camera calibration, H - Homography-Matrix, K^-1 - inverse camera calibration matrix).
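The code below assumes a Mat named `intrinsic` that already holds K. As a hedged sketch, it could be filled from the calibration results like this (fx, fy, cx and cy are placeholder values here, not actual calibration output):

```java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class IntrinsicsSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // native OpenCV library

        // Placeholder values - substitute the results of the
        // Matlab calibration toolbox here.
        double fx = 1000.0, fy = 1000.0; // focal lengths in pixels
        double cx = 640.0,  cy = 360.0;  // principal point

        // K = [fx 0 cx; 0 fy cy; 0 0 1]
        Mat intrinsic = new Mat(3, 3, CvType.CV_32FC1);
        intrinsic.put(0, 0,
                fx,  0.0, cx,
                0.0, fy,  cy,
                0.0, 0.0, 1.0);
    }
}
```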

Mat intrinsicInverse = new Mat(3, 3, CvType.CV_32FC1);
Core.invert(intrinsic, intrinsicInverse);
intrinsicInverse.convertTo(intrinsicInverse, CvType.CV_32FC1);          
homography.convertTo(homography, CvType.CV_32FC1);
// compute H respect the intrinsics
Core.gemm(intrinsicInverse, homography, 1, new Mat(), 0, homography);


My next step is to compute the camera pose from the homography, as described here: Computing camera pose with homography matrix based on 4 coplanar points.


Since I'm trying to do this on Android, I had to port the C++ code to Java:

private Mat cameraPoseFromHomography(Mat h) {
    Log.d("DEBUG", "cameraPoseFromHomography: homography " + matToString(h));

    Mat pose = Mat.eye(3, 4, CvType.CV_32FC1);  // 3x4 matrix, the camera pose
    float norm1 = (float) Core.norm(h.col(0));
    float norm2 = (float) Core.norm(h.col(1));
    float tnorm = (norm1 + norm2) / 2.0f;       // Normalization value

    Mat normalizedTemp = new Mat();
    Core.normalize(h.col(0), normalizedTemp);
    normalizedTemp.convertTo(normalizedTemp, CvType.CV_32FC1);
    normalizedTemp.copyTo(pose.col(0));

    Core.normalize(h.col(1), normalizedTemp);
    normalizedTemp.convertTo(normalizedTemp, CvType.CV_32FC1);    
    normalizedTemp.copyTo(pose.col(1));

    Mat p3 = pose.col(0).cross(pose.col(1));
    p3.copyTo(pose.col(2));

    // Translation: third column of H, scaled by the normalization value.
    // h was converted to CV_32FC1 above, so read it into a float[].
    float[] buffer = new float[3];
    h.col(2).get(0, 0, buffer);
    pose.put(0, 3, buffer[0] / tnorm);
    pose.put(1, 3, buffer[1] / tnorm);
    pose.put(2, 3, buffer[2] / tnorm);

    return pose;
}


I can't check whether the code is doing the right thing, but it runs. At this point I assume I have the full camera pose, taking the camera calibration into account.


As described here http://opencv.willowgarage.com/documentation/python/calib3d_camera_calibration_and_3d_reconstruction.html#rodrigues2, the reprojection of a 3D point is just


p = K * CP * P


(p - 2D position, K - calibration matrix, CP - camera pose, P - 3D point)

    Core.gemm(intrinsic, cameraPosition, 1, new Mat(), 0, vec4t);
    Core.gemm(vec4t, point, 1, new Mat(), 0, result);


The result is far away from the source image positions of the screen edges. But I can identify all three edges by their relative differences - so it might just be some factor that is wrong.
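One thing worth checking when the reprojection is off by "some factor" is the perspective divide: p = K * CP * P yields homogeneous coordinates, and the pixel position is only obtained after dividing by the third component. A minimal sketch with a toy pose (identity rotation, the camera 5 units in front of the world origin; the intrinsics are placeholder values, not actual calibration results):

```java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class ReprojectSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Hypothetical intrinsics: fx = fy = 1000, principal point (640, 360)
        Mat intrinsic = new Mat(3, 3, CvType.CV_32FC1);
        intrinsic.put(0, 0, 1000, 0, 640, 0, 1000, 360, 0, 0, 1);

        // Toy pose [R|t]: R = I, t = (0, 0, 5)
        Mat pose = Mat.eye(3, 4, CvType.CV_32FC1);
        pose.put(2, 3, 5.0);

        // Homogeneous 4x1 world point (X, Y, Z, 1) - here the world origin
        Mat point = new Mat(4, 1, CvType.CV_32FC1);
        point.put(0, 0, 0.0, 0.0, 0.0, 1.0);

        Mat kp = new Mat(), p = new Mat();
        Core.gemm(intrinsic, pose, 1, new Mat(), 0, kp); // K * CP (3x4)
        Core.gemm(kp, point, 1, new Mat(), 0, p);        // homogeneous p (3x1)

        // Perspective divide turns homogeneous coordinates into pixels
        double w = p.get(2, 0)[0];
        double u = p.get(0, 0)[0] / w;
        double v = p.get(1, 0)[0] / w;
        System.out.println(u + ", " + v); // prints 640.0, 360.0
    }
}
```

For this toy pose the world origin lies on the optical axis, so it must land exactly on the principal point; if the divide by w is skipped, the result is scaled by the depth instead.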


This is the first time I'm doing such a computer vision task, and it's possible I did something fundamentally wrong. I have the "Multiple View Geometry" book by Hartley and Zisserman and I read all the related parts - but to be honest, I didn't get most of it.

Update:


Found a bug in my camera matrix - the implementation above works fine!

Answer


Got it to work another way. Instead of using findHomography()/getPerspectiveTransform(), I found another function called solvePnP(), which returns the camera pose based on world and image points and an intrinsic camera matrix.


Using that function in combination with projectPoints(), I was able to reproject the 3D points back into the image.


In the case of the screen edges, they are placed at the right spot in the image.
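The solvePnP()/projectPoints() route can be sketched as follows. All point values and intrinsics here are placeholders (a 16:9 screen plane at z = 0 and made-up corner detections), and distortion is assumed to be zero:

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;

public class SolvePnPSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Hypothetical intrinsic camera matrix
        Mat intrinsic = new Mat(3, 3, CvType.CV_64FC1);
        intrinsic.put(0, 0, 1000, 0, 640, 0, 1000, 360, 0, 0, 1);

        // World coordinates of the four screen corners (z = 0) ...
        MatOfPoint3f objectPoints = new MatOfPoint3f(
                new Point3(0, 0, 0), new Point3(1.6, 0, 0),
                new Point3(1.6, 0.9, 0), new Point3(0, 0.9, 0));
        // ... and where they were detected in the image (placeholder values)
        MatOfPoint2f imagePoints = new MatOfPoint2f(
                new Point(400, 200), new Point(900, 220),
                new Point(880, 520), new Point(420, 500));

        // Camera pose: rotation (as a Rodrigues vector) and translation
        Mat rvec = new Mat(), tvec = new Mat();
        MatOfDouble distCoeffs = new MatOfDouble(0, 0, 0, 0); // no distortion
        Calib3d.solvePnP(objectPoints, imagePoints, intrinsic, distCoeffs,
                rvec, tvec);

        // Reproject the world points back into the image for checking
        MatOfPoint2f reprojected = new MatOfPoint2f();
        Calib3d.projectPoints(objectPoints, rvec, tvec, intrinsic, distCoeffs,
                reprojected);
    }
}
```

The reprojected points should then land close to the detected image points; comparing the two is a quick sanity check on the estimated pose.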

Update:


I found a bug in my implementation - my camera intrinsic matrix was wrong. The camera-pose-from-homography implementation above is working for me!

