Android & OpenCV: Homography to Camera Pose considering Camera Intrinsics and Backprojection [Solved]


Problem Description


Libs: OpenCV
Target: Android (OpenCV4Android)


I try to compute the homography of a world plane (e.g. a monitor screen) to get the camera pose, transform it, and reproject the points back for tracking tasks. I'm using OpenCV's findHomography() / getPerspectiveTransform() to get the homography. The reprojection of the points using perspectiveTransform() (as explained here: http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html) works pretty well. The "screenPoints" are the world coordinates of the monitor edges (using the aspect ratio and a z-value of 0) and the "imagePoints" are the x/y-coordinates of the screen edges in the image.

Mat homography = org.opencv.imgproc.Imgproc.getPerspectiveTransform(screenPoints, imagePoints);


I have the camera calibration matrix (I used the MATLAB calibration toolbox), and I found a hint (in the comments at http://dsp.stackexchange.com/questions/2736/step-by-step-camera-pose-estimation-for-visual-tracking-and-planar-markers) for considering the camera parameters in the homography:


H' = K^-1 * H


(H' - Homography-Matrix considering camera calibration, H - Homography-Matrix, K^-1 - inverse camera calibration matrix).

Mat intrinsicInverse = new Mat(3, 3, CvType.CV_32FC1);
Core.invert(intrinsic, intrinsicInverse);
intrinsicInverse.convertTo(intrinsicInverse, CvType.CV_32FC1);          
homography.convertTo(homography, CvType.CV_32FC1);
// compute H' = K^-1 * H, taking the intrinsics into account
// (write to a separate output Mat: passing "homography" as both gemm
// input and output risks reading values that were already overwritten)
Mat hPrime = new Mat();
Core.gemm(intrinsicInverse, homography, 1, new Mat(), 0, hPrime);
hPrime.copyTo(homography);
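To sanity-check the H' = K^-1 * H step without OpenCV, here is a minimal plain-Java sketch. For a pinhole intrinsic matrix K the inverse has a simple closed form, so no numeric inversion is needed; the focal lengths and principal point below are made-up illustration values, not taken from the question.

```java
// Plain-Java sketch of H' = K^-1 * H (no OpenCV, illustrative values only).
public class HomographyNormalization {
    // Analytic inverse of K = [[fx,0,cx],[0,fy,cy],[0,0,1]].
    static double[][] invertIntrinsics(double fx, double fy, double cx, double cy) {
        return new double[][] {
            {1.0 / fx, 0.0,      -cx / fx},
            {0.0,      1.0 / fy, -cy / fy},
            {0.0,      0.0,       1.0}
        };
    }

    // Plain 3x3 matrix product.
    static double[][] multiply(double[][] a, double[][] b) {
        double[][] c = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    public static void main(String[] args) {
        double[][] kInv = invertIntrinsics(800, 800, 320, 240); // made-up K
        double[][] h = {{1, 0, 320}, {0, 1, 240}, {0, 0, 1}};   // toy homography
        double[][] hPrime = multiply(kInv, h);
        System.out.println(hPrime[0][2]); // translation part with intrinsics removed
    }
}
```

Since the toy homography translates by exactly the principal point, the normalized translation entries come out (numerically) zero, which is an easy way to spot whether K was applied in the right direction.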


My next step is to compute the camera pose from the homography, as described here: Computing camera pose with homography matrix based on 4 coplanar points.


Since I'm trying to do this on Android, I had to port the C++ code to Java:

private Mat cameraPoseFromHomography(Mat h) {
    Log.d("DEBUG", "cameraPoseFromHomography: homography " + matToString(h));

    Mat pose = Mat.eye(3, 4, CvType.CV_32FC1);  // 3x4 matrix, the camera pose
    float norm1 = (float) Core.norm(h.col(0));
    float norm2 = (float) Core.norm(h.col(1));
    float tnorm = (norm1 + norm2) / 2.0f;       // normalization value

    Mat normalizedTemp = new Mat();
    Core.normalize(h.col(0), normalizedTemp);   // first rotation column r1
    normalizedTemp.convertTo(normalizedTemp, CvType.CV_32FC1);
    normalizedTemp.copyTo(pose.col(0));

    Core.normalize(h.col(1), normalizedTemp);   // second rotation column r2
    normalizedTemp.convertTo(normalizedTemp, CvType.CV_32FC1);    
    normalizedTemp.copyTo(pose.col(1));

    Mat p3 = pose.col(0).cross(pose.col(1));    // r3 = r1 x r2
    p3.copyTo(pose.col(2));

    // Translation: third column of H scaled by the average column norm.
    // Mat.get(row, col) returns a double[] for any Mat depth, whereas
    // get(row, col, double[]) only works for CV_64F and would throw here.
    pose.put(0, 3, h.get(0, 2)[0] / tnorm);
    pose.put(1, 3, h.get(1, 2)[0] / tnorm);
    pose.put(2, 3, h.get(2, 2)[0] / tnorm);

    return pose;
}
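The steps in the method above can be sanity-checked without OpenCV. This plain-Java sketch implements the same math on double arrays: normalize the first two columns of H' to get r1 and r2, take r3 = r1 x r2, and divide the third column by the average column norm to get t. The sample input in the test is a made-up, already-orthogonal matrix for illustration.

```java
// Plain-Java version of the pose-from-homography math (no OpenCV).
public class PoseFromHomography {
    static double norm(double[] v) {
        return Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    }

    static double[] cross(double[] a, double[] b) {
        return new double[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }

    // hCols: the three columns of H' (already multiplied by K^-1).
    // Returns {r1, r2, r3, t} as four 3-vectors.
    static double[][] pose(double[][] hCols) {
        double tnorm = (norm(hCols[0]) + norm(hCols[1])) / 2.0;
        double[] r1 = new double[3], r2 = new double[3], t = new double[3];
        for (int i = 0; i < 3; i++) {
            r1[i] = hCols[0][i] / norm(hCols[0]); // unit first rotation column
            r2[i] = hCols[1][i] / norm(hCols[1]); // unit second rotation column
            t[i]  = hCols[2][i] / tnorm;          // translation, scale removed
        }
        double[] r3 = cross(r1, r2);              // completes the rotation
        return new double[][] { r1, r2, r3, t };
    }
}
```

One thing this makes explicit: r1 and r2 of a real homography are only approximately orthogonal, so r3 from the cross product completes a rotation matrix only up to that approximation.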


I can't check whether the code is doing the right thing, but it runs. At this point I assume I have the full camera pose, taking the camera calibration into account.


As described here http://opencv.willowgarage.com/documentation/python/calib3d_camera_calibration_and_3d_reconstruction.html#rodrigues2, the reprojection of a 3D point is just


p = K * CP * P


(p - 2D position, K - calibration matrix, CP - camera pose, P - 3D point)

    Core.gemm(intrinsic, cameraPosition, 1, new Mat(), 0, vec4t); // K * CP (3x4)
    Core.gemm(vec4t, point, 1, new Mat(), 0, result);             // (K * CP) * P
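For comparison, here is the same projection p = K * CP * P as a plain-Java sketch (no OpenCV, made-up values). One step worth highlighting is the final division by the homogeneous coordinate w; leaving it out scales every projected point by a per-point factor, which is one plausible cause of the kind of constant-factor error described in the question.

```java
// Plain-Java sketch of p = K * CP * P with the perspective divide.
public class Project {
    // k: 3x3 intrinsics, cp: 3x4 pose [R|t], pw: 3D world point.
    static double[] project(double[][] k, double[][] cp, double[] pw) {
        double[] ph = {pw[0], pw[1], pw[2], 1.0};  // homogeneous world point
        double[] cam = new double[3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 4; j++)
                cam[i] += cp[i][j] * ph[j];        // CP * P (camera frame)
        double[] img = new double[3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                img[i] += k[i][j] * cam[j];        // K * (CP * P)
        // Perspective divide: without this, points are off by a scale factor.
        return new double[] { img[0] / img[2], img[1] / img[2] };
    }
}
```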


The result is far away from the source-image positions of the screen edges. But I can identify all three edges by their relative differences - so it might just be some factor which is wrong.


It's the first time I'm doing such a computer vision task, and it's possible I did something basically wrong. I have the "Multiple View Geometry" book from Zisserman and I read all related parts - but to be honest, I didn't get most of it...


Thanks for your time & help!

Update:


Found a bug in my camera matrix - the implementation above works just fine!

Answer


Got it to work another way. Instead of using findHomography() / getPerspectiveTransform(), I found another function, solvePnP(), which returns the camera pose based on world and image points and an intrinsic camera matrix.


Using that function in combination with the projectPoints() method, I was able to reproject the 3D points back to the image.


In the case of the screen edges, they are placed on the right spot in the image.

Update:


I found a bug in my implementation - my camera intrinsic matrix was wrong. The camera-pose-from-homography implementation above works for me!

