Projecting Tango 3D point to screen (Google Project Tango)


Problem Description



Project Tango provides a point cloud. How can you get the pixel position on screen of a 3D point (given in meters) from the point cloud?

I tried using the projection matrix, but I get very small values (0.5, 1.3, etc.) instead of pixel values such as 1234, 324.

I have included the code I tried:

    //Get the current projection matrix
    Matrix4 projMatrix =  mRenderer.getCurrentCamera().getProjectionMatrix();



    //Get all the points in the pointcloud and store them as 3D points
    FloatBuffer pointsBuffer =  mPointCloudManager.updateAndGetLatestPointCloudRenderBuffer().floatBuffer;
    Vector3[] points3D = new Vector3[pointsBuffer.capacity()/3];

    int j =0;
    for (int i = 0; i <= pointsBuffer.capacity() - 3; i = i + 3) {

        points3D[j]= new Vector3(
                pointsBuffer.get(i),
                pointsBuffer.get(i+1),
                pointsBuffer.get(i+2));
        //Log.v("Points3d", "J: "+ j + " X: " +points3D[j].x + "\tY: "+ points3D[j].y +"\tZ: "+ points3D[j].z );
        j++;
    }


    //Get the projection of the points in the screen.
    Vector3[] points2D = new Vector3[points3D.length];
    for (int i = 0; i < points3D.length; i++)
    {
        Log.v("Points", "X: " +points3D[i].x + "\tY: "+ points3D[i].y +"\tZ: "+ points3D[i].z );
        points2D[i] = points3D[i].multiply(projMatrix);
        Log.v("Points", "pX: " +points2D[i].x + "\tpY: "+ points2D[i].y +"\tpZ: "+ points2D[i].z );
    }
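
A note on why those values come out so small: an OpenGL-style projection matrix does not produce pixels. After the perspective divide it yields normalized device coordinates in roughly [-1, 1], and only a viewport transform turns those into pixel coordinates. Below is a minimal illustrative sketch of that mapping; the column-major double[16] matrix layout and the screenWidth/screenHeight parameters are assumptions for illustration, not part of the Tango sample:

    // Sketch: mapping a projection-matrix result to pixel coordinates.
    // m is a column-major 4x4 projection matrix (OpenGL layout);
    // screenWidth/screenHeight are assumed viewport dimensions.
    static double[] ndcToPixels(double[] m, double x, double y, double z,
                                int screenWidth, int screenHeight) {
        // clip = M * (x, y, z, 1)
        double clipX = m[0] * x + m[4] * y + m[8]  * z + m[12];
        double clipY = m[1] * x + m[5] * y + m[9]  * z + m[13];
        double clipW = m[3] * x + m[7] * y + m[11] * z + m[15];
        double ndcX = clipX / clipW; // perspective divide -> roughly [-1, 1]
        double ndcY = clipY / clipW;
        // Viewport transform: NDC -> pixels, with Y flipped for screen space
        double px = (ndcX * 0.5 + 0.5) * screenWidth;
        double py = (1.0 - (ndcY * 0.5 + 0.5)) * screenHeight;
        return new double[] { px, py };
    }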

The example I'm using is the Point Cloud Java sample, which can be found here: https://github.com/googlesamples/tango-examples-java


UPDATE

    TangoCameraIntrinsics ccIntrinsics = mTango.getCameraIntrinsics(TangoCameraIntrinsics.TANGO_CAMERA_COLOR);
    double fx = ccIntrinsics.fx;
    double fy = ccIntrinsics.fy;
    double cx = ccIntrinsics.cx;
    double cy = ccIntrinsics.cy;

    double[][] projMatrix = new double[][] {
            {fx, 0 , -cx},
            {0,  fy, -cy},
            {0,  0,    1}
    };

Then, to compute the projected points, I use:

    for (int i = 0; i < points3D.length; i++)
    {

        double[][] point = new double[][] {
                {points3D[i].x},
                {points3D[i].y},
                {points3D[i].z}
        };

        double [][] point2d = CustomMatrix.multiplyByMatrix(projMatrix, point);

        points2D[i] = new Vector2(0,0);
        if(point2d[2][0]!=0)
        {
            Log.v("temp point", "pX: " +point2d[0][0]/point2d[2][0]+" pY: " +point2d[1][0]/point2d[2][0] );
            points2D[i] = new Vector2(point2d[0][0]/point2d[2][0],point2d[1][0]/point2d[2][0]);
        }

    }

But I think the results are still not what is expected; for instance, I get results like:

pX: -175.58042313027244 pY: -92.573740812066

which does not look right to me.


UPDATE: Using the color camera intrinsics as suggested gives better results, but the points are still negative: pX: -1127.8086915171814 pY: -652.5887102192332

Would it be OK to just multiply them by -1?

Solution

You have to multiply the 3D point by the RGB camera's intrinsics matrix to obtain its pixel coordinate. The 3D points are in the depth camera's frame. You get the pixel coordinates with the following method:

    x = fx * (X / Z) + cx

and

    y = fy * (Y / Z) + cy

x and y are the pixel coordinates of the 3D point (X, Y, Z), and the intrinsics matrix K is constructed from the fx, fy, cx, cy parameters returned by the getCameraIntrinsics function:

    K = | fx   0  cx |
        |  0  fy  cy |
        |  0   0   1 |

Note that cx and cy enter K with a positive sign. The negative values in your update come from the -cx and -cy entries in your hand-built matrix, so the fix is to correct those signs rather than multiply the results by -1.
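
To make this concrete, here is a minimal Java sketch of that projection step, modeled on the question's code. It assumes points3D already holds points expressed in the color camera's frame (the depth-to-color extrinsic transform is omitted), and the variable names are illustrative rather than taken from the Tango sample:

    // Build K from the color camera intrinsics (note the positive cx, cy)
    TangoCameraIntrinsics intrinsics =
            mTango.getCameraIntrinsics(TangoCameraIntrinsics.TANGO_CAMERA_COLOR);
    double fx = intrinsics.fx;
    double fy = intrinsics.fy;
    double cx = intrinsics.cx;
    double cy = intrinsics.cy;

    // Project every 3D point with the pinhole model: divide by depth,
    // then scale by the focal length and shift by the principal point.
    Vector2[] pixels = new Vector2[points3D.length];
    for (int i = 0; i < points3D.length; i++) {
        double X = points3D[i].x;
        double Y = points3D[i].y;
        double Z = points3D[i].z;
        if (Z == 0) {
            continue; // cannot project a point with zero depth
        }
        pixels[i] = new Vector2(fx * (X / Z) + cx, fy * (Y / Z) + cy);
    }

Valid projections should land inside the color image, i.e. 0 <= x < intrinsics.width and 0 <= y < intrinsics.height, which gives a quick sanity check on the output.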

