How to transform a 3D model for Augmented Reality application using OpenCV Viz and ARUCO


Question

I'm developing a simple marker-based augmented reality application with OpenCV Viz and ARUCO. I just want to visualize a 3D object (in PLY format) on a marker.

I can run marker detection and pose estimation (returning rotation and translation vectors) with ARUCO without a problem, and I can visualize any 3D object (PLY format) and the camera frames in a Viz window. However, I'm stuck on using the rotation and translation vectors returned by ARUCO to localize the 3D model on the marker.

I'm creating an affine transformation from the rotation and translation vectors and applying it to the 3D model. Is this correct? How should I make use of the translation and rotation vectors?

Here is my code snippet:

// Camera calibration outputs
cv::Mat cameraMatrix, distCoeffs;
loadIntrinsicCameraParameters(cameraMatrix, distCoeffs);

// Marker dictionary
Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);

viz::Viz3d myWindow("Coordinate Frame");

cv::Mat image;

// Webcam frame pose; without this rotation the frame appears upside-down
Affine3f imagePose(Vec3f(3.14159,0,0), Vec3f(0,0,0));

// Viz viewer pose to see the whole webcam frame (the up direction must be non-zero)
Vec3f cam_pos(0.0f,0.0f,900.0f), cam_focal_point(0.0f,0.0f,0.0f), cam_y_dir(0.0f,-1.0f,0.0f);
Affine3f viewerPose = viz::makeCameraPose(cam_pos, cam_focal_point, cam_y_dir);

// Video capture from source
VideoCapture cap(camID);
int frame_width = cap.get(CV_CAP_PROP_FRAME_WIDTH);
int frame_height = cap.get(CV_CAP_PROP_FRAME_HEIGHT);
cap >> image;

// Load mesh data
viz::WMesh batman(viz::Mesh::load("../data/bat.ply"));
viz::WImage3D img(image, Size2d(frame_width, frame_height));

// Show camera frame, mesh and a coordinate widget (for debugging)
myWindow.showWidget("Image", img);
myWindow.showWidget("Batman", batman);
myWindow.showWidget("Coordinate Widget", viz::WCoordinateSystem(5.0));

myWindow.setFullScreen(true);
myWindow.setViewerPose(viewerPose);

// Rotation vector of 3D model
Mat rot_vec = Mat::zeros(1,3,CV_32F);
cv::Vec3d rvec, tvec;

// ARUCO outputs
float roll, pitch, yaw;
float x, y, z;

while (!myWindow.wasStopped()) {

    if (cap.read(image)) {
        // cap.read() has already grabbed and decoded the frame into image
        cv::Mat imageCopy;
        image.copyTo(imageCopy);

        // Marker detection
        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f> > corners;
        cv::aruco::detectMarkers(image, dictionary, corners, ids);

        if (ids.size() > 0){

            // Draw a green line around markers
            cv::aruco::drawDetectedMarkers(imageCopy, corners, ids);
            vector<Vec3d> rvecs, tvecs;

            // Get rotation and translation vectors for each marker
            cv::aruco::estimatePoseSingleMarkers(corners, 0.05, cameraMatrix, distCoeffs, rvecs, tvecs);

            for(int i=0; i<ids.size(); i++){

                cv::aruco::drawAxis(imageCopy, cameraMatrix, distCoeffs, rvecs[i], tvecs[i], 0.1);

                // Take only the first marker's rotation and translation to visualize 3D model on this marker
                rvec = rvecs[0];
                tvec = tvecs[0];

                roll = rvec[0];
                pitch = rvec[1];
                yaw = rvec[2];

                x = tvec[0];
                y = tvec[1];
                z = tvec[2];

                qDebug() << rvec[0] << "," << rvec[1] << "," << rvec[2] << "---" << tvec[0] << "," << tvec[1] << "," << tvec[2];
            }
        }
        // Show camera frame in Viz window
        img.setImage(imageCopy);
        img.setPose(imagePose);
    }

    // Create affine pose from rotation and translation vectors
    rot_vec.at<float>(0,0) = roll;
    rot_vec.at<float>(0,1) = pitch;
    rot_vec.at<float>(0,2) = yaw;

    Mat rot_mat;

    Rodrigues(rot_vec, rot_mat);

    Affine3f pose(rot_mat, Vec3f(x, y, z));

    // Set the pose of 3D model
    batman.setPose(pose);

    myWindow.spinOnce(1, true);
}

Answer

I think your problem is that rvec is not the roll, pitch, and yaw angles but a Rodrigues (axis-angle) rotation vector. To find the roll, pitch, and yaw angles you have to use OpenCV's Rodrigues function to transform the rotation vector (rvec) into a rotation matrix, and then use RQDecomp3x3 to decompose the rotation matrix into Euler angles (roll, pitch, and yaw).
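
A minimal sketch of that conversion, assuming an OpenCV 3.x build with opencv2/calib3d.hpp included and an rvec/tvec pair from estimatePoseSingleMarkers as in the question; the variable names here are illustrative, not from the original post:

// Convert the Rodrigues rotation vector into a 3x3 rotation matrix
cv::Mat rotMat;
cv::Rodrigues(rvec, rotMat);

// Decompose the rotation matrix into Euler angles (RQDecomp3x3 returns degrees)
cv::Mat mtxR, mtxQ;
cv::Vec3d eulerDeg = cv::RQDecomp3x3(rotMat, mtxR, mtxQ);
double roll  = eulerDeg[0];
double pitch = eulerDeg[1];
double yaw   = eulerDeg[2];

Note that for setting the widget pose itself no Euler angles are needed at all: cv::Affine3d has a constructor that accepts the Rodrigues vector directly, so batman.setPose(cv::Affine3d(rvec, tvec)) is equivalent to the Rodrigues-then-Affine construction in the question's code.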
