OpenCV unproject 2D points to 3D with known depth `Z`

Problem statement

I am trying to reproject 2D points to their original 3D coordinates, assuming I know the distance at which each point is. Following the OpenCV documentation, I managed to get it to work with zero distortion. However, when there are distortions, the result is not correct.

Current approach

So, the idea is to reverse the following (the OpenCV projection model including distortion, where x' = X/Z, y' = Y/Z and r^2 = x'^2 + y'^2):

x'' = x' * (1 + k1*r^2 + k2*r^4 + k3*r^6) + 2*p1*x'*y' + p2*(r^2 + 2*x'^2)
y'' = y' * (1 + k1*r^2 + k2*r^4 + k3*r^6) + p1*(r^2 + 2*y'^2) + 2*p2*x'*y'
u = f_x * x'' + c_x
v = f_y * y'' + c_y

into the following:

x' = (u - c_x) / f_x,  y' = (v - c_y) / f_y   (after undistorting u, v)
X = x' * Z,  Y = y' * Z

By:

  1. Getting rid of any distortions using cv::undistortPoints
  2. Using the intrinsics to get back to the normalized camera coordinates by reversing the second equation above
  3. Multiplying by z to reverse the normalization (a minimal sketch of these steps follows this list).
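
For illustration, here is a minimal sketch of the three steps for a single pixel (zero distortion for simplicity; the intrinsic values are the same hypothetical ones as in the sample code below, and P=K keeps undistortPoints in pixel coordinates):

import cv2
import numpy as np

K = np.array([[1000., 0., 1000.],    # [f_x, 0, c_x]
              [0., 1000., 1000.],    # [0, f_y, c_y]
              [0., 0., 1.]])
dist = np.zeros(4)                   # zero distortion to keep the sketch simple
uv = np.array([[[166.667, 1166.667]]])  # one pixel, shape (1, 1, 2)
z = 12.0                                # known depth of that pixel

# Step 1. Undistort; P=K keeps the result in pixel coordinates.
u, v = cv2.undistortPoints(uv, K, dist, P=K).reshape(2)
# Step 2. Reverse the intrinsics to get normalized camera coordinates.
x_n = (u - K[0, 2]) / K[0, 0]
y_n = (v - K[1, 2]) / K[1, 1]
# Step 3. Multiply by the known depth.
print(x_n * z, y_n * z, z)  # roughly -10.0 2.0 12.0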

Questions

  1. Why do I need to subtract f_x and f_y to get back to the normalized camera coordinates (found empirically when testing)? In the code below, in step 2, if I don't subtract, even the non-distorted result is off. (This was my mistake -- I messed up the indexes; see Update 2.)
  2. If I include the distortion, the result is wrong -- what am I doing wrong?

Sample code (C++)

#include <iostream>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

std::vector<cv::Point2d> Project(const std::vector<cv::Point3d>& points,
                                 const cv::Mat& intrinsic,
                                 const cv::Mat& distortion) {
  std::vector<cv::Point2d> result;
  if (!points.empty()) {
    cv::projectPoints(points, cv::Mat(3, 1, CV_64F, cv::Scalar(0.)),
                      cv::Mat(3, 1, CV_64F, cv::Scalar(0.)), intrinsic,
                      distortion, result);
  }
  return result;
}

std::vector<cv::Point3d> Unproject(const std::vector<cv::Point2d>& points,
                                   const std::vector<double>& Z,
                                   const cv::Mat& intrinsic,
                                   const cv::Mat& distortion) {
  double f_x = intrinsic.at<double>(0, 0);
  double f_y = intrinsic.at<double>(1, 1);
  double c_x = intrinsic.at<double>(0, 2);
  double c_y = intrinsic.at<double>(1, 2);
  // This was an error before:
  // double c_x = intrinsic.at<double>(0, 3);
  // double c_y = intrinsic.at<double>(1, 3);

  // Step 1. Undistort
  std::vector<cv::Point2d> points_undistorted;
  assert(Z.size() == 1 || Z.size() == points.size());
  if (!points.empty()) {
    cv::undistortPoints(points, points_undistorted, intrinsic,
                        distortion, cv::noArray(), intrinsic);
  }

  // Step 2. Reproject
  std::vector<cv::Point3d> result;
  result.reserve(points.size());
  for (size_t idx = 0; idx < points_undistorted.size(); ++idx) {
    const double z = Z.size() == 1 ? Z[0] : Z[idx];
    result.push_back(
        cv::Point3d((points_undistorted[idx].x - c_x) / f_x * z,
                    (points_undistorted[idx].y - c_y) / f_y * z, z));
  }
  return result;
}

int main() {
  const double f_x = 1000.0;
  const double f_y = 1000.0;
  const double c_x = 1000.0;
  const double c_y = 1000.0;
  const cv::Mat intrinsic =
      (cv::Mat_<double>(3, 3) << f_x, 0.0, c_x, 0.0, f_y, c_y, 0.0, 0.0, 1.0);
  const cv::Mat distortion =  // four coefficients: k1, k2, p1, p2
      // (cv::Mat_<double>(4, 1) << 0.0, 0.0, 0.0, 0.0);  // This works!
      (cv::Mat_<double>(4, 1) << -0.32, 1.24, 0.0013, 0.0013);  // This doesn't!

  // Single point test.
  const cv::Point3d point_single(-10.0, 2.0, 12.0);
  const cv::Point2d point_single_projected = Project({point_single}, intrinsic,
                                                     distortion)[0];
  const cv::Point3d point_single_unprojected = Unproject({point_single_projected},
                                    {point_single.z}, intrinsic, distortion)[0];

  std::cout << "Expected Point: " << point_single.x;
  std::cout << " " << point_single.y;
  std::cout << " " << point_single.z << std::endl;
  std::cout << "Computed Point: " << point_single_unprojected.x;
  std::cout << " " << point_single_unprojected.y;
  std::cout << " " << point_single_unprojected.z << std::endl;
}

Same Code (Python)

import cv2
import numpy as np

def Project(points, intrinsic, distortion):
  result = []
  rvec = tvec = np.array([0.0, 0.0, 0.0])
  if len(points) > 0:
    result, _ = cv2.projectPoints(points, rvec, tvec,
                                  intrinsic, distortion)
  return np.squeeze(result, axis=1)

def Unproject(points, Z, intrinsic, distortion):
  f_x = intrinsic[0, 0]
  f_y = intrinsic[1, 1]
  c_x = intrinsic[0, 2]
  c_y = intrinsic[1, 2]
  # This was an error before
  # c_x = intrinsic[0, 3]
  # c_y = intrinsic[1, 3]

  # Step 1. Undistort.
  points_undistorted = np.array([])
  if len(points) > 0:
    points_undistorted = cv2.undistortPoints(
        np.expand_dims(points, axis=1), intrinsic, distortion, P=intrinsic)
    points_undistorted = np.squeeze(points_undistorted, axis=1)

  # Step 2. Reproject.
  result = []
  for idx in range(points_undistorted.shape[0]):
    z = Z[0] if len(Z) == 1 else Z[idx]
    x = (points_undistorted[idx, 0] - c_x) / f_x * z
    y = (points_undistorted[idx, 1] - c_y) / f_y * z
    result.append([x, y, z])
  return result

f_x = 1000.
f_y = 1000.
c_x = 1000.
c_y = 1000.

intrinsic = np.array([
  [f_x, 0.0, c_x],
  [0.0, f_y, c_y],
  [0.0, 0.0, 1.0]
])

distortion = np.array([0.0, 0.0, 0.0, 0.0])  # This works!
distortion = np.array([-0.32, 1.24, 0.0013, 0.0013])  # This doesn't!

point_single = np.array([[-10.0, 2.0, 12.0],])
point_single_projected = Project(point_single, intrinsic, distortion)
Z = np.array([point[2] for point in point_single])
point_single_unprojected = Unproject(point_single_projected,
                                     Z,
                                     intrinsic, distortion)
print "Expected point:", point_single[0]
print "Computed point:", point_single_unprojected[0]

The results for zero-distortion (as mentioned) are correct:

Expected Point: -10 2 12
Computed Point: -10 2 12

But when the distortions are included, the result is off:

Expected Point: -10 2 12
Computed Point: -4.26634 0.848872 12

Update 1. Clarification

This is a camera-to-image projection -- I am assuming the 3D points are in camera-frame coordinates.

Update 2. Figured out the first question

OK, I figured out the subtraction of f_x and f_y -- I was stupid enough to mess up the indexes. I updated the code above with the correction. The other question still holds.

Update 3. Added Python equivalent code

To increase visibility, I am adding the equivalent Python code, which exhibits the same error.

Solution

Answer to Question 2

I found what the problem was -- the 3D point coordinates matter! I assumed that no matter what 3D coordinate points I choose, the reconstruction would take care of it. However, I noticed something strange: when using a range of 3D points, only a subset of those points were reconstructed correctly. After further investigation, I found out that only the points that fall within the field of view of the camera are properly reconstructed. The field of view is a function of the intrinsic parameters (and vice versa).

For the above code to work, try setting the parameters as follows (the intrinsics are from my camera; a small sanity check follows the snippet):

...
const double f_x = 2746.;
const double f_y = 2748.;
const double c_x = 991.;
const double c_y = 619.;
...
const cv::Point3d point_single(10.0, -2.0, 30.0);
...
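
To see why these values matter (a rough sanity check only, assuming a plain pinhole model and a hypothetical image of about 2*c_x by 2*c_y pixels, since the principal point typically sits near the image center), one can project the 3D point without distortion and verify that it lands inside the image bounds:

import numpy as np

K_mine = np.array([[2746., 0., 991.],
                   [0., 2748., 619.],
                   [0., 0., 1.]])

def inside_image(point3d, K, width, height):
  # Project with the plain pinhole model (no distortion) and check bounds.
  x, y, z = point3d
  u = K[0, 0] * x / z + K[0, 2]
  v = K[1, 1] * y / z + K[1, 2]
  return 0.0 <= u < width and 0.0 <= v < height

# (10, -2, 30) projects to roughly (1906, 436): inside a hypothetical
# 1982x1238 image, i.e. inside the field of view.
print(inside_image((10.0, -2.0, 30.0), K_mine, 1982., 1238.))  # True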

Also, don't forget that in camera coordinates a negative y coordinate means UP :)

Answer to Question 1

There was a bug where I was trying to access the intrinsics using

...
double f_x = intrinsic.at<double>(0, 0);
double f_y = intrinsic.at<double>(1, 1);
double c_x = intrinsic.at<double>(0, 3);
double c_y = intrinsic.at<double>(1, 3);
...

But intrinsic was a 3x3 matrix.

Moral of the story: Write unit tests!!!
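
For instance, a minimal round-trip test along those lines (a sketch only, reusing the Python Project and Unproject functions above, with intrinsics and a point inside the field of view):

def test_project_unproject_roundtrip():
  # Project a 3D point to pixels and unproject it back with its known depth.
  expected = np.array([[10.0, -2.0, 30.0]])
  projected = Project(expected, intrinsic, distortion)
  recovered = Unproject(projected, [30.0], intrinsic, distortion)
  # The iterative undistortion limits the achievable precision.
  np.testing.assert_allclose(recovered, expected, atol=1e-3)

test_project_unproject_roundtrip()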
