Accurately measuring relative distance between a set of fiducials (Augmented reality application)


Problem description

Let's say I have a set of 5 markers. I am trying to find the relative distances between each marker using an augmented reality framework such as ARToolkit. In my camera feed, the first 20 frames show me the first 2 markers only, so I can work out the transformation between those 2 markers. The second 20 frames show me the 2nd and 3rd markers only, and so on. The last 20 frames show me the 5th and 1st markers. I want to build up a 3D map of the positions of all 5 markers.

My question is, knowing that there will be inaccuracies with the distances due to low quality of the video feed, how do I minimise the inaccuracies given all the information I have gathered?

My naive approach would be to use the first marker as a base point: from the first 20 frames, take the mean of the transformations and place the 2nd marker, and so forth for the 3rd and 4th. For the 5th marker, place it between the 4th and 1st by averaging the position implied by the mean transformation from the 4th to the 5th with the one implied by the mean transformation from the 5th to the 1st. I feel this approach is biased towards the placement of the first marker, though, and doesn't take into account the camera seeing more than 2 markers per frame.
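
For reference, here is a minimal sketch of that chaining approach in 2D, using NumPy and made-up toy readings in place of the per-frame ARToolkit transforms (the layout and noise level are hypothetical, chosen only to mirror the numbers in the example below):

```python
import numpy as np

# Toy stand-in for the per-frame measurements: 20 noisy 2D offsets
# (p_b - p_a) for each consecutive marker pair.  The "true" layout and
# the noise level are made up for illustration.
rng = np.random.default_rng(0)
true_offsets = {
    (1, 2): np.array([10.0, 0.0]),
    (2, 3): np.array([10.0, 0.0]),
    (3, 4): np.array([10.0, 0.0]),
    (4, 5): np.array([10.0, 0.0]),
    (5, 1): np.array([-40.0, 0.0]),
}
readings = {pair: off + rng.normal(0.0, 0.5, size=(20, 2))
            for pair, off in true_offsets.items()}

# Naive chaining: pin marker 1 at the origin and walk along the chain,
# placing each marker from the mean of the measured offsets.
positions = {1: np.zeros(2)}
for a, b in [(1, 2), (2, 3), (3, 4)]:
    positions[b] = positions[a] + readings[(a, b)].mean(axis=0)

# Marker 5: average the estimate reached via marker 4 with the one
# reached backwards from marker 1, as described above.
via_4 = positions[4] + readings[(4, 5)].mean(axis=0)
via_1 = positions[1] - readings[(5, 1)].mean(axis=0)
positions[5] = 0.5 * (via_4 + via_1)

# Any error in the 1->2 estimate propagates into markers 3, 4 and 5,
# which is exactly the accumulation problem described below.
```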

Ultimately I want my system to be able to work out the map for any number x of markers. In any given frame, up to x markers can appear, and there are non-systematic errors due to the image quality.

Any help regarding the correct approach to this problem would be greatly appreciated.

Edit: More information regarding the problem:

Let's say the real-world map is as follows:

Let's say I get 100 readings for each of the transformations between the points, as represented by the arrows in the image. The real values are written above the arrows.

The values I obtain have some error (assumed to follow a Gaussian distribution about the actual value). For instance, one of the readings obtained for marker 1 to 2 could be x:9.8 y:0.09. Given that I have all these readings, how do I estimate the map? Ideally the result should be as close to the real values as possible.

My naive approach has the following problem. If the average of the transforms from 1 to 2 is slightly off, the placement of 3 can be off even though the reading from 2 to 3 is very accurate. This problem is shown below:

The green points are the actual values and the black points are the calculated values; the average transform from 1 to 2 is x:10 y:2.

Solution

You can use a least-squares method to find the transformation that gives the best fit to all your data. If all you want is the distance between the markers, this is just the average of the distances measured.
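
As a concrete sketch of that idea for the translation part only (assuming the 2D toy `readings` dictionary from the earlier snippet): pin marker 1 at the origin and let every individual reading contribute one equation to a linear least-squares problem.

```python
import numpy as np

def fit_positions(readings, n_markers=5):
    """Least-squares marker positions from noisy pairwise 2D offsets.

    readings maps (a, b) -> array of shape (k, 2), where each row is one
    measured offset p_b - p_a.  Marker 1 is pinned at the origin, so the
    unknowns are markers 2..n_markers; x and y share the same coefficient
    matrix, so both columns are solved in a single lstsq call.
    """
    col = {m: m - 2 for m in range(2, n_markers + 1)}  # unknown index per marker
    rows, rhs = [], []
    for (a, b), obs in readings.items():
        for z in obs:
            r = np.zeros(n_markers - 1)
            if b != 1:
                r[col[b]] += 1.0
            if a != 1:
                r[col[a]] -= 1.0
            rows.append(r)
            rhs.append(z)
    A, B = np.asarray(rows), np.asarray(rhs)
    X, *_ = np.linalg.lstsq(A, B, rcond=None)
    return {1: np.zeros(2), **{m: X[col[m]] for m in col}}

# positions = fit_positions(readings)
# Every reading, including the closing (5, 1) pair, now pulls on the
# solution, so no single link dominates the resulting map.
```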

Assuming that your marker positions are fixed (e.g., to a fixed rigid body), and you want their relative position, then you can simply record their positions and average them. If there is a potential for confusing one marker with another, you can track them from frame to frame, and use the continuity of each marker location between its two periods to confirm its identity.

If you expect your rigid body to be moving (or if the body is not rigid, and so forth), then your problem is significantly harder. Two markers at a time is not sufficient to fix the position of a rigid body (which requires three). However, note that, at each transition, you have the location of the old marker, the new marker, and the continuous marker, at almost the same time. If you already have an expected location on the body for each of your markers, this should provide a good estimate of a rigid pose every 20 frames.
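
For the rigid-pose step, one standard choice is the SVD-based (Kabsch/Procrustes) fit of the observed marker positions to their expected body-frame positions. A minimal sketch, assuming at least three non-collinear correspondences (the function and variable names are hypothetical):

```python
import numpy as np

def rigid_pose(body_pts, cam_pts):
    """Best-fit rotation R and translation t with cam_pts ~= R @ body_pts + t.

    body_pts, cam_pts: (n, 3) arrays of corresponding marker positions,
    with n >= 3 and not collinear.  Standard SVD/Kabsch solution.
    """
    cb, cc = body_pts.mean(axis=0), cam_pts.mean(axis=0)
    H = (body_pts - cb).T @ (cam_pts - cc)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cc - R @ cb
    return R, t
```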

In general, if your body is moving, best performance will require some kind of model for its dynamics, which should be used to track its pose over time. Given a dynamic model, you can use a Kalman filter to do the tracking; Kalman filters are well-adapted to integrating the kind of data you describe.

By including the locations of your markers as part of the Kalman state vector, you may be able to deduce their relative locations purely from sensor data (which appears to be your goal), rather than requiring this information a priori. If you want to be able to handle an arbitrary number of markers efficiently, you may need to come up with some clever mutation of the usual methods; your problem seems designed to avoid solution by conventional decomposition methods such as sequential Kalman filtering.
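
To make the filtering idea concrete, here is a minimal constant-velocity Kalman filter in 2D that tracks only a body position from per-frame marker-derived position fixes. The state extension with per-marker offsets and orientation mentioned above is left out for brevity, and the frame period and noise levels are assumptions, not values from the question:

```python
import numpy as np

# Constant-velocity model: state = [x, y, vx, vy], measurement = [x, y].
dt = 1.0 / 30.0                                  # assumed frame period
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 1e-3 * np.eye(4)                             # process noise (guessed)
R = 0.25 * np.eye(2)                             # measurement noise (guessed)

x = np.zeros(4)                                  # initial state
P = np.eye(4)                                    # initial covariance

def kf_step(x, P, z):
    """One predict/update cycle for a 2D position measurement z."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                                # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# for z in position_fixes:   # e.g. marker-derived fixes, one per frame
#     x, P = kf_step(x, P, z)
```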


Edit, as per the comments below:

If your markers yield a full 3D pose (instead of just a 3D position), the additional data will make it easier to maintain accurate information about the object you are tracking. However, the recommendations above still apply:

  • If the labeled body is fixed, use a least-squares fit of all relevant frame data.
  • If the labeled body is moving, model its dynamics and use a Kalman filter.

New points that come to mind:

  • Trying to manage a chain of relative transformations may not be the best way to approach the problem; as you note, it is prone to accumulated error. However, it is not necessarily a bad way, either, as long as you can implement the necessary math in that framework.
  • In particular, a least-squares fit should work perfectly well with a chain or ring of relative poses.
  • In any case, for either a least-squares fit or for Kalman filter tracking, a good estimate of the uncertainty of your measurements will improve performance; a weighted variant of the least-squares sketch above is shown below.
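
On that last point, here is how the earlier translation-only fit could be weighted by per-reading uncertainty. The `sigmas` dictionary of standard deviations is hypothetical; in practice it would come from your measurement model or from the observed scatter of the readings:

```python
import numpy as np

def fit_positions_weighted(readings, sigmas, n_markers=5):
    """Weighted least-squares marker positions from pairwise 2D offsets.

    readings: dict (a, b) -> (k, 2) array of measured offsets p_b - p_a.
    sigmas:   dict (a, b) -> (k,) array of per-reading standard deviations.
    Each equation is scaled by 1/sigma, so noisier readings count less.
    Marker 1 is pinned at the origin.
    """
    col = {m: m - 2 for m in range(2, n_markers + 1)}
    rows, rhs = [], []
    for (a, b), obs in readings.items():
        for z, s in zip(obs, sigmas[(a, b)]):
            r = np.zeros(n_markers - 1)
            if b != 1:
                r[col[b]] += 1.0
            if a != 1:
                r[col[a]] -= 1.0
            rows.append(r / s)
            rhs.append(z / s)
    A, B = np.asarray(rows), np.asarray(rhs)
    X, *_ = np.linalg.lstsq(A, B, rcond=None)
    return {1: np.zeros(2), **{m: X[col[m]] for m in col}}
```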
