Feature tracking using optical flow


Question

I have found similar questions in the forum.


  • If I do feature detection (goodFeaturesToTrack) only once on the first image, and then use optical flow (calcOpticalFlowPyrLK) to track these features, the problem is that only the features detected on the first image can be tracked. When these features move out of the image, there are no features left to track.

  • If I do feature detection for every new image, the feature tracking is not stable, because a feature detected last time may not be detected this time.

I am using optical flow for 3D reconstruction, so I am not interested in which particular features are tracked; I only care whether the features in the field of view can be tracked stably. To summarize, my question is: how can I use optical flow to track old features, and at the same time add new features that come into the field of view and remove old features that leave it?
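For reference, the detect-once-then-track pattern described in the first option boils down to roughly the following. This is a minimal sketch assuming OpenCV's Python bindings and an illustrative video file; the detector parameters are placeholders, not values from the question:

import cv2

cap = cv2.VideoCapture("video.mp4")   # any frame source works; the path is illustrative
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# detect features once, on the first image only
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                   qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or prev_pts is None or len(prev_pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # track the surviving features with pyramidal Lucas-Kanade optical flow
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)

    # keep only the points that were tracked successfully
    prev_pts = next_pts[status.ravel() == 1].reshape(-1, 1, 2)
    prev_gray = gray
    # the point set only shrinks here: nothing replaces features that leave the view

As the question notes, the tracked point set only shrinks over time, which is exactly the limitation the answer below addresses.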

Answer

Several approaches are possible. A good method goes like this:


  1. in Frame 1 detect N features; this is the keyframe m=1
  2. in Frame k track the features by optical flow
  3. in Frame k, if the number of successfully tracked features drops below N/2:
    • this frame becomes the keyframe m+1
    • compute the homography or the fundamental matrix describing the motion between the keyframes m and m+1
    • detect N features and discard the old ones
    • k := k+1, go to step 2
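A rough Python sketch of this keyframe scheme could look like the following. The frame source, the value of N and the detector parameters are illustrative assumptions, not part of the answer:

import cv2

N = 400                                   # features per keyframe (illustrative)
cap = cv2.VideoCapture("video.mp4")       # illustrative frame source

ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# step 1: detect N features in frame 1, the keyframe m = 1
key_pts = cv2.goodFeaturesToTrack(gray, N, 0.01, 7)
prev_gray, prev_pts = gray, key_pts
pts_in_key = key_pts.copy()               # keyframe coordinates of the surviving points

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # step 2: track the keyframe features by optical flow
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    good = status.ravel() == 1
    prev_pts = next_pts[good].reshape(-1, 1, 2)
    pts_in_key = pts_in_key[good].reshape(-1, 1, 2)
    prev_gray = gray

    # step 3: too few survivors -> this frame becomes keyframe m+1
    if len(prev_pts) < N // 2:
        if len(prev_pts) >= 4:
            # motion between keyframes m and m+1 (homography shown; F is the alternative)
            H, _ = cv2.findHomography(pts_in_key, prev_pts, cv2.RANSAC, 3.0)
            # ... hand H (or F) to the reconstruction code ...

        # detect N features in the new keyframe and discard the old ones
        key_pts = cv2.goodFeaturesToTrack(gray, N, 0.01, 7)
        prev_pts = key_pts
        pts_in_key = key_pts.copy()

Filtering pts_in_key with the same status mask keeps every surviving point expressed in the keyframe's coordinates, so the two point sets passed to findHomography always correspond one-to-one.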


Since you didn't mention what approach is used for 3D reconstruction, I assumed either H or F is computed first to estimate the motion. To estimate them accurately, the baseline between the keyframes should be as wide as possible. In general, the best strategy is to take the rough motion model of the camera into account: if the camera is held by hand, a different strategy should be used than when the camera is fixed on top of a car or a robot. I can provide a minimal working example in Python if that helps; let me know.
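As a hedged illustration of the H-versus-F choice, the snippet below builds synthetic correspondences between two keyframes and runs both estimators, plus a pose recovery step that a reconstruction pipeline would typically follow with. The intrinsics K, the motion and the point cloud are made-up values, not taken from the answer:

import cv2
import numpy as np

# synthetic correspondences standing in for points tracked between keyframes m and m+1
rng = np.random.default_rng(0)
X = np.hstack([rng.uniform(-1, 1, (100, 2)), rng.uniform(4, 8, (100, 1))])   # 3D points
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
rvec = np.array([0.0, 0.05, 0.0])          # small rotation between the keyframes
tvec = np.array([0.3, 0.0, 0.0])           # sideways translation = the baseline
pts_m, _ = cv2.projectPoints(X, np.zeros(3), np.zeros(3), K, None)
pts_m1, _ = cv2.projectPoints(X, rvec, tvec, K, None)
pts_m = pts_m.reshape(-1, 2).astype(np.float32)
pts_m1 = pts_m1.reshape(-1, 2).astype(np.float32)

# nearly planar scenes or (almost) pure rotation: a homography describes the motion
H, _ = cv2.findHomography(pts_m, pts_m1, cv2.RANSAC, 3.0)

# general motion with a wide enough baseline: the fundamental matrix is the better model
F, _ = cv2.findFundamentalMat(pts_m, pts_m1, cv2.FM_RANSAC, 3.0, 0.99)

# with known intrinsics the relative pose (R, t) can be recovered for the reconstruction
E, _ = cv2.findEssentialMat(pts_m, pts_m1, K, cv2.RANSAC, 0.999, 1.0)
_, R, t, _ = cv2.recoverPose(E, pts_m, pts_m1, K)

A wider baseline (a larger translation relative to the scene depth) makes F and the recovered pose better conditioned, which is the reason for delaying the keyframe switch as long as enough features survive.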

