Keeping track of features in successive frames in OpenCV


Question



I have written a program that uses goodFeaturesToTrack and calcOpticalFlowPyrLK to track features from frame to frame. The program works reliably and can estimate the optical flow in the preview image on an Android camera from the previous frame. Here are some snippets that describe the general process:

goodFeaturesToTrack(grayFrame, corners, MAX_CORNERS, quality_level,
        min_distance, cv::noArray(), eig_block_size, use_harris, 0.06);

...

if (first_time == true) {
    first_time = false;
    old_corners = corners;
    safe_corners = corners;
    mLastImage = grayFrame;

} else {

    if (old_corners.size() > 0 && corners.size() > 0) {

        safe_corners = corners;
        calcOpticalFlowPyrLK(mLastImage, grayFrame, old_corners, corners,
                status, error, Size(21, 21), 5,
                TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 30,
                        0.01));
    } else {
        //no features found, so let's start over.
        first_time = true;
    }

}

The code above runs over and over again in a loop where a new preview frame is grabbed at each iteration. safe_corners, old_corners, and corners are all of type vector<Point2f>. The above code works great.
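The end-of-iteration bookkeeping is not shown in the snippets; a minimal sketch of what it presumably looks like (the clone() call and the exact placement are assumptions, not part of the original code):

// Assumed end-of-iteration step: carry the current frame and corners forward
// so the next calcOpticalFlowPyrLK call compares against them.
if (!corners.empty()) {
    old_corners = corners;           // today's corners become the "previous" set
    mLastImage = grayFrame.clone();  // deep copy; the preview buffer may be reused
}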

Now, for each feature that I've identified, I'd like to be able to assign some information about the feature... number of times found, maybe a descriptor of the feature, who knows... My first approach to doing this was:

class Feature : public Point2f {
private:
  //things about a feature that I want to track
public:
  //getters and fetchers and of course:
  Feature() : Point2f() {            // forward to the base constructor instead of creating a temporary
  }
  Feature(float a, float b) : Point2f(a, b) {
  }
};                                   // note the semicolon required after the class definition

Next, all of my output arrays are changed from vector<Point2f> to vector<Feature>, which in my own twisted world ought to work because Feature is defined to be a descendant class of Point2f. Polymorphism applied, I can't imagine any good reason why this should puke on me unless I did something else horribly wrong.
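Concretely, the change being described is presumably along these lines (a sketch, not the original code):

// before: std::vector<cv::Point2f> corners, old_corners, safe_corners;
std::vector<Feature> corners, old_corners, safe_corners;  // now handed to goodFeaturesToTrack / calcOpticalFlowPyrLK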

Here's the error message I get.

OpenCV Error: Assertion failed (func != 0) in void cv::Mat::convertTo(cv::OutputArray, int, double, double) const, file /home/reports/ci/slave50-SDK/opencv/modules/core/src/convert.cpp, line 1095

So, my question to the forum is: do the OpenCV functions truly require a Point2f vector, or will a descendant class of Point2f work just as well? The next step would be to get gdb working with mobile code on the Android phone and see more precisely where it crashes; however, I don't want to go down that road if my approach is fundamentally flawed.

Alternatively, if a feature is tracked across multiple frames using the approach above, does the address in memory for each point change?

Thanks in advance.

Solution

The short answer is YES, OpenCV functions do require std::vector<cv::Point2f> as arguments.

Note that the vectors contain cv::Point2f objects themselves, not pointers to cv::Point2f, so there is no polymorphic behavior.
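One way to see the problem is the memory layout: when OpenCV wraps a std::vector<cv::Point2f> as an InputArray/OutputArray, it treats the vector's buffer as tightly packed two-float elements (CV_32FC2). A derived class with extra members changes the element size, so the buffer can no longer be reinterpreted that way. A minimal sketch of the mismatch (the timesFound member is purely illustrative):

#include <opencv2/core/core.hpp>
#include <iostream>

// Hypothetical derived type, mirroring the approach in the question.
struct Feature : public cv::Point2f {
    int timesFound;  // illustrative extra per-feature data
};

int main() {
    // Point2f is two floats (8 bytes on typical platforms); adding any member
    // makes Feature larger, so an array of Feature is not packed CV_32FC2 data.
    std::cout << "sizeof(cv::Point2f) = " << sizeof(cv::Point2f) << std::endl;
    std::cout << "sizeof(Feature)     = " << sizeof(Feature) << std::endl;
    return 0;
}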

Additionally, having your Feature inherit from cv::Point2f is probably not an ideal solution. It would be simpler to use composition in this case, not to mention modeling the correct relationship (Feature has-a cv::Point2f).
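A minimal sketch of the composition approach (the timesFound field and the toPoints helper are illustrative, not from the original post):

#include <opencv2/core/core.hpp>
#include <vector>

// Feature has-a cv::Point2f rather than is-a.
struct Feature {
    cv::Point2f pt;      // position, copied back from the OpenCV output
    int timesFound = 0;  // illustrative bookkeeping; add descriptors etc. as needed
};

// Build the plain vector<cv::Point2f> that goodFeaturesToTrack /
// calcOpticalFlowPyrLK require, keeping it index-aligned with the features.
std::vector<cv::Point2f> toPoints(const std::vector<Feature>& features) {
    std::vector<cv::Point2f> pts;
    pts.reserve(features.size());
    for (const Feature& f : features)
        pts.push_back(f.pt);
    return pts;
}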

Relying on an object's location in memory is also probably not a good idea. Rather, read up on your data structure of choice.
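For carrying per-feature information across frames, a common pattern is to keep the vector of Feature records index-aligned with the point vectors and compact both using the status output of calcOpticalFlowPyrLK, rather than relying on addresses. A rough sketch, reusing the composition-style Feature from above:

// After calcOpticalFlowPyrLK fills `corners` and `status`, drop lost points
// and update the survivors so features[i] stays aligned with corners[i].
std::vector<Feature> surviving;
std::vector<cv::Point2f> survivingPts;
for (size_t i = 0; i < status.size(); ++i) {
    if (!status[i])
        continue;                  // tracking failed for this point
    Feature f = features[i];
    f.pt = corners[i];             // new position from the optical flow
    f.timesFound += 1;             // illustrative bookkeeping
    surviving.push_back(f);
    survivingPts.push_back(corners[i]);
}
features = surviving;
old_corners = survivingPts;        // input points for the next frame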
