Collision Avoidance using OpenCV on iPad


Question

I'm working on a project where I need to implement collision avoidance using OpenCV. This is to be done on iOS (iOS 5 and above will do).

Project Objective: The idea is to mount an iPad on the car's dashboard and launch the application. The application should grab frames from the camera and process these to detect if the car is going to collide with any obstacle.

I'm a novice to any sort of image processing, hence I'm getting stuck at conceptual levels in this project.

What I've done so far:

  • Had a look at OpenCV and read about it on the net. Collision avoidance is implemented using the Lucas-Kanade pyramid method. Is this right?
  • Using this project as a starting point: http://aptogo.co.uk/2011/09/opencv-framework-for-ios/ It runs successfully on my iPad and the capture functionality works as well, which means camera capture is well integrated. I changed the processFrame implementation to try Optical Flow instead of Canny edge detection. Here is the function (still incomplete).

        -(void)processFrame {
            int currSliderVal = self.lowSlider.value;
            // Skip processing if the slider has not moved since the last frame
            if (_prevSliderVal == currSliderVal) return;
            cv::Mat grayFramePrev, grayFrameLast, prevCorners, lastCorners, status, err;

            // Convert the captured frames to grayscale; both goodFeaturesToTrack
            // and calcOpticalFlowPyrLK expect 8-bit single-channel input
            cv::cvtColor(_prevFrame, grayFramePrev, cv::COLOR_RGB2GRAY);
            cv::goodFeaturesToTrack(grayFramePrev, prevCorners, 500, 0.01, 10);
            cv::cvtColor(_lastFrame, grayFrameLast, cv::COLOR_RGB2GRAY);
            cv::goodFeaturesToTrack(grayFrameLast, lastCorners, 500, 0.01, 10);

            // Track prevCorners from the previous frame into the last frame;
            // pass the grayscale mats here, not the color frames
            cv::calcOpticalFlowPyrLK(grayFramePrev, grayFrameLast, prevCorners, lastCorners, status, err);
            self.imageView.image = [UIImage imageWithCVMat:lastCorners];
            _prevSliderVal = self.lowSlider.value;
        }
    

  • Read about Optical Flow and how it is used (conceptually) to detect impending collision. Summary: if an object is growing in size but moving towards an edge of the frame, it is not on a collision path; if an object is growing in size but not moving towards any edge, it is on a collision path. Is this right? (A rough sketch of this check appears after this list.)
  • This project (http://se.cs.ait.ac.th/cvwiki/opencv:tutorial:optical_flow) appears to be doing exactly what I want to achieve, but I did not understand how it does so by reading the code, and I cannot run it as I don't have a Linux box. The explanation on that web page seems to arrive at a homography matrix. How is this result used in collision avoidance?
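
As a rough illustration of the heuristic in the third bullet (a minimal, untested sketch of my own, not code from any of the linked projects; the grouping of points into one "object", the helper itself, and both thresholds are assumptions):

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    // Crude "growing but not drifting towards an edge" check for ONE tracked
    // object, given its matched corner positions in two consecutive frames
    // (e.g. the pairs with status == 1 from calcOpticalFlowPyrLK).
    bool onCollisionPath(const std::vector<cv::Point2f>& prevPts,
                         const std::vector<cv::Point2f>& nextPts,
                         const cv::Size& frameSize)
    {
        if (prevPts.size() < 4 || prevPts.size() != nextPts.size()) return false;

        auto len = [](cv::Point2f v) { return std::hypot(v.x, v.y); };
        // Centroid and mean spread (a crude measure of apparent size).
        auto stats = [&](const std::vector<cv::Point2f>& pts) {
            cv::Point2f c(0.f, 0.f);
            for (const auto& p : pts) c += p;
            c *= 1.0f / pts.size();
            float spread = 0.f;
            for (const auto& p : pts) spread += (float)len(p - c);
            return std::make_pair(c, spread / pts.size());
        };
        std::pair<cv::Point2f, float> prev = stats(prevPts), next = stats(nextPts);

        bool growing  = next.second > 1.05f * prev.second;  // looming; 5% threshold is arbitrary
        cv::Point2f center(frameSize.width / 2.f, frameSize.height / 2.f);
        // "Moving towards an edge" read as: centroid receding from the frame center.
        bool drifting = len(next.first - center) > len(prev.first - center) + 2.0f;
        return growing && !drifting;
    }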

In addition to the four points mentioned above, I have read a lot more about this topic but still can't put all the pieces together.

Here are my questions (please remember I'm a novice at this):

  1. HOW is optical flow used to detect impending collision? By this I mean, supposing I'm able to get correct result from the function cv::calcOpticalFlowPyrLK(), how do I take it forward from there to detect impending collision with any object on the frame? Is it possible to gauge distance from the object we are most likely to collide with?

  2. Is there a sample working project which implements this or any similar functionality that I can have a look at? I had a look at the project on eosgarden.com, but no functionality seemed to be implemented in it.

  3. In the above sample code, I'm converting lastCorners to UIImage and I'm displaying that image on screen. This shows me an image which only has colored horizontal lines on the screen, nothing similar to my original test image. Is this the correct output for that function?

  4. I'm having a little difficulty understanding the datatypes used in this project. InputArray, OutputArray etc are the types accepted by OpenCV APIs. Yet in processFrame function, cv::Mat was being passed to Canny edge detection method. Do I pass cv::Mat to calcOpticalFlowPyrLK() for prevImage and nextImage?

Thanks in advance :)

Update: Found this sample project (http://www.hatzlaha.co.il/150842/Lucas-Kanade-Detection-for-the-iPhone). It does not compile on my Mac, but I think from this I'll have working code for optical flow. But I still cannot figure out how I can detect an impending collision from tracking those points. If any of you can even answer question no. 1, it will be of great help.

Update: It looks like optical flow is used to calculate the FoE (Focus of Expansion). There can be multiple FoE candidates, and using the FoE, the TTC (Time To Collision) is arrived at. I'm not very clear on the latter part. But am I correct so far? Does OpenCV implement FoE and/or TTC?
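
For intuition, here is one common way the FoE step can be set up (a minimal sketch of my own, not from the original post or any linked project; it assumes a static scene, a forward-translating camera, and outlier-free tracks, none of which hold on a real dashboard without RANSAC-style filtering over FoE candidates). Every flow vector lies on a line through the FoE, so the FoE can be estimated as the least-squares intersection of those lines:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Least-squares estimate of the Focus of Expansion from LK point matches.
    // Each flow vector (p -> q) points away from the FoE, so the FoE lies on the
    // line through p with direction d = q - p. With n = (-d.y, d.x) the normal of
    // that line, each track contributes one equation n . x = n . p, giving A x = b.
    cv::Point2f estimateFoE(const std::vector<cv::Point2f>& prevPts,
                            const std::vector<cv::Point2f>& nextPts)
    {
        CV_Assert(prevPts.size() == nextPts.size() && prevPts.size() >= 2);
        cv::Mat A((int)prevPts.size(), 2, CV_32F), b((int)prevPts.size(), 1, CV_32F);
        for (int i = 0; i < (int)prevPts.size(); ++i) {
            cv::Point2f d = nextPts[i] - prevPts[i];   // flow vector
            cv::Point2f n(-d.y, d.x);                  // normal of the flow line
            A.at<float>(i, 0) = n.x;
            A.at<float>(i, 1) = n.y;
            b.at<float>(i, 0) = n.dot(prevPts[i]);
        }
        cv::Mat x;
        cv::solve(A, b, x, cv::DECOMP_SVD);            // least-squares solution
        return cv::Point2f(x.at<float>(0, 0), x.at<float>(1, 0));
    }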

Solution

1

HOW is optical flow used to detect impending collision?

I've never used optical flow, but the first Google search gave me this paper:

Obstacle Detection using Optical Flow

I don't know if you've already read it. It shows how to estimate time to contact at every angle.
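
To make the idea concrete (my reading of the usual time-to-contact relation, not a quote from the paper): a point at radial distance r from the FoE that expands outward at rate dr/dt has an approximate time to contact of tau = r / (dr/dt). A minimal sketch, assuming an FoE estimate is already available:

    #include <opencv2/opencv.hpp>
    #include <cmath>

    // Approximate time-to-contact, in frames, for one tracked point given an
    // estimated Focus of Expansion; multiply by the frame period for seconds.
    // Returns a negative value when the point shows no measurable expansion.
    double timeToContact(const cv::Point2f& prev, const cv::Point2f& next,
                         const cv::Point2f& foe)
    {
        double r    = std::hypot(next.x - foe.x, next.y - foe.y);     // radius now
        double drdt = r - std::hypot(prev.x - foe.x, prev.y - foe.y); // expansion per frame
        if (drdt < 1e-6) return -1.0;   // not expanding away from the FoE
        return r / drdt;                // tau = r / (dr/dt)
    }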

3

This shows me an image which only has colored horizontal lines on the screen, nothing similar to my original test image.

I suppose that the output of goodFeaturesToTrack is not an image but a table of points. See, for example, how they are used in a Python example (in the old version of OpenCV). The same probably applies to the output of calcOpticalFlowPyrLK. Look at what's there in the debugger first. I usually use Python + OpenCV to understand the output of unfamiliar OpenCV functions.
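
To make that concrete (my own sketch, not the answerer's code): since lastCorners is an Nx1 matrix of 2D point coordinates, converting it straight to a UIImage just renders raw coordinate values as pixels, which would explain the "colored horizontal lines". The usual way to inspect the points is to draw them onto the frame and display that instead:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Draw tracked corners onto a copy of the frame for display, instead of
    // displaying the Nx1 point matrix itself.
    cv::Mat drawCorners(const cv::Mat& frame, const std::vector<cv::Point2f>& corners)
    {
        cv::Mat vis = frame.clone();
        for (const auto& p : corners)
            cv::circle(vis, p, 4, cv::Scalar(0, 255, 0), -1);   // filled green dot
        return vis;
    }

In the processFrame above, this drawn frame, not lastCorners, would be what gets passed to imageWithCVMat:.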

4

I'm having a little difficulty understanding the datatypes used in this project. InputArray, OutputArray etc are the types accepted by OpenCV APIs. Yet in processFrame function, cv::Mat was being passed to Canny edge detection method. Do I pass cv::Mat to calcOpticalFlowPyrLK() for prevImage and nextImage?

From the documentation:

This is the proxy class for passing read-only input arrays into OpenCV functions. .... _InputArray is a class that can be constructed from Mat, Mat_<T>, Matx<T, m, n>, std::vector<T>, std::vector<std::vector<T> > or std::vector<Mat>. It can also be constructed from a matrix expression.

So you can just pass Mat. Some older functions still expect only Mat.
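
For example, something along these lines should type-check (a sketch of my own; both the cv::Mat images and the std::vector containers bind to the InputArray/OutputArray proxies):

    #include <opencv2/opencv.hpp>
    #include <vector>

    void trackBetweenFrames(const cv::Mat& grayPrev, const cv::Mat& grayNext)
    {
        std::vector<cv::Point2f> prevPts, nextPts;
        std::vector<uchar> status;   // 1 where the feature was found in grayNext
        std::vector<float> err;      // per-feature tracking error

        // cv::Mat for the images, std::vector for points/status/err: all of
        // these are accepted through InputArray/OutputArray.
        cv::goodFeaturesToTrack(grayPrev, prevPts, 500, 0.01, 10);
        cv::calcOpticalFlowPyrLK(grayPrev, grayNext, prevPts, nextPts, status, err);
    }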
