User Initiated Real Time Object Tracking System Development


Problem Description


I am doing a project called user initiated real time object tracking system. Here is what I
want to happen in the project:

1) Take a continuous stream from a web camera.

2) Using the mouse pointer, a user can draw a square around an object of interest.

3) Then from there onwards, the square moves along with the object of interest, thereby
tracking each and every place the object moves, hence the title: object tracking.
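The draw-then-track loop in steps 2 and 3 can be sketched with plain template matching: treat the user-drawn square as a template, then search each new frame for the patch that matches it best. This is only a hypothetical illustration (all function names here are made up, and frames are plain Python lists standing in for grayscale webcam frames, which would really come from DirectShow/EmguCV):

```python
def crop(frame, x, y, w, h):
    """Cut a w-by-h patch out of a frame at (x, y)."""
    return [row[x:x + w] for row in frame[y:y + h]]

def sad(patch_a, patch_b):
    """Sum of absolute differences between two equally sized patches."""
    return sum(abs(a - b)
               for row_a, row_b in zip(patch_a, patch_b)
               for a, b in zip(row_a, row_b))

def track(frame, template):
    """Return (x, y) where the template best matches inside the frame."""
    th, tw = len(template), len(template[0])
    fh, fw = len(frame), len(frame[0])
    best = None
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            score = sad(crop(frame, x, y, tw, th), template)
            if best is None or score < best[0]:
                best = (score, x, y)
    return best[1], best[2]

# Tiny worked example: a bright 2x2 blob moves from (1, 1) to (3, 2).
frame1 = [[0] * 6 for _ in range(6)]
frame2 = [[0] * 6 for _ in range(6)]
for dy in range(2):
    for dx in range(2):
        frame1[1 + dy][1 + dx] = 255   # where the user draws the square
        frame2[2 + dy][3 + dx] = 255   # the object after it has moved

template = crop(frame1, 1, 1, 2, 2)    # the user-selected region
print(track(frame2, template))         # -> (3, 2)
```

An exhaustive search like this is far too slow for full-size frames in real time; in practice you would restrict the search to a window around the last known position, or use a ready-made tracker from a library such as EmguCV or AForge.NET.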


Current Progress


I have used DShowNET (a .NET wrapper for DirectShow) to take input from the web camera, and
I am in the process of splitting the video into frames. I have 4 techniques in mind for doing the project:


Technique 1

|> There is a saved video
|> I load it.
|> When the video is running, I pause it (using a pause button) at a particular scene and draw
a square on an object.
|> When I press the play button, the square will move along with the object with no (or about 5 seconds of)
processing time, [OR] I will give the application some processing time (e.g. 3 minutes), and then
it will play from that point onwards with the tracking taking place.


Technique 2


|> There is a saved video
|> I load it.
|> When the video is running, I don't pause it, but quickly draw a square around an object (while the
object is still at some point).
|> Then the object will be tracked with no processing time, [OR] with some processing time
(a 10-second delay), making the file play for a slightly longer time.


Technique 3


|> I take an input from a web cam for 1 min.
|> Save that video to a file
|> And perform Technique 1 or Technique 2


Technique 4 - (Apparently this seems a lot harder, since there are a lot of conditions to be concerned
about, like the lighting in the room, etc.)


|> Take input from a web cam continuously
|> Draw a square around the object without any pausing, when the object shows no movement (e.g.
when a person is sitting down on a chair)
|> And then show the tracking by moving the square along with the object with no processing time, [OR]
a slight processing time of 2 secs, such that the delay is not significantly apparent.
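The timing constraint in Technique 4 amounts to a frame-dropping policy: when the tracker cannot keep up with the camera, stale frames must be discarded rather than queued, or the square falls ever further behind the live feed. A hypothetical simulation of that policy (made-up numbers, in milliseconds; the 2-second budget from the text is shrunk to 500 ms here just so the dropping is visible in a short stream):

```python
def run_tracker(frames, process_cost, max_delay):
    """Simulate a tracker working through a timestamped frame stream,
    dropping any frame whose result would arrive more than max_delay
    after the frame was captured. Returns timestamps of frames kept."""
    processed = []
    busy_until = 0                       # when the tracker is free again
    for stamp, _data in frames:
        start = max(busy_until, stamp)   # may have to wait for the tracker
        if start + process_cost - stamp > max_delay:
            continue                     # result would be too stale: drop
        busy_until = start + process_cost
        processed.append(stamp)
    return processed

# Camera delivers a frame every 100 ms, tracking one frame costs 350 ms,
# and we tolerate at most 500 ms of apparent delay.
stream = [(i * 100, None) for i in range(10)]
print(run_tracker(stream, 350, 500))     # -> [0, 200, 600, 900]
```

The point of the sketch: with a 350 ms per-frame cost, only roughly one frame in three can be tracked, and the rest must be skipped to keep the on-screen square within the delay budget.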


Objects to track :-

Basically I can track anything, since I use the mouse to draw on the video:

|> I am planning to track the whole body (but if this is troublesome.. next option)
|> I would try to track the face of an individual (obviously by drawing the area with a mouse pointer.)



Progress :

|> Still getting errors with the splitting. (Someone suggested starting by splitting a
saved video first, and I am in the process of trying that now.)

MY QUESTIONS


1) I need to know which technique(s) I am allowed to use, since in Technique 4 the stream is
continuous, whereas in most projects of my batch the video stream has a time limit after which the processing
happens.

2) Which technique (out of the four) could I possibly implement in a 1.5-month time frame?

3) To code this, is Java plus some Java framework good, or C#.NET with EmguCV/AForge.NET/DShowNET? [By the way, my knowledge of Java is good, and not so good in C#.NET.]




Thanks in advance :) :-D :-D ;) ;)

Recommended Answer

This kind of gets into the argument of what is really "real time" and what is "real time enough." If this is for a digital camera, you can probably get away with capturing one frame every five seconds or so. If you are trying to capture faces on a subway so that you can compare them to a criminal database or something like that, then there is already code to do that, and that would be a good place to start.

I would try a modified Technique 4 if your webcam software can handle that. Grab an image every couple of seconds and process it, adjust the rectangle on the live feed based on what you have found in the still image, then increase the picture-taking rate until you are at the edge of real time. This may take some trial and error, but just keep decreasing the interval between pictures until you start running into performance problems, and then back it off a little.
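The trial-and-error tuning described above can be modelled as a simple back-off loop. This is only an illustrative sketch: `process_cost` stands in for a measured per-frame processing time, where a real application would profile its vision pipeline.

```python
def tune_interval(start_interval, step, process_cost):
    """Shrink the grab interval until per-frame processing would no
    longer fit inside it, then return the last interval that worked."""
    interval = start_interval
    while interval - step > process_cost:
        interval -= step                  # take pictures more often
    return interval                       # the edge of "real time enough"

# Suppose each still image takes 0.4 s to process; start at one grab
# every 2 s and tighten in 0.25 s steps.
print(tune_interval(2.0, 0.25, 0.4))      # -> 0.5
```

In the live application, the same idea plays out over wall-clock time rather than in one loop: run at the current interval for a while, and only tighten it if the tracker kept up.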

I did something like this before, when I wrote a piece of software that searched for barcodes in images. I found the barcode in a still image and then projected the coordinates onto an XY overlay on a Windows CE handheld screen in near real time. When the user moved the camera, the box took a few seconds to adjust, but it worked.

I would research those third-party DLLs for handling this. They will save you a boatload of time.

Ryan McBeth


First things first.

You're so concerned about the UI in this post, but do you have working code that can track an object at all? That's going to be FAR harder than having the user draw a box with the mouse.

The problem you have is that computer vision is in its infancy. How are you going to determine what an object looks like, separate from the background? How are you going to solve the problem that an object can change shape or perspective over time? That gets more complicated if you don't have depth data for the object in question in the video feed.

Good luck! You're going to need it to get this done in a month.

