How to perform stable eye corner detection?


Problem Description



For those who find it too long, just read the bold lines.

My project, a gaze-estimation-based screen-cursor-moving HCI, now depends on one last thing: gaze estimation, for which I'm using the eye corners as stable reference points relative to which I will detect the movement of the pupil and calculate the gaze.

But I haven't been able to stably detect eye corners from a live webcam feed. I've been using the cv.CornerHarris() and GFTT (cv.GoodFeaturesToTrack()) functions for corner detection. I tried the FAST demo (the executable from their website) directly on my eye images, but that wasn't good either.

These are some results of my corner detection so far on still images.

Using GFTT:

Using Harris:

What happens in the video:

The green circles are the chosen eye corners; the smaller pink circles are the other detected corners.

I used a certain heuristic: the corners will lie at the left or right extremities, and around the middle vertically. I did that because, after taking many snapshots under many conditions, fewer than 5% of the images deviated from this pattern, and for the rest the heuristic holds.

But these eye corner detections are for snapshots, not from the webcam feed.

When I use the same methodologies (Harris and GFTT) on the webcam feed, I just don't get them.

My code for eye corner detection using cv.CornerHarris

Eye corners using GFTT
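
Since the linked code isn't reproduced here, below is a minimal sketch, using the old-style cv Python bindings, of what a GFTT detection pass of this kind might look like. The helper name detect_corners and the preprocessing steps are my assumptions, not the original pastebin code; only the cv calls and the parameter names follow the post.

    import cv  # old-style OpenCV Python bindings (pre-cv2), as used in this post

    def detect_corners(colorImage, cornerCount=100, qualityLevel=0.1, minDistance=5):
        # Hypothetical helper, not the original pastebin code.
        # Convert the BGR frame to a single-channel 8-bit grayscale image.
        grayImage = cv.CreateImage(cv.GetSize(colorImage), 8, 1)
        cv.CvtColor(colorImage, grayImage, cv.CV_BGR2GRAY)

        # GFTT needs two 32F scratch buffers (eigenvalue and temporary images).
        eigImage = cv.CreateImage(cv.GetSize(grayImage), cv.IPL_DEPTH_32F, 1)
        tempImage = cv.CreateImage(cv.GetSize(grayImage), cv.IPL_DEPTH_32F, 1)

        # Returns a list of (x, y) tuples - the cornerMem iterated over below.
        return cv.GoodFeaturesToTrack(grayImage, eigImage, tempImage,
                                      cornerCount, qualityLevel, minDistance)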

Now, the parameters I use in the two methods obviously don't give results across different lighting conditions. But even in the same lighting condition as the one in which these snapshots were taken, I'm still not getting results for the frames queried from the webcam video.

These GFTT parameters work well for average lighting conditions:

    cornerCount = 100
    qualityLevel = 0.1
    minDistance = 5

whereas these:

    cornerCount = 500
    qualityLevel = 0.005
    minDistance = 30

worked well for the static images shown above.

minDistance = 30 because the two corners would obviously be at least that far apart; again, a trend I saw in my snapshots. But I lowered it for the webcam-feed version of GFTT, because with it I wasn't getting any corners at all.
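
To make the trade-off concrete, here is how the two parameter sets would plug into the hypothetical detect_corners helper sketched earlier (a usage illustration only; frame and the helper itself are my assumptions, not code from the post):

    # Webcam feed, average lighting: permissive quality, small spacing.
    corners_live = detect_corners(frame, cornerCount=100, qualityLevel=0.1, minDistance=5)

    # Static snapshots: many candidates, strict spacing between corners.
    corners_still = detect_corners(frame, cornerCount=500, qualityLevel=0.005, minDistance=30)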

Also, for the live-feed version of GFTT, there's a small change I had to accommodate:

cv.CreateImage((colorImage.width, colorImage.height), 8, 1)

whereas for the still image version (code on pastebin) i used:

cv.CreateImage(cv.GetSize(grayImage), cv.IPL_DEPTH_32F, 1)

Pay attention to the depths.

Would that change the quality of detection?

The eye image I was passing to the GFTT method didn't have a depth of 32F, so I had to change it, and the rest of the temporary images (eigImage, tempImage, etc.) accordingly.
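
For what it's worth, my understanding of the old cv API (an assumption worth verifying) is that GoodFeaturesToTrack expects an 8-bit input image while its two scratch buffers must be IPL_DEPTH_32F, so a wrong depth tends to fail outright rather than silently degrade detection quality:

    # Sketch of the depth split assumed above:
    grayImage = cv.CreateImage((colorImage.width, colorImage.height), 8, 1)  # 8-bit input
    eigImage  = cv.CreateImage(cv.GetSize(grayImage), cv.IPL_DEPTH_32F, 1)   # 32F scratch
    tempImage = cv.CreateImage(cv.GetSize(grayImage), cv.IPL_DEPTH_32F, 1)   # 32F scratch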

Bottom line: I have to finish gaze estimation, but without stable eye corner detection I can't progress, and I have to get on to blink detection and template-matching-based pupil tracking (or do you know better?). Put simply, I want to know whether I'm making any rookie mistakes, or failing to do something, that is stopping me from getting in my webcam video stream the near-perfect eye corner detection that I got in the snapshots posted here.
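
As a side note on the template-matching idea, here is a minimal sketch of what matching a pupil template against an eye image could look like with the same old cv bindings; eyeGray and pupilTemplate are hypothetical 8-bit grayscale images, and the choice of CV_TM_CCOEFF_NORMED is my assumption:

    W, H = cv.GetSize(eyeGray)
    w, h = cv.GetSize(pupilTemplate)
    result = cv.CreateImage((W - w + 1, H - h + 1), cv.IPL_DEPTH_32F, 1)
    cv.MatchTemplate(eyeGray, pupilTemplate, result, cv.CV_TM_CCOEFF_NORMED)
    minVal, maxVal, minLoc, maxLoc = cv.MinMaxLoc(result)
    pupilTopLeft = maxLoc  # with CCOEFF_NORMED, the maximum is the best match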

Anyway, thanks for giving this a look. Any idea how I could perform eye corner detection under various lighting conditions would be very helpful.

Okay, in case you didn't get what I'm doing in my code (how I'm getting the left and right corners), I'll explain:

max_dist = 0
maxL = 20        # leftmost x seen so far; seeded at 20 (explained below)
maxR = 0         # rightmost x seen so far

lc = 0           # left corner of the widest pair found so far
rc = 0           # right corner of the widest pair found so far

maxLP = (0, 0)   # point of the current leftmost corner
maxRP = (0, 0)   # point of the current rightmost corner

for point in cornerMem:
    center = int(point[0]), int(point[1])

    x = point[0]
    y = point[1]

    # Heuristic window: x in the outer fifth/quarter of the frame,
    # y in the 40-70 band where the corners usually fall.
    if (x < colorImage.width/5 or x > ((colorImage.width/4)*3)) and (y > 40 and y < 70):
        #cv.Circle(image, (x, y), 2, cv.RGB(155, 0, 25))

        if maxL > x:         # new leftmost candidate
            maxL = x
            maxLP = center

        if maxR < x:         # new rightmost candidate
            maxR = x
            maxRP = center

        dist = maxR - maxL

        if max_dist < dist:  # keep the widest left/right pair
            max_dist = dist
            lc = maxLP
            rc = maxRP

    cv.Circle(colorImage, center, 1, (200, 100, 255))  # mark every corner

cv.Circle(colorImage, maxLP, 3, cv.RGB(0, 255, 0))  # left eye corner
cv.Circle(colorImage, maxRP, 3, cv.RGB(0, 255, 0))  # right eye corner

maxLP and maxRP store the (x, y) of the left and right corners of the eye respectively. What I'm doing here is keeping one variable each for left and right corner detection, maxL and maxR, which are compared against the x-values of the detected corners. For maxL the seed has to be something greater than 0; I assigned it 20, so that if a corner is detected at (x, y) with x < 20, maxL becomes x. In other words, the leftmost corner's x-ordinate is found this way, and similarly for the rightmost corner.

I tried maxL = 50 too (though that would mean the left corner sits almost in the middle of the eye region), to get more candidates from the webcam feed, in which I'm not getting any corners at all.

Also, max_dist stores the maximum distance seen so far between the x-ordinates, and thus determines which pair of corners is taken as the left and right eye corners: the pair with the maximum distance, max_dist.

Also, I've seen from my snapshots that the eye corners' y-ordinates fall between 40 and 70, so I used that too, to shrink the candidate pool.

Solution

I changed this:

if ( x<colorImage.width/5 or x>((colorImage.width/4)*3) ) and (y>40 and y<70):

to this:

if ( x<(w/5) or x>((w/4)*3) ) and (y>int(h*0.45) and y<int(h*0.65)):

because earlier I was just hard-coding, from manual inspection, the pixel bounds of the windows where corners could be found with the highest probability. Afterwards I realised I should make it general, so I used a horizontal band from 45% to 65% of the y-range, and 1/5th to 3/4ths of the x-range, because that's the usual area within which the corners lie.
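
Pulled out as a standalone predicate, the filter reads like this (my restatement of the line above; w and h are assumed to be the frame width and height):

    def in_corner_window(x, y, w, h):
        # Outer fifth (left) or outer quarter (right) of the x-range,
        # and the 45%-65% band of the y-range.
        in_x_band = x < (w / 5) or x > ((w / 4) * 3)
        in_y_band = int(h * 0.45) < y < int(h * 0.65)
        return in_x_band and in_y_band

    # e.g. candidates = [p for p in cornerMem if in_corner_window(p[0], p[1], w, h)]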

Sorry for replying late; I was busy with the later part of the project, gaze estimation, and I'm going to post a question about it since I'm stuck there.

By the way, here are a few pictures of the eye corners and pupil detected in my eye (enlarged to 100x100):

Hope this will be useful for others starting out in this area.
