Defining an (initial) set of Haar Like Features


Question

When it comes to cascade classifiers (using Haar-like features), I always read that methods like AdaBoost are used to select the 'best' features for detection. However, this only works if there is some initial set of features to begin boosting with.

Given a 24x24 pixel image, there are 162,336 possible Haar features. I might be wrong here, but I don't think libraries like OpenCV initially test against all of these features.
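The 162,336 figure can be reproduced by enumerating every scale and position of the five standard Viola/Jones feature types inside a 24x24 window. A minimal sketch (the base shapes and function name are illustrative, but the counting method is the standard one):

```python
# Count all positions and scales of the five standard Viola/Jones
# feature types inside a 24x24 detection window.
WINDOW = 24

# Base shapes (width, height): two 2-rectangle features, two
# 3-rectangle features, and one 4-rectangle (diagonal) feature.
BASE_SHAPES = [(2, 1), (1, 2), (3, 1), (1, 3), (2, 2)]

def count_features(window, base_w, base_h):
    """Count placements of one feature type at every scale and position."""
    total = 0
    # Scale width and height independently in multiples of the base size.
    for w in range(base_w, window + 1, base_w):
        for h in range(base_h, window + 1, base_h):
            # A w x h rectangle fits at (window - w + 1) * (window - h + 1)
            # top-left positions.
            total += (window - w + 1) * (window - h + 1)
    return total

total = sum(count_features(WINDOW, w, h) for w, h in BASE_SHAPES)
print(total)  # → 162336
```

The per-type counts come out to 43,200 each for the 2-rectangle features, 27,600 each for the 3-rectangle features, and 20,736 for the 4-rectangle feature.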

So my question is: how are the initial features selected, or how are they generated? Is there any guideline about the initial number of features?

And if all 162,336 features are used initially, how are they generated?

Answer

I presume you're familiar with Viola and Jones' original work on this topic.

You start by manually choosing a feature type (e.g. rectangle A). This gives you a mask with which you can train your weak classifiers. To avoid moving the mask pixel by pixel and retraining (which would take a huge amount of time without any gain in accuracy), you can specify how far the feature moves in the x and y direction per trained weak classifier. The size of these jumps depends on your data size. The goal is for the mask to be able to move into and out of the detected object. The size of the feature can also be variable.
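The strided placement described above can be sketched as a small generator. The function name and default step sizes are hypothetical; the point is only that jumping by a stride shrinks the number of mask positions (and thus weak classifiers to train) compared with pixel-by-pixel sliding:

```python
def feature_positions(window=24, feat_w=8, feat_h=8, step_x=4, step_y=4):
    """Return top-left anchors for a feat_w x feat_h mask inside a
    window x window image, jumping by (step_x, step_y) instead of
    sliding pixel by pixel."""
    positions = []
    for y in range(0, window - feat_h + 1, step_y):
        for x in range(0, window - feat_w + 1, step_x):
            positions.append((x, y))
    return positions

# With a stride of 4, an 8x8 mask in a 24x24 window yields 5x5 = 25
# anchors instead of the 17x17 = 289 produced by pixel-by-pixel sliding.
print(len(feature_positions()))  # → 25
```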

After you've trained multiple classifiers, each with its respective feature (i.e. mask position), you proceed with AdaBoost and cascade training as usual.
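For reference, the cascade that the trained stages feed into evaluates a window stage by stage and rejects early, which is what makes detection fast. A toy sketch (the stage structure, feature names, and thresholds here are invented for illustration, not OpenCV's internal format):

```python
def cascade_predict(stages, window):
    """Evaluate a detection cascade on one window. Each stage is a pair
    (weak_classifiers, threshold), where weak_classifiers is a list of
    (alpha, stump) and each stump maps the window to +1 or -1. A window
    must pass every stage; most negatives are rejected by the early,
    cheap stages."""
    for weak_classifiers, threshold in stages:
        score = sum(alpha * stump(window) for alpha, stump in weak_classifiers)
        if score < threshold:
            return False  # rejected: stop evaluating further stages
    return True  # passed all stages: report a detection

# Toy cascade: stage 1 checks one coarse feature, stage 2 checks two.
stages = [
    ([(1.0, lambda w: 1 if w["contrast"] > 0.2 else -1)], 0.0),
    ([(0.8, lambda w: 1 if w["edge_h"] > 0.5 else -1),
      (0.6, lambda w: 1 if w["edge_v"] > 0.5 else -1)], 0.0),
]
face = {"contrast": 0.9, "edge_h": 0.7, "edge_v": 0.6}
background = {"contrast": 0.1, "edge_h": 0.9, "edge_v": 0.9}
print(cascade_predict(stages, face))        # → True
print(cascade_predict(stages, background))  # → False
```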

The number of features/weak classifiers is highly dependent on your data and experimental setup (i.e. also on the type of classifier you use). You'll need to test the parameters extensively to find out which types of features work best (rectangles/circles/tetris-like objects, etc.). I worked on this two years ago, and it took us quite a long time to evaluate which features and feature-generation heuristics yielded the best results.

If you want to start somewhere, just take one of the four original Viola/Jones features and train a classifier applying it anchored at (0,0). Train the next classifier at (x,0), the next at (2x,0), ... (0,y), (0,2y), (0,4y), ... (x,y), (x,2y), etc., and see what happens. Most likely you'll find that it's fine to have fewer weak classifiers, i.e. you can keep increasing the x/y step values that determine how the mask slides. You can also have the mask grow, or do other things to save time. The reason this "lazy" feature generation works is AdaBoost: as long as these features make the classifiers slightly better than random, AdaBoost will combine them into a meaningful classifier.
