How to Use PCA to Reduce Dimension
Problem Description
Input: LBP features extracted from an image with dimension 75520, so the input LBP data contains 1 row and 75520 columns.
Required output: apply PCA on the input to reduce its dimension.
Currently my code is as follows:
void PCA_DimensionReduction(Mat &src, Mat &dst){
    const int PCA_DIMENSION_VAL = 40; // was "int PCA_DIMENSON_VAL 40", which does not compile
    Mat tmp = src.reshape(1,1); // 1 row x 75520 cols
    Mat input_feature_vector;
    Mat projection_result;
    normalize(tmp, input_feature_vector, 0, 1, NORM_MINMAX, CV_32FC1);
    PCA pca(input_feature_vector, Mat(), CV_PCA_DATA_AS_ROW, PCA_DIMENSION_VAL);
    pca.project(input_feature_vector, projection_result);
    dst = projection_result.reshape(1,1);
}
Basically I am using these features to match similarity between two images, but after applying PCA I am not getting proper results (unlike without PCA).
Any help will be appreciated.

Regards,
Haris
Recommended Answer
You will have to collect feature vectors from a lot of images, build a single PCA from them (offline), and later use the mean & eigenvectors for the projection.
// let's say you have collected 10 feature vectors of 30 elements each.
// flatten each to a single row (reshape(1,1)) and push_back into a big data Mat
Mat D(10,30,CV_32F); // 10 rows (features), 30 elements each
randu(D,0,10); // only for the simulation here
cerr << D.size() << endl;
// [30 x 10]
// now make a pca, that will only retain 6 eigenvectors
// so the later projections are shortened to 6 elements:
PCA p(D,Mat(),CV_PCA_DATA_AS_ROW,6);
cerr << p.eigenvectors.size() << endl;
// [30 x 6]
// now, that the training step is done, we can use it to
// shorten feature vectors:
// either keep the PCA around for projecting:
// a random test vector,
Mat v(1,30,CV_32F);
randu(v,0,30);
// pca projection:
Mat vp = p.project(v);
cerr << vp.size() << endl;
cerr << vp << endl;
// [6 x 1]
// [-4.7032223, 0.67155731, 15.192059, -8.1542597, -4.5874329, -3.7452228]
// or, maybe, save pca.mean and pca.eigenvectors only, and do your own projection:
Mat vp2 = (v - p.mean) * p.eigenvectors.t();
cerr << vp2.size() << endl;
cerr << vp2 << endl;
//[6 x 1]
//[-4.7032223, 0.67155731, 15.192059, -8.1542597, -4.5874329, -3.7452228]
Well, here's the downside: calculating a PCA from 4.4k training images with 75k feature elements each will take a good day ;)