OpenCV 3.1 ANN predict returns nan


Problem Description

I am trying to implement a neural network with the OpenCV ANN library. I had a working solution, but after upgrading to OpenCV 3.1 it stopped working. So I created simplified test code, but the problem remains. The ANN trains successfully, but when I try to call predict with a row from trainData, it returns a Mat of nan values. The code is

cv::Ptr< cv::ml::ANN_MLP > nn = cv::ml::ANN_MLP::create();
nn->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM);
nn->setTrainMethod(cv::ml::ANN_MLP::BACKPROP);
nn->setBackpropMomentumScale(0.1);
nn->setBackpropWeightScale(0.1);
nn->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER, (int)100000, 1e-6));

cv::Mat trainData(15, 4, CV_32FC1);
trainData.at<float>(0, 0) = 5.5f; trainData.at<float>(0, 1) = 3.5f; trainData.at<float>(0, 2) = 1.3f; trainData.at<float>(0, 3) = 0.2f;
trainData.at<float>(1, 0) = 6.5f; trainData.at<float>(1, 1) = 2.8f; trainData.at<float>(1, 2) = 4.5999999f; trainData.at<float>(1, 3) = 1.5f;
trainData.at<float>(2, 0) = 6.3000002f; trainData.at<float>(2, 1) = 2.3f; trainData.at<float>(2, 2) = 4.4000001f; trainData.at<float>(2, 3) = 1.3f;
trainData.at<float>(3, 0) = 6.0f; trainData.at<float>(3, 1) = 2.2f; trainData.at<float>(3, 2) = 4.0f; trainData.at<float>(3, 3) = 1.0f;
trainData.at<float>(4, 0) = 4.5999999f; trainData.at<float>(4, 1) = 3.0999999f; trainData.at<float>(4, 2) = 1.5f; trainData.at<float>(4, 3) = 0.2f;
trainData.at<float>(5, 0) = 5.0f; trainData.at<float>(5, 1) = 3.2f; trainData.at<float>(5, 2) = 1.2f; trainData.at<float>(5, 3) = 0.2f;
trainData.at<float>(6, 0) = 7.4000001f; trainData.at<float>(6, 1) = 2.8f; trainData.at<float>(6, 2) = 6.0999999f; trainData.at<float>(6, 3) = 1.9f;
trainData.at<float>(7, 0) = 6.0f; trainData.at<float>(7, 1) = 2.9000001f; trainData.at<float>(7, 2) = 4.5f; trainData.at<float>(7, 3) = 1.5f;
trainData.at<float>(8, 0) = 5.0f; trainData.at<float>(8, 1) = 3.4000001f; trainData.at<float>(8, 2) = 1.5f; trainData.at<float>(8, 3) = 0.2f;
trainData.at<float>(9, 0) = 6.4000001f; trainData.at<float>(9, 1) = 2.9000001f; trainData.at<float>(9, 2) = 4.3000002f; trainData.at<float>(9, 3) = 1.3f;
trainData.at<float>(10, 0) = 7.1999998f; trainData.at<float>(10, 1) = 3.5999999f; trainData.at<float>(10, 2) = 6.0999999f; trainData.at<float>(10, 3) = 2.5f;
trainData.at<float>(11, 0) = 5.0999999f; trainData.at<float>(11, 1) = 3.3f; trainData.at<float>(11, 2) = 1.7f; trainData.at<float>(11, 3) = 0.5f;
trainData.at<float>(12, 0) = 7.1999998f; trainData.at<float>(12, 1) = 3.0f; trainData.at<float>(12, 2) = 5.8000002f; trainData.at<float>(12, 3) = 1.6f;
trainData.at<float>(13, 0) = 6.0999999f; trainData.at<float>(13, 1) = 2.8f; trainData.at<float>(13, 2) = 4.0f; trainData.at<float>(13, 3) = 1.3f;
trainData.at<float>(14, 0) = 5.8000002f; trainData.at<float>(14, 1) = 2.7f; trainData.at<float>(14, 2) = 4.0999999f; trainData.at<float>(14, 3) = 1.0f;

cv::Mat trainLabels(15, 1, CV_32FC1);
trainLabels.at<float>(0, 0) = 0; trainLabels.at<float>(1, 0) = 0;
trainLabels.at<float>(2, 0) = 0; trainLabels.at<float>(3, 0) = 0;
trainLabels.at<float>(4, 0) = 0; trainLabels.at<float>(5, 0) = 0;
trainLabels.at<float>(6, 0) = 1; trainLabels.at<float>(7, 0) = 0;
trainLabels.at<float>(8, 0) = 0; trainLabels.at<float>(9, 0) = 0;
trainLabels.at<float>(10, 0) = 1; trainLabels.at<float>(11, 0) = 0;
trainLabels.at<float>(12, 0) = 1; trainLabels.at<float>(13, 0) = 0; trainLabels.at<float>(14, 0) = 0;

cv::Mat layers = cv::Mat(3, 1, CV_32SC1);
layers.row(0) = cv::Scalar(trainData.cols);
layers.row(1) = cv::Scalar(4);
layers.row(2) = cv::Scalar(1);
nn->setLayerSizes(layers);
nn->train(trainData, cv::ml::SampleTypes::ROW_SAMPLE, trainLabels);

cv::Mat out;
nn->predict(trainData.row(6), out);

for (int y = 0; y < out.cols; y++) {
    std::cout << out.row(0).col(y) << ",";
}

std::cout << std::endl;

The output is:

[nan]

[nan],
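
(Incidentally, nan or inf values in a result Mat can also be detected programmatically with cv::checkRange(); the snippet below is only an illustrative sketch, not part of the original post.)

if (!cv::checkRange(out)) {  // checkRange() returns false if out contains NaN or Inf
    std::cout << "prediction contains NaN/Inf" << std::endl;
}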

The trainData matrix has 15 rows and 4 columns; the values are set manually. trainLabels is a matrix of 15 rows and 1 column.
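
As a side note (not from the original post), the same two matrices can be wrapped in a cv::ml::TrainData object, which makes the sample layout explicit and lets you verify the dimensions before training; a minimal sketch:

cv::Ptr<cv::ml::TrainData> td =
    cv::ml::TrainData::create(trainData, cv::ml::ROW_SAMPLE, trainLabels);
std::cout << td->getNSamples() << " samples, "          // expected: 15
          << td->getNVars() << " features" << std::endl; // expected: 4
nn->train(td);  // equivalent to nn->train(trainData, ROW_SAMPLE, trainLabels)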

I am using Visual Studio 2015 and the project targets x86.

Edit: When I save the algorithm using nn->save("file") I get the following:

<?xml version="1.0"?>
<opencv_storage>
<opencv_ml_ann_mlp>
  <format>3</format>
  <layer_sizes>
    4 2 1</layer_sizes>
  <activation_function>SIGMOID_SYM</activation_function>
  <f_param1>1.</f_param1>
  <f_param2>1.</f_param2>
  <min_val>0.</min_val>
  <max_val>0.</max_val>
  <min_val1>0.</min_val1>
  <max_val1>0.</max_val1>
  <training_params>
    <train_method>BACKPROP</train_method>
    <dw_scale>1.0000000000000001e-01</dw_scale>
    <moment_scale>1.0000000000000001e-01</moment_scale>
    <term_criteria>
      <iterations>100000</iterations></term_criteria></training_params>
  <input_scale>
    3.0610774975484543e+02 -7.2105386030315177e+00
    6.5791999914499740e+02 -7.6542332347898991e+00
    1.4846784833724132e+02 -2.1387134611442429e+00
    3.7586804114718842e+02 -1.5919117803235303e+00</input_scale>
  <output_scale>
    .Inf .Nan</output_scale>
  <inv_output_scale>
    0. 0.</inv_output_scale>
  <weights>
    <_>
      -9.9393472658672849e-02 -2.6465950290426005e-01
      7.0886408359726163e-02 2.9121955862626381e-01
      5.6651702579549310e-02 -2.1540916480791003e-01
      -1.0692250684467182e-01 -2.4494868679529785e-01
      5.2300263291242721e-01 7.7835339395571990e-03</_>
    <_>
      6.8110331452494011e-01 -1.4243818904976885e-01
      -1.7380883866714303e-01</_></weights></opencv_ml_ann_mlp>
</opencv_storage>
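
(A saved model like this can also be loaded back and queried from code; the sketch below is only illustrative and assumes cv::Algorithm::load is available in your OpenCV 3.1 build.)

cv::Ptr<cv::ml::ANN_MLP> loaded = cv::Algorithm::load<cv::ml::ANN_MLP>("file");
cv::Mat out2;
loaded->predict(trainData.row(6), out2);  // should reproduce the same (nan) result
std::cout << out2 << std::endl;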

Answer

OK, after a while of trying possible combinations I found a solution.

The activation function must be set after setting the layer sizes. I don't know exactly why, but when I flip the lines like this

nn->setLayerSizes(layers);
nn->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM);

it works. If someone knows the reason for this, please tell me.
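
For reference, a minimal sketch of the full corrected setup order (same parameters as in the question, just with setLayerSizes() called before setActivationFunction()):

cv::Ptr<cv::ml::ANN_MLP> nn = cv::ml::ANN_MLP::create();

cv::Mat layers = (cv::Mat_<int>(3, 1) << 4, 4, 1);        // input, hidden, output
nn->setLayerSizes(layers);                                 // layer sizes first
nn->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM);   // then the activation
nn->setTrainMethod(cv::ml::ANN_MLP::BACKPROP);
nn->setBackpropMomentumScale(0.1);
nn->setBackpropWeightScale(0.1);
nn->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER, 100000, 1e-6));

nn->train(trainData, cv::ml::ROW_SAMPLE, trainLabels);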
