Equation that computes a Neural Network in MATLAB


Problem description



I created a neural network in MATLAB. This is the script:


    load dati.mat;
    inputs=dati(:,1:8)';
    targets=dati(:,9)';
    hiddenLayerSize = 10;
    net = patternnet(hiddenLayerSize);
    net.inputs{1}.processFcns = {'removeconstantrows','mapminmax', 'mapstd','processpca'};
    net.outputs{2}.processFcns = {'removeconstantrows','mapminmax', 'mapstd','processpca'};

    net = struct(net);
    net.inputs{1}.processParams{2}.ymin = 0;
    net.inputs{1}.processParams{4}.maxfrac = 0.02;
    net.outputs{2}.processParams{4}.maxfrac = 0.02;
    net.outputs{2}.processParams{2}.ymin = 0;
    net = network(net);

    net.divideFcn = 'divideind';  
    net.divideMode = 'sample';  % Divide up every sample
    net.divideParam.trainInd = 1:428;
    net.divideParam.valInd = 429:520;
    net.divideParam.testInd = 521:612;
    net.trainFcn = 'trainscg';  % Scaled conjugate gradient backpropagation
    net.performFcn = 'mse';  % Mean squared error
    net.plotFcns = {'plotperform','plottrainstate','ploterrhist', 'plotregression', 'plotconfusion', 'plotroc'};
    net=init(net);
    net.trainParam.max_fail=20;

    [net,tr] = train(net,inputs,targets);

    outputs = net(inputs);
    errors = gsubtract(targets,outputs);
    performance = perform(net,targets,outputs)

Now I want to save the weights and biases of the network and write out its equation. I saved the weights and biases:


    W1=net.IW{1,1};
    W2=net.LW{2,1};
    b1=net.b{1,1};
    b2=net.b{2,1};

Then I did the data preprocessing and wrote the following equation:


    max_range=0;
    [y,ps]=removeconstantrows(input, max_range);

    ymin=0;
    ymax=1;
    [y,ps2]=mapminmax(y,ymin,ymax);

    ymean=0;
    ystd=1;
    y=mapstd(x,ymean,ystd);

    maxfrac=0.02;
    y=processpca(y,maxfrac);

    in=y';

    uscita=tansig(W2*(tansig(W1*in+b1))+b2);

But with the same input, input = [1:8], I get different results. Why? What's wrong? Please help, it's important!

I use MATLAB R2010b.

Solution

It looks like you are pre-processing the inputs but not post-processing the outputs. Post-processing uses the "reverse" processing form: the targets were pre-processed during training, so the network's raw outputs must be reverse-processed to get back to target units.
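As a sketch of what that means for the hand-written formula: the forward processing should reuse the settings the trained network stored (rather than recomputing them from scratch with `removeconstantrows`/`mapminmax`/`mapstd`/`processpca` on new data), and the raw output of the last layer should be pushed back through the output processing functions with the `'reverse'` form, in the opposite order. The snippet below assumes the `processSettings` property of the R2010b Neural Network Toolbox network object and that both layers use `tansig`, matching the question's own equation; the settings indices mirror the order of `processFcns` set in the question.

```matlab
% x is one raw input column (8x1). Forward-process it with the
% settings stored in the trained network, not freshly computed ones.
inSet  = net.inputs{1}.processSettings;   % one settings struct per processFcn
outSet = net.outputs{2}.processSettings;

p = x;
p = removeconstantrows('apply', p, inSet{1});
p = mapminmax('apply',          p, inSet{2});
p = mapstd('apply',             p, inSet{3});
p = processpca('apply',         p, inSet{4});

% Layer computation (tansig in both layers, as in the question's formula)
a = tansig(W2 * tansig(W1 * p + b1) + b2);

% Reverse the output processing, in the opposite order to pre-processing
y = processpca('reverse',         a, outSet{4});
y = mapstd('reverse',             y, outSet{3});
y = mapminmax('reverse',          y, outSet{2});
y = removeconstantrows('reverse', y, outSet{1});
```

With both fixes in place, `y` should match `net(x)` for the same `x`. Note that the question's manual version also overwrites the settings by calling `mapminmax(y,ymin,ymax)` etc. directly on the new data, and passes an undefined `x` to `mapstd`; reusing the stored settings via the `'apply'` form avoids both problems.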
