Try to simulate a neural network in MATLAB by myself


Problem description

I tried to create a neural network to estimate y = x^2. So I created a fitting neural network and gave it some samples for input and output. I tried to build this network in C++, but the result is different than I expected.

Using the following input:

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 -11 -12 -13 -14 -15 -16 -17 -18 -19 -20 -21 -22 -23 -24 -25 -26 -27 -28 -29 -30 -31 -32 -33 -34 -35 -36 -37 -38 -39 -40 -41 -42 -43 -44 -45 -46 -47 -48 -49 -50 -51 -52 -53 -54 -55 -56 -57 -58 -59 -60 -61 -62 -63 -64 -65 -66 -67 -68 -69 -70 -71

and the following output:

0 1 4 9 16 25 36 49 64 81 100 121 144 169 196 225 256 289 324 361 400 441 484 529 576 625 676 729 784 841 900 961 1024 1089 1156 1225 1296 1369 1444 1521 1600 1681 1764 1849 1936 2025 2116 2209 2304 2401 2500 2601 2704 2809 2916 3025 3136 3249 3364 3481 3600 3721 3844 3969 4096 4225 4356 4489 4624 4761 4900 5041 1 4 9 16 25 36 49 64 81 100 121 144 169 196 225 256 289 324 361 400 441 484 529 576 625 676 729 784 841 900 961 1024 1089 1156 1225 1296 1369 1444 1521 1600 1681 1764 1849 1936 2025 2116 2209 2304 2401 2500 2601 2704 2809 2916 3025 3136 3249 3364 3481 3600 3721 3844 3969 4096 4225 4356 4489 4624 4761 4900 5041

I used the fitting tool network, with the samples as matrix rows. Training is 70%, validation is 15% and testing is 15% as well. The number of hidden neurons is two. Then on the command line I wrote this:

purelin(net.LW{2}*tansig(net.IW{1}*inputTest+net.b{1})+net.b{2})

Other information:

My net.b{1} is: -1.16610230053776 1.16667147712026

My net.b{2} is: 51.3266249426358

And net.IW{1} is: 0.344272596370387 0.344111217766824

And net.LW{2} is: 31.7635369693519 -31.8082184881063

When my inputTest is 3, the result of this command is 16, while it should be about 9. Have I made an error somewhere?

I found the Stack Overflow post Neural network in MATLAB, which contains a problem like mine, but with one small difference: in that problem the ranges of the input and output are the same, while in my problem they are not. That solution says I need to scale the results, but how can I scale my result?

Answer

You are right about scaling. As mentioned in the linked answer, the neural network by default scales the input and output to the range [-1,1]. This can be seen in the network's processing-function configuration:

>> net = fitnet(2);

>> net.inputs{1}.processFcns
ans =
    'removeconstantrows'    'mapminmax'

>> net.outputs{2}.processFcns
ans =
    'removeconstantrows'    'mapminmax'

The second preprocessing function applied to both the input and the output is mapminmax, with the following parameters:

>> net.inputs{1}.processParams{2}
ans =
    ymin: -1
    ymax: 1

>> net.outputs{2}.processParams{2}
ans =
    ymin: -1
    ymax: 1

which map both into the range [-1,1] (prior to training).
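
As a quick illustration of what that mapping does (the sample vector here is just illustrative), mapminmax linearly rescales a vector so that its minimum lands on -1 and its maximum on 1:

>> mapminmax([0 25 50 75 100], -1, 1)
ans =
   -1.0000   -0.5000         0    0.5000    1.0000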

This means that the trained network expects input values in this range, and also outputs values in that same range. If you want to manually feed input to the network and compute the output yourself, you have to scale the data at the input, and reverse the mapping at the output.
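
Applied to the question above, a minimal sketch might look as follows. It assumes the trained network keeps the mapminmax settings it used during training in processSettings, with mapminmax as the second entry (matching the default processFcns order shown earlier):

%# minimal sketch: manually simulate the trained net for one test value
inputTest = 3;
inSettings  = net.inputs{1}.processSettings{2};   %# input mapminmax settings
outSettings = net.outputs{2}.processSettings{2};  %# output mapminmax settings

in  = mapminmax('apply', inputTest, inSettings);  %# scale input to [-1,1]
raw = purelin(net.LW{2}*tansig(net.IW{1}*in + net.b{1}) + net.b{2});
out = mapminmax('reverse', raw, outSettings)      %# back to original scale, ~9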

One last thing to remember is that each time you train the ANN, you will get different weights. If you want reproducible results, you need to fix the state of the random number generator (initialize it with the same seed each time). Read the documentation on functions like rng and RandStream.
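
For example, a sketch along these lines (the seed value 0 is arbitrary):

%# fix the RNG seed before each run to get reproducible weights
rng(0);                  %# same seed each time => same initial weights
net = fitnet(2);
net = train(net,x,y);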

You also have to pay attention that if you are dividing the data into training/testing/validation sets, you must use the same split each time (which is probably also affected by the randomness aspect I mentioned).
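
One way to pin the split down is to use index-based division instead of the default random division; the index patterns below are just illustrative:

%# deterministic train/val/test split via explicit indices
net.divideFcn = 'divideind';
net.divideParam.trainInd = 1:2:numel(x);    %# ~50% training (illustrative)
net.divideParam.valInd   = 2:4:numel(x);    %# ~25% validation
net.divideParam.testInd  = 4:4:numel(x);    %# ~25% testing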

Here is an example to illustrate the idea (adapted from another post of mine):

%%# data
x = linspace(-71,71,200);            %# 1D input
y_model = x.^2;                      %# model
y = y_model + 10*randn(size(x)).*x;  %# add some noise

%%# create ANN, train, simulate
net = fitnet(2);                     %# one hidden layer with 2 nodes
net.divideFcn = 'dividerand';
net.trainParam.epochs = 50;
net = train(net,x,y);
y_hat = net(x);

%%# plot
plot(x, y, 'b.'), hold on
plot(x, x.^2, 'Color','g', 'LineWidth',2)
plot(x, y_hat, 'Color','r', 'LineWidth',2)
legend({'data (noisy)','model (x^2)','fitted'})
hold off, grid on

%%# manually simulate network
%# map input to [-1,1] range
[~,inMap] = mapminmax(x, -1, 1);
in = mapminmax('apply', x, inMap);

%# propagate values to get output (scaled to [-1,1])
hid = tansig( bsxfun(@plus, net.IW{1}*in, net.b{1}) ); %# hidden layer
outLayerOut = purelin( net.LW{2}*hid + net.b{2} );     %# output layer

%# reverse mapping from [-1,1] to original data scale
[~,outMap] = mapminmax(y, -1, 1);
out = mapminmax('reverse', outLayerOut, outMap);

%# compare against MATLAB output
max( abs(out - y_hat) )        %# this should be zero (or in the order of `eps`)

I opted to use the mapminmax function, but you could have done it manually as well. The formula is a pretty simple linear mapping:

y = (ymax-ymin)*(x-xmin)/(xmax-xmin) + ymin;
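
Plugging in the test point from the question as a quick check (assuming the input range [-71,71] from the training data):

%# worked example: map xq = 3 from [-71,71] into [-1,1]
xmin = -71; xmax = 71; ymin = -1; ymax = 1;
xq = 3;
yq = (ymax-ymin)*(xq-xmin)/(xmax-xmin) + ymin   %# = 2*74/142 - 1 ~ 0.0423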
