Deriving equation by weights and biases from a neural network

Question

I trained a neural network on a large database and got good results when testing it (a very small error, roughly 4%). Now I want to use the weights and biases to derive an equation so that I can obtain my outputs directly (without running the network again). How can I derive such an equation?

Answer

Well, it depends on which kind of neural network you are using. If it's a simple feedforward network, then all you need to do to derive your formula is to propagate your inputs through the weight matrices, bias vectors and activation functions.

Let's say you have an SLFN (Single Layer Feedforward Network) which basically means you have an input layer, a hidden layer and an output layer.

Let's denote:

  • input vector X
  • weight matrix between input and hidden W_ih
  • bias vector on hidden layer b
  • activation function on hidden nodes f
  • output of the hidden layer Y_h
  • weight matrix between hidden and output W_ho
  • output vector Y

The steps to compute the output are:

1- Propagate your input vector through the connections to the hidden layer and add the bias terms. This gives you the total input Z "entering" the hidden layer (which is sometimes called the "logit"):

Z = X * W_ih + B

where B is the matrix that has every row equal to the vector b, and as many rows as you have input cases.
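In code, B never needs to be built explicitly: broadcasting adds b to every row of X * W_ih automatically. A minimal NumPy sketch of step 1 (the shapes — 5 input cases, 3 features, 4 hidden nodes — are illustrative assumptions, not values from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((5, 3))       # 5 input cases, one per row
W_ih = rng.random((3, 4))    # weight matrix between input and hidden layer
b = rng.random(4)            # bias vector of the hidden layer

# Z = X * W_ih + B: broadcasting adds b to every row,
# which is exactly the matrix B described above.
Z = X @ W_ih + b
print(Z.shape)  # (5, 4): one row of hidden-layer inputs per input case
```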

2- Apply the activation function to this logit:

Y_h = f(Z) = f(X * W_ih + B)

3- Propagate this vector once more through the connections to the output layer; your output vector Y is equal to:

Y = Y_h * W_ho = f(X * W_ih + B) * W_ho

Now if you have more hidden layers, all you have to do is repeat steps 1 and 2 for every additional hidden layer and end with step 3 for your output layer.
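Putting the three steps together, here is a sketch of the whole forward pass in NumPy. The layer sizes, the choice of sigmoid as f, and the purely linear output layer (matching Y = Y_h * W_ho above, with no output bias) are assumptions for illustration — substitute whatever your trained network actually used:

```python
import numpy as np

def sigmoid(z):
    # Assumed activation f; replace with your network's actual activation.
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, weights, biases, f=sigmoid):
    """Steps 1-3: repeat steps 1 and 2 for every hidden layer, then
    finish with the linear step 3. `weights` has one matrix per layer
    of connections; the last one maps the final hidden layer to the
    output, and `biases` has one vector per hidden layer."""
    Y = X
    for W, b in zip(weights[:-1], biases):  # steps 1 and 2, repeated
        Y = f(Y @ W + b)                    # Y_h = f(Z) = f(Y * W + B)
    return Y @ weights[-1]                  # step 3: Y = Y_h * W_ho

# SLFN example with assumed sizes: 3 inputs, 4 hidden nodes, 2 outputs.
rng = np.random.default_rng(0)
W_ih = rng.random((3, 4))
b = rng.random(4)
W_ho = rng.random((4, 2))
X = rng.random((5, 3))

Y = forward(X, [W_ih, W_ho], [b])
print(Y.shape)  # (5, 2): one output row per input case
```

Adding a hidden layer is just a matter of appending another weight matrix and bias vector to the lists, exactly as the answer describes.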
