Neural Networks: Does the input layer consist of neurons?


Question


I am currently studying neural network theory, and everywhere I look it is written that a network consists of the following layers:

  • Input Layer
  • Hidden Layer(s)
  • Output Layer

I see some graphical descriptions that show the input layer as real nodes in the net, while others show this layer as just a vector of values [x1, x2, ... xn].

What is the correct structure?

Is the "input layer" a real layer of neurons? Or is it just abstractly called a layer, while really being nothing more than the input vector?

Here are the contradicting and confusing pictures I found on the web:

Here it looks like the input layer consists of neurons:

Here it looks like the input layer is just an input vector:

Solution

Let me answer your question with some mathematical notation, which will make this easier to understand than random images. First, remember the perceptron.

The task of the perceptron is to find a decision function that classifies the points of a given set into classes. It does this with a function

f : R^n -> R , f(X) = <W, X> + b

where W is a vector of weights and X is the vector representing a point. For example, if you have a line defined by the equation 3x + y = 0, then W is (3, 1) and X is (x, y).
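The decision function above is simple enough to sketch directly in code. Here is a minimal, hypothetical perceptron in Python (the names `perceptron` and `classify` are my own, not from any library), using the sign of f(X) to make a binary decision:

```python
def perceptron(W, X, b=0.0):
    """Compute the decision function f(X) = <W, X> + b."""
    return sum(w * x for w, x in zip(W, X)) + b

def classify(W, X, b=0.0):
    """Assign a point to class 1 or 0 by the sign of f(X)."""
    return 1 if perceptron(W, X, b) >= 0 else 0

# The line 3x + y = 0 from the example: W = (3, 1).
W = (3.0, 1.0)
print(perceptron(W, (1.0, 2.0)))   # 3*1 + 1*2 = 5.0
print(classify(W, (-1.0, 2.0)))    # 3*(-1) + 1*2 = -1 -> class 0
```

Points on one side of the line get class 1, points on the other side class 0; training a perceptron means searching for the W and b that place the line correctly.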

A Neural Network can be thought of as a graph where each vertex of the graph is a simple perceptron - that is, each node in the network is nothing but a function that takes in some value and outputs a new one, which could then be used for the next node. In your second image, this would be the two hidden layers.
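To make the graph view concrete, here is a tiny hand-written forward pass (all names and the choice of sigmoid are my own illustration, not a specific library's API). Note that the input "layer" is just the vector X: it performs no computation and is simply handed to the first layer of perceptron-like nodes:

```python
import math

def node(weights, inputs, bias=0.0):
    # One perceptron-like node: weighted sum plus a nonlinearity (sigmoid).
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer(weight_rows, inputs):
    # One layer: several nodes, all reading the same input vector.
    return [node(w, inputs) for w in weight_rows]

X = [0.5, -1.0, 2.0]                     # the "input layer": just values
hidden = layer([[0.1, 0.2, 0.3],
                [0.4, -0.5, 0.6]], X)    # first layer that computes anything
output = layer([[1.0, -1.0]], hidden)    # output layer
```

This is one way to read the two conventions in your images: drawing the input as circles is a visual convenience, while in the computation it is only the vector fed to the first hidden layer.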

What then do these nodes need as input? A set of W and Xs - weight and point vectors - which in your image are written as x0, x1, .. xn and w0, w1, .. wn.

Ultimately, we can conclude that what a neural network needs in order to function is an input vector of points together with a set of weights.

My overall advice would be to pick one source for your learning and stick to it, rather than jumping between internet sources with conflicting images.

