Can someone explain Artificial Neural Networks?


Question

According to Wikipedia (which is a bad source, I know), a neural network is comprised of:

  • An input layer of A neurons

  • Multiple (B) hidden layers, each comprised of C neurons

  • An output layer of D neurons

I understand what the input and output layers mean.

My question is: how do I determine the optimal number of layers and neurons per layer?

  • What are the pros/cons of increasing "B"?
  • What are the pros/cons of increasing "C"?
  • What is the difference between increasing "B" versus "C"?

Is it only a matter of time (limits of processing power), or will making the network deeper limit the quality of the results? Should I focus more on depth (more layers) or on breadth (more neurons per layer)?

Answer

Answer 1. One hidden layer will model most problems; at most, two layers can be used.
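To make the layer structure from the question concrete, here is a minimal sketch of a one-hidden-layer network's forward pass in NumPy. The sizes `A`, `C`, and `D` follow the question's notation (A inputs, C hidden neurons, D outputs); the specific values and the tanh activation are illustrative assumptions, not part of the original answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes matching the question's notation:
# A input neurons, C hidden neurons, D output neurons.
A, C, D = 4, 8, 3

# Randomly initialized weights for a single hidden layer,
# the structure Answer 1 says suffices for most problems.
W1 = rng.normal(size=(A, C))
b1 = np.zeros(C)
W2 = rng.normal(size=(C, D))
b2 = np.zeros(D)

def forward(x):
    """Forward pass: input -> one hidden layer (tanh) -> output."""
    h = np.tanh(x @ W1 + b1)  # hidden activations, shape (C,)
    return h @ W2 + b2        # network output, shape (D,)

x = rng.normal(size=A)
y = forward(x)
print(y.shape)  # (3,)
```

Adding a second hidden layer means inserting one more weight matrix and activation between `h` and the output; going beyond that rarely helps for classic (non-deep) feed-forward networks, which is the point of Answer 1.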

Answer 2. If an inadequate number of neurons is used, the network will be unable to model complex data, and the resulting fit will be poor. If too many neurons are used, training time may become excessively long and, worse, the network may overfit the data. When overfitting occurs, the network begins to model random noise in the data. The result is a model that fits the training data extremely well but generalizes poorly to new, unseen data. Validation must be used to test for this.
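The validation check the answer recommends can be sketched as follows: hold out part of the data, fit models of increasing capacity on the training split only, and compare training error against validation error. This sketch uses polynomial degree as a stand-in for network capacity (number of neurons); the data, split sizes, and degrees are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 1-D regression data: y = sin(x) plus noise.
x = rng.uniform(-3, 3, size=60)
y = np.sin(x) + rng.normal(scale=0.3, size=60)

# Hold out a validation set, as the answer recommends.
x_train, y_train = x[:40], y[:40]
x_val, y_val = x[40:], y[40:]

def mse(deg):
    """Fit a degree-`deg` polynomial on the training split and
    return (training error, validation error)."""
    coeffs = np.polyfit(x_train, y_train, deg)
    err = lambda xs, ys: np.mean((np.polyval(coeffs, xs) - ys) ** 2)
    return err(x_train, y_train), err(x_val, y_val)

for deg in (1, 3, 9):
    tr, va = mse(deg)
    print(f"degree {deg}: train MSE {tr:.3f}, validation MSE {va:.3f}")
```

Training error can only go down as capacity grows, so it tells you nothing about overfitting on its own; the moment validation error starts rising while training error keeps falling is exactly the overfitting symptom the answer describes.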

What is overfitting?

In statistics, overfitting occurs when a statistical model describes random error or noise instead of the underlying relationship. Overfitting generally occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model that has been overfit will generally have poor predictive performance, as it can exaggerate minor fluctuations in the data. The concept of overfitting is important in machine learning. Usually, a learning algorithm is trained on some set of training examples, i.e. exemplary situations for which the desired output is known. The learner is assumed to reach a state where it can also predict the correct output for other examples, thus generalizing to situations not presented during training (based on its inductive bias). However, especially when training was performed too long or training examples are rare, the learner may adjust to very specific random features of the training data that have no causal relation to the target function. In this process of overfitting, performance on the training examples still improves while performance on unseen data becomes worse.

Answer 3. Read Answers 1 & 2.

The Supervised Learning article on Wikipedia (http://en.wikipedia.org/wiki/Supervised_learning) will give you more insight into the factors that are really important for any supervised learning system, including Neural Networks. The article discusses the dimensionality of the input space, the amount of training data, noise, etc.

