Can neural networks approximate any function given enough hidden neurons?

Question

I understand that neural networks with any number of hidden layers can approximate nonlinear functions; however, can they approximate:

f(x) = x^2

I can't think of how they could. It seems like a very obvious limitation of neural networks, one that could potentially limit what they can do. For example, because of this limitation, neural networks probably can't properly approximate many functions used in statistics, such as the exponential moving average or even the variance.

Speaking of moving averages, can recurrent neural networks properly approximate them? I understand how a feedforward neural network, or even a single linear neuron, can output a moving average using the sliding-window technique, but how would a recurrent neural network do it without X hidden layers (X being the moving-average window size)?
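
To make the sliding-window point concrete, here is a minimal sketch (my own illustration, not from the original post): a single linear neuron whose N weights are all 1/N outputs exactly the N-point moving average of its input window.

    import numpy as np

    # Hypothetical window size, chosen only for illustration.
    N = 5
    weights = np.full(N, 1.0 / N)   # the neuron's fixed weights
    bias = 0.0

    x = np.random.rand(100)         # some input signal

    # Slide the window over the signal and apply the neuron at each step.
    outputs = [weights @ x[i:i + N] + bias for i in range(len(x) - N + 1)]

    # Sanity check against a direct moving-average computation.
    expected = np.convolve(x, np.full(N, 1.0 / N), mode="valid")
    assert np.allclose(outputs, expected)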

Also, let us assume we don't know the original function f, which happens to take the average of the last 500 inputs and then output a 1 if that average is higher than 3, and a 0 if it's not. But for a second, pretend we don't know that; it's a black box.
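
In code, the hypothetical black box described above would look something like this (a sketch of the function f defined in the question):

    import numpy as np

    # The hypothetical black-box target: average the last 500 inputs,
    # output 1 if that average exceeds 3, else 0.
    def f(inputs):
        return 1 if np.mean(inputs[-500:]) > 3 else 0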

How would a recurrent neural network approximate that? We would first need to know how many timesteps it should have, which we don't. Perhaps an LSTM network could, but even then, what if it's not a simple moving average but an exponential moving average? I don't think even an LSTM can do it.

Even worse, what if the f(x, x1) that we are trying to learn is simply

f(x,x1) = x * x1

That seems very simple and straightforward. Can a neural network learn it? I don't see how.
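
As an experiment (my own sketch, not from the original post; the scikit-learn setup and hyperparameters are illustrative), one can try fitting f(x, x1) = x * x1 on the compact square [-1, 1] x [-1, 1]:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(5000, 2))
    y = X[:, 0] * X[:, 1]                      # target: f(x, x1) = x * x1

    # Small MLP; hyperparameters are illustrative, not tuned.
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X, y)

    X_test = rng.uniform(-1.0, 1.0, size=(1000, 2))
    err = np.max(np.abs(model.predict(X_test) - X_test[:, 0] * X_test[:, 1]))
    print(f"max abs error on [-1, 1]^2: {err:.4f}")   # small, but not zero

The fit is only good on the compact training domain; the answer below explains why that restriction is fundamental.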

Am I missing something huge here or are machine learning algorithms extremely limited? Are there other learning techniques besides neural networks that can actually do any of this?

Answer

The key point to understand is compactness:

Neural networks (like any other approximation structure, such as polynomials, splines, or radial basis functions) can approximate any continuous function only within a compact set.

In other words, the theory states that, given:

  1. A continuous function f(x),
  2. A finite range for the input x, [a,b], and
  3. A desired approximation accuracy ε>0,

then there exists a neural network that approximates f(x) with an approximation error less than ε, everywhere within [a,b].

Regarding your example of f(x) = x^2: yes, you can approximate it with a neural network within any finite range: [-1, 1], [0, 1000], etc. To visualise this, imagine that you approximate f(x) within [-1, 1] with a step function. Can you do it on paper? Note that if you make the steps narrow enough, you can achieve any desired accuracy. The way neural networks approximate f(x) is not much different from this.
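
As a concrete illustration of that intuition (my own sketch, not part of the original answer), the following builds such a step-function approximation of f(x) = x^2 on [-1, 1] and shows the worst-case error shrinking as the steps get narrower:

    import numpy as np

    def step_approx(x, n_steps):
        # Piecewise-constant approximation of x^2 on [-1, 1]:
        # on each of n_steps equal-width bins, output (bin centre)^2.
        edges = np.linspace(-1.0, 1.0, n_steps + 1)
        centres = (edges[:-1] + edges[1:]) / 2.0
        idx = np.clip(np.searchsorted(edges, x, side="right") - 1,
                      0, n_steps - 1)
        return centres[idx] ** 2

    xs = np.linspace(-1.0, 1.0, 10001)
    for n in (10, 100, 1000):
        err = np.max(np.abs(step_approx(xs, n) - xs ** 2))
        print(f"{n:5d} steps -> max error {err:.5f}")   # shrinks with n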

But again, there is no neural network (or any other approximation structure) with a finite number of parameters that can approximate f(x) = x^2 for all x in [-∞, +∞].
