Can neural networks approximate any function given enough hidden neurons?

Question

I understand that neural networks with any number of hidden layers can approximate nonlinear functions; however, can they approximate:

f(x) = x^2

I can't think of how they could. It seems like a very obvious limitation of neural networks that can potentially limit what they can do. For example, because of this limitation, neural networks probably can't properly approximate many functions used in statistics, like the exponential moving average, or even the variance.

Speaking of moving averages, can recurrent neural networks properly approximate those? I understand how a feedforward neural network, or even a single linear neuron, can output a moving average using the sliding-window technique, but how would a recurrent neural network do it without X hidden layers (X being the moving-average size)?
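
For concreteness, the sliding-window technique mentioned above amounts to a single linear neuron with fixed weights; a minimal NumPy sketch (the window size and input signal here are arbitrary choices, not from the original post):

import numpy as np

# A simple moving average over a window of size N is a single linear neuron
# whose N weights are all 1/N and whose bias is 0, slid over the input.
N = 5
weights = np.full(N, 1.0 / N)  # fixed weights; nothing needs to be learned

x = np.random.randn(100)  # an arbitrary example signal

# Each output is the dot product of the current window with the weights.
sma = np.array([weights @ x[i:i + N] for i in range(len(x) - N + 1)])

# Sanity check against NumPy's convolution-based moving average.
assert np.allclose(sma, np.convolve(x, weights, mode="valid"))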

Also, let us assume we don't know the original function f, which happens to take the average of the last 500 inputs, and then outputs a 1 if that average is higher than 3, and a 0 if it's not. But for a second, pretend we don't know that; it's a black box.

How would a recurrent neural network approximate that? We would first need to know how many timesteps it should have, which we don't. Perhaps an LSTM network could, but even then, what if it's not a simple moving average but an exponential moving average? I don't think even an LSTM can do it.
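
For intuition, note that an exponential moving average is itself a one-step recurrence, which is exactly the shape of a single linear recurrent unit; a minimal sketch (the smoothing factor alpha here is an arbitrary choice):

import numpy as np

# EMA as a recurrence: state_t = alpha * x_t + (1 - alpha) * state_{t-1}.
# This is the update rule of a single linear recurrent unit, no extra layers.
def ema(x, alpha=0.1):
    state = x[0]  # initialise the state with the first input
    out = [state]
    for x_t in x[1:]:
        state = alpha * x_t + (1 - alpha) * state  # recurrent update
        out.append(state)
    return np.array(out)

print(ema(np.array([1.0, 2.0, 3.0, 4.0])))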

Even worse still, what if the f(x, x1) that we are trying to learn is simply

f(x,x1) = x * x1

That seems very simple and straightforward. Can a neural network learn it? I don't see how.
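
One classic identity worth noting here (it is not in the original question) reduces multiplication to sums and squares: x * x1 = ((x + x1)^2 - (x - x1)^2) / 4. So any structure that can approximate squaring on a compact set can approximate products there as well; a quick numeric check:

# The identity x * x1 = ((x + x1)^2 - (x - x1)^2) / 4 reduces multiplication
# to the squaring function discussed above.
def product_via_squares(x, x1):
    return ((x + x1) ** 2 - (x - x1) ** 2) / 4.0

assert product_via_squares(3.0, 7.0) == 21.0
assert product_via_squares(-2.5, 4.0) == -10.0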

Am I missing something huge here or are machine learning algorithms extremely limited? Are there other learning techniques besides neural networks that can actually do any of this?

Answer

The key point to understand is compact:

Neural networks (like any other approximation structure, e.g. polynomials, splines, or Radial Basis Functions) can approximate any continuous function only within a compact set.

In other words, the theory states that, given:

  1. A continuous function f(x),
  2. A finite range for the input x, [a,b], and
  3. A desired approximation accuracy ε>0,

then there exists a neural network that approximates f(x) with an approximation error less than ε, everywhere within [a,b].
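
In symbols (a compact restatement of the three conditions above, with g denoting the network):

sup { |f(x) - g(x)| : x ∈ [a,b] } < ε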

Regarding your example of f(x) = x^2: yes, you can approximate it with a neural network within any finite range: [-1,1], [0, 1000], etc. To visualise this, imagine that you approximate f(x) within [-1,1] with a step function. Can you do it on paper? Note that if you make the steps narrow enough, you can achieve any desired accuracy. The way neural networks approximate f(x) is not much different from this.
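
A minimal NumPy sketch of that paper exercise (the step counts and evaluation grid are arbitrary choices): build a piecewise-constant approximation of x^2 on [-1,1] and watch the worst-case error shrink as the steps get narrower.

import numpy as np

def f(x):
    return x ** 2

x = np.linspace(-1.0, 1.0, 100001)  # dense grid for measuring the error

for n_steps in (10, 100, 1000):
    edges = np.linspace(-1.0, 1.0, n_steps + 1)
    centers = (edges[:-1] + edges[1:]) / 2
    # Each x falls into one step; approximate f(x) by f at that step's center.
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_steps - 1)
    approx = f(centers[idx])
    print(n_steps, "steps -> max |f(x) - step approx|:", np.abs(f(x) - approx).max())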

But again, there is no neural network (or any other approximation structure) with a finite number of parameters that can approximate f(x) = x^2 for all x in (-∞, +∞).
