Continuous vs Discrete artificial neural networks


Question

I realize that this is probably a very niche question, but has anyone had experience with working with continuous neural networks? I'm specifically interested in what a continuous neural network may be useful for vs what you normally use discrete neural networks for.

For clarity, I will clear up what I mean by a continuous neural network, as I suppose it can be interpreted to mean different things. I do not mean that the activation function is continuous. Rather, I allude to the idea of increasing the number of neurons in the hidden layer to an infinite amount.

So for clarity, here is the architecture of your typical discrete NN: the x are the inputs, g is the activation of the hidden layer, the v are the weights of the hidden layer, the w are the weights of the output layer, b is the bias, and apparently the output layer has a linear activation (namely, none).
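A single-hidden-layer network of this form can be sketched in a few lines of NumPy (a minimal sketch of my own; the sigmoid activation and the layer sizes are illustrative assumptions, not something fixed by the question):

```python
import numpy as np

def g(z):
    # Hidden-layer activation; a sigmoid is used here purely as an
    # illustrative choice -- the question does not fix a particular g.
    return 1.0 / (1.0 + np.exp(-z))

def discrete_nn(x, v, w, b):
    # y = b + sum_i w_i * g(v_i . x): a weighted sum over a finite
    # number of hidden neurons, with a linear (identity) output layer.
    hidden = g(v @ x)
    return b + w @ hidden

# Example with 3 inputs and 5 hidden neurons (sizes are arbitrary)
rng = np.random.default_rng(0)
x = rng.standard_normal(3)
v = rng.standard_normal((5, 3))  # hidden-layer weights, one row per neuron
w = rng.standard_normal(5)       # output-layer weights
b = 0.1
y = discrete_nn(x, v, w, b)
print(y)
```

The continuous variant described below replaces the finite sum over neurons with an integral.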

The difference between a discrete NN and a continuous NN is depicted by this figure: that is, you let the number of hidden neurons become infinite, so that your final output is an integral. In practice this means that instead of computing a deterministic sum you must instead approximate the corresponding integral with quadrature.
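The quadrature step might look like this (again a sketch under my own assumptions: the hidden units are indexed by a parameter s in [0, 1], the weights become functions v(s) and w(s), and a simple trapezoidal rule stands in for whatever quadrature scheme one actually uses):

```python
import numpy as np

def g(z):
    return np.tanh(z)  # hidden activation (illustrative choice)

def continuous_nn(x, v_fn, w_fn, b, n_points=201):
    # y = b + integral_0^1 w(s) * g(v(s) . x) ds, approximated with the
    # trapezoidal rule on n_points equally spaced quadrature nodes.
    s = np.linspace(0.0, 1.0, n_points)
    vals = np.array([w_fn(t) * g(v_fn(t) @ x) for t in s])
    ds = s[1] - s[0]
    return b + ds * (0.5 * vals[0] + vals[1:-1].sum() + 0.5 * vals[-1])

# Illustrative weight *functions* over the hidden-unit index s
v_fn = lambda s: np.array([np.sin(2 * np.pi * s), np.cos(2 * np.pi * s)])
w_fn = lambda s: 1.0 - s

x = np.array([0.5, -0.25])
y = continuous_nn(x, v_fn, w_fn, b=0.0)
print(y)
```

Refining the quadrature grid should leave the output essentially unchanged, which is a quick sanity check that the integral is being approximated consistently.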

Apparently it's a common misconception with neural networks that too many hidden neurons produce over-fitting.

My question is, specifically: given this definition of discrete and continuous neural networks, has anyone had experience working with the latter, and what sort of things did they use them for?

Further description on the topic can be found here: http://www.iro.umontreal.ca/~lisa/seminaires/18-04-2006.pdf

Answer

In the past I've worked on a few research projects using continuous NNs. Activation was done using a bipolar hyperbolic tangent; the network took several hundred floating-point inputs and output around one hundred floating-point values.

In this particular case the aim of the network was to learn the dynamic equations of a mineral train. The network was given the current state of the train and predicted speed, inter-wagon dynamics and other train behaviour 50 seconds into the future.

The rationale for this particular project was mainly about performance. It was being targeted at an embedded device, and evaluating the NN was much more performance-friendly than solving a traditional ODE (ordinary differential equation) system.

In general, a continuous NN should be able to learn any kind of function. This is particularly useful when it's impossible or extremely difficult to solve a system using deterministic methods, as opposed to binary networks, which are often used for pattern recognition/classification purposes.
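As a toy illustration of that function-approximation point (my own sketch, not from the original answer): even a small discrete network with random fixed hidden weights and a least-squares fit of the linear output layer approximates a smooth 1-D target well, and the continuous formulation is the limit of adding ever more such hidden units.

```python
import numpy as np

rng = np.random.default_rng(42)

# Target function to learn (an arbitrary smooth example)
f = lambda x: np.sin(3 * x)

# Training data
x_train = np.linspace(-2, 2, 200)
y_train = f(x_train)

# Random fixed hidden layer of 50 tanh neurons (sizes are illustrative)
n_hidden = 50
v = rng.standard_normal(n_hidden)      # input weights
c = rng.standard_normal(n_hidden)      # hidden biases
H = np.tanh(np.outer(x_train, v) + c)  # (200, 50) hidden activations

# Linear output layer: fit output weights w by least squares
w, *_ = np.linalg.lstsq(H, y_train, rcond=None)

# Evaluate on held-out points
x_test = np.linspace(-2, 2, 57)
y_pred = np.tanh(np.outer(x_test, v) + c) @ w
err = np.max(np.abs(y_pred - f(x_test)))
print(f"max abs error: {err:.4f}")
```

Only the output layer is trained here (least squares), which keeps the sketch short; a full continuous NN would also learn the hidden weight functions.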

Given their non-deterministic nature, NNs of any kind are touchy beasts; choosing the right kinds of inputs and network architecture can be somewhat of a black art.
