Liquid State Machine: How it works and how to use it?


Problem Description

I am currently learning about LSMs (Liquid State Machines), and I am trying to understand how they are used for learning.

I am pretty confused by what I have read on the web.

I'll write down what I understood; it may be incorrect, and I'd be glad if you could correct me and explain what is actually true:

  1. LSMs are not trained at all: they are simply initialized with many "temporal neurons" (e.g. Leaky Integrate & Fire neurons), whose thresholds are selected randomly, as are the connections between them (i.e. not every neuron has to share an edge with every other neuron).
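To make that description concrete, here is a minimal pure-Python sketch of such a leaky integrate-and-fire "temporal neuron" with a randomly chosen threshold. All parameter names and values (the leak factor, the threshold range) are illustrative assumptions, not taken from any particular LSM implementation.

```python
import random

# A leaky integrate-and-fire (LIF) "temporal neuron": it integrates input into
# a membrane potential, leaks part of it each time step, and emits a spike
# when the potential crosses its threshold. Parameter values are illustrative.
class LIFNeuron:
    def __init__(self, threshold, leak=0.9, v_reset=0.0):
        self.threshold = threshold   # firing threshold (chosen randomly in an LSM)
        self.leak = leak             # fraction of potential kept per time step
        self.v_reset = v_reset
        self.v = v_reset             # current membrane potential

    def step(self, input_current):
        self.v = self.leak * self.v + input_current
        if self.v >= self.threshold:
            self.v = self.v_reset    # reset after the spike
            return 1                 # spike
        return 0                     # no spike

# As described above, thresholds are picked at random when the liquid is built:
random.seed(0)
neurons = [LIFNeuron(threshold=random.uniform(0.5, 2.0)) for _ in range(5)]
spikes = [n.step(1.0) for n in neurons]  # drive every neuron with the same input
```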

If you want to "learn" that the event Y occurs x time-units after inputting I, you need to "wait" x time-units with the LIF "detectors" and see which neurons fired at that specific moment. Then you can train a classifier (e.g. a feed-forward network) to recognize that this specific subset of firing neurons means the event Y happened.

You may use many "temporal neurons" in your "liquid", so there are many possible subsets of firing neurons; thus a specific subset of firing neurons becomes almost unique to the moment x time-units after you gave the input I.
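The pipeline just described (drive a randomly wired reservoir, wait some time-units, read off which neurons fired, and hand that state to a separate readout) can be sketched as a toy pure-Python example. The reservoir size, weight ranges, and the nearest-prototype "classifier" standing in for a trained feed-forward readout are all illustrative assumptions:

```python
import random

random.seed(1)

N = 20                                           # reservoir size (arbitrary)
THRESH = [random.uniform(0.5, 1.5) for _ in range(N)]
# Sparse random connectivity: each directed pair connected with probability 0.2.
W = [[random.uniform(-0.5, 1.0) if random.random() < 0.2 else 0.0
      for _ in range(N)] for _ in range(N)]

def liquid_state(input_drive, steps):
    """Run the reservoir for `steps` time-units and return who fired last."""
    v = [0.0] * N
    fired = [0] * N
    for t in range(steps):
        drive = input_drive if t == 0 else [0.0] * N   # input only at t = 0
        new_fired = [0] * N
        for i in range(N):
            recurrent = sum(W[i][j] * fired[j] for j in range(N))
            v[i] = 0.9 * v[i] + drive[i] + recurrent   # leaky integration
            if v[i] >= THRESH[i]:
                v[i], new_fired[i] = 0.0, 1            # spike and reset
        fired = new_fired
    return fired                   # the "subset of firing neurons" at time x

# Two different inputs I usually produce different liquid states after x steps;
# a readout maps those states to labels. A nearest-prototype rule stands in
# here for a trained classifier such as a feed-forward network.
I_a = [1.0] * 10 + [0.0] * 10
I_b = [0.0] * 10 + [1.0] * 10
prototypes = {"Y_a": liquid_state(I_a, steps=5),
              "Y_b": liquid_state(I_b, steps=5)}

def readout(state):
    return min(prototypes,
               key=lambda k: sum(a != b for a, b in zip(prototypes[k], state)))
```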

I don't know whether what I wrote above is true, or whether it is total garbage.

Please tell me whether this is the correct usage and purpose of LIF neurons.

Recommended Answer

From your questions, it seems that you are on the right track. In any case, Liquid State Machines and Echo State Machines are complex topics that draw on computational neuroscience and physics: chaos, dynamical systems, feedback systems, and machine learning. So it's OK if you find them hard to wrap your head around.

To answer your questions:

  1. Most implementations of Liquid State Machines leave the reservoir of neurons untrained. There have been some attempts to train the reservoir, but they have not had the dramatic success that would justify the computational power required. (See: "Reservoir Computing Approaches to Recurrent Neural Network Training" or "The p-Delta Learning Rule for Parallel Perceptrons".)

    My opinion is that if you want to use the liquid as a classifier, in terms of pattern separability or generalization, you can gain much more from the way the neurons are connected to each other (see Hazan, H. and Manevitz, L., "Topological constraints and robustness in liquid state machines", Expert Systems with Applications, Volume 39, Issue 2, Pages 1597-1606, February 2012, or "Which Model to Use for the Liquid State Machine?"), or from the biological approach (in my opinion the most interesting one; see "What Can a Neuron Learn with Spike-Timing-Dependent Plasticity?").
  2. You are right: you need to wait at least until you have finished giving the input; otherwise you risk detecting the input itself rather than, as you should, the activity that occurs as a result of the input.
  3. Yes, you can think of the complexity of your liquid as the kernel in an SVM, which tries to project the data points into some high-dimensional space, and of the detector on the liquid as the part that tries to separate the classes in the dataset. As a rule of thumb, the number of neurons and the way they connect to each other determine the degree of complexity of the liquid.

Regarding LIF (Leaky Integrate & Fire) neurons: as I see it (I could be wrong), the big difference between the two approaches is the individual unit. The Liquid State Machine uses biologically-inspired neurons, while the Echo State Machine uses more analog units. So in terms of "very short-term memory", in the Liquid State approach each individual neuron remembers its own history, whereas in the Echo State approach each individual neuron reacts based only on the current state, and therefore the memory is stored in the activity between the units.
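That contrast can be sketched with two toy units: one whose internal potential carries its own history, and one memoryless analog unit. The specific leak factor and the tanh nonlinearity are illustrative assumptions, not the definitive models used by either approach:

```python
import math

# A toy liquid-state-style unit: its membrane potential carries its own history.
class LIFUnit:
    def __init__(self, leak=0.9):
        self.leak = leak
        self.v = 0.0

    def step(self, x):
        self.v = self.leak * self.v + x  # past inputs linger in v
        return self.v

# A toy echo-state-style analog unit: its output depends only on the current
# input, so any memory must live in the recurrent activity *between* units.
def echo_unit(x):
    return math.tanh(x)

lif = LIFUnit()
lif.step(1.0)                  # feed a pulse, then silence
after_silence = lif.step(0.0)  # 0.9: the unit still "remembers" the pulse
echo_after = echo_unit(0.0)    # 0.0: no trace of any earlier pulse
```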
