How to utilize Hebbian learning?


Problem Description

I want to upgrade my evolution simulator to use Hebbian learning, like this one. I basically want small creatures to be able to learn how to find food. I achieved that with a basic feedforward network, but I'm stuck on understanding how to do it with Hebbian learning. The basic principle of Hebbian learning is that if two neurons fire together, they wire together.

So, the weights are updated like this:

weight_change = learning_rate * input * output
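For concreteness, here is a minimal sketch of that update rule in Python with NumPy, applied to a whole weight matrix at once (the function and variable names are just illustrative, not from any particular library):

    import numpy as np

    def hebbian_update(weights, inputs, outputs, learning_rate=0.01):
        # weights has shape (n_outputs, n_inputs); the outer product
        # yields one learning_rate * input * output term per weight,
        # i.e. "neurons that fire together, wire together".
        weights += learning_rate * np.outer(outputs, inputs)
        return weights

Note that every term in the update is non-negative whenever the inputs and outputs are non-negative, which is exactly why the weights can only grow under this plain rule.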

The information I've found on how this can be useful is pretty scarce, and I don't get it.

In my current version of the simulator, the weights between an action and an input (movement, eyes) are increased when a creature eats a piece of food, and I fail to see how that can translate into this new model. There is simply no way to tell whether it did something right or wrong here, because the only parameters are input and output! Basically, if one input activates movement in one direction, the weight would just keep on increasing, whether the creature is eating something or not!

Am I applying Hebbian learning in the wrong way? Just for reference, I'm using Python.

Recommended Answer

Hebb's law is a brilliant insight into associative learning, but it's only part of the picture. And you are right: implemented as you have done, and left unchecked, a weight will just keep on increasing. The key is to add some form of normalisation or limiting process. This is illustrated quite well on the wiki page for Oja's rule. What I suggest you do is add a post-synaptic divisive normalisation step, which means that you divide each weight by the sum of all the weights converging on the same post-synaptic neuron (i.e. the sum of all weights converging on a neuron is fixed at 1).
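A rough sketch of that suggestion in Python, assuming a weight matrix of shape (n_post, n_pre) where each row holds the weights converging on one post-synaptic neuron (the names are mine, not from any library), with Oja's rule shown as the alternative, self-limiting form:

    import numpy as np

    def hebbian_step_normalised(weights, pre, post, learning_rate=0.01):
        # Plain Hebbian increment.
        weights += learning_rate * np.outer(post, pre)
        # Post-synaptic divisive normalisation: divide each row by its
        # sum, so the weights converging on each post-synaptic neuron
        # keep summing to 1.
        row_sums = weights.sum(axis=1, keepdims=True)
        return weights / np.where(row_sums == 0.0, 1.0, row_sums)

    def oja_update(w, x, y, learning_rate=0.01):
        # Oja's rule for one post-synaptic neuron with output y: the
        # -y**2 * w decay term bounds the weight vector instead of an
        # explicit normalisation step.
        return w + learning_rate * y * (x - y * w)

Either way, the runaway growth of the plain rule is curbed: weights now compete for a fixed total, so one weight can effectively shrink when others grow.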

What you want to do can be done by building a network that utilises Hebbian learning. I'm not quite sure what you are passing in as input to your system, or how you've set things up, but you could look at LISSOM, which is a Hebbian extension to SOM (self-organising map).

In a layer of this kind, typically all the neurons may be interconnected. You pass in the input vector and allow the activity in the network to settle over some number of settling steps; then you update the weights. You do this during the training phase, at the end of which associated items in the input space will tend to form grouped activity patches in the output map.
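A rough sketch of that settle-then-update loop, under the same illustrative assumptions as above (a simple rectified activation and a fixed number of settling steps; this is a stand-in, not a faithful LISSOM implementation):

    import numpy as np

    def train_step(afferent_w, lateral_w, input_vec,
                   settle_steps=10, learning_rate=0.005):
        # Initial feedforward response to the input vector.
        activity = np.maximum(0.0, afferent_w @ input_vec)
        # Let lateral interactions settle for a fixed number of steps.
        for _ in range(settle_steps):
            activity = np.maximum(0.0, afferent_w @ input_vec
                                       + lateral_w @ activity)
        # Hebbian update on the settled activity, followed by the
        # post-synaptic divisive normalisation from above.
        afferent_w += learning_rate * np.outer(activity, input_vec)
        afferent_w /= afferent_w.sum(axis=1, keepdims=True) + 1e-12
        return afferent_w, activity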

It's also worth noting that the brain is massively interconnected and highly recurrent (i.e. there is feedforward, feedback, lateral interconnectivity, microcircuits, and a lot of other things besides).
