PyBrain: How can I put specific weights in a neural network?


Problem description

I am trying to recreate a neural network based on given facts. It has 3 inputs, a hidden layer, and an output. My problem is that the weights are also given, so I don't need to train.

I was thinking maybe I could save the training of a neural network with a similar structure and change the values accordingly. Do you think that will work? Any other ideas? Thanks.

Neural network code:

    from pybrain.structure import FeedForwardNetwork, LinearLayer, SigmoidLayer, FullConnection
    from pybrain.supervised.trainers import BackpropTrainer

    net = FeedForwardNetwork()
    inp = LinearLayer(3)
    h1 = SigmoidLayer(1)
    outp = LinearLayer(1)

    # add modules
    net.addOutputModule(outp)
    net.addInputModule(inp)
    net.addModule(h1)

    # create connections
    net.addConnection(FullConnection(inp, h1))
    net.addConnection(FullConnection(h1, outp))

    # finish up
    net.sortModules()


    # ds is a SupervisedDataSet prepared elsewhere
    trainer = BackpropTrainer(net, ds)
    trainer.trainUntilConvergence()
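Since the weights are already known, one way to sanity-check them before touching PyBrain at all is to evaluate the 3-1-1 network directly in NumPy. This is a minimal sketch, not PyBrain's API; the values of `w_in` and `w_out` are placeholders for whatever the given facts specify (the network above has no bias module, so there are only 3 + 1 = 4 weights):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical given weights: 3 inputs -> 1 sigmoid hidden unit -> 1 linear output.
w_in = np.array([1.0, 1.0, 1.0])   # input -> hidden (FullConnection, 3 weights)
w_out = 2.0                        # hidden -> output (1 weight)

def forward(x):
    hidden = sigmoid(np.dot(w_in, x))  # single sigmoid hidden unit
    return w_out * hidden              # linear output layer

print(forward(np.array([0.0, 0.0, 0.0])))  # sigmoid(0) = 0.5, so prints 1.0
```

If this matches the expected outputs, the same four values can later be written into the PyBrain network's parameter vector.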

Save-and-load code from How to save and recover PyBrain training?

# Using NetworkWriter

from pybrain.tools.shortcuts import buildNetwork
from pybrain.tools.xml.networkwriter import NetworkWriter
from pybrain.tools.xml.networkreader import NetworkReader

net = buildNetwork(2,4,1)

NetworkWriter.writeToFile(net, 'filename.xml')
net = NetworkReader.readFrom('filename.xml') 

Solution

I was curious how reading an already trained network (with the xml tool) is done, because that means the network weights can somehow be set. So in the NetworkReader documentation I found that you can set the parameters with `_setParameters()`.

However, the underscore marks it as a private method, which could potentially have side effects. Also keep in mind that the weight vector must have the same length as the parameter vector of the originally constructed network.

示例

>>> import numpy
>>> from pybrain.tools.shortcuts import buildNetwork
>>> net = buildNetwork(2,3,1)
>>> net.params
array([...some random values...])
>>> len(net.params)
13
>>> new_params = numpy.array([1.0]*13)
>>> net._setParameters(new_params)
>>> net.params
array([1.0, ..., 1.0])

The other important thing is to put the values in the right order. For the example above it looks like this:

[  1., 1., 1., 1., 1., 1.,      1., 1., 1.,        1.,       1., 1., 1.    ] 
     input->hidden0            hidden0->out     bias->out   bias->hidden0   
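Assuming the ordering sketched above, the flat 13-element `params` vector of the `buildNetwork(2,3,1)` example can be unpacked and replayed in plain NumPy. This is a sketch for checking the layout, not PyBrain code; the slice boundaries and the assumed (outdim, indim) row-major shape of each connection are taken from the diagram above:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Flat parameter vector in the assumed order for buildNetwork(2,3,1):
# 6 input->hidden0, 3 hidden0->out, 1 bias->out, 3 bias->hidden0.
params = np.ones(13)

w_ih = params[0:6].reshape(3, 2)   # input -> hidden0, one row per hidden unit (assumed layout)
w_ho = params[6:9].reshape(1, 3)   # hidden0 -> out
b_o  = params[9:10]                # bias -> out
b_h  = params[10:13]               # bias -> hidden0

def forward(x):
    hidden = sigmoid(w_ih.dot(x) + b_h)   # hidden layer is sigmoid by default
    return w_ho.dot(hidden) + b_o         # output layer is linear by default

print(forward(np.array([0.0, 0.0])))      # 3 * sigmoid(1) + 1 ≈ 3.1932
```

Comparing `forward(x)` against `net.activate(x)` after `net._setParameters(params)` would confirm (or refute) the assumed slice order.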

To determine which weights belong to which connections between layers, try this:

# net is our neural network from previous example
for c in [connection for connections in net.connections.values() for connection in connections]:
    print("{} -> {} => {}".format(c.inmod.name, c.outmod.name, c.params))

Anyway, I still don't know the exact order of the weights between layers...


