PyBrain: How can I put specific weights in a neural network?


I am trying to recreate a neural network based on given facts. It has 3 inputs, a hidden layer and an output. My problem is that the weights are also given, so I don't need to train.

I was thinking maybe I could save the training of a neural network with a similar structure and change the values accordingly. Do you think that will work? Any other ideas? Thanks.

Neural Network Code:

    from pybrain.structure import (FeedForwardNetwork, LinearLayer,
                                   SigmoidLayer, FullConnection)
    from pybrain.supervised.trainers import BackpropTrainer

    net = FeedForwardNetwork()
    inp = LinearLayer(3)
    h1 = SigmoidLayer(1)
    outp = LinearLayer(1)

    # add modules
    net.addOutputModule(outp)
    net.addInputModule(inp)
    net.addModule(h1)

    # create connections
    net.addConnection(FullConnection(inp, h1))
    net.addConnection(FullConnection(h1, outp))

    # finish up
    net.sortModules()

    # ds is a SupervisedDataSet with 3 inputs and 1 target
    trainer = BackpropTrainer(net, ds)
    trainer.trainUntilConvergence()

Code for saving and loading the training, from "How to save and recover PyBrain training?":

# Using NetworkWriter

from pybrain.tools.shortcuts import buildNetwork
from pybrain.tools.xml.networkwriter import NetworkWriter
from pybrain.tools.xml.networkreader import NetworkReader

net = buildNetwork(2,4,1)

NetworkWriter.writeToFile(net, 'filename.xml')
net = NetworkReader.readFrom('filename.xml') 

Solution

I was curious how reading an already trained network (with the xml tool) is done, because that means network weights can somehow be set. In the NetworkReader documentation I found that you can set parameters with _setParameters().

However, the underscore marks it as a private method, which could potentially have side effects. Also keep in mind that the vector of weights must be the same length as that of the originally constructed network.

Example

>>> import numpy
>>> from pybrain.tools.shortcuts import buildNetwork
>>> net = buildNetwork(2,3,1)
>>> net.params
array([...some random values...])
>>> len(net.params)
13
>>> new_params = numpy.array([1.0]*13)
>>> net._setParameters(new_params)
>>> net.params
array([1.0, ..., 1.0])

Another important thing is to put the values in the right order. For the example above, the layout is:

[  1., 1., 1., 1., 1., 1.,      1., 1., 1.,        1.,       1., 1., 1.    ] 
     input->hidden0            hidden0->out     bias->out   bias->hidden0   
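The count of 13 parameters shown in the session above can be reproduced from this layout. A minimal sketch, assuming buildNetwork(2, 3, 1) adds a bias unit connected to both the hidden and output layers (which is what the layout above suggests):

```python
# Reproduce the parameter count of buildNetwork(2, 3, 1).
n_in, n_hidden, n_out = 2, 3, 1

in_to_hidden = n_in * n_hidden    # 6 weights: input -> hidden0
hidden_to_out = n_hidden * n_out  # 3 weights: hidden0 -> out
bias_to_out = n_out               # 1 weight:  bias -> out
bias_to_hidden = n_hidden         # 3 weights: bias -> hidden0

total = in_to_hidden + hidden_to_out + bias_to_out + bias_to_hidden
print(total)  # 13, matching len(net.params) above
```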

To determine which weights belong to which connections between layers, try this:

# net is our neural network from previous example
for c in [connection for connections in net.connections.values() for connection in connections]:
    print("{} -> {} => {}".format(c.inmod.name, c.outmod.name, c.params))

Anyway, I still don't know the exact order of weights between layers...
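Given the block layout shown above, the flat parameter vector can be sliced into per-connection pieces in plain Python. This is a sketch under the assumption that the ordering for buildNetwork(2, 3, 1) is the one printed earlier; PyBrain does not document it, so verify against your own network with the connection-printing loop above:

```python
import numpy

params = numpy.arange(13, dtype=float)  # stand-in for net.params

# block names and sizes in the assumed order for buildNetwork(2, 3, 1)
blocks = [("in->hidden0", 6), ("hidden0->out", 3),
          ("bias->out", 1), ("bias->hidden0", 3)]

offset = 0
slices = {}
for name, size in blocks:
    slices[name] = params[offset:offset + size]
    offset += size

print(slices["bias->out"])  # array([9.])
```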
