How to store neural network knowledge data?


Problem description

I am new to this area, so the question may seem strange. However, before asking I've read a bunch of introductory articles about the key points of machine learning and the moving parts of neural networks, including a very useful one: What is machine learning. Basically, as I understand it, an educated NN is (correct me if I'm wrong):


  1. A set of connections between neurons (possibly self-connected, possibly with gates, etc.)

  2. Activation probabilities formed on each connection.

Both are adjusted during training to fit the expected output as closely as possible. Then, what do we do with an educated NN? We load the test subset of data into it and check how well it performs. But what happens if we're happy with the test results and want to store the education results, rather than run the training again later when the dataset gets new values?

So my question is: is that education knowledge stored somewhere other than RAM? Can it be dumped (think of object serialisation, in a way) so that you don't need to re-educate your NN with the data you get tomorrow or later?

Right now I am trying to make a simple demo with my dataset using synaptic.js, but I could not spot that kind of concept of saving the education in the project's wiki. That library is just an example; a reference to some Python lib would be good too!

Recommended answer

I will assume in my answer that you are working with a simple multi-layer perceptron (MLP), although my answer is applicable to other networks too.

The purpose of 'training' an MLP is to find the correct synaptic weights that minimise the error on the network output.

When a neuron is connected to another neuron, its input is given a weight. The neuron performs a function, such as the weighted sum of all inputs, and then outputs the result.
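To make that concrete, here is a minimal sketch in Python with NumPy, assuming a sigmoid activation; the function name, weights, and input values are made up purely for illustration:

```python
import numpy as np

def neuron_output(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs plus a bias,
    passed through a sigmoid activation."""
    weighted_sum = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-weighted_sum))

# Example: 3 inputs, each multiplied by its own weight
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])   # the weights are what training adjusts
b = 0.2
print(neuron_output(x, w, b))
```

The weights `w` and bias `b` are exactly the numbers that training searches for.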

Once you have trained your network, and found these weights, you can verify the results using a validation set.
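For example, with scikit-learn's MLPClassifier that workflow might look roughly like this; the toy dataset, the network size, and the 80/20 split are just placeholders for the example:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy dataset, purely for illustration
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# Training adjusts the synaptic weights to minimise the output error
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

# Verify on data the network has never seen
print("validation accuracy:", clf.score(X_val, y_val))
```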

If you are happy that your network is performing well, you simply record the weights that you applied to each connection. You can store these weights wherever you like (along with a description of the network structure) and then retrieve them later. There is no need to re-train the network every time you would like to use it.
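As a rough sketch of that idea in Python, again with scikit-learn: after fitting, the learned weights sit in the model's `coefs_` and `intercepts_` attributes, and the whole fitted model (weights plus structure) can be dumped to disk and loaded back later without retraining. The file name `model.joblib` is arbitrary:

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Toy dataset and a quick fit, just so there is something to save
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0).fit(X, y)

# The learned weights: one matrix per layer, plus the bias vectors
print([w.shape for w in clf.coefs_])
print([b.shape for b in clf.intercepts_])

# Persist the fitted model (weights + network structure) to disk
joblib.dump(clf, "model.joblib")

# ...later, in another session: reload and predict, no retraining needed
restored = joblib.load("model.joblib")
print(restored.predict(X[:5]))
```

If you prefer to stay with synaptic.js, its documentation describes a comparable mechanism for exporting a trained network to JSON and importing it back later (if I recall its API correctly, via `toJSON()` and `Network.fromJSON()`), which is the same "store the weights along with the structure" idea.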

Hope this helps.
