Neural Network / Machine Learning memory storage


Problem Description


I am currently trying to set up a Neural Network for information extraction, and I am pretty fluent with the (basic) concepts of Neural Networks, except for one which seems to puzzle me. It is probably pretty obvious, but I can't seem to find information about it.

Where/how do Neural Networks (/ Machine Learning) store their memory?


There is quite a bit of information available online about Neural Networks and Machine Learning, but it all seems to skip over memory storage. For example, after restarting the program, where does it find its memory to continue learning/predicting? Many examples online don't seem to 'retain' memory, but I can't imagine this being 'safe' for real/big-scale deployment.


I have a difficult time wording my question, so please let me know if I need to elaborate a bit more. Thanks.

- Follow-up to the answer below


Every Neural Network will have edge weights associated with it. These edge weights are adjusted during the training session of a Neural Network.


This is exactly where I am struggling: how do/should I envision this secondary memory? Is this like RAM? That doesn't seem logical. The reason I ask is that I haven't encountered an example online that defines or specifies this secondary memory (for example, in something more concrete such as an XML file, or maybe even a huge array).

Recommended Answer


Memory storage is implementation-specific and not part of the algorithm per se. It is probably more useful to think about what you need to store rather than how to store it.


Consider a 3-layer multi-layer perceptron (fully connected) that has 3, 8, and 5 nodes in the input, hidden, and output layers, respectively (for this discussion, we can ignore bias inputs). Then a reasonable (and efficient) way to represent the needed weights is by two matrices: a 3x8 matrix for weights between the input and hidden layers and an 8x5 matrix for the weights between the hidden and output layers.
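As a minimal sketch of this representation (assuming NumPy; all variable names are illustrative, not from any specific library), the entire "memory" of the 3-8-5 network above is just these two arrays:

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights between the input (3 nodes) and hidden (8 nodes) layers
w_input_hidden = rng.standard_normal((3, 8))
# Weights between the hidden (8 nodes) and output (5 nodes) layers
w_hidden_output = rng.standard_normal((8, 5))

def forward(x):
    """One forward pass; the sigmoid activation is an arbitrary choice."""
    hidden = 1.0 / (1.0 + np.exp(-(x @ w_input_hidden)))
    return 1.0 / (1.0 + np.exp(-(hidden @ w_hidden_output)))

x = np.array([0.5, -1.2, 3.0])  # one 3-feature input vector
y = forward(x)
print(y.shape)  # (5,) - one value per output node
```

Training only ever changes the numbers inside these two matrices, which is why storing the matrices (plus the layer sizes) is enough to resume later.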


For this example, you need to store the weights and the network shape (number of nodes per layer). There are many ways you could store this information: it could be in an XML file or a user-defined binary file. If you were using Python, you could save each matrix to a binary .npy file and encode the network shape in the file names. If you implemented the algorithm yourself, it is up to you how to store the persistent data. If, on the other hand, you are using an existing machine learning software package, it probably has its own I/O functions for storing and loading a trained network.
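One way the save/restore step could look (a sketch using NumPy's own I/O; the file name and array keys are assumptions) is to bundle both matrices into a single .npz archive, which also implicitly records the network shape in the array dimensions:

```python
import numpy as np

# Hypothetical trained weights for the 3-8-5 network in the example.
w_input_hidden = np.zeros((3, 8))
w_hidden_output = np.zeros((8, 5))

# Persist both matrices to disk in one .npz archive.
np.savez("mlp_3_8_5.npz", w_ih=w_input_hidden, w_ho=w_hidden_output)

# Later (e.g. after restarting the program), reload and continue.
with np.load("mlp_3_8_5.npz") as data:
    restored_ih = data["w_ih"]
    restored_ho = data["w_ho"]

print(restored_ih.shape, restored_ho.shape)  # (3, 8) (8, 5)
```

The .npz archive here is just one concrete choice; an XML file, a database row, or a library-specific checkpoint format would serve the same purpose of making the weights survive a restart.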
