How to read the weight of each feature from a trained LSTM cell


Problem description

I used several time series as features (3 features as input) to an LSTM model (1 regression output) on TF 1.1.0. The main function starts a session as follows:

# assuming TF 1.1's contrib.learn API
from tensorflow.contrib import learn
from tensorflow.contrib.learn import SKCompat

model = SKCompat(learn.Estimator(model_fn=lstm_model,
                                 model_dir=LOG_DIR,
                                 params={'learning_rate': Learning_rate}))
model.fit(trainX, trainY, steps=steps)

The core of the lstm_model function is:

lstm_cell = tf.contrib.rnn.LSTMCell(hidden, state_is_tuple=True)
lstm_cell = tf.contrib.rnn.DropoutWrapper(cell=lstm_cell, output_keep_prob=0.1)
(output, state) = tf.nn.dynamic_rnn(cell=lstm_cell, inputs=features, dtype=tf.float32)

After training and saving the model (saved automatically by the default tf functions), I could read the weights of the LSTM cell via 'import_meta_graph' and 'restore' in the main function. The weights look like a (131, 512) array.

The problem is how to identify each feature's weights in such an array, i.e., each feature's weights toward the output.

BTW, given that the default tf functions save automatically, can I save a customized model during training? How? Thanks a lot.

Recommended answer

I think weights[131, 512] can be explained like this, because there are 128 hidden units and 3 features:

  1. The first dimension, 131, covers the weights of the 3 input features plus the 128 hidden units (3 + 128 = 131), since the cell concatenates the input with the previous hidden state (still not entirely sure).
  2. The second dimension, 512, holds the weights of the input gate, forget gate, cell state, and output gate for each of the 128 units, 4 * 128 = 512. Am I right?
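To make that layout concrete, here is a minimal NumPy sketch. The random array merely stands in for the restored (131, 512) kernel, and the i/j/f/o column order is assumed from TF 1.x's LSTMCell convention (input gate, cell candidate, forget gate, output gate):

```python
import numpy as np

num_features = 3
num_units = 128

# Stand-in for the restored LSTM kernel; in the real graph this is the
# (131, 512) variable read back via import_meta_graph/restore.
kernel = np.random.randn(num_features + num_units, 4 * num_units)

# Rows: the first 3 multiply the input features, the remaining 128
# multiply the previous hidden state h_{t-1}.
input_weights = kernel[:num_features, :]       # shape (3, 512)
recurrent_weights = kernel[num_features:, :]   # shape (128, 512)

# Columns: four gate blocks of 128 each, assumed in TF 1.x order
# i (input gate), j (cell candidate), f (forget gate), o (output gate).
i_blk, j_blk, f_blk, o_blk = np.split(kernel, 4, axis=1)  # each (131, 128)

# e.g. the input-gate weights that feature 0 feeds into:
feature0_to_input_gate = i_blk[0, :]           # shape (128,)
print(input_weights.shape, recurrent_weights.shape, i_blk.shape)
```

So each feature's "weights to the output" are spread across all four gate blocks in its row, not stored as a single column.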

