how to calculate entropy on float numbers over a tensor in python keras
Problem Description
I have been struggling with this and could not get it to work. I hope someone can help me.
I want to calculate the entropy on each row of the tensor. Because my data are float numbers, not integers, I think I need to use a bin histogram.
For example, a sample of my data is tensor = [[0.2, -0.1, 1], [2.09, -1.4, 0.9]].
Just for information: my model is seq2seq and written in Keras with a TensorFlow backend.
This is my code so far; I need to correct rev_entropy:
import pandas as pd
from scipy.stats import entropy
from keras import backend as K
from keras.layers import Layer

class entropy_measure(Layer):

    def __init__(self, beta, batch, **kwargs):
        self.beta = beta
        self.batch = batch
        self.uses_learning_phase = True
        self.supports_masking = True
        super(entropy_measure, self).__init__(**kwargs)

    def call(self, x):
        return K.in_train_phase(self.rev_entropy(x, self.beta, self.batch), x)

    def get_config(self):
        config = {'beta': self.beta}
        base_config = super(entropy_measure, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

    def rev_entropy(self, x, beta, batch):
        for i in x:
            i = pd.Series(i)
            p_data = i.value_counts()  # counts occurrence of each value
            entropy = entropy(p_data)  # get entropy from counts
            rev = 1 / (1 + entropy)
            return rev
        new_f_w_t = x * (rev.reshape(rev.shape[0], 1)) * beta
        return new_f_w_t
Any input is much appreciated :)
Answer
It looks like you have a series of questions here; I will address them one by one. According to your code, you calculate entropy in the sense of scipy.stats.entropy:
scipy.stats.entropy(pk, qk=None, base=None)
Calculate the entropy of a distribution for given probability values.
If only probabilities pk are given, the entropy is calculated as S = -sum(pk * log(pk), axis=0).
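As a quick sanity check of that formula (a toy example of mine, not part of the original answer; natural log base, which is scipy's default):

```python
import math

# a toy probability distribution that already sums to 1
pk = [0.5, 0.25, 0.25]

# S = -sum(pk * log(pk), axis=0) with the natural log
S = -sum(p * math.log(p) for p in pk)
print(S)  # 1.5 * ln(2), about 1.0397
```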
TensorFlow does not provide a direct API to calculate entropy on each row of a tensor. What we need to do is implement the above formula ourselves:
import tensorflow as tf
import pandas as pd
from scipy.stats import entropy

a = [1.1, 2.2, 3.3, 4.4, 2.2, 3.3]
res = entropy(pd.value_counts(a))

_, _, count = tf.unique_with_counts(tf.constant(a))
# [1 2 2 1]
prob = count / tf.reduce_sum(count)
# [0.16666667 0.33333333 0.33333333 0.16666667]
tf_res = -tf.reduce_sum(prob * tf.log(prob))

with tf.Session() as sess:
    print('scipy version: \n', res)
    print('tensorflow version: \n', sess.run(tf_res))
scipy version:
1.329661348854758
tensorflow version:
1.3296613488547582
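The same count-based entropy can also be checked without TensorFlow at all; this plain-Python version (my addition, not part of the original answer) reproduces the value above:

```python
import math
from collections import Counter

a = [1.1, 2.2, 3.3, 4.4, 2.2, 3.3]

counts = Counter(a).values()          # occurrences of each distinct value
total = sum(counts)
probs = [c / total for c in counts]   # 1/6, 2/6, 2/6, 1/6
ent = -sum(p * math.log(p) for p in probs)
print(ent)  # about 1.3297, matching both results above
```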
Then, following the code above, we need to define a function in your custom layer and implement the for loop through tf.map_fn:
def rev_entropy(self, x, beta, batch):
    def row_entropy(row):
        _, _, count = tf.unique_with_counts(row)
        # cast the integer counts so the result matches map_fn's declared dtype
        count = tf.cast(count, tf.float32)
        prob = count / tf.reduce_sum(count)
        return -tf.reduce_sum(prob * tf.log(prob))

    value_ranges = [-10.0, 100.0]
    nbins = 50
    # bucketize every float into one of nbins bins, then take entropy per row
    new_f_w_t = tf.histogram_fixed_width_bins(x, value_ranges, nbins)
    rev = tf.map_fn(row_entropy, new_f_w_t, dtype=tf.float32)
    # expand rev to shape (rows, 1) so it broadcasts against x
    new_f_w_t = x * (1 / (1 + tf.expand_dims(rev, -1))) * beta
    return new_f_w_t
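Putting it together, here is a NumPy-only sketch of what that layer body computes, applied to the sample tensor from the question. This is my illustration rather than the answer's code: the bin mapping mimics tf.histogram_fixed_width_bins, and the value range, nbins and beta are just the assumed values from above.

```python
import numpy as np

def rev_entropy_np(x, beta, value_range=(-10.0, 100.0), nbins=50):
    """Per-row binned entropy, then reweight each row by beta / (1 + entropy)."""
    lo, hi = value_range
    # mimic tf.histogram_fixed_width_bins: map each float to a bin index
    bins = np.clip(((x - lo) / (hi - lo) * nbins).astype(int), 0, nbins - 1)
    row_ents = []
    for row in bins:
        _, counts = np.unique(row, return_counts=True)
        prob = counts / counts.sum()
        row_ents.append(-np.sum(prob * np.log(prob)))
    rev = np.array(row_ents)                    # one entropy value per row
    return x * (1.0 / (1.0 + rev[:, None])) * beta

x = np.array([[0.2, -0.1, 1.0], [2.09, -1.4, 0.9]])
print(rev_entropy_np(x, beta=1.0))
```

The second row falls into three distinct bins, so its entropy is ln(3) and every element in that row is scaled by 1 / (1 + ln 3).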
Note that because the entropy is calculated from statistical counts, this hidden layer produces gradients that cannot propagate backwards (ops like unique_with_counts and histogram binning are not differentiable). Maybe you need to rethink your hidden layer structure.