What is the problem with my implementation of the cross-entropy function?


Problem Description

I am learning about neural networks and I want to write a function cross_entropy in Python. It is defined as

CE = −(1/N) Σ_{i=1}^{N} Σ_{j=1}^{k} t_{i,j} log(p_{i,j})

where N is the number of samples, k is the number of classes, log is the natural logarithm, t_{i,j} is 1 if sample i is in class j and 0 otherwise, and p_{i,j} is the predicted probability that sample i is in class j. To avoid numerical issues with the logarithm, clip the predictions to the [10^{−12}, 1 − 10^{−12}] range.

Following the description above, I wrote the code below: it clips the predictions to the [epsilon, 1 − epsilon] range and then computes the cross entropy with the formula above.

import numpy as np

def cross_entropy(predictions, targets, epsilon=1e-12):
    """
    Computes cross entropy between targets (encoded as one-hot vectors)
    and predictions. 
    Input: predictions (N, k) ndarray
           targets (N, k) ndarray        
    Returns: scalar
    """
    predictions = np.clip(predictions, epsilon, 1. - epsilon)
    ce = - np.mean(np.log(predictions) * targets) 
    return ce

The following code will be used to check whether the function cross_entropy is correct.

predictions = np.array([[0.25,0.25,0.25,0.25],
                        [0.01,0.01,0.01,0.96]])
targets = np.array([[0,0,0,1],
                    [0,0,0,1]])
ans = 0.71355817782  #Correct answer
x = cross_entropy(predictions, targets)
print(np.isclose(x,ans))
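
For reference, the expected value follows directly from the formula: the targets are one-hot, so only the true-class probabilities (0.25 for the first sample, 0.96 for the second) contribute, and

CE = −(ln 0.25 + ln 0.96) / 2 ≈ (1.38629 + 0.04082) / 2 ≈ 0.71355817782

which is the ans above.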

The output of the above code is False, which is to say my definition of the function cross_entropy is not correct. I then printed the result of cross_entropy(predictions, targets): it gave 0.178389544455, while the correct result should be ans = 0.71355817782. Could anybody help me check what is wrong with my code?

Recommended Answer

You're not that far off at all, but remember that you are taking the average value of N sums, where N = 2 in this case. np.mean with no axis argument divides by the total number of elements (N × k = 8 here) rather than by the number of samples N, which is exactly why your result is 0.71355817782 / 4 = 0.178389544455. So your code could read:

import numpy as np

def cross_entropy(predictions, targets, epsilon=1e-12):
    """
    Computes cross entropy between targets (encoded as one-hot vectors)
    and predictions. 
    Input: predictions (N, k) ndarray
           targets (N, k) ndarray        
    Returns: scalar
    """
    predictions = np.clip(predictions, epsilon, 1. - epsilon)
    N = predictions.shape[0]
    ce = -np.sum(targets*np.log(predictions+1e-9))/N
    return ce

predictions = np.array([[0.25,0.25,0.25,0.25],
                        [0.01,0.01,0.01,0.96]])
targets = np.array([[0,0,0,1],
                    [0,0,0,1]])
ans = 0.71355817782  #Correct answer
x = cross_entropy(predictions, targets)
print(np.isclose(x,ans))

Here, I think it's a little clearer if you stick with np.sum(). Also, I added 1e-9 inside the np.log() to avoid the possibility of a log(0) in your computation. Hope this helps!
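
If you do prefer np.mean, an equivalent formulation (a sketch under the same assumptions, not the code from the answer) is to sum over the class axis first, so that np.mean only averages over the samples:

def cross_entropy_mean(predictions, targets, epsilon=1e-12):
    # Clip to keep log() away from 0, as above.
    predictions = np.clip(predictions, epsilon, 1. - epsilon)
    # Summing over classes (axis=1) gives one loss per sample;
    # np.mean then divides by N, the number of samples.
    return np.mean(-np.sum(targets * np.log(predictions), axis=1))

This makes explicit that the average is taken over the N samples, not over all N × k entries.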

NOTE: As per @Peter's comment, the offset of 1e-9 is indeed redundant if your epsilon value is greater than 0.
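
Concretely, since np.clip(predictions, epsilon, 1. - epsilon) guarantees every entry is at least epsilon = 1e-12 > 0, np.log can never receive a zero, and the line inside the function can simply be (a minor simplification of the answer's code):

    ce = -np.sum(targets*np.log(predictions))/N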

