Why do we use log probability in deep learning?
Question
I got curious while reading the paper 'Sequence to Sequence Learning with Neural Networks'. In fact, not only this paper but many other papers use log probabilities. Is there a reason for that? Please check the attached photo.
Answer
Two reasons:
Theoretical - The probability of two independent events A and B co-occurring is given by P(A)·P(B). If we take logs, this product maps to a sum: log(P(A)) + log(P(B)). It is thus easier to treat the neuron-firing 'events' as a linear function.
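A quick numerical check of that identity (a minimal sketch with made-up probabilities, not values from the paper):

```python
import math

# Made-up probabilities of two independent events.
p_a = 0.3
p_b = 0.7

# Joint probability as a product, and the same quantity via a sum of logs.
joint = p_a * p_b
log_joint = math.log(p_a) + math.log(p_b)

print(joint)                # 0.21
print(math.exp(log_joint))  # ~0.21, recovered from the log-space sum
```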
Practical - Probability values lie in [0, 1]. Multiplying two or more such small numbers can easily underflow in floating-point arithmetic (e.g. consider multiplying 0.0001 * 0.00001). A practical solution is to work with logs, which turns the product into a sum and gets rid of the underflow.
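A minimal sketch of the underflow problem, assuming a long sequence of made-up small per-token probabilities like the ones a sequence model might assign:

```python
import math

# 400 made-up token probabilities of 1e-5 each.
probs = [1e-5] * 400

# The naive product should be 1e-2000, which is far below the smallest
# representable 64-bit float (~5e-324), so it underflows to exactly 0.0.
product = 1.0
for p in probs:
    product *= p
print(product)  # 0.0 -- underflow

# Summing logs stays comfortably within floating-point range.
log_product = sum(math.log(p) for p in probs)
print(log_product)  # ~ -4605.17, i.e. log(1e-2000)
```

This is why training objectives are typically written as sums of log probabilities rather than products of raw probabilities.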