Hinge loss function gradient w.r.t. input prediction
Question
For an assignment I have to implement both the Hinge loss and its partial derivative calculation functions. I got the Hinge loss function itself, but I'm having a hard time understanding how to calculate its partial derivative w.r.t. the prediction input. I tried different approaches, but none worked.
Any help, hints, or suggestions will be much appreciated!
Here is the analytical expression for the Hinge loss function itself (reconstructed from the implementation below; the original image is missing):

L(pred, true) = (1/N) * sum_i max(0, 1 - pred_i * true_i)
And here is my Hinge loss function implementation:
import numpy as np

def hinge_forward(target_pred, target_true):
    """Compute the value of Hinge loss
    for a given prediction and the ground truth

    # Arguments
        target_pred: predictions - np.array of size `(n_objects,)`
        target_true: ground truth - np.array of size `(n_objects,)`

    # Output
        the value of Hinge loss for a given prediction
        and the ground truth, scalar
    """
    output = np.sum(np.maximum(0, 1 - target_pred * target_true)) / target_pred.size
    return output
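As a quick sanity check of the forward pass, it can be exercised on small made-up inputs (the values below are illustrative, not from the assignment; labels are assumed to be in {-1, +1}):

```python
import numpy as np

def hinge_forward(target_pred, target_true):
    # Mean hinge loss: (1/N) * sum_i max(0, 1 - pred_i * true_i)
    return np.sum(np.maximum(0, 1 - target_pred * target_true)) / target_pred.size

# Illustrative example: only the first element violates the margin,
# contributing max(0, 1 - 0.5) = 0.5; the other two terms are 0.
pred = np.array([0.5, -2.0, 3.0])
true = np.array([1.0, -1.0, 1.0])
print(hinge_forward(pred, true))  # 0.5 / 3 ≈ 0.1667
```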
Now I need to calculate this gradient (reconstructed from the accepted answer below; the original image is missing):

dL/dpred_i = -true_i / N  if pred_i * true_i < 1,  else 0
This is what I tried for the Hinge loss gradient calculation:
def hinge_grad_input(target_pred, target_true):
    """Compute the partial derivative
    of Hinge loss with respect to its input

    # Arguments
        target_pred: predictions - np.array of size `(n_objects,)`
        target_true: ground truth - np.array of size `(n_objects,)`

    # Output
        the partial derivative of Hinge loss
        with respect to its input,
        np.array of size `(n_objects,)`
    """
    # ----------------
    # try 1
    # ----------------
    # hinge_result = hinge_forward(target_pred, target_true)
    # if hinge_result == 0:
    #     grad_input = 0
    # else:
    #     hinge = np.maximum(0, 1 - target_pred * target_true)
    #     grad_input = np.zeros_like(hinge)
    #     grad_input[hinge > 0] = 1
    #     grad_input = np.sum(np.where(hinge > 0))
    # ----------------
    # try 2
    # ----------------
    # hinge = np.maximum(0, 1 - target_pred * target_true)
    # grad_input = np.zeros_like(hinge)
    # grad_input[hinge > 0] = 1
    # ----------------
    # try 3
    # ----------------
    hinge_result = hinge_forward(target_pred, target_true)
    if hinge_result == 0:
        grad_input = 0
    else:
        loss = np.maximum(0, 1 - target_pred * target_true)
        grad_input = np.zeros_like(loss)
        grad_input[loss > 0] = 1
        grad_input = np.sum(grad_input) * target_pred
    return grad_input
Answer
I've managed to solve this by using the np.where() function. Here is the code:
def hinge_grad_input(target_pred, target_true):
    """Compute the partial derivative
    of Hinge loss with respect to its input

    # Arguments
        target_pred: predictions - np.array of size `(n_objects,)`
        target_true: ground truth - np.array of size `(n_objects,)`

    # Output
        the partial derivative of Hinge loss
        with respect to its input,
        np.array of size `(n_objects,)`
    """
    grad_input = np.where(target_pred * target_true < 1, -target_true / target_pred.size, 0)
    return grad_input
Basically, the gradient equals -target_true / N for every element where target_pred * target_true < 1, and 0 otherwise.
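One way to gain confidence in this analytical gradient is a standard central-difference check (a sketch with made-up inputs; the test points are deliberately chosen away from the kink at pred * true == 1, where the hinge loss is not differentiable):

```python
import numpy as np

def hinge_forward(target_pred, target_true):
    return np.sum(np.maximum(0, 1 - target_pred * target_true)) / target_pred.size

def hinge_grad_input(target_pred, target_true):
    return np.where(target_pred * target_true < 1, -target_true / target_pred.size, 0)

# Illustrative inputs: elements 0 and 3 violate the margin, 1 and 2 do not
pred = np.array([0.5, -2.0, 3.0, -0.1])
true = np.array([1.0, -1.0, 1.0, 1.0])

# Central finite differences: (L(pred + eps*e_i) - L(pred - eps*e_i)) / (2*eps)
eps = 1e-6
numeric = np.zeros_like(pred)
for i in range(pred.size):
    p_plus, p_minus = pred.copy(), pred.copy()
    p_plus[i] += eps
    p_minus[i] -= eps
    numeric[i] = (hinge_forward(p_plus, true) - hinge_forward(p_minus, true)) / (2 * eps)

analytic = hinge_grad_input(pred, true)
print(analytic)                                   # [-0.25  0.    0.   -0.25]
print(np.allclose(numeric, analytic, atol=1e-5))  # True
```

If the two disagree, the analytical formula (or the forward pass) has a bug; agreement at all non-kink points is strong evidence the np.where() implementation above is correct.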