How can I express this custom loss function in TensorFlow?

Question

I've got a loss function that fulfills my needs, but it only exists in PyTorch. I need to implement it in my TensorFlow code; most of it can be translated trivially, but I am stuck on one particular line:

y_hat[:, torch.arange(N), torch.arange(N)] = torch.finfo(y_hat.dtype).max  # to be "1" after sigmoid

You can see the whole code below, and it is indeed pretty straightforward except for that line:

import torch
import torch.nn.functional as F

def get_loss(y_hat, y):
    # No loss on diagonal
    B, N, _ = y_hat.shape
    y_hat[:, torch.arange(N), torch.arange(N)] = torch.finfo(y_hat.dtype).max  # to be "1" after sigmoid

    # calc loss
    loss = F.binary_cross_entropy_with_logits(y_hat, y)  # cross entropy

    y_hat = torch.sigmoid(y_hat)
    tp = (y_hat * y).sum(dim=(1, 2))
    fn = ((1. - y_hat) * y).sum(dim=(1, 2))
    fp = (y_hat * (1. - y)).sum(dim=(1, 2))
    loss = loss - ((2 * tp) / (2 * tp + fp + fn + 1e-10)).sum()  # fscore

    return loss

So far I came up with the following:

import tensorflow as tf

def get_loss(y_hat, y):
    # cross entropy on logits (note the Keras argument order: y_true, y_pred)
    loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)(y, y_hat)

    y_hat = tf.math.sigmoid(y_hat)

    tp = tf.math.reduce_sum(tf.multiply(y_hat, y), [1, 2])
    fn = tf.math.reduce_sum(y - tf.multiply(y_hat, y), [1, 2])
    fp = tf.math.reduce_sum(y_hat - tf.multiply(y_hat, y), [1, 2])
    loss = loss - tf.math.reduce_sum((2 * tp) / (2 * tp + fp + fn + 1e-10))  # fscore

    return loss

So my questions boil down to:

  • What does torch.finfo() do and how to express it in TensorFlow?
  • Does y_hat.dtype just return the data type?

Answer

1. What does torch.finfo() do and how to express it in TensorFlow?

.finfo() provides a neat way to get machine limits for floating-point types. This function is available in NumPy and Torch, as well as in TensorFlow's experimental NumPy API.

.finfo().max returns the largest possible number representable as that dtype.

NOTE: There is also an .iinfo() for integer types.

Here are a few examples of finfo and iinfo in action.

import torch

print('FLOATS')
print('float16', torch.finfo(torch.float16).max)
print('float32', torch.finfo(torch.float32).max)
print('float64', torch.finfo(torch.float64).max)
print('')
print('INTEGERS')
print('int16', torch.iinfo(torch.int16).max)
print('int32', torch.iinfo(torch.int32).max)
print('int64', torch.iinfo(torch.int64).max)

FLOATS
float16 65504.0
float32 3.4028234663852886e+38
float64 1.7976931348623157e+308

INTEGERS
int16 32767
int32 2147483647
int64 9223372036854775807

If you want to implement this in TensorFlow, you can use tf.experimental.numpy.finfo to solve this.

import tensorflow as tf

print(tf.experimental.numpy.finfo(tf.float32))
print('Max ->', tf.experimental.numpy.finfo(tf.float32).max)  # <---- THIS IS WHAT YOU WANT

Machine parameters for float32
---------------------------------------------------------------
precision =   6   resolution = 1.0000000e-06
machep =    -23   eps =        1.1920929e-07
negep =     -24   epsneg =     5.9604645e-08
minexp =   -126   tiny =       1.1754944e-38
maxexp =    128   max =        3.4028235e+38
nexp =        8   min =        -max
---------------------------------------------------------------

Max -> 3.4028235e+38
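With .finfo().max available, the stuck line itself still needs translating: TensorFlow tensors are immutable, so the in-place diagonal assignment has no direct equivalent. One way to get the same effect is tf.linalg.set_diag, which rebuilds the tensor with each matrix's diagonal replaced. A minimal sketch (the tensor names mirror the question):

import tensorflow as tf

B, N = 2, 3
y_hat = tf.random.normal([B, N, N])  # logits of shape (B, N, N), as in the question

# Replace each matrix's diagonal with the largest representable float,
# so that sigmoid() maps it to 1.0 (i.e. no loss on the diagonal)
max_val = tf.experimental.numpy.finfo(y_hat.dtype).max
y_hat = tf.linalg.set_diag(y_hat, tf.fill([B, N], max_val))

print(tf.math.sigmoid(tf.linalg.diag_part(y_hat)))  # all ones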

2. Does y_hat.dtype just return the data type?

Yes.

In Torch, it would return torch.float32 or something like that. In TensorFlow, it would return tf.float32 or something like that.
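A quick check in both frameworks (a trivial sketch):

import torch
import tensorflow as tf

print(torch.zeros(2, 2).dtype)  # torch.float32
print(tf.zeros([2, 2]).dtype)   # <dtype: 'float32'>, i.e. tf.float32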

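Putting both answers together, the whole loss could be translated roughly as the sketch below (untested; it uses tf.linalg.set_diag for the diagonal assignment as shown above, and BinaryCrossentropy(from_logits=True) with the Keras (y_true, y_pred) argument order):

import tensorflow as tf

def get_loss(y_hat, y):
    # No loss on diagonal: push the diagonal logits to the float max,
    # so sigmoid() maps them to 1.0
    max_val = tf.experimental.numpy.finfo(y_hat.dtype).max
    y_hat = tf.linalg.set_diag(y_hat, tf.fill(tf.shape(y_hat)[:-1], max_val))

    # calc loss (cross entropy on logits)
    loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)(y, y_hat)

    y_hat = tf.math.sigmoid(y_hat)
    tp = tf.math.reduce_sum(y_hat * y, axis=[1, 2])
    fn = tf.math.reduce_sum((1. - y_hat) * y, axis=[1, 2])
    fp = tf.math.reduce_sum(y_hat * (1. - y), axis=[1, 2])
    loss = loss - tf.math.reduce_sum((2 * tp) / (2 * tp + fp + fn + 1e-10))  # fscore
    return loss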