How can I express this custom loss function in TensorFlow?


Problem description

I've got a loss function that fulfills my needs, but it is only available in PyTorch. I need to implement it in my TensorFlow code, and while most of it can trivially be "translated", I am stuck on one particular line:

y_hat[:, torch.arange(N), torch.arange(N)] = torch.finfo(y_hat.dtype).max  # to be "1" after sigmoid

You can see the whole code below; it is indeed pretty straightforward except for that line:

import torch
import torch.nn.functional as F

def get_loss(y_hat, y):
    # No loss on diagonal
    B, N, _ = y_hat.shape
    y_hat[:, torch.arange(N), torch.arange(N)] = torch.finfo(y_hat.dtype).max  # to be "1" after sigmoid

    # calc loss
    loss = F.binary_cross_entropy_with_logits(y_hat, y)  # cross entropy

    y_hat = torch.sigmoid(y_hat)
    tp = (y_hat * y).sum(dim=(1, 2))
    fn = ((1. - y_hat) * y).sum(dim=(1, 2))
    fp = (y_hat * (1. - y)).sum(dim=(1, 2))
    loss = loss - ((2 * tp) / (2 * tp + fp + fn + 1e-10)).sum()  # fscore

    return loss

So far I have come up with the following:

import tensorflow as tf

def get_loss(y_hat, y):
    loss = tf.keras.losses.BinaryCrossentropy()(y_hat, y)  # cross entropy (but no logits)

    y_hat = tf.math.sigmoid(y_hat)

    tp = tf.math.reduce_sum(tf.multiply(y_hat, y), [1, 2])
    fn = tf.math.reduce_sum((y - tf.multiply(y_hat, y)), [1, 2])
    fp = tf.math.reduce_sum((y_hat - tf.multiply(y_hat, y)), [1, 2])
    loss = loss - ((2 * tp) / tf.math.reduce_sum((2 * tp + fp + fn + 1e-10)))  # fscore

    return loss

So my questions boil down to:

  • What does torch.finfo() do, and how can I express it in TensorFlow?
  • Does y_hat.dtype just return the data type?

Solution

1. What does torch.finfo() do and how to express it in TensorFlow?

.finfo() provides a neat way to get machine limits for floating-point types. This function is available in NumPy and Torch, as well as in TensorFlow's experimental NumPy API.

.finfo().max returns the largest possible number representable as that dtype.

NOTE: There is also a .iinfo() for integer types.

Here are a few examples of finfo and iinfo in action.

import torch

print('FLOATS')
print('float16',torch.finfo(torch.float16).max)
print('float32',torch.finfo(torch.float32).max)
print('float64',torch.finfo(torch.float64).max)
print('')
print('INTEGERS')
print('int16',torch.iinfo(torch.int16).max)
print('int32',torch.iinfo(torch.int32).max)
print('int64',torch.iinfo(torch.int64).max)

FLOATS
float16 65504.0
float32 3.4028234663852886e+38
float64 1.7976931348623157e+308

INTEGERS
int16 32767
int32 2147483647
int64 9223372036854775807

If you want to implement this in TensorFlow, you can use tf.experimental.numpy.finfo.

import tensorflow as tf

print(tf.experimental.numpy.finfo(tf.float32))
print('Max ->', tf.experimental.numpy.finfo(tf.float32).max)  # <---- THIS IS WHAT YOU WANT

Machine parameters for float32
---------------------------------------------------------------
precision =   6   resolution = 1.0000000e-06
machep =    -23   eps =        1.1920929e-07
negep =     -24   epsneg =     5.9604645e-08
minexp =   -126   tiny =       1.1754944e-38
maxexp =    128   max =        3.4028235e+38
nexp =        8   min =        -max
---------------------------------------------------------------

Max -> 3.4028235e+38
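Note that TensorFlow tensors are immutable, so the PyTorch-style item assignment to the diagonal has no direct equivalent. One way to get the same effect is tf.linalg.set_diag; here is a minimal sketch, assuming y_hat is a [B, N, N] float tensor (the helper name is hypothetical):

import tensorflow as tf

def mask_diagonal(y_hat):  # hypothetical helper name
    # Fill each N x N diagonal with the dtype's largest value,
    # so that sigmoid(diagonal) == 1, mirroring the PyTorch line
    big = tf.experimental.numpy.finfo(y_hat.dtype).max
    diag = tf.ones_like(tf.linalg.diag_part(y_hat)) * big
    return tf.linalg.set_diag(y_hat, diag)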

2. Does y_hat.dtype just return the data type?

Yes.

In Torch, it would return torch.float32 or something like that. In TensorFlow, it would return tf.float32 or something like that.
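
Putting the pieces together, a rough sketch of the whole loss in TensorFlow might look like the following. This is an untested translation, assuming y_hat and y are [B, N, N] float tensors; tf.nn.sigmoid_cross_entropy_with_logits is element-wise, so tf.reduce_mean reproduces the mean reduction of F.binary_cross_entropy_with_logits:

import tensorflow as tf

def get_loss(y_hat, y):
    # No loss on diagonal: set diagonal logits to the dtype's max ("1" after sigmoid)
    big = tf.experimental.numpy.finfo(y_hat.dtype).max
    diag = tf.ones_like(tf.linalg.diag_part(y_hat)) * big
    y_hat = tf.linalg.set_diag(y_hat, diag)

    # cross entropy from logits, averaged over all elements
    loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=y_hat))

    y_hat = tf.math.sigmoid(y_hat)
    tp = tf.reduce_sum(y_hat * y, axis=[1, 2])
    fn = tf.reduce_sum((1. - y_hat) * y, axis=[1, 2])
    fp = tf.reduce_sum(y_hat * (1. - y), axis=[1, 2])
    loss = loss - tf.reduce_sum((2 * tp) / (2 * tp + fp + fn + 1e-10))  # fscore
    return loss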
