WARNING:tensorflow:Layer my_model is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2

Problem Description


Before my Tensorflow neural network starts training, the following warning prints out:

WARNING:tensorflow:Layer my_model is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx. If you intended to run this layer in float32, you can safely ignore this warning.

If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2. To change all layers to have dtype float64 by default, call tf.keras.backend.set_floatx('float64').

To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
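
For reference, the two alternatives the warning describes would look roughly like this (a minimal sketch; the layer size here is arbitrary):

import tensorflow as tf
from tensorflow.keras.layers import Dense

# Option 1: make float64 the default (floatx) for every layer created afterwards.
tf.keras.backend.set_floatx('float64')

# Option 2: change only a single layer by passing dtype to its constructor.
d0 = Dense(16, activation='relu', dtype='float64')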

Now, based on the error message, I am able to silence it by setting the backend to 'float64'. But I would like to get to the bottom of this and set the right dtypes manually.

Full code:

import tensorflow as tf
from tensorflow.keras.layers import Dense, Concatenate
from tensorflow.keras import Model
from sklearn.datasets import load_iris
iris, target = load_iris(return_X_y=True)

X = iris[:, :3]
y = iris[:, 3]

ds = tf.data.Dataset.from_tensor_slices((X, y)).shuffle(25).batch(8)

class MyModel(Model):
  def __init__(self):
    super(MyModel, self).__init__()
    self.d0 = Dense(16, activation='relu')
    self.d1 = Dense(32, activation='relu')
    self.d2 = Dense(1, activation='linear')

  def call(self, x):
    x = self.d0(x)
    x = self.d1(x)
    x = self.d2(x)
    return x

model = MyModel()

loss_object = tf.keras.losses.MeanSquaredError()

optimizer = tf.keras.optimizers.Adam(learning_rate=5e-4)

loss = tf.keras.metrics.Mean(name='loss')
error = tf.keras.metrics.MeanSquaredError()

@tf.function
def train_step(inputs, targets):
    with tf.GradientTape() as tape:
        predictions = model(inputs)
        run_loss = loss_object(targets, predictions)
    gradients = tape.gradient(run_loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    loss(run_loss)
    error(predictions, targets)

for epoch in range(10):
  for data, labels in ds:
    train_step(data, labels)

  template = 'Epoch {:>2}, Loss: {:>7.4f}, MSE: {:>6.2f}'
  print(template.format(epoch+1,
                        loss.result(),
                        error.result()*100))
  # Reset the metrics for the next epoch
  loss.reset_states()
  error.reset_states()

Solution

tl;dr: to avoid this, cast your input to float32:

X = tf.cast(iris[:, :3], tf.float32) 
y = tf.cast(iris[:, 3], tf.float32)

or with numpy:

X = np.array(iris[:, :3], dtype=np.float32)
y = np.array(iris[:, 3], dtype=np.float32)
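
If you would rather keep the cast inside the input pipeline instead of touching X and y directly, a Dataset.map transformation is one way to do it (a sketch, assuming the same ds pipeline as in the question):

ds = (tf.data.Dataset.from_tensor_slices((X, y))
      .map(lambda a, b: (tf.cast(a, tf.float32), tf.cast(b, tf.float32)))
      .shuffle(25)
      .batch(8))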

Explanation

By default, Tensorflow uses floatx, which defaults to float32, the standard dtype for deep learning. You can verify this:

import tensorflow as tf
tf.keras.backend.floatx()

Out[3]: 'float32'

The input you provided (the Iris dataset) is of dtype float64, so there is a mismatch between Tensorflow's default dtype for the weights and the input. Tensorflow doesn't like that, because casting (changing the dtype) is costly. Tensorflow will generally throw an error when manipulating tensors of different dtypes (e.g., comparing float32 logits and float64 labels).
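
You can see the mismatch directly by inspecting both sides (a small sketch, assuming the same Iris data as in the question):

import tensorflow as tf
from sklearn.datasets import load_iris

iris, target = load_iris(return_X_y=True)
print(iris.dtype)                   # float64 -- NumPy's default float dtype

layer = tf.keras.layers.Dense(16)
layer.build(input_shape=(None, 3))  # create the weights
print(layer.dtype)                  # float32 -- comes from floatx

# Mixing dtypes in a raw op fails outright, e.g.:
# tf.constant(1.0, tf.float32) + tf.constant(1.0, tf.float64)  raises InvalidArgumentError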

The "new behavior" it's talking about:

Layer my_model_1 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2

is that it will automatically cast the input dtype to float32. Tensorflow 1.X probably threw an exception in this situation, although I can't say I've ever used it.
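
You can see that autocasting in action by feeding a float64 tensor to a freshly created (float32) layer (a minimal sketch; the values and shapes are arbitrary):

import tensorflow as tf

x64 = tf.constant([[1.0, 2.0, 3.0]], dtype=tf.float64)
layer = tf.keras.layers.Dense(4)

out = layer(x64)   # the first call emits the float64 -> float32 warning
print(out.dtype)   # float32: the input was cast to the layer's dtype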
