WARNING:tensorflow:Layer my_model is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2

Problem Description

Before my Tensorflow neural network starts training, the following warning prints out:

WARNING:tensorflow:Layer my_model is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx. If you intended to run this layer in float32, you can safely ignore this warning.

If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2. To change all layers to have dtype float64 by default, call tf.keras.backend.set_floatx('float64').

To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
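Concretely, I read those two suggestions as something like the following (just my own sketch of what the warning describes, not code I have run):

import tensorflow as tf

# Option 1: make float64 the default dtype for every layer (global floatx).
tf.keras.backend.set_floatx('float64')

# Option 2: change only a single layer by passing dtype to its constructor.
layer = tf.keras.layers.Dense(16, activation='relu', dtype='float64')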

Now, based on the warning, I am able to silence it by setting the backend to 'float64'. But I would like to get to the bottom of this and set the right dtypes manually.

Full code:

import tensorflow as tf
from tensorflow.keras.layers import Dense, Concatenate
from tensorflow.keras import Model
from sklearn.datasets import load_iris
iris, target = load_iris(return_X_y=True)

X = iris[:, :3]
y = iris[:, 3]

ds = tf.data.Dataset.from_tensor_slices((X, y)).shuffle(25).batch(8)

class MyModel(Model):
  def __init__(self):
    super(MyModel, self).__init__()
    self.d0 = Dense(16, activation='relu')
    self.d1 = Dense(32, activation='relu')
    self.d2 = Dense(1, activation='linear')

  def call(self, x):
    x = self.d0(x)
    x = self.d1(x)
    x = self.d2(x)
    return x

model = MyModel()

loss_object = tf.keras.losses.MeanSquaredError()

optimizer = tf.keras.optimizers.Adam(learning_rate=5e-4)

loss = tf.keras.metrics.Mean(name='loss')
error = tf.keras.metrics.MeanSquaredError()

@tf.function
def train_step(inputs, targets):
    with tf.GradientTape() as tape:
        predictions = model(inputs)
        run_loss = loss_object(targets, predictions)
    gradients = tape.gradient(run_loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    loss(run_loss)
    error(predictions, targets)

for epoch in range(10):
  for data, labels in ds:
    train_step(data, labels)

  template = 'Epoch {:>2}, Loss: {:>7.4f}, MSE: {:>6.2f}'
  print(template.format(epoch+1,
                        loss.result(),
                        error.result()*100))
  # Reset the metrics for the next epoch
  loss.reset_states()
  error.reset_states()

Solution

tl;dr: to avoid this, cast your input to float32:

X = tf.cast(iris[:, :3], tf.float32) 
y = tf.cast(iris[:, 3], tf.float32)

or with numpy:

import numpy as np

X = np.array(iris[:, :3], dtype=np.float32)
y = np.array(iris[:, 3], dtype=np.float32)
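
If you would rather not touch X and y themselves, you could also cast inside the tf.data pipeline. The map-based variant below is my own sketch, assuming the same X, y, shuffle and batch settings as in the question:

import tensorflow as tf

# Cast each (features, label) pair to float32 as it flows through the pipeline.
ds = (tf.data.Dataset.from_tensor_slices((X, y))
      .map(lambda a, b: (tf.cast(a, tf.float32), tf.cast(b, tf.float32)))
      .shuffle(25)
      .batch(8))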

Explanation

By default, Tensorflow uses floatx, which defaults to float32, the standard dtype for deep learning. You can verify this:

import tensorflow as tf
tf.keras.backend.floatx()

Out[3]: 'float32'

The input you provided (the Iris dataset) is of dtype float64, so there is a mismatch between Tensorflow's default dtype for the weights and the dtype of the input. Tensorflow doesn't like that, because casting (changing the dtype) is costly. Tensorflow will generally throw an error when manipulating tensors of different dtypes (e.g., comparing float32 logits and float64 labels).
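
As a small illustration of that last point (my own toy example, not from the question's code):

import tensorflow as tf

a = tf.constant(1.0, dtype=tf.float32)
b = tf.constant(1.0, dtype=tf.float64)

# Raises InvalidArgumentError: TF ops do not implicitly cast between float32 and float64.
tf.add(a, b)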

The "new behavior" it's talking about:

Layer my_model_1 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2

is that it will automatically cast the input dtype to float32. Tensorflow 1.X probably threw an exception in this situation, although I can't say I've ever used it.
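
If you are the author of a custom layer and want to opt out of that automatic cast, the autocast=False option mentioned in the warning would look roughly like the toy sketch below (I haven't needed this myself):

import tensorflow as tf

class NoCastLayer(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        # autocast=False tells the base Layer not to cast inputs to the layer's dtype.
        super().__init__(autocast=False, **kwargs)

    def call(self, x):
        # x keeps whatever dtype the caller passed in (e.g. float64).
        return x * 2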
