Difference between "BinaryCrossentropy" and "binary_crossentropy" in tf.keras.losses?

Problem description

I'm training a model with TensorFlow 2.0 using tf.GradientTape(), but I find that the model's accuracy is 95% if I use tf.keras.losses.BinaryCrossentropy, yet drops to 75% if I use tf.keras.losses.binary_crossentropy. So I'm confused about this difference, since they seem to be the same metric?

import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

from sklearn.model_selection import train_test_split

def read_data():
    red_wine = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv", sep=";")
    white_wine = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv", sep=";")
    # Binary label: 1 = red wine, 0 = white wine
    red_wine["type"] = 1
    white_wine["type"] = 0
    wines = red_wine.append(white_wine)
    return wines

def get_x_y(df):
    # All columns except the last are features; the last ("type") is the label
    x = df.iloc[:, :-1].values.astype(np.float32)
    y = df.iloc[:, -1].values.astype(np.int32)
    return x, y

def build_model():
    inputs = layers.Input(shape=(12,))
    dense1 = layers.Dense(12, activation="relu", name="dense1")(inputs)
    dense2 = layers.Dense(9, activation="relu", name="dense2")(dense1)
    outputs = layers.Dense(1, activation="sigmoid", name="outputs")(dense2)
    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    return model

def generate_dataset(df, batch_size=32, shuffle=True, train_or_test="train"):
    x, y = get_x_y(df)
    ds = tf.data.Dataset.from_tensor_slices((x, y))
    if shuffle:
        ds = ds.shuffle(10000)
    if train_or_test == "train":
        ds = ds.batch(batch_size)
    else:
        # Evaluate the whole test set as a single batch
        ds = ds.batch(len(df))
    return ds

# loss_object = tf.keras.losses.binary_crossentropy  # functional form
loss_object = tf.keras.losses.BinaryCrossentropy()   # class form
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

def train_step(model, optimizer, x, y):
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        loss = loss_object(y, pred)
    # Backpropagate and update the model weights
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))


def train_model(model, train_ds, epochs=10):
    for epoch in range(epochs):
        print(epoch)
        for x, y in train_ds:
            train_step(model, optimizer, x, y)

def main():
    data = read_data()
    train, test = train_test_split(data, test_size=0.2, random_state=23)
    train_ds = generate_dataset(train, 32, True, "train")
    test_ds = generate_dataset(test, 32, False, "test")
    model = build_model()
    train_model(model, train_ds, 10)
    model.compile(loss='binary_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy']
                  )
    model.evaluate(test_ds)

main()

Answer

They should indeed work the same; BinaryCrossentropy uses binary_crossentropy under the hood, and the difference apparent in their docstring descriptions is one of intent: the former is meant for two class labels, whereas the latter supports an arbitrary class count. However, if targets are passed in the expected format, both apply the same preprocessing before calling the backend's binary_crossentropy, which does the actual computing.
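
As a quick sanity check, here is a minimal sketch (the prediction values are made up purely for illustration) showing the two call forms agree; note that the class form reduces to a scalar mean by default, while the functional form returns one loss value per sample:

import tensorflow as tf

y_true = tf.constant([[0.], [1.], [1.], [0.]])
y_pred = tf.constant([[0.1], [0.8], [0.6], [0.3]])

# Class form: reduces per-sample losses to a scalar mean by default
cls_loss = tf.keras.losses.BinaryCrossentropy()(y_true, y_pred)

# Functional form: returns one loss value per sample, shape (4,)
fn_loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)

# Both agree once the per-sample losses are averaged
print(cls_loss.numpy(), tf.reduce_mean(fn_loss).numpy())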

The difference you observe is most likely a reproducibility issue; make sure you set the random seeds - see the function below. For a more complete answer on reproducibility, see here.

The function:

def reset_seeds(reset_graph_with_backend=None):
    if reset_graph_with_backend is not None:
        K = reset_graph_with_backend
        K.clear_session()
        tf.compat.v1.reset_default_graph()
        print("KERAS AND TENSORFLOW GRAPHS RESET")  # optional

    np.random.seed(1)
    random.seed(2)
    tf.compat.v1.set_random_seed(3)
    print("RANDOM SEEDS RESET")  # optional

Usage:

import random
import numpy as np
import tensorflow as tf
import tensorflow.keras.backend as K

reset_seeds(K)
