Multi-input Multi-output Model with Keras Functional API


Question


As described in figure 1, I have 3 models, each of which applies to a particular domain.

The 3 models are trained separately with different datasets.

And inference is sequential:

I tried to parallelize the calls to these 3 models using Python's multiprocessing library, but it is very unstable and not advised.

Here's the idea I came up with to do all of this at once:

As the 3 models share a common pretrained model, I want to make a single model that has multiple inputs and multiple outputs.

As the following drawing shows:

That way, during inference, I will call a single model that does all 3 operations at the same time.

I saw that this is possible with the Keras Functional API, but I have no idea how to do it. The inputs of the datasets have the same dimensions: they are (200, 200, 3) pictures.

If anyone has an example of a multi-input multi-output model that shares a common structure, that would be perfectly fine with me.

UPDATE

Here is an example of my code, but it returns an error because of the layers.concatenate(...) line, which propagates a shape that the EfficientNet model does not accept.

age_inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3), name="age_inputs")
gender_inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3), name="gender_inputs")
emotion_inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3), name="emotion_inputs")


inputs = layers.concatenate([age_inputs, gender_inputs, emotion_inputs])
inputs = layers.Conv2D(3, (3, 3), activation="relu")(inputs)    
model = EfficientNetB0(include_top=False, 
                   input_tensor=inputs, weights="imagenet")
    

model.trainable = False

inputs = layers.GlobalAveragePooling2D(name="avg_pool")(model.output)
inputs = layers.BatchNormalization()(inputs)

top_dropout_rate = 0.2
inputs = layers.Dropout(top_dropout_rate, name="top_dropout")(inputs)

age_outputs = layers.Dense(1, activation="linear", 
                          name="age_pred")(inputs)
gender_outputs = layers.Dense(GENDER_NUM_CLASSES, 
                              activation="softmax", 
                              name="gender_pred")(inputs)
emotion_outputs = layers.Dense(EMOTION_NUM_CLASSES, activation="softmax", 
                             name="emotion_pred")(inputs)

model = keras.Model(inputs=[age_inputs, gender_inputs, emotion_inputs], 
              outputs =[age_outputs, gender_outputs, emotion_outputs], 
              name="EfficientNet")

optimizer = keras.optimizers.Adam(learning_rate=1e-2)
model.compile(loss={"age_pred" : "mse", 
                   "gender_pred":"categorical_crossentropy", 
                    "emotion_pred":"categorical_crossentropy"}, 
                   optimizer=optimizer, metrics=["accuracy"])

(age_train_images, age_train_labels), (age_test_images, age_test_labels) = reg_data_loader.load_data(...)
(gender_train_images, gender_train_labels), (gender_test_images, gender_test_labels) = cat_data_loader.load_data(...)
(emotion_train_images, emotion_train_labels), (emotion_test_images, emotion_test_labels) = cat_data_loader.load_data(...)

model.fit({'age_inputs': age_train_images, 'gender_inputs': gender_train_images, 'emotion_inputs': emotion_train_images},
          {'age_pred': age_train_labels, 'gender_pred': gender_train_labels, 'emotion_pred': emotion_train_labels},
          validation_split=0.2, epochs=5, batch_size=16)

Solution

We can do that easily in tf.keras using its awesome Functional API. Here we will walk through how to build a multi-output model with different output types (classification and regression) using the Functional API.

According to your last diagram, you need one input model and three outputs of different types. To demonstrate, we will use MNIST, a handwritten-digit dataset that is normally a 10-class classification problem. From it, we will additionally create a 2-class classifier (whether a digit is even or odd) and also a regression part (predicting the square of a digit, i.e. for an image input of 9, it should give approximately its square, 81).


Data Set

import numpy as np 
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

(xtrain, ytrain), (_, _) = keras.datasets.mnist.load_data()

# 10 class classifier 
y_out_a = keras.utils.to_categorical(ytrain, num_classes=10) 

# 2 class classifier, even or odd 
y_out_b = keras.utils.to_categorical((ytrain % 2 == 0).astype(int), num_classes=2) 

# regression, predict square of an input digit image
y_out_c = tf.square(tf.cast(ytrain, tf.float32))

So, our training pairs will be xtrain and [y_out_a, y_out_b, y_out_c], same as your last diagram.
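
A quick sanity check (a small illustrative addition): the shapes of the training pair line up as expected for MNIST's 60,000 training images.

print(xtrain.shape)                                 # (60000, 28, 28)
print(y_out_a.shape, y_out_b.shape, y_out_c.shape)  # (60000, 10) (60000, 2) (60000,)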


Model Building

Let's build the model accordingly using the Functional API of tf.keras. See the model definition below. Each MNIST sample is a 28 x 28 grayscale image, so our input is set up that way. I'm guessing your dataset is probably RGB, so change the input dimension accordingly.

input = keras.Input(shape=(28, 28, 1), name="original_img")
x = layers.Conv2D(16, 3, activation="relu")(input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
x = layers.GlobalMaxPooling2D()(x)

out_a = keras.layers.Dense(10, activation='softmax', name='10cls')(x)
out_b = keras.layers.Dense(2, activation='softmax', name='2cls')(x)
out_c = keras.layers.Dense(1, activation='linear', name='1rg')(x)

encoder = keras.Model( inputs = input, outputs = [out_a, out_b, out_c], name="encoder")

# Let's plot 
keras.utils.plot_model(
    encoder
)

One thing to note: while defining out_a, out_b, and out_c in the model definition, we set their name arguments, which is very important. Their names are set to '10cls', '2cls', and '1rg' respectively. You can also see this in the diagram above (the last 3 tails).
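
A side note (an illustrative addition, not from the original answer): because the heads are named, tf.keras also accepts dictionaries keyed by those layer names wherever per-output values are expected. For example, once the model is compiled (next section), the training targets could equivalently be passed as:

encoder.fit(
    xtrain,
    {"10cls": y_out_a, "2cls": y_out_b, "1rg": y_out_c},
    epochs=30, verbose=2
)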


Compile and Run

Now, we can see why that name variable is important. In order to run the model, we first need to compile it with the proper loss functions, metrics, and optimizer. For classification and regression problems, the optimizer can be the same, but the loss function and metrics must differ. In our model, which has outputs of multiple types (2 classification and 1 regression), we need to set a proper loss and metric for each of these types. Please see below how it's done.

encoder.compile(
    loss = {
        "10cls": tf.keras.losses.CategoricalCrossentropy(),
        "2cls": tf.keras.losses.CategoricalCrossentropy(),
        "1rg": tf.keras.losses.MeanSquaredError()
    },

    metrics = {
        "10cls": 'accuracy',
        "2cls": 'accuracy',
        "1rg": 'mse'
    },

    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
)

See how each of the final outputs of our model is referenced here by its name, and how we set the proper compilation options for each of them. Hope you understand this part. Now, time to train the model.

encoder.fit(xtrain, [y_out_a, y_out_b, y_out_c], epochs=30, verbose=2)

Epoch 1/30
1875/1875 - 6s - loss: 117.7318 - 10cls_loss: 3.2642 - 2cls_loss: 0.9040 - 1rg_loss: 113.5637 - 10cls_accuracy: 0.6057 - 2cls_accuracy: 0.8671 - 1rg_mse: 113.5637
Epoch 2/30
1875/1875 - 5s - loss: 62.1696 - 10cls_loss: 0.5151 - 2cls_loss: 0.2437 - 1rg_loss: 61.4109 - 10cls_accuracy: 0.8845 - 2cls_accuracy: 0.9480 - 1rg_mse: 61.4109
Epoch 3/30
1875/1875 - 5s - loss: 50.3159 - 10cls_loss: 0.2804 - 2cls_loss: 0.1371 - 1rg_loss: 49.8985 - 10cls_accuracy: 0.9295 - 2cls_accuracy: 0.9641 - 1rg_mse: 49.8985
...
...
Epoch 28/30
1875/1875 - 5s - loss: 15.5841 - 10cls_loss: 0.1066 - 2cls_loss: 0.0891 - 1rg_loss: 15.3884 - 10cls_accuracy: 0.9726 - 2cls_accuracy: 0.9715 - 1rg_mse: 15.3884
Epoch 29/30
1875/1875 - 5s - loss: 15.2199 - 10cls_loss: 0.1058 - 2cls_loss: 0.0859 - 1rg_loss: 15.0281 - 10cls_accuracy: 0.9736 - 2cls_accuracy: 0.9727 - 1rg_mse: 15.0281
Epoch 30/30
1875/1875 - 5s - loss: 15.2178 - 10cls_loss: 0.1136 - 2cls_loss: 0.0854 - 1rg_loss: 15.0188 - 10cls_accuracy: 0.9722 - 2cls_accuracy: 0.9736 - 1rg_mse: 15.0188
<tensorflow.python.keras.callbacks.History at 0x7ff42c18e110>

That's how each output of the last layer is optimized by its own loss function. FYI, one thing to mention: there is a useful parameter of .compile which you might need: loss_weights, which weights the loss contributions of the different model outputs. See my other answer on this.
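
For illustration (the weight values below are arbitrary, not from the original answer), loss_weights takes a dict keyed by the same output names. Down-weighting the regression term can help here, since its raw MSE is on a much larger scale than the two cross-entropy losses:

encoder.compile(
    loss = {
        "10cls": tf.keras.losses.CategoricalCrossentropy(),
        "2cls": tf.keras.losses.CategoricalCrossentropy(),
        "1rg": tf.keras.losses.MeanSquaredError()
    },
    # illustrative weights: scale down the dominant regression loss
    loss_weights = {"10cls": 1.0, "2cls": 1.0, "1rg": 0.01},
    metrics = {
        "10cls": 'accuracy',
        "2cls": 'accuracy',
        "1rg": 'mse'
    },
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
)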


Prediction / Inference

Let's see some output. We hope this model will predict 3 things: (1) what the digit is, (2) whether it is even or odd, and (3) its square value.

import matplotlib.pyplot as plt
plt.imshow(xtrain[0])

If we like to quickly check the output layers of our model

encoder.output

[<KerasTensor: shape=(None, 10) dtype=float32 (created by layer '10cls')>,
 <KerasTensor: shape=(None, 2) dtype=float32 (created by layer '2cls')>,
 <KerasTensor: shape=(None, 1) dtype=float32 (created by layer '1rg')>]

Passing xtrain[0] (which we know is a 5) to the model to do predictions.

# we expand for a batch dimension: (1, 28, 28, 1)
pred10, pred2, pred1 = encoder.predict(tf.expand_dims(xtrain[0], 0))

# regression: square of the input digit image 
pred1 
array([[22.098022]], dtype=float32)

# even or odd, surely odd 
pred2.argmax()
0

# which number, surely 5
pred10.argmax()
5


Update

Based on your comment, we can extend the above model to take multiple inputs too. We need to change a few things. To demonstrate, we will feed the xtrain and xtest samples of the mnist dataset to the model as its two inputs.

(xtrain, ytrain), (xtest, _) = keras.datasets.mnist.load_data()

xtrain = xtrain[:10000] # both inputs must have the same number of samples
ytrain = ytrain[:10000] # (xtest has only 10000 samples)

y_out_a = keras.utils.to_categorical(ytrain, num_classes=10)
y_out_b = keras.utils.to_categorical((ytrain % 2 == 0).astype(int), num_classes=2)
y_out_c = tf.square(tf.cast(ytrain, tf.float32))

print(xtrain.shape, xtest.shape) 
print(y_out_a.shape, y_out_b.shape, y_out_c.shape)
# (10000, 28, 28) (10000, 28, 28)
# (10000, 10) (10000, 2) (10000,)

Next, we need to modify some parts of the above model to take multiple inputs. If you plot the model again, you will see the new graph.

input0 = keras.Input(shape=(28, 28, 1), name="img2")
input1 = keras.Input(shape=(28, 28, 1), name="img1")
concate_input = layers.Concatenate()([input0, input1])

x = layers.Conv2D(16, 3, activation="relu")(concate_input)
...
...
...
# multi-input , multi-output
encoder = keras.Model( inputs = [input0, input1], 
                       outputs = [out_a, out_b, out_c], name="encoder")
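
The elided lines (...) are the same conv stack and output heads as in the single-input model. For completeness, here is a sketch of the full multi-input model with those layers filled in (a reconstruction that simply reuses the layers defined earlier):

input0 = keras.Input(shape=(28, 28, 1), name="img2")
input1 = keras.Input(shape=(28, 28, 1), name="img1")
concate_input = layers.Concatenate()([input0, input1])

# same conv stack as the single-input model
x = layers.Conv2D(16, 3, activation="relu")(concate_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
x = layers.GlobalMaxPooling2D()(x)

# same three output heads as before
out_a = keras.layers.Dense(10, activation='softmax', name='10cls')(x)
out_b = keras.layers.Dense(2, activation='softmax', name='2cls')(x)
out_c = keras.layers.Dense(1, activation='linear', name='1rg')(x)

# multi-input, multi-output
encoder = keras.Model(inputs=[input0, input1],
                      outputs=[out_a, out_b, out_c], name="encoder")

# compile exactly as in the single-input case before calling fit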

Now, we can train the model as follows

# multi-input, multi-output
encoder.fit([xtrain, xtest], [y_out_a, y_out_b, y_out_c], 
             epochs=30, batch_size = 256, verbose=2)

Epoch 1/30
40/40 - 1s - loss: 66.9731 - 10cls_loss: 0.9619 - 2cls_loss: 0.4412 - 1rg_loss: 65.5699 - 10cls_accuracy: 0.7627 - 2cls_accuracy: 0.8815 - 1rg_mse: 65.5699
Epoch 2/30
40/40 - 0s - loss: 60.5408 - 10cls_loss: 0.8959 - 2cls_loss: 0.3850 - 1rg_loss: 59.2598 - 10cls_accuracy: 0.7794 - 2cls_accuracy: 0.8928 - 1rg_mse: 59.2598
Epoch 3/30
40/40 - 0s - loss: 57.3067 - 10cls_loss: 0.8586 - 2cls_loss: 0.3669 - 1rg_loss: 56.0813 - 10cls_accuracy: 0.7856 - 2cls_accuracy: 0.8951 - 1rg_mse: 56.0813
...
...
Epoch 28/30
40/40 - 0s - loss: 29.1198 - 10cls_loss: 0.4775 - 2cls_loss: 0.2573 - 1rg_loss: 28.3849 - 10cls_accuracy: 0.8616 - 2cls_accuracy: 0.9131 - 1rg_mse: 28.3849
Epoch 29/30
40/40 - 0s - loss: 27.5318 - 10cls_loss: 0.4696 - 2cls_loss: 0.2518 - 1rg_loss: 26.8104 - 10cls_accuracy: 0.8645 - 2cls_accuracy: 0.9142 - 1rg_mse: 26.8104
Epoch 30/30
40/40 - 0s - loss: 27.1581 - 10cls_loss: 0.4620 - 2cls_loss: 0.2446 - 1rg_loss: 26.4515 - 10cls_accuracy: 0.8664 - 2cls_accuracy: 0.9158 - 1rg_mse: 26.4515

Now, we can test the multi-input model and get multi-out from it.

pred10, pred2, pred1 = encoder.predict(
    [
         tf.expand_dims(xtrain[0], 0),
         tf.expand_dims(xtrain[0], 0)
    ]
)

# regression part 
pred1
array([[25.13295]], dtype=float32)

# even or odd 
pred2.argmax()
0

# what digit 
pred10.argmax()
5
