Multi-input Multi-output Model with Keras Functional API


Problem Description


As described in figure 1, I have 3 models which each apply to a particular domain.

The 3 models are trained separately with different datasets.

And inference is sequential:

I tried to parallelize the calls to these 3 models using Python's multiprocessing library, but it is very unstable and not advised.

Here's the idea I came up with to do all of this at once:

As the 3 models share a common pretrained model, I want to make a single model that has multiple inputs and multiple outputs.

As the following drawing shows:

That way, during inference I will call a single model that performs all 3 operations at the same time.

I saw that this is possible with the Functional API of Keras, but I have no idea how to do it. The inputs of the datasets have the same dimensions: these are pictures of (200, 200, 3).

If anyone has an example of a multi-input multi-output model that shares a common structure, that would be great.

UPDATE

Here is an example of my code, but it returns an error because of the layers.concatenate(...) line, which propagates a shape that is not accepted by the EfficientNet model.

age_inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3), name="age_inputs")
gender_inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3), name="gender_inputs")
emotion_inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3), name="emotion_inputs")

inputs = layers.concatenate([age_inputs, gender_inputs, emotion_inputs])
inputs = layers.Conv2D(3, (3, 3), activation="relu")(inputs)
model = EfficientNetB0(include_top=False, input_tensor=inputs, weights="imagenet")

model.trainable = False

inputs = layers.GlobalAveragePooling2D(name="avg_pool")(model.output)
inputs = layers.BatchNormalization()(inputs)

top_dropout_rate = 0.2
inputs = layers.Dropout(top_dropout_rate, name="top_dropout")(inputs)

age_outputs = layers.Dense(1, activation="linear", name="age_pred")(inputs)
gender_outputs = layers.Dense(GENDER_NUM_CLASSES, activation="softmax",
                              name="gender_pred")(inputs)
emotion_outputs = layers.Dense(EMOTION_NUM_CLASSES, activation="softmax",
                               name="emotion_pred")(inputs)

model = keras.Model(inputs=[age_inputs, gender_inputs, emotion_inputs],
                    outputs=[age_outputs, gender_outputs, emotion_outputs],
                    name="EfficientNet")

optimizer = keras.optimizers.Adam(learning_rate=1e-2)
model.compile(loss={"age_pred": "mse",
                    "gender_pred": "categorical_crossentropy",
                    "emotion_pred": "categorical_crossentropy"},
              optimizer=optimizer, metrics=["accuracy"])

(age_train_images, age_train_labels), (age_test_images, age_test_labels) = reg_data_loader.load_data(...)
(gender_train_images, gender_train_labels), (gender_test_images, gender_test_labels) = cat_data_loader.load_data(...)
(emotion_train_images, emotion_train_labels), (emotion_test_images, emotion_test_labels) = cat_data_loader.load_data(...)

model.fit({'age_inputs': age_train_images, 'gender_inputs': gender_train_images, 'emotion_inputs': emotion_train_images},
          {'age_pred': age_train_labels, 'gender_pred': gender_train_labels, 'emotion_pred': emotion_train_labels},
          validation_split=0.2, epochs=5, batch_size=16)
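
For reference, a minimal sketch (assuming IMG_SIZE = 200, matching the (200, 200, 3) pictures described above) of how the shapes propagate through the two lines in question; the channel concatenation yields 9 channels and the valid-padded Conv2D shrinks the spatial size, which is one way the tensor can fall out of step with what EfficientNetB0 expects:

from tensorflow.keras import layers

IMG_SIZE = 200  # assumption: matches the question's image size

a = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
b = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))
c = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3))

x = layers.concatenate([a, b, c])                   # -> (None, 200, 200, 9)
x = layers.Conv2D(3, (3, 3), activation="relu")(x)  # valid padding -> (None, 198, 198, 3)
print(x.shape)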

Solution

We can do that easily in tf.keras using its awesome Functional API. Here we will walk you through how to build a multi-output model with different output types (classification and regression) using the Functional API.

According to your last diagram, you need one input model and three outputs of different types. To demonstrate, we will use MNIST, a handwritten-digit dataset that is normally a 10-class classification problem. From it we will additionally create a 2-class classifier (whether a digit is even or odd) and a regression part (predicting the square of a digit, i.e. for an image input of 9, it should give approximately its square, 81).


Data Set

import numpy as np 
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

(xtrain, ytrain), (_, _) = keras.datasets.mnist.load_data()

# 10 class classifier 
y_out_a = keras.utils.to_categorical(ytrain, num_classes=10) 

# 2 class classifier, even or odd 
y_out_b = keras.utils.to_categorical((ytrain % 2 == 0).astype(int), num_classes=2) 

# regression, predict square of an input digit image
y_out_c = tf.square(tf.cast(ytrain, tf.float32))

So, our training pairs will be xtrain and [y_out_a, y_out_b, y_out_c], same as your last diagram.
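
As a quick sanity check on those pairs, the shapes for the full MNIST training split look like this:

print(xtrain.shape)   # (60000, 28, 28)
print(y_out_a.shape)  # (60000, 10)
print(y_out_b.shape)  # (60000, 2)
print(y_out_c.shape)  # (60000,)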


Model Building

Let's build the model accordingly using the Functional API of tf.keras. See the model definition below. The MNIST samples are 28 x 28 grayscale images, so our input is set that way. I'm guessing your data set is probably RGB, so change the input dimension accordingly.

input = keras.Input(shape=(28, 28, 1), name="original_img")
x = layers.Conv2D(16, 3, activation="relu")(input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
x = layers.GlobalMaxPooling2D()(x)

out_a = keras.layers.Dense(10, activation='softmax', name='10cls')(x)
out_b = keras.layers.Dense(2, activation='softmax', name='2cls')(x)
out_c = keras.layers.Dense(1, activation='linear', name='1rg')(x)

encoder = keras.Model( inputs = input, outputs = [out_a, out_b, out_c], name="encoder")

# Let's plot 
keras.utils.plot_model(
    encoder
)

One thing to note: while defining out_a, out_b, and out_c during model definition, we set their name variables, which is very important. Their names are set to '10cls', '2cls', and '1rg' respectively. You can also see this in the above diagram (the last 3 tails).
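
If you want to check those names programmatically, the model exposes them directly (a quick sketch using the encoder built above):

# these are the keys that compile()'s loss and metrics dicts must match
print(encoder.output_names)  # ['10cls', '2cls', '1rg']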


Compile and Run

Now we can see why that name variable is important. In order to run the model, we first need to compile it with the proper loss function, metrics, and optimizer. For classification and regression problems the optimizer can be the same, but the loss function and metrics should change per output type. In our model, which has multi-type outputs (2 classifications and 1 regression), we need to set a proper loss and metric for each of these types. Please see below how it's done.

encoder.compile(
    loss = {
        "10cls": tf.keras.losses.CategoricalCrossentropy(),
        "2cls": tf.keras.losses.CategoricalCrossentropy(),
        "1rg": tf.keras.losses.MeanSquaredError()
    },

    metrics = {
        "10cls": 'accuracy',
        "2cls": 'accuracy',
        "1rg": 'mse'
    },

    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
)

Each of the final outputs of the model above is referenced here by its name variable, and we set the proper compilation options for each of them. Hope you understand this part. Now, time to train the model.

encoder.fit(xtrain, [y_out_a, y_out_b, y_out_c], epochs=30, verbose=2)

Epoch 1/30
1875/1875 - 6s - loss: 117.7318 - 10cls_loss: 3.2642 - 4cls_loss: 0.9040 - 1rg_loss: 113.5637 - 10cls_accuracy: 0.6057 - 4cls_accuracy: 0.8671 - 1rg_mse: 113.5637
Epoch 2/30
1875/1875 - 5s - loss: 62.1696 - 10cls_loss: 0.5151 - 4cls_loss: 0.2437 - 1rg_loss: 61.4109 - 10cls_accuracy: 0.8845 - 4cls_accuracy: 0.9480 - 1rg_mse: 61.4109
Epoch 3/30
1875/1875 - 5s - loss: 50.3159 - 10cls_loss: 0.2804 - 4cls_loss: 0.1371 - 1rg_loss: 49.8985 - 10cls_accuracy: 0.9295 - 4cls_accuracy: 0.9641 - 1rg_mse: 49.8985


Epoch 28/30
1875/1875 - 5s - loss: 15.5841 - 10cls_loss: 0.1066 - 4cls_loss: 0.0891 - 1rg_loss: 15.3884 - 10cls_accuracy: 0.9726 - 4cls_accuracy: 0.9715 - 1rg_mse: 15.3884
Epoch 29/30
1875/1875 - 5s - loss: 15.2199 - 10cls_loss: 0.1058 - 4cls_loss: 0.0859 - 1rg_loss: 15.0281 - 10cls_accuracy: 0.9736 - 4cls_accuracy: 0.9727 - 1rg_mse: 15.0281
Epoch 30/30
1875/1875 - 5s - loss: 15.2178 - 10cls_loss: 0.1136 - 4cls_loss: 0.0854 - 1rg_loss: 15.0188 - 10cls_accuracy: 0.9722 - 4cls_accuracy: 0.9736 - 1rg_mse: 15.0188
<tensorflow.python.keras.callbacks.History at 0x7ff42c18e110>

That's how each output of the last layer is optimized by its own loss function. FYI, one thing to mention: there is an essential parameter of .compile which you might need: loss_weights - to weight the loss contributions of different model outputs. See my other answer on this.
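
For illustration, here is a sketch of how loss_weights could be added to the compile call above; the 1 : 1 : 0.01 weighting is an assumption chosen for demonstration, not a tuned value:

encoder.compile(
    loss = {
        "10cls": tf.keras.losses.CategoricalCrossentropy(),
        "2cls": tf.keras.losses.CategoricalCrossentropy(),
        "1rg": tf.keras.losses.MeanSquaredError()
    },

    # hypothetical weights: damp the large squared-error term so it
    # does not dominate the two classification losses
    loss_weights = {"10cls": 1.0, "2cls": 1.0, "1rg": 0.01},

    metrics = {
        "10cls": 'accuracy',
        "2cls": 'accuracy',
        "1rg": 'mse'
    },

    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
)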


Prediction / Inference

Let's see some output. We hope this model will predict 3 things: (1) what the digit is, (2) whether it is even or odd, and (3) its square value.

import matplotlib.pyplot as plt
plt.imshow(xtrain[0])

If we like to quickly check the output layers of our model

encoder.output

[<KerasTensor: shape=(None, 10) dtype=float32 (created by layer '10cls')>,
 <KerasTensor: shape=(None, 2) dtype=float32 (created by layer '4cls')>,
 <KerasTensor: shape=(None, 1) dtype=float32 (created by layer '1rg')>]

Passing this xtrain[0] (which we know is 5) to the model to get predictions.

# we expand for a batch dimension: (1, 28, 28, 1)
pred10, pred2, pred1 = encoder.predict(tf.expand_dims(xtrain[0], 0))

# regression: square of the input digit image
pred1 
array([[22.098022]], dtype=float32)

# even or odd, surely odd 
pred2.argmax()
0

# which number, surely 5
pred10.argmax()
5
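
To make the three raw outputs easier to read, here is a small hypothetical helper; readable_prediction is our own name and not part of the answer or of Keras:

def readable_prediction(pred10, pred2, pred1):
    digit = int(pred10.argmax())
    # class 1 encodes (ytrain % 2 == 0), so argmax 1 means even, 0 means odd
    parity = "even" if pred2.argmax() == 1 else "odd"
    square = float(pred1[0][0])
    return digit, parity, square

print(readable_prediction(pred10, pred2, pred1))
# (5, 'odd', 22.098022)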


Update

Based on your comment, we can extend the above model to take multiple inputs too. A few things need to change. To demonstrate, we will feed the xtrain and xtest samples of the mnist data set to the model as two inputs.

(xtrain, ytrain), (xtest, _) = keras.datasets.mnist.load_data()

xtrain = xtrain[:10000] # both inputs must have the same number of samples
ytrain = ytrain[:10000] # both inputs must have the same number of samples

y_out_a = keras.utils.to_categorical(ytrain, num_classes=10)
y_out_b = keras.utils.to_categorical((ytrain % 2 == 0).astype(int), num_classes=2)
y_out_c = tf.square(tf.cast(ytrain, tf.float32))

print(xtrain.shape, xtest.shape) 
print(y_out_a.shape, y_out_b.shape, y_out_c.shape)
# (10000, 28, 28) (10000, 28, 28)
# (10000, 10) (10000, 2) (10000,)

Next, we need to modify some parts of the above model to take multiple inputs. If you plot the model now, you will see the new graph.

input0 = keras.Input(shape=(28, 28, 1), name="img2")
input1 = keras.Input(shape=(28, 28, 1), name="img1")
concate_input = layers.Concatenate()([input0, input1])

x = layers.Conv2D(16, 3, activation="relu")(concate_input)
...
...
...
# multi-input , multi-output
encoder = keras.Model( inputs = [input0, input1], 
                       outputs = [out_a, out_b, out_c], name="encoder")
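
For completeness, here is a sketch of the full multi-input definition with the elided middle filled in, assuming the same stack of layers and the same three heads as the single-input model above:

input0 = keras.Input(shape=(28, 28, 1), name="img2")
input1 = keras.Input(shape=(28, 28, 1), name="img1")

# the two (28, 28, 1) inputs are concatenated on the channel axis -> (28, 28, 2)
concate_input = layers.Concatenate()([input0, input1])

x = layers.Conv2D(16, 3, activation="relu")(concate_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
x = layers.GlobalMaxPooling2D()(x)

out_a = keras.layers.Dense(10, activation='softmax', name='10cls')(x)
out_b = keras.layers.Dense(2, activation='softmax', name='2cls')(x)
out_c = keras.layers.Dense(1, activation='linear', name='1rg')(x)

# multi-input, multi-output
encoder = keras.Model(inputs=[input0, input1],
                      outputs=[out_a, out_b, out_c], name="encoder")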

Now, we can train the model as follows

# multi-input, multi-output
encoder.fit([xtrain, xtest], [y_out_a, y_out_b, y_out_c], 
             epochs=30, batch_size = 256, verbose=2)

Epoch 1/30
40/40 - 1s - loss: 66.9731 - 10cls_loss: 0.9619 - 2cls_loss: 0.4412 - 1rg_loss: 65.5699 - 10cls_accuracy: 0.7627 - 2cls_accuracy: 0.8815 - 1rg_mse: 65.5699
Epoch 2/30
40/40 - 0s - loss: 60.5408 - 10cls_loss: 0.8959 - 2cls_loss: 0.3850 - 1rg_loss: 59.2598 - 10cls_accuracy: 0.7794 - 2cls_accuracy: 0.8928 - 1rg_mse: 59.2598
Epoch 3/30
40/40 - 0s - loss: 57.3067 - 10cls_loss: 0.8586 - 2cls_loss: 0.3669 - 1rg_loss: 56.0813 - 10cls_accuracy: 0.7856 - 2cls_accuracy: 0.8951 - 1rg_mse: 56.0813
...
...
Epoch 28/30
40/40 - 0s - loss: 29.1198 - 10cls_loss: 0.4775 - 2cls_loss: 0.2573 - 1rg_loss: 28.3849 - 10cls_accuracy: 0.8616 - 2cls_accuracy: 0.9131 - 1rg_mse: 28.3849
Epoch 29/30
40/40 - 0s - loss: 27.5318 - 10cls_loss: 0.4696 - 2cls_loss: 0.2518 - 1rg_loss: 26.8104 - 10cls_accuracy: 0.8645 - 2cls_accuracy: 0.9142 - 1rg_mse: 26.8104
Epoch 30/30
40/40 - 0s - loss: 27.1581 - 10cls_loss: 0.4620 - 2cls_loss: 0.2446 - 1rg_loss: 26.4515 - 10cls_accuracy: 0.8664 - 2cls_accuracy: 0.9158 - 1rg_mse: 26.4515

Now, we can test the multi-input model and get multiple outputs from it.

pred10, pred2, pred1 = encoder.predict(
    [
         tf.expand_dims(xtrain[0], 0),
         tf.expand_dims(xtrain[0], 0)
    ]
)

# regression part 
pred1
array([[25.13295]], dtype=float32)

# even or odd 
pred2.argmax()
0

# what digit 
pred10.argmax()
5
