Output of model.summary() is not as expected in TensorFlow 2
Problem description
I've defined a complex deep learning model, but for the purpose of this question, I'll use a simple one.
Consider the following:
import tensorflow as tf
from tensorflow.keras import layers, models

def simpleMLP(in_size, hidden_sizes, num_classes, dropout_prob=0.5):
    in_x = layers.Input(shape=(in_size,))
    hidden_x = models.Sequential(name="hidden_layers")
    for i, num_h in enumerate(hidden_sizes):
        hidden_x.add(layers.Dense(num_h, input_shape=(in_size,) if i == 0 else []))
        hidden_x.add(layers.Activation('relu'))
        hidden_x.add(layers.Dropout(dropout_prob))
    out_x = layers.Dense(num_classes, activation='softmax', name='baseline')
    return models.Model(inputs=in_x, outputs=out_x(hidden_x(in_x)))
I will call the function in the following manner:
mdl = simpleMLP(28*28, [500, 300], 10)
Now when I do mdl.summary() I get the following:
Model: "functional_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 784)]             0
_________________________________________________________________
hidden_layers (Sequential)   (None, 300)               542800
_________________________________________________________________
baseline (Dense)             (None, 10)                3010
=================================================================
Total params: 545,810
Trainable params: 545,810
Non-trainable params: 0
_________________________________________________________________
The problem is that the Sequential block is condensed to a single row, showing only the final output shape together with the summed parameter count of all its layers.
In my complex model, I have multiple Sequential blocks that are all hidden.
Is there a way to make it more verbose? Am I doing something wrong in the model definition?
When using pytorch I don't see the same behaviour, given the following example (taken from here):
import torch
import torch.nn as nn

class MyCNNClassifier(nn.Module):
    def __init__(self, in_c, n_classes):
        super().__init__()
        self.conv_block1 = nn.Sequential(
            nn.Conv2d(in_c, 32, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU()
        )
        self.conv_block2 = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU()
        )
        self.decoder = nn.Sequential(
            nn.Linear(32 * 28 * 28, 1024),
            nn.Sigmoid(),
            nn.Linear(1024, n_classes)
        )

    def forward(self, x):
        x = self.conv_block1(x)
        x = self.conv_block2(x)
        x = x.view(x.size(0), -1)  # flatten before the linear decoder
        x = self.decoder(x)
        return x
When I print the model I get:
MyCNNClassifier(
  (conv_block1): Sequential(
    (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (conv_block2): Sequential(
    (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU()
  )
  (decoder): Sequential(
    (0): Linear(in_features=25088, out_features=1024, bias=True)
    (1): Sigmoid()
    (2): Linear(in_features=1024, out_features=10, bias=True)
  )
)
Recommended answer
There is nothing wrong with the model summary in TensorFlow 2.x.
import tensorflow as tf
from tensorflow.keras import layers, models

def simpleMLP(in_size, hidden_sizes, num_classes, dropout_prob=0.5):
    in_x = layers.Input(shape=(in_size,))
    hidden_x = models.Sequential(name="hidden_layers")
    for i, num_h in enumerate(hidden_sizes):
        hidden_x.add(layers.Dense(num_h, input_shape=(in_size,) if i == 0 else []))
        hidden_x.add(layers.Activation('relu'))
        hidden_x.add(layers.Dropout(dropout_prob))
    out_x = layers.Dense(num_classes, activation='softmax', name='baseline')
    return models.Model(inputs=in_x, outputs=out_x(hidden_x(in_x)))
mdl = simpleMLP(28*28, [500, 300], 10)
mdl.summary()
Output:
Model: "functional_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 784)]             0
_________________________________________________________________
hidden_layers (Sequential)   (None, 300)               542800
_________________________________________________________________
baseline (Dense)             (None, 10)                3010
=================================================================
Total params: 545,810
Trainable params: 545,810
Non-trainable params: 0
_________________________________________________________________
You can use get_layer to retrieve a layer by either its name or index.
If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).
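Since the Sequential block was given an explicit name in simpleMLP, retrieving it by name works just as well; a minimal sketch:

# Look up the nested block by the name it was given in simpleMLP
# ("hidden_layers") and print its layer-by-layer summary.
mdl.get_layer("hidden_layers").summary()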
Here, to get the details of the Sequential layer (i.e. the one indexed at 1 in mdl), you can try
mdl.get_layer(index=1).summary()
Output:
Model: "hidden_layers"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_2 (Dense)              (None, 500)               392500
_________________________________________________________________
activation_2 (Activation)    (None, 500)               0
_________________________________________________________________
dropout_2 (Dropout)          (None, 500)               0
_________________________________________________________________
dense_3 (Dense)              (None, 300)               150300
_________________________________________________________________
activation_3 (Activation)    (None, 300)               0
_________________________________________________________________
dropout_3 (Dropout)          (None, 300)               0
=================================================================
Total params: 542,800
Trainable params: 542,800
Non-trainable params: 0
_________________________________________________________________
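If, as in your complex model, there are several nested Sequential blocks, you can loop over the model's layers and print a sub-summary for each one; a minimal sketch (the isinstance check is simply one way to pick out the nested blocks, since Sequential subclasses tf.keras.Model):

# Print a detailed summary for every nested block in the model;
# Sequential is a subclass of tf.keras.Model, so this catches them all.
for layer in mdl.layers:
    if isinstance(layer, tf.keras.Model):
        layer.summary()

Recent Keras releases also expose an expand_nested flag, so mdl.summary(expand_nested=True) prints nested models inline in a single summary, if your TensorFlow version supports it.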