Understanding the values of summary() (Output Shape, Param #)


Problem description

I'm checking the output of the summary function and don't understand all the printed values.

For example, look at this simple code:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

x = [1, 2, 3, 4, 5]
y = [1.2, 1.8, 3.5, 3.7, 5.3]
model = Sequential()
model.add(Dense(10, input_dim=1, activation='relu'))
model.add(Dense(30, activation='relu'))  # input_dim is only needed on the first layer
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
model.summary()

Output:

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 10)                20        
_________________________________________________________________
dense_1 (Dense)              (None, 30)                330       
_________________________________________________________________
dense_2 (Dense)              (None, 10)                310       
_________________________________________________________________
dense_3 (Dense)              (None, 1)                 11        
=================================================================
Total params: 671
Trainable params: 671
Non-trainable params: 0
_________________________________________________________________

  1. What does the None value under the Output Shape column mean?
  2. Which networks will not show None in the summary?
  3. What does the Param # column mean? How is this value calculated?

Recommended answer

  1. The None is just a placeholder saying that the network can take more than one sample at a time: it means this dimension is variable. The first dimension in a Keras model is always the batch size, which is why this dimension is usually omitted when you define your model. For instance, when you write input_shape=(100, 200), you are ignoring the batch size and defining only the shape of each individual sample.

  2. None won't show if you set a fixed batch size. For example, if you send in a batch of 10 images the shape would be (10, 64, 64, 3), and if you changed the batch size to 25 it would be (25, 64, 64, 3).
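As a plain-NumPy sketch (not Keras itself; the weight matrix here is a hypothetical stand-in for a layer), this shows how the leading axis of the output simply follows whatever batch size you feed in:

```python
import numpy as np

# Hypothetical weights: flattened 64x64x3 image -> 10 output units.
weights = np.zeros((64 * 64 * 3, 10))

for batch_size in (10, 25):
    x = np.zeros((batch_size, 64, 64, 3))      # a batch of images
    out = x.reshape(batch_size, -1) @ weights  # layer computation
    print(out.shape)                           # leading dim = batch size
```

This prints (10, 10) and then (25, 10): the layer never fixes the batch dimension, which is exactly what None stands for in the summary.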

  3. For the first Dense layer, the number of params is 20. This is obtained as: 1 (input dimension) * 10 (neurons in the first layer) + 10 (bias values).

For the second Dense layer, the number of params is 330. This is obtained as: 10 (inputs from the previous layer) * 30 (neurons in the second layer) + 30 (bias values for the neurons in the second layer).

For the third Dense layer, the number of params is 310. This is obtained as: 30 (inputs from the previous layer) * 10 (neurons in the third layer) + 10 (bias values for the neurons in the third layer).

For the final layer, the number of params is 11. This is obtained as: 10 (inputs from the previous layer) * 1 (neuron in the final layer) + 1 (bias value for the neuron in the final layer).

Total params = 20 + 330 + 310 + 11 = 671
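The whole Param # column follows one rule: a Dense layer has inputs * units weights plus one bias per unit. A quick sketch that recomputes the column for the model above:

```python
# Layer widths for the model above: input dim, then each Dense layer's units.
sizes = [1, 10, 30, 10, 1]

# Dense params = n_in * n_out (weights) + n_out (biases).
params = [n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:])]

print(params)       # [20, 330, 310, 11]
print(sum(params))  # 671
```

The per-layer values match the Param # column of the summary, and the sum matches the Total params line.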

