keras model.get_weight is not returning results in expected dimensions


Problem Description


I am doing classification on the MNIST dataset using Keras. I am interested in performing some operations on the weight matrices generated after training, but the weight matrices of some layers look as if they are not fully connected.

from keras.models import Sequential  # or: from tensorflow.keras.models import Sequential
from keras.layers import Dense       # or: from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(1000, input_shape=(train_x.shape[1],), activation='relu'))
model.add(Dense(1000, activation='relu'))
model.add(Dense(500, activation='relu'))
model.add(Dense(200, activation='relu'))
model.add(Dense(10, activation='softmax'))

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_x, train_y, epochs=10, validation_data=(test_x, test_y))

w = model.get_weights()

# print the shapes of the first five weight arrays
for i in range(5):
    print(w[i].shape)


Now, when I print the dimensions of the weight matrices of each layer, I get the following result:

(784, 1000)
(1000,)
(1000, 1000)
(1000,)
(1000, 500)


Why does the 2nd have shape (1000,) and not (1000, 1000)?

Answer


Because it is the bias. Don't forget that a Dense layer is defined by y = xW + b (sometimes also written as y = Wx + b).
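A minimal NumPy sketch of that definition, using the first layer's shapes from the question (the random data is just a stand-in for real inputs):

```python
import numpy as np

x = np.random.rand(3, 784)     # a hypothetical batch of 3 flattened MNIST images
W = np.random.rand(784, 1000)  # kernel of the first Dense layer, shape (784, 1000)
b = np.zeros(1000)             # bias vector, shape (1000,)

y = np.matmul(x, W) + b  # bias is broadcast across the batch dimension
print(y.shape)           # (3, 1000)
```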


Suppose the shape of x is (None, 784) and the shape of the weights W is (784, 1000). The matmul(x, W) operation results in shape (None, 1000). To the resulting tensor of this shape you add a bias of shape (1000,), which is broadcast along the None (batch) dimension.
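So `model.get_weights()` returns kernels and biases interleaved: [W1, b1, W2, b2, ...], which is why w[1] has shape (1000,). A sketch with placeholder zero arrays matching the shapes from the question's model (in real code you could also call `layer.get_weights()` on each layer to get its [kernel, bias] pair directly):

```python
import numpy as np

# Shapes as returned by get_weights() for the model above: kernels and
# biases alternate, one pair per Dense layer.
shapes = [(784, 1000), (1000,), (1000, 1000), (1000,), (1000, 500),
          (500,), (500, 200), (200,), (200, 10), (10,)]
w = [np.zeros(s) for s in shapes]  # placeholder for model.get_weights()

kernels = w[0::2]  # the fully connected weight matrices
biases = w[1::2]   # the bias vectors

print([k.shape for k in kernels])  # [(784, 1000), (1000, 1000), (1000, 500), (500, 200), (200, 10)]
```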

