Can't replicate a matconvnet CNN architecture in Keras


Question

I have the following architecture of a Convolutional Neural Network in matconvnet which I use to train on my own data:

function net = cnn_mnist_init(varargin)
% CNN_MNIST_LENET Initialize a CNN similar for MNIST
opts.batchNormalization = false ;
opts.networkType = 'simplenn' ;
opts = vl_argparse(opts, varargin) ;

f= 0.0125 ;
net.layers = {} ;
net.layers{end+1} = struct('name','conv1',...
                           'type', 'conv', ...
                           'weights', {{f*randn(3,3,1,64, 'single'), zeros(1, 64, 'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','pool1',...
                           'type', 'pool', ...
                           'method', 'max', ...
                           'pool', [3 3], ...
                           'stride', 1, ...
                           'pad', 0);
net.layers{end+1} = struct('name','conv2',...
                           'type', 'conv', ...
                           'weights', {{f*randn(5,5,64,128, 'single'),zeros(1,128,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','pool2',...
                           'type', 'pool', ...
                           'method', 'max', ...
                           'pool', [2 2], ...
                           'stride', 2, ...
                           'pad', 0) ;
net.layers{end+1} = struct('name','conv3',...
                           'type', 'conv', ...
                           'weights', {{f*randn(3,3,128,256, 'single'),zeros(1,256,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','pool3',...
                           'type', 'pool', ...
                           'method', 'max', ...
                           'pool', [3 3], ...
                           'stride', 1, ...
                           'pad', 0) ;
net.layers{end+1} = struct('name','conv4',...
                           'type', 'conv', ...
                           'weights', {{f*randn(5,5,256,512, 'single'),zeros(1,512,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','pool4',...
                           'type', 'pool', ...
                           'method', 'max', ...
                           'pool', [2 2], ...
                           'stride', 1, ...
                           'pad', 0) ;
net.layers{end+1} = struct('name','ip1',...
                           'type', 'conv', ...
                           'weights', {{f*randn(1,1,256,256, 'single'),  zeros(1,256,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','relu',...
                           'type', 'relu');
net.layers{end+1} = struct('name','classifier',...
                           'type', 'conv', ...
                           'weights', {{f*randn(1,1,256,2, 'single'), zeros(1,2,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','loss',...
                           'type', 'softmaxloss') ;

% optionally switch to batch normalization
if opts.batchNormalization
  net = insertBnorm(net, 1) ;
  net = insertBnorm(net, 4) ;
  net = insertBnorm(net, 7) ;
  net = insertBnorm(net, 10) ;
  net = insertBnorm(net, 13) ;
end

% Meta parameters
net.meta.inputSize = [28 28 1] ;
net.meta.trainOpts.learningRate = [0.01*ones(1,10) 0.001*ones(1,10) 0.0001*ones(1,10)];
disp(net.meta.trainOpts.learningRate);
pause;
net.meta.trainOpts.numEpochs = length(net.meta.trainOpts.learningRate) ;
net.meta.trainOpts.batchSize = 256 ;
net.meta.trainOpts.momentum = 0.9 ;
net.meta.trainOpts.weightDecay = 0.0005 ;

% --------------------------------------------------------------------
function net = insertBnorm(net, l)
% --------------------------------------------------------------------
assert(isfield(net.layers{l}, 'weights'));
ndim = size(net.layers{l}.weights{1}, 4);
layer = struct('type', 'bnorm', ...
               'weights', {{ones(ndim, 1, 'single'), zeros(ndim, 1, 'single')}}, ...
               'learningRate', [1 1], ...
               'weightDecay', [0 0]) ;
net.layers{l}.biases = [] ;
net.layers = horzcat(net.layers(1:l), layer, net.layers(l+1:end)) ;

What I want to do is build the same architecture in Keras; this is what I have tried so far:

import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense

input_shape = (28, 28, 1)  # matches net.meta.inputSize
num_classes = 2            # the MatConvNet classifier has 2 outputs

model = Sequential()

model.add(Conv2D(64, (3, 3), strides=1, input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(3, 3), strides=1))

model.add(Conv2D(128, (5, 5), strides=1))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))

model.add(Conv2D(256, (3, 3), strides=1))
model.add(MaxPooling2D(pool_size=(3, 3), strides=1))

model.add(Conv2D(512, (5, 5), strides=1))
model.add(MaxPooling2D(pool_size=(2, 2), strides=1))

model.add(Conv2D(256, (1, 1)))
convout1 = Activation('relu')
model.add(convout1)

model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))

opt = keras.optimizers.rmsprop(lr=0.0001, decay=0.0005)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['binary_accuracy'])

However, when I run the matconvnet network I get 87% accuracy, and if I run the Keras version I get 77% accuracy. If they are supposed to be the same network and the data is the same, where is the difference? What is wrong in my Keras architecture?

Answer

In your MatConvNet version, you use SGD with momentum.

In Keras, you use rmsprop.

With a different learning rule you should try different learning rates. Momentum is also sometimes helpful when training a CNN.

Could you try the SGD+momentum in Keras and let me know what happens?
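A minimal sketch of that suggestion, mirroring the `net.meta.trainOpts` values from the question (learning rate 0.01, momentum 0.9). Note that MatConvNet's weightDecay of 0.0005 is L2 weight decay, which in Keras is expressed per layer through a kernel regularizer rather than an optimizer argument; the tiny one-layer model below is only a hypothetical stand-in to show where each value goes:

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
from keras.regularizers import l2

# SGD with momentum, matching net.meta.trainOpts.learningRate(1) = 0.01
# and net.meta.trainOpts.momentum = 0.9 (older Keras spells it lr=0.01).
opt = SGD(learning_rate=0.01, momentum=0.9)

# Tiny stand-in model, just to show where the 0.0005 weight decay goes.
model = Sequential([Dense(2, activation='softmax', input_shape=(4,),
                          kernel_regularizer=l2(0.0005))])
model.compile(loss='categorical_crossentropy', optimizer=opt,
              metrics=['accuracy'])
```

To reproduce MatConvNet's step schedule (0.01 for 10 epochs, then 0.001, then 0.0001) you could additionally pass a `LearningRateScheduler` callback to `model.fit`.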

Another thing that might be different is the initialization. For example, in MatConvNet you use Gaussian initialization with f = 0.0125 as the standard deviation. In Keras I'm not sure about the default initialization.
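One way to rule that out is to pass an explicit Gaussian initializer so the Keras layers match the `f*randn(...)` weights and zero biases above. A sketch, using the question's first conv layer as the example:

```python
from keras.initializers import RandomNormal
from keras.layers import Conv2D

# Gaussian weights with std 0.0125, like f*randn(3,3,1,64) in MatConvNet,
# and zero biases like zeros(1,64).
gauss = RandomNormal(mean=0.0, stddev=0.0125)
conv1 = Conv2D(64, (3, 3), strides=1,
               kernel_initializer=gauss,
               bias_initializer='zeros',
               input_shape=(28, 28, 1))
```

The same `kernel_initializer=gauss` argument would be repeated on every Conv2D and Dense layer.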

In general, if you don't use batch normalization, the network is prone to many numerical issues. If you use batch normalization in both networks, I bet the results would be similar. Is there any reason you don't want to use batch normalization?
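On the Keras side this is just a `BatchNormalization` layer after each convolution, analogous to what `insertBnorm` does in the MatConvNet code above. A sketch showing only the first block:

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization

model = Sequential()
model.add(Conv2D(64, (3, 3), strides=1, input_shape=(28, 28, 1)))
model.add(BatchNormalization())  # like insertBnorm(net, 1)
model.add(MaxPooling2D(pool_size=(3, 3), strides=1))
```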
