How to use the Spatial Pyramid Layer in caffe in proto files?

Question

Hi I would like to know how to use the SPP Layer in a proto file. Maybe someone could explain to me how to read the caffe docs, as it is sometimes hard for me to understand it directly.

My attempt is based on this proto file, but I think it differs from the current version?

I defined the layer like this:

layers {
  name: "spatial_pyramid_pooling"
  type: "SPP"
  bottom: "conv2"
  top: "spatial_pyramid_pooling"
  spatial_pyramid_pooling_param {
    pool: MAX
    spatial_bin: 1
    spatial_bin: 2
    spatial_bin: 3
    spatial_bin: 6
    scale: 1
  }
}


When I try to start training, I get the following error message:

[libprotobuf ERROR google/protobuf/text_format.cc:287] Error parsing text-format caffe.NetParameter: 137:9: Expected integer or identifier, got: "SPP"
F0714 13:25:38.782958 2061316096 upgrade_proto.cpp:88] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file:


The complete proto file (LeNet with batch normalization and SPP):

name: "TessDigitMean"
layer {
  name: "input"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "/Users/rvaldez/Documents/Datasets/Digits/SeperatedProviderV3_1020_batchnormalizedV2AndSPP/1/caffe/train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "input"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "/Users/rvaldez/Documents/Datasets/Digits/SeperatedProviderV3_1020_batchnormalizedV2AndSPP/1/caffe/test_lmdb"
    batch_size: 10
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "bn1"
  type: "BatchNorm"
  bottom: "pool1"
  top: "bn1"
  batch_norm_param {
    use_global_stats: false
  }
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
  include {
    phase: TRAIN
  }
}
layer {
  name: "bn1"
  type: "BatchNorm"
  bottom: "pool1"
  top: "bn1"
  batch_norm_param {
    use_global_stats: true
  }
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
  include {
    phase: TEST
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "bn1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layers {
  name: "spatial_pyramid_pooling"
  type: "SPP"
  bottom: "conv2"
  top: "spatial_pyramid_pooling"
  spatial_pyramid_pooling_param {
    pool: MAX
    spatial_bin: 1
    spatial_bin: 2
    spatial_bin: 3
    spatial_bin: 6
    scale: 1
  }
}
layer {
  name: "bn2"
  type: "BatchNorm"
  bottom: "spatial_pyramid_pooling"
  top: "bn2"
  batch_norm_param {
    use_global_stats: false
  }
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
  include {
    phase: TRAIN
  }
}
layer {
  name: "bn2"
  type: "BatchNorm"
  bottom: "pool2"
  top: "bn2"
  batch_norm_param {
    use_global_stats: true
  }
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
  include {
    phase: TEST
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "bn2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}

Answer

Ok, I found it.

The correct way to define the SPP layer is the following:

layer {
  name: "spatial_pyramid_pooling"
  type: "SPP"
  bottom: "conv2"
  top: "pool2"
  spp_param {
    pyramid_height: 2
  }
} 

Note that I had previously written layers instead of layer. Furthermore, you can specify parameters for this layer inside spp_param{}. The official version of Caffe does not offer bins as an option, but instead a pyramid height. So the version my first try was based on is incorrect.
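To see what pyramid_height controls: as I understand Caffe's SPP implementation, pyramid level i pools the input feature map into a 2^i x 2^i grid of bins, and all bins from all levels are concatenated into one fixed-length vector, independent of the input's spatial size. A minimal sketch of the resulting output length (the helper name is mine, not part of Caffe):

```python
def spp_output_size(num_channels, pyramid_height):
    """Length of the flattened SPP output vector.

    Pyramid level i pools the feature map into a (2**i x 2**i) grid of
    bins; each bin yields one value per channel, and the bins from all
    levels are concatenated.
    """
    return num_channels * sum((2 ** i) ** 2 for i in range(pyramid_height))

# conv2 above has num_output: 50; with pyramid_height: 2 the pyramid has
# a 1x1 and a 2x2 level, so the SPP output has 50 * (1 + 4) = 250 values,
# regardless of the height/width of conv2's output.
print(spp_output_size(50, 2))
```

This fixed output length is what lets the fully connected ip1 layer follow the SPP layer even when input images vary in size.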

Some notes for myself and anyone who is new to Caffe and a bit confused by the style of the docs.

Docs:

  • Layer type: SPP

...

message SPPParameter {
  enum PoolMethod {
    MAX = 0;
    AVE = 1;
    STOCHASTIC = 2;
  }
  optional uint32 pyramid_height = 1;
  optional PoolMethod pool = 2 [default = MAX]; // The pooling method
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 6 [default = DEFAULT];
}

Notes:

  • Layer type defines the keyword used to declare the type of a layer in the proto file (kind of logical once you know it)

  • Enums list the possible values of a parameter.

Parameters cannot be defined on the same level as type or name. Instead, you have to wrap them inside a layer-specific parameter keyword (spp_param). This keyword is built as <layertype>_param{}, in lowercase letters.
