Converting Keras (Tensorflow) convolutional neural networks to PyTorch convolutional networks?


Problem description


Keras and PyTorch use different arguments for padding: Keras expects a string, while PyTorch works with numbers. What is the difference, and how can one be translated into the other (i.e., what code gives equivalent results in either framework)?

PyTorch also takes the arguments in_channels and out_channels, while Keras only takes an argument called filters. What does 'filters' mean?

Solution

Regarding padding,

Keras => 'valid' - no padding; 'same' - the input is padded so that the output shape is the same as the input shape

PyTorch => you explicitly specify the amount of padding

Valid padding

>>> import keras
>>> model = keras.Sequential()
>>> model.add(keras.layers.Conv2D(filters=10, kernel_size=3, padding='valid', input_shape=(28, 28, 3)))
>>> model.layers[0].output_shape
(None, 26, 26, 10)

>>> import torch
>>> x = torch.randn((1, 3, 28, 28))
>>> conv = torch.nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3)
>>> conv(x).shape
torch.Size([1, 10, 26, 26])

Same padding

>>> model = keras.Sequential()
>>> model.add(keras.layers.Conv2D(filters=10, kernel_size=3, padding='same', input_shape=(28,28,3)))
>>> model.layers[0].output_shape
(None, 28, 28, 10)

>>> x = torch.randn((1,3,28,28))
>>> conv = torch.nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3, padding=1)
>>> conv(x).shape
torch.Size([1, 10, 28, 28])
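The two examples above suggest a small translation helper. This is a minimal sketch (the function name is mine, not from either framework); it assumes stride 1 and an odd kernel size, where a single symmetric padding number in PyTorch reproduces Keras's 'same' behavior:

```python
def keras_padding_to_pytorch(padding, kernel_size):
    # 'valid' -> no padding; 'same' -> pad so the output size equals the input size.
    # For stride 1 and an odd kernel_size, (kernel_size - 1) // 2 achieves that.
    if padding == 'valid':
        return 0
    if padding == 'same':
        assert kernel_size % 2 == 1, "even kernels need asymmetric padding"
        return (kernel_size - 1) // 2
    raise ValueError(f"unknown padding: {padding!r}")

print(keras_padding_to_pytorch('valid', 3))  # 0
print(keras_padding_to_pytorch('same', 3))   # 1
```

For even kernel sizes, 'same' in Keras pads asymmetrically, which plain Conv2d padding cannot express; in that case you would pad the input separately (e.g. with torch.nn.functional.pad) before the convolution.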

W - Input Width, F - Filter(or kernel) size, P - padding, S - Stride, Wout - Output width

Wout = ((W−F+2P)/S)+1

Similarly for Height. With this formula, you can calculate the amount of padding required to retain the input width or height in the output.

http://cs231n.github.io/convolutional-networks/
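As a sanity check, the formula above can be evaluated directly for the two examples (a minimal sketch; the helper name conv_out_width is mine):

```python
def conv_out_width(w, f, p, s=1):
    # Wout = ((W - F + 2P) / S) + 1
    return (w - f + 2 * p) // s + 1

# 'valid' padding: P = 0
print(conv_out_width(28, 3, 0))  # 26
# 'same' padding with a 3x3 kernel and stride 1: P = 1
print(conv_out_width(28, 3, 1))  # 28
```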

Regarding in_channels, out_channels and filters,

filters is the same as out_channels. In Keras, in_channels is inferred automatically from the previous layer's shape, or from input_shape in the case of the first layer.
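One way to see the correspondence is to count parameters: filters=10 in Keras and out_channels=10 in PyTorch describe the same layer, so both give the same parameter count (a pure-Python sketch of the standard formula; the helper name is mine, no frameworks needed):

```python
def conv2d_param_count(in_channels, out_channels, kernel_size):
    # Each of the out_channels filters spans all in_channels,
    # plus one bias term per filter.
    weights = out_channels * in_channels * kernel_size * kernel_size
    biases = out_channels
    return weights + biases

# Keras:   Conv2D(filters=10, kernel_size=3) on input_shape=(28, 28, 3)
# PyTorch: Conv2d(in_channels=3, out_channels=10, kernel_size=3)
print(conv2d_param_count(3, 10, 3))  # 280
```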
