How to use a linear activation function in TensorFlow?
Question
In CUDA ConvNet, we can specify the neuron activation function to be linear by writing neuron=linear[a,b], such that f(x) = ax + b.
How can I achieve the same result in TensorFlow?
Answer
The most basic way to write a linear activation in TensorFlow is using tf.matmul() and tf.add() (or the + operator). Assuming you have a matrix of outputs from the previous layer (let's call it prev_layer) with size batch_size x prev_units, and the size of the linear layer is linear_units:
import tensorflow as tf

prev_layer = …  # output of the previous layer, shape [batch_size, prev_units]

# Weights and bias for the linear layer.
linear_W = tf.Variable(tf.truncated_normal([prev_units, linear_units], …))
linear_b = tf.Variable(tf.zeros([linear_units]))

# A linear layer is just a matrix multiply plus a bias -- no nonlinearity.
linear_layer = tf.matmul(prev_layer, linear_W) + linear_b
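Note that the snippet above computes Wx + b; CUDA ConvNet's linear[a,b] neuron additionally applies the elementwise scale-and-shift f(x) = ax + b to the layer's pre-activation, which in TensorFlow is just a * x + b with Python operators. The arithmetic can be checked with a small NumPy sketch (all sizes and the values a=2.0, b=0.5 are hypothetical, chosen for illustration):

```python
import numpy as np

# Hypothetical sizes for illustration.
batch_size, prev_units, linear_units = 2, 3, 4
rng = np.random.default_rng(0)

prev_layer = rng.standard_normal((batch_size, prev_units))
linear_W = rng.standard_normal((prev_units, linear_units))
linear_b = np.zeros(linear_units)

# Dense layer: Wx + b.
pre_activation = prev_layer @ linear_W + linear_b

# CUDA ConvNet's linear[a,b] activation, f(x) = ax + b, applied elementwise.
a, b = 2.0, 0.5
linear_layer = a * pre_activation + b
```

In TensorFlow the last line is written identically (a * tf.matmul(prev_layer, linear_W) + a * linear_b + b, or simply a * pre_activation + b on the tensor), since tensors overload the * and + operators elementwise.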