How to apply Guided BackProp in Tensorflow 2.0?
Question
I am starting with Tensorflow 2.0 and trying to implement Guided BackProp to display a saliency map. I started by computing the loss between y_pred and y_true of an image, then computed the gradients of all layers with respect to this loss.
with tf.GradientTape() as tape:
    logits = model(tf.cast(image_batch_val, dtype=tf.float32))
    print('`logits` has type {0}'.format(type(logits)))
    # labels must have the same float dtype as logits
    xentropy = tf.nn.softmax_cross_entropy_with_logits(
        labels=tf.cast(tf.one_hot(1 - label_batch_val, depth=2), dtype=tf.float32),
        logits=logits)
    reduced = tf.reduce_mean(xentropy)
grads = tape.gradient(reduced, model.trainable_variables)
However, I don't know what to do with the gradients in order to obtain the Guided Backpropagation.
This is my model. I created it using Keras layers:
from tensorflow.keras import models
from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization, Activation,
                                     MaxPool2D, Flatten, Dense, Dropout)

image_input = Input((input_size, input_size, 3))
conv_0 = Conv2D(32, (3, 3), padding='SAME')(image_input)
conv_0_bn = BatchNormalization()(conv_0)
conv_0_act = Activation('relu')(conv_0_bn)
conv_0_pool = MaxPool2D((2, 2))(conv_0_act)
conv_1 = Conv2D(64, (3, 3), padding='SAME')(conv_0_pool)
conv_1_bn = BatchNormalization()(conv_1)
conv_1_act = Activation('relu')(conv_1_bn)
conv_1_pool = MaxPool2D((2, 2))(conv_1_act)
conv_2 = Conv2D(64, (3, 3), padding='SAME')(conv_1_pool)
conv_2_bn = BatchNormalization()(conv_2)
conv_2_act = Activation('relu')(conv_2_bn)
conv_2_pool = MaxPool2D((2, 2))(conv_2_act)
conv_3 = Conv2D(128, (3, 3), padding='SAME')(conv_2_pool)
conv_3_bn = BatchNormalization()(conv_3)
conv_3_act = Activation('relu')(conv_3_bn)
conv_4 = Conv2D(128, (3, 3), padding='SAME')(conv_3_act)
conv_4_bn = BatchNormalization()(conv_4)
conv_4_act = Activation('relu')(conv_4_bn)
conv_4_pool = MaxPool2D((2, 2))(conv_4_act)
conv_5 = Conv2D(128, (3, 3), padding='SAME')(conv_4_pool)
conv_5_bn = BatchNormalization()(conv_5)
conv_5_act = Activation('relu')(conv_5_bn)
conv_6 = Conv2D(128, (3, 3), padding='SAME')(conv_5_act)
conv_6_bn = BatchNormalization()(conv_6)
conv_6_act = Activation('relu')(conv_6_bn)
flat = Flatten()(conv_6_act)
fc_0 = Dense(64, activation='relu')(flat)
fc_0_bn = BatchNormalization()(fc_0)
fc_1 = Dense(32, activation='relu')(fc_0_bn)
fc_1_drop = Dropout(0.5)(fc_1)
output = Dense(2, activation='softmax')(fc_1_drop)
model = models.Model(inputs=image_input, outputs=output)
I am glad to provide more code if needed.
Answer
First of all, you have to change the computation of the gradient through a ReLU, i.e.
R^l = (f^l > 0) * (R^(l+1) > 0) * R^(l+1)

This formula can be implemented with the following code:
@tf.RegisterGradient("GuidedRelu")
def _GuidedReluGrad(op, grad):
    gate_f = tf.cast(op.outputs[0] > 0, "float32")  # forward activation: f^l > 0
    gate_R = tf.cast(grad > 0, "float32")           # incoming gradient: R^(l+1) > 0
    return gate_f * gate_R * grad
现在,您必须使用以下方法覆盖ReLU的原始TF实现:
Now you have to override the original TF implementation of ReLU with:
with tf.compat.v1.get_default_graph().gradient_override_map({'Relu': 'GuidedRelu'}):
    # put here the code for computing the gradient
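Note that gradient_override_map only affects ops built inside a TF1-style graph; an eager tf.GradientTape in TF 2.0 will not pick it up. As a minimal eager-mode sketch of the same rule, you can use tf.custom_gradient instead (the name guided_relu is my own, and you would rebuild the model with Activation(guided_relu) in place of Activation('relu')):

@tf.custom_gradient
def guided_relu(x):
    # forward pass: an ordinary ReLU
    y = tf.nn.relu(x)
    def grad(dy):
        # guided rule: pass the gradient only where both the
        # activation (f^l) and the incoming gradient (R^(l+1)) are positive
        return tf.cast(y > 0, "float32") * tf.cast(dy > 0, "float32") * dy
    return y, grad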
After computing the gradient, you can visualize the result. However, one last remark: you compute a visualization for a single class. This means you take the activation of a chosen neuron and set the activations of all other neurons to zero as the input of Guided BackProp.
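For illustration, a sketch of that last step under the assumptions above (class_idx is a placeholder for the chosen class): selecting a single logit is equivalent to zeroing all other activations, and its gradient with respect to the input image is the saliency map.

image = tf.cast(image_batch_val, dtype=tf.float32)
with tf.GradientTape() as tape:
    tape.watch(image)  # the input image is not a variable, so watch it explicitly
    logits = model(image)
    class_score = logits[:, class_idx]  # keep only the chosen neuron
saliency = tape.gradient(class_score, image)  # gradient w.r.t. the input, not the weights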