Custom Activation with custom gradient does not work


Problem description


I am trying to write code for training a simple neural network. The goal is to define a custom activation function and, instead of letting Keras take its derivative automatically for backpropagation, make Keras use my custom gradient function for my custom activation:

import numpy as np
import tensorflow as tf
import math
import keras
from keras.models import Model, Sequential
from keras.layers import Input, Dense, Activation
from keras import regularizers
from keras import backend as K
from keras.backend import tf
from keras import initializers
from keras.layers import Lambda

@tf.custom_gradient
def custom_activation(x):

    def grad(dy):
        # Custom gradient: always zero, so no gradient should flow
        # backwards through this activation.
        return dy * 0

    # Forward pass: sigmoid rescaled from (0, 1) to (-1, 1).
    result = K.sigmoid(x) * 2 - 1
    return result, grad

x_train = np.array([[1, 2], [3, 4], [3, 4]])

inputs = Input(shape=(2,))
output_1 = Dense(20, kernel_initializer='glorot_normal')(inputs)
layer = Lambda(lambda x: custom_activation)(output_1)  # this line raises the error below
output_2 = Dense(2, activation='linear',kernel_initializer='glorot_normal')(layer)
model2 = Model(inputs=inputs, outputs=output_2)

model2.compile(optimizer='adam', loss='mean_squared_error')
model2.fit(x_train, x_train, epochs=20, validation_split=0.1, shuffle=False)
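
Before wiring the activation into a model, the custom gradient can be sanity-checked in isolation. A minimal graph-mode sketch (TF 1.x, to match the environment in the traceback below; it reuses custom_activation and the imports above, and the placeholder name and sample input are made up):

x_ph = tf.placeholder(tf.float32, shape=(None, 2))
y = custom_activation(x_ph)
g = tf.gradients(y, x_ph)[0]  # should be all zeros, since grad() returns dy * 0

with tf.Session() as sess:
    print(sess.run(g, feed_dict={x_ph: np.array([[1.0, 2.0]])}))
    # expected: [[0. 0.]]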


Since the gradient has been defined to be zero, I expect the loss not to change across epochs. Here is the backtrace of the error I get:

Using TensorFlow backend.
WARNING:tensorflow:From C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Traceback (most recent call last):
  File "C:/p/CE/mytest.py", line 43, in <module>
    layer = Lambda(lambda x: custom_activation)(output_1)
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\base_layer.py", line 474, in __call__
    output_shape = self.compute_output_shape(input_shape)
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\layers\core.py", line 656, in compute_output_shape
    return K.int_shape(x)
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py", line 593, in int_shape
    return tuple(x.get_shape().as_list())
AttributeError: 'function' object has no attribute 'get_shape'


Update: I used Manoj Mohan's answer and now the code works. Since the gradient is defined to be zero, I expect the loss to stay unchanged across epochs. But it does change. Why? Am I missing anything?

Example:

Epoch 1/20
2019-10-03 10:31:34.193232: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2

2/2 [==============================] - 0s 68ms/step - loss: 8.3184 - val_loss: 13.7232
Epoch 2/20

2/2 [==============================] - 0s 496us/step - loss: 8.2783 - val_loss: 13.6368
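
A diagnostic that narrows the question down (a sketch, reusing model2 and x_train from the fixed code above; the epoch count is arbitrary) is to compare every weight tensor before and after a few epochs of training:

weights_before = [w.copy() for w in model2.get_weights()]
model2.fit(x_train, x_train, epochs=5, verbose=0)
weights_after = model2.get_weights()

for i, (before, after) in enumerate(zip(weights_before, weights_after)):
    print(i, 'unchanged' if np.allclose(before, after) else 'changed')
# expected: the first Dense's kernel and bias stay unchanged, while the
# last Dense's weights change, so some weights are still being updated
# despite the zero custom gradient.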

Answer

Replace

layer = Lambda(lambda x: custom_activation)(output_1)

with

layer = Lambda(custom_activation)(output_1)

The lambda returns the function object custom_activation itself instead of applying it to the input tensor, so Keras receives a function where it expects a tensor and fails with AttributeError: 'function' object has no attribute 'get_shape' while computing the output shape.


I expect to see unchanged loss across epochs since the gradient is defined to be zero. But it does change. Why?


The gradient is set to zero at an intermediate layer, so gradients do not flow backwards from there and the first Dense layer never gets updated. But between the output and that intermediate layer, gradients do flow and those weights do get updated, which is why the loss still changes. The modified architecture below, which makes the custom activation the last layer, produces a constant loss across epochs.

inputs = Input(shape=(2,))
output_1 = Dense(20, kernel_initializer='glorot_normal')(inputs)
output_2 = Dense(2, activation='linear', kernel_initializer='glorot_normal')(output_1)
layer = Lambda(custom_activation)(output_2)  # should be the last layer
model2 = Model(inputs=inputs, outputs=layer)
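
A quick check of this claim (a sketch, reusing x_train from the question; the epoch count is arbitrary): with the zero-gradient activation as the last layer, no gradient reaches any trainable weight, so no weight should change and the loss should stay flat.

model2.compile(optimizer='adam', loss='mean_squared_error')

weights_before = [w.copy() for w in model2.get_weights()]
history = model2.fit(x_train, x_train, epochs=5, verbose=0)
weights_after = model2.get_weights()

print(all(np.allclose(b, a) for b, a in zip(weights_before, weights_after)))
# expected: True -- no weight moved
print(history.history['loss'])
# expected: the same loss value repeated for every epoch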

