Add learning rate to history object of fit_generator with Tensorflow


Question

I want to check how my optimizer is changing my learning rate. I am using tensorflow 1.15. I run my model with fit_generator:

hist = model.fit_generator(dat, args.onthefly[0]//args.batch, args.epochs,
                           validation_data=val,
                           validation_steps=args.onthefly[1]//args.batch,
                           verbose=2, use_multiprocessing=True, workers=56)

I choose the optimizer using the compile function:

model.compile(loss=loss,
                  optimizer=Nadam(lr=learning_rate),
                  metrics=['binary_accuracy']
                 )

How can I get the value of the learning rate at the end of each epoch?

Answer

You can do that using the callbacks argument of model.fit_generator. Below is code showing how to implement it: the learning rate is increased by 0.01 every epoch with tf.keras.callbacks.LearningRateScheduler, and printed at the end of every epoch by a custom tf.keras.callbacks.Callback.

Complete code -

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import backend as K

import os
import numpy as np
import matplotlib.pyplot as plt

_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'

path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)

PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')

train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')

train_cats_dir = os.path.join(train_dir, 'cats')  # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')  # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')  # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')  # directory with our validation dog pictures

num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))

num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))

total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val

batch_size = 128
epochs = 5
IMG_HEIGHT = 150
IMG_WIDTH = 150

train_image_generator = ImageDataGenerator(rescale=1./255,brightness_range=[0.5,1.5]) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255,brightness_range=[0.5,1.5]) # Generator for our validation data

train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                           directory=train_dir,
                                                           shuffle=True,
                                                           target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                           class_mode='binary')

val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
                                                              directory=validation_dir,
                                                              target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                              class_mode='binary')

model = Sequential([
    Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
    MaxPooling2D(),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(1)
])

lr = 0.01
adam = Adam(lr)

# Define the required callback: prints the learning rate at the end of each epoch
class PrintLearningRate(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        lr = K.eval(self.model.optimizer.lr)
        print('\n', "Epoch:", epoch + 1, ', LR: {:.2f}'.format(lr))

printlr = PrintLearningRate()

# Scheduler: increases the current learning rate by 0.01 at the start of each epoch
def scheduler(epoch):
    return K.eval(model.optimizer.lr + 0.01)

updatelr = tf.keras.callbacks.LearningRateScheduler(scheduler)

model.compile(optimizer=adam,
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])

history = model.fit_generator(
          train_data_gen,
          steps_per_epoch=total_train // batch_size,
          epochs=epochs,
          validation_data=val_data_gen,
          validation_steps=total_val // batch_size,
          callbacks=[printlr, updatelr])

Output -

Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
Epoch 1/5
15/15 [==============================] - ETA: 0s - loss: 40.9353 - accuracy: 0.5156
 Epoch: 1 , LR: 0.02
15/15 [==============================] - 27s 2s/step - loss: 40.9353 - accuracy: 0.5156 - val_loss: 0.6938 - val_accuracy: 0.5067 - lr: 0.0200
Epoch 2/5
15/15 [==============================] - ETA: 0s - loss: 0.6933 - accuracy: 0.5021
 Epoch: 2 , LR: 0.03
15/15 [==============================] - 27s 2s/step - loss: 0.6933 - accuracy: 0.5021 - val_loss: 0.6935 - val_accuracy: 0.4877 - lr: 0.0300
Epoch 3/5
15/15 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.4989
 Epoch: 3 , LR: 0.04
15/15 [==============================] - 27s 2s/step - loss: 0.6932 - accuracy: 0.4989 - val_loss: 0.6933 - val_accuracy: 0.5056 - lr: 0.0400
Epoch 4/5
15/15 [==============================] - ETA: 0s - loss: 0.6932 - accuracy: 0.4947
 Epoch: 4 , LR: 0.05
15/15 [==============================] - 27s 2s/step - loss: 0.6932 - accuracy: 0.4947 - val_loss: 0.6931 - val_accuracy: 0.4967 - lr: 0.0500
Epoch 5/5
15/15 [==============================] - ETA: 0s - loss: 0.6935 - accuracy: 0.5091
 Epoch: 5 , LR: 0.06
15/15 [==============================] - 27s 2s/step - loss: 0.6935 - accuracy: 0.5091 - val_loss: 0.6935 - val_accuracy: 0.4978 - lr: 0.0600
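
Note that LearningRateScheduler also writes the current value into logs['lr'] at the end of each epoch (the lr: 0.0200 through lr: 0.0600 entries in the output above), so the learning rate ends up in the history object returned by fit_generator. Below is a minimal sketch of reading it back, assuming the history object from the run above; the RecordLR callback is a hypothetical alternative for recording the rate when no scheduler is attached.

import tensorflow as tf
from tensorflow.keras import backend as K

# The scheduler already logs the rate, so it can be read straight from history:
print(history.history['lr'])  # e.g. [0.02, 0.03, 0.04, 0.05, 0.06]

# Hypothetical RecordLR callback: writes the optimizer's learning rate into
# logs so it lands in history.history['lr'] even without a scheduler.
# tf.keras runs the History callback after user callbacks, so the mutated
# logs dict is what History records.
class RecordLR(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        if logs is not None:
            logs['lr'] = K.eval(self.model.optimizer.lr)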
