Tensorflow.Keras Adam Optimizer Instantiation


Problem Description

While training a CNN, I found that the loss decreases faster when the optimizer is set as an optimizer instance:

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), 
       ....... 

than when the optimizer is set as a string (the optimizer name):

model.compile(optimizer='adam', .......)

Since the default is learning_rate=0.001, why do they work differently, and what is the difference between them?
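The two compile forms from the question can be placed side by side. A minimal sketch, where the tiny one-layer model and the mse loss are illustrative assumptions, not taken from the original post:

```python
import tensorflow as tf

# A throwaway model, just to have something to compile.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Form 1: pass an optimizer instance with explicit hyperparameters.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss='mse')

# Form 2: pass the optimizer name as a string; Keras resolves it
# to the same Adam class with its default learning_rate=0.001.
model.compile(optimizer='adam', loss='mse')
```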

Answer

Technically there should not be any differences. If you follow the adam string parameter in the source file, you would see:

all_classes = {
    'adam': adam_v2.Adam,
    ...
}

adam_v2.Adam corresponds to an Adam class that appears in three different places in the TensorFlow source, i.e.

  1. Source: tensorflow/python/tpu/tpu_embedding_v2_utils.py

@tf_export("tpu.experimental.embedding.Adam")
class Adam(_Optimizer):

  2. Source: tensorflow/python/keras/optimizer_v1.py

class Adam(Optimizer):

  3. Source: tensorflow/python/keras/optimizer_v2/adam.py

@keras_export('keras.optimizers.Adam')
class Adam(optimizer_v2.OptimizerV2):

Now, check the source code of tf.keras.optimizers.Adam: click the "View source on GitHub" button and you will be redirected to number 3 from above (tensorflow/python/keras/optimizer_v2/adam.py).
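The equivalence can also be checked directly with tf.keras.optimizers.get, which performs the same string-to-class lookup that model.compile does. A minimal sketch:

```python
import tensorflow as tf

# compile(optimizer='adam') resolves the string through this lookup:
opt_from_string = tf.keras.optimizers.get('adam')
opt_from_class = tf.keras.optimizers.Adam(learning_rate=0.001)

# Both are instances of the same Adam class,
print(type(opt_from_string) is type(opt_from_class))  # True
# with the same default learning rate of 0.001.
print(float(opt_from_string.learning_rate) == float(opt_from_class.learning_rate))  # True
```

So any difference observed in training is not explained by the resolved class or its default learning rate; both routes end at the same Adam implementation.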
