Where can I have a look at TensorFlow gradient descent main loop?


Question


(Sorry if this sounds a bit naive) I want to have a look at the meat of the TensorFlow implementation of GradientDescent, and see for myself how it handles termination conditions, step-size adaptation, and so on. I traced the code down to training_ops.apply_gradient_descent, but I can't find the implementation :(

Answer


The TensorFlow Optimizer interface (which GradientDescentOptimizer implements) defines a single step of minimization. Termination conditions and step-size adjustment are implemented by the user. In the MNIST for Beginners tutorial, the termination condition is "stop after 1000 steps", which you can see in the for i in range(1000) loop.
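To make the division of labor concrete, here is a minimal pure-Python sketch (not TensorFlow code): the "optimizer" supplies only the single update step, while the loop count (the termination condition) and the learning rate are the user's choice, just like the `for i in range(1000)` loop in the tutorial.

```python
def gradient_step(x, grad, learning_rate):
    """One minimization step -- this is all the optimizer defines."""
    return x - learning_rate * grad

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x = 0.0
for i in range(1000):  # user-chosen termination condition, as in the tutorial
    grad = 2.0 * (x - 3.0)
    x = gradient_step(x, grad, learning_rate=0.1)

print(round(x, 6))  # converges to the minimum at x = 3
```

Swapping the fixed iteration count for a gradient-norm threshold, or decaying the learning rate per step, would all happen in this outer loop, not inside the optimizer.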


apply_gradient_descent(a, b, c) is a fused op that scales the gradient c by the learning rate b and subtracts the result from the variable a. There are some extra levels of indirection to go from the Python wrapper to the C++ implementation, detailed in the Adding a new op HowTo, but as a shortcut you can usually find the C++ implementation by converting the snake_case Python name to CamelCase and searching for that, so ApplyGradientDescent in this case. That leads to the implementation in tensorflow/core/kernels/training_ops.cc.
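The fused update the kernel performs can be mirrored in plain NumPy; a sketch, assuming the kernel computes an in-place var -= alpha * delta (the function name here echoes the op but is not the TensorFlow API):

```python
import numpy as np

def apply_gradient_descent(var, alpha, delta):
    """In-place fused update, mirroring the kernel: var -= alpha * delta."""
    var -= alpha * delta
    return var

w = np.array([1.0, 2.0, 3.0])
apply_gradient_descent(w, alpha=0.5, delta=np.array([2.0, 2.0, 2.0]))
print(w)  # [0. 1. 2.]
```

Fusing the scale and subtract into one op lets the kernel update the variable in a single pass instead of materializing an intermediate alpha * delta tensor.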

