Where is the code for gradient descent?
Problem description
I'm running some experiments with TensorFlow and want to look at the implementation of some functions, just to see exactly how some things are done. I started with the simple case of tf.train.GradientDescentOptimizer. I downloaded the zip of the full source code from GitHub, ran some searches over the source tree, and got to:
C:\tensorflow-master\tensorflow\python\training\gradient_descent.py
class GradientDescentOptimizer(optimizer.Optimizer):
  def _apply_dense(self, grad, var):
    return training_ops.apply_gradient_descent(
Okay, so presumably the actual code is in apply_gradient_descent, so I searched for that... it's not there. There are only three occurrences in the entire source tree, all of which are uses, not definitions.
What about training_ops? There does exist a source file with a suggestive name:
C:\tensorflow-master\tensorflow\python\training\training_ops.py
from tensorflow.python.training import gen_training_ops
# go/tf-wildcard-import
# pylint: disable=wildcard-import
from tensorflow.python.training.gen_training_ops import *
# pylint: enable=wildcard-import
... the above is the entire content of that file. Hmm.
I did find this file:
C:\tensorflow-master\tensorflow\python\BUILD
tf_gen_op_wrapper_private_py(
name = "training_ops_gen",
out = "training/gen_training_ops.py",
)
which seems to confirm that this and other such files are generated code, produced during the build process - but where is the source code they are generated from?
So this is the point at which I give up and ask for help. Can anyone familiar with the TensorFlow code base point me to where the relevant source code is?
Answer
The implementation goes further down into native C++ code. Here is the ApplyGradientDescent GPU implementation (core/kernels/training_ops_gpu.cu.cc):
template <typename T>
struct ApplyGradientDescent<GPUDevice, T> {
  void operator()(const GPUDevice& d, typename TTypes<T>::Flat var,
                  typename TTypes<T>::ConstScalar lr,
                  typename TTypes<T>::ConstFlat grad) {
    Eigen::array<typename TTypes<T>::Tensor::Index, 1> bcast;
    bcast[0] = grad.dimension(0);
    Eigen::Sizes<1> single;
    var.device(d) -= lr.reshape(single).broadcast(bcast) * grad;
  }
};
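The reshape/broadcast dance in the GPU kernel just expands the scalar learning rate to the shape of the gradient before the elementwise multiply and subtract. A minimal NumPy sketch of the same idea (NumPy stands in for Eigen here, purely for illustration):

```python
import numpy as np

# Mimic the GPU kernel: var -= broadcast(lr) * grad
var = np.array([1.0, 2.0, 3.0])
grad = np.array([0.5, 0.5, 0.5])
lr = np.array(0.1)  # a scalar, like TTypes<T>::ConstScalar

# Reshape the scalar to shape (1,), then broadcast it to grad's shape,
# just as the Eigen code does with `single` and `bcast`.
lr_b = np.broadcast_to(lr.reshape(1), grad.shape)
var -= lr_b * grad
print(var)  # [0.95 1.95 2.95]
```

(NumPy would broadcast the bare scalar automatically; the explicit reshape/broadcast is spelled out only to mirror the Eigen expression.)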
The CPU implementation is here (core/kernels/training_ops.cc):
template <typename T>
struct ApplyGradientDescent<CPUDevice, T> {
  void operator()(const CPUDevice& d, typename TTypes<T>::Flat var,
                  typename TTypes<T>::ConstScalar lr,
                  typename TTypes<T>::ConstFlat grad) {
    var.device(d) -= grad * lr();
  }
};
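Stripped of the Eigen types, both kernels implement the same update rule, var ← var − lr · grad, applied elementwise. A plain-Python sketch of that step (the helper name is hypothetical, chosen here only to echo the op name):

```python
def apply_gradient_descent(var, lr, grad):
    """One gradient-descent step: subtract lr * grad from each element of var."""
    return [v - lr * g for v, g in zip(var, grad)]

print(apply_gradient_descent([1.0, 2.0], 0.1, [0.5, 1.0]))  # [0.95, 1.9]
```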