Basic function minimisation and variable tracking in TensorFlow 2.0


Question


I am trying to perform the most basic function minimisation possible in TensorFlow 2.0, exactly as in the question Tensorflow 2.0: minimize a simple function, however I cannot get the solution described there to work. Here is my attempt, mostly copy-pasted but with some bits that seemed to be missing added in.

import tensorflow as tf

x = tf.Variable(2, name='x', trainable=True, dtype=tf.float32)
with tf.GradientTape() as t:  # the tape is what computes the gradients!
    y = tf.math.square(x)

trainable_variables = [x]

#### Option 2
# To use minimize you have to define your loss computation as a function
def compute_loss():
    y = tf.math.square(x)
    return y

opt = tf.optimizers.Adam(learning_rate=0.001)
train = opt.minimize(compute_loss, var_list=trainable_variables)

print("x:", x)
print("y:", y)

Output:

x: <tf.Variable 'x:0' shape=() dtype=float32, numpy=1.999>
y: tf.Tensor(4.0, shape=(), dtype=float32)


So it says the minimum is at x=1.999, but obviously that is wrong. So what happened? I suppose it only performed one loop of the minimiser or something? If so then "minimize" seems like a terrible name for the function. How is this supposed to work?


On a side note, I also need to know the values of intermediate variables that are calculated in the loss function (the example only has y, but imagine that it took several steps to compute y and I want all those numbers). I don't think I am using the gradient tape correctly either, it is not obvious to me that it has anything to do with the computations in the loss function (I just copied this stuff from the other question).

Answer


You need to call minimize multiple times, because minimize only performs a single step of your optimisation.

The following should work:

import tensorflow as tf

x = tf.Variable(2, name='x', trainable=True, dtype=tf.float32)

trainable_variables = [x]

# To use minimize you have to define your loss computation as a function
class Model():
    def __init__(self):
        self.y = 0

    def compute_loss(self):
        self.y = tf.math.square(x)
        return self.y

opt = tf.optimizers.Adam(learning_rate=0.01)
model = Model()
for i in range(1000):
    train = opt.minimize(model.compute_loss, var_list=trainable_variables)

print("x:", x)
print("y:", model.y)
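On the side question about intermediate values and the gradient tape: `minimize` handles the tape internally, which is why your standalone `GradientTape` block did nothing. A minimal alternative sketch (not from the original answer) is to run the tape and `apply_gradients` yourself, which makes every intermediate tensor computed inside the tape block available to you on each step:

```python
import tensorflow as tf

x = tf.Variable(2.0, name='x', trainable=True)
opt = tf.optimizers.Adam(learning_rate=0.01)

for i in range(1000):
    with tf.GradientTape() as tape:
        # Any intermediate tensors computed here are ordinary Python
        # variables you can inspect, log, or return.
        y = tf.math.square(x)
    grads = tape.gradient(y, [x])          # dy/dx recorded by the tape
    opt.apply_gradients(zip(grads, [x]))   # one optimiser step

print("x:", x.numpy())
print("y:", y.numpy())
```

This is equivalent to what `minimize` does in one call per step, but the explicit loop gives you direct access to `y` (and any other intermediates) without wrapping them in a class attribute.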

