Simple TensorFlow computation not reproducible on different systems (macOS, Colab, Azure)
Question
I am investigating the reproducibility of code in TensorFlow on my macOS machine, on Google Colab, and on Azure with Docker. I understand that I can set a graph-level seed and an operation-level seed. I am using eager mode (so no parallelism optimization) and no GPUs. I use 100x100 random draws from the unit normal and calculate their mean and standard deviation.
The test code below verifies that I am not using the GPU, that I am using TensorFlow 1.12.0 or the preview of TensorFlow 2, and that the tensor is float32. It then checks the first element of the random tensor (which has a different value depending on whether I set only the graph-level seed or also an operation-level seed), its mean, and its standard deviation. I also set the random seed of NumPy, although I do not use it here:
import numpy as np
import tensorflow as tf


def tf_1():
    """Returns True if TensorFlow is version 1."""
    return tf.__version__.startswith("1.")


def format_number(n):
    """Returns the number formatted as a string with 12 digits after the decimal point."""
    return "%1.12f" % n


def set_top_level_seeds():
    """Sets the TensorFlow graph-level seed and the NumPy seed."""
    if tf_1():
        tf.set_random_seed(0)
    else:
        tf.random.set_seed(0)
    np.random.seed(0)


def generate_random_numbers(op_seed=None):
    """Returns random normal draws."""
    if op_seed:
        t = tf.random.normal([100, 100], seed=op_seed)
    else:
        t = tf.random.normal([100, 100])
    return t


def generate_random_number_stats_str(op_seed=None):
    """Returns the mean and standard deviation of random normal draws."""
    t = generate_random_numbers(op_seed=op_seed)
    mean = tf.reduce_mean(t)
    sdev = tf.sqrt(tf.reduce_mean(tf.square(t - mean)))
    return [format_number(n) for n in (mean, sdev)]


def generate_random_number_1_seed():
    """Returns a single random number with the graph-level seed only."""
    set_top_level_seeds()
    num = generate_random_numbers()[0, 0]
    return num


def generate_random_number_2_seeds():
    """Returns a single random number with graph-level and operation-level seeds."""
    set_top_level_seeds()
    num = generate_random_numbers(op_seed=1)[0, 0]
    return num


def generate_stats_1_seed():
    """Returns the mean and standard deviation with the graph-level seed only."""
    set_top_level_seeds()
    return generate_random_number_stats_str()


def generate_stats_2_seeds():
    """Returns the mean and standard deviation with graph and operation seeds."""
    set_top_level_seeds()
    return generate_random_number_stats_str(op_seed=1)


class Tests(tf.test.TestCase):
    """Runs tests for the reproducibility of TensorFlow."""

    def test_gpu(self):
        self.assertEqual(False, tf.test.is_gpu_available())

    def test_version(self):
        self.assertTrue(tf.__version__ == "1.12.0" or
                        tf.__version__.startswith("2.0.0-dev2019"))

    def test_type(self):
        num_type = generate_random_number_1_seed().dtype
        self.assertEqual(num_type, tf.float32)

    def test_eager_execution(self):
        self.assertEqual(True, tf.executing_eagerly())

    def test_random_number_1_seed(self):
        num_str = format_number(generate_random_number_1_seed())
        self.assertEqual(num_str, "1.511062622070")

    def test_random_number_2_seeds(self):
        num_str = format_number(generate_random_number_2_seeds())
        self.assertEqual(num_str, "0.680345416069")

    def test_arithmetic_1_seed(self):
        m, s = generate_stats_1_seed()
        if tf_1():
            self.assertEqual(m, "-0.008264393546")
            self.assertEqual(s, "0.995371103287")
        else:
            self.assertEqual(m, "-0.008264398202")
            self.assertEqual(s, "0.995371103287")

    def test_arithmetic_2_seeds(self):
        m, s = generate_stats_2_seeds()
        if tf_1():
            self.assertEqual(m, "0.000620653736")
            self.assertEqual(s, "0.997191190720")
        else:
            self.assertEqual(m, "0.000620646286")
            self.assertEqual(s, "0.997191071510")


if __name__ == '__main__':
    tf.reset_default_graph()
    if tf_1():
        tf.enable_eager_execution()
        tf.logging.set_verbosity(tf.logging.ERROR)
    tf.test.main()
On my local machine, the tests pass with TensorFlow 1.12.0 or the preview of TensorFlow 2 in a virtual environment where I installed TensorFlow with pip install tensorflow==1.12.0 or pip install tf-nightly-2.0-preview. Note that the first random draw is the same in both versions, so I presume that all the random numbers are the same, yet the mean and standard deviation differ after 9 decimal places. So TensorFlow implements the computations slightly differently in different versions.
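Identical draws with a mean that differs in the last digits is consistent with the reduction being summed in a different order. A minimal NumPy sketch of the effect (an analogy under my assumptions, not TensorFlow's actual reduce_mean kernels):

```python
import numpy as np

# float32 addition is not associative, so the same numbers summed in a
# different order can give slightly different totals. This mimics how
# two builds of a mean reduction can disagree in the last digits.
rng = np.random.RandomState(0)
values = rng.standard_normal(100 * 100).astype(np.float32)

acc_forward = np.float32(0.0)
for v in values:                      # left-to-right accumulation
    acc_forward = np.float32(acc_forward + v)

acc_reverse = np.float32(0.0)
for v in values[::-1]:                # right-to-left accumulation
    acc_reverse = np.float32(acc_reverse + v)

mean_forward = acc_forward / np.float32(values.size)
mean_reverse = acc_reverse / np.float32(values.size)
print(mean_forward, mean_reverse)     # typically differ in the last digits
```

The two means come from the same 10,000 values and differ, if at all, only far past float32's roughly 7 significant decimal digits, which matches the ninth-decimal-place differences in my tests.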
On Google Colab, I replace the last command with import unittest; unittest.main(argv=['first-arg-is-ignored'], exit=False) (see this issue). All tests but one pass: same random numbers, same mean and standard deviation with the graph-level seed. The test that fails is the arithmetic of the mean with both the graph-level and the operation-level seed, with a difference starting at the ninth decimal place:
.F.......
======================================================================
FAIL: test_arithmetic_2_seeds (__main__.Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<ipython-input-7-16d0afebf95f>", line 109, in test_arithmetic_2_seeds
self.assertEqual(m, "0.000620653736")
AssertionError: '0.000620654086' != '0.000620653736'
- 0.000620654086
? ^^^
+ 0.000620653736
? ^^^
----------------------------------------------------------------------
Ran 9 tests in 0.023s
FAILED (failures=1)
On Azure, with a Standard_NV6 machine running the NVIDIA GPU Cloud Image and the following Dockerfile,
FROM tensorflow/tensorflow:latest-py3
ADD tests.py .
CMD python tests.py
the tests fail for the arithmetic both with the graph-level seed only and with graph-level and operation-level seeds:
FF.......
======================================================================
FAIL: test_arithmetic_1_seed (__main__.Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tests.py", line 99, in test_arithmetic_1_seed
self.assertEqual(m, "-0.008264393546")
AssertionError: '-0.008264395408' != '-0.008264393546'
- -0.008264395408
? ^^
+ -0.008264393546
? + ^
======================================================================
FAIL: test_arithmetic_2_seeds (__main__.Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tests.py", line 109, in test_arithmetic_2_seeds
self.assertEqual(m, "0.000620653736")
AssertionError: '0.000620655250' != '0.000620653736'
- 0.000620655250
+ 0.000620653736
----------------------------------------------------------------------
Ran 9 tests in 0.016s
FAILED (failures=2)
When the tests fail on Google Colab or Azure, they fail consistently with the same actual values for the mean, so I believe that the problem is not some other random seed that I could set.
To see whether the problem is the implementation of TensorFlow on different systems, I test on Azure with a different TensorFlow image (tensorflow/tensorflow:latest, without the -py3 tag), and the random numbers with a top-level seed are also different:
FF..F....
======================================================================
FAIL: test_arithmetic_1_seed (__main__.Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tests.py", line 99, in test_arithmetic_1_seed
self.assertEqual(m, "-0.008264393546")
AssertionError: '0.001101632486' != '-0.008264393546'
======================================================================
FAIL: test_arithmetic_2_seeds (__main__.Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tests.py", line 109, in test_arithmetic_2_seeds
self.assertEqual(m, "0.000620653736")
AssertionError: '0.000620655250' != '0.000620653736'
======================================================================
FAIL: test_random_number_1_seed (__main__.Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tests.py", line 89, in test_random_number_1_seed
self.assertEqual(num_str, "1.511062622070")
AssertionError: '-1.398459434509' != '1.511062622070'
----------------------------------------------------------------------
Ran 9 tests in 0.015s
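One source of variation I can at least control is the image itself: the latest tag moves as new releases are pushed, so the two Azure runs may not even use the same TensorFlow build. A sketch of a pinned Dockerfile (assuming the 1.12.0-py3 tag matches my local version):

```dockerfile
# Pin the exact TensorFlow version instead of tracking "latest",
# so the image does not change between builds.
FROM tensorflow/tensorflow:1.12.0-py3
ADD tests.py .
CMD python tests.py
```

Pinning removes version drift, but as the results above show, it does not by itself explain the ninth-decimal-place differences between systems running the same version.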
How can I ensure reproducibility of TensorFlow computations on different systems?
Answer
The precision of floating-point calculations depends on the library's compilation options and on the details of the system architecture.
There are quite a few articles on the difficulty of reliably comparing floating-point numbers for equality; a search for 'floating point equality' will turn them up. One example is https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/
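Applied to the tests in the question, this suggests comparing with a tolerance instead of formatting to 12 decimal places and checking string equality. A sketch using the pair of means observed above (the tolerance is an assumption based on float32 carrying roughly 7 significant decimal digits):

```python
import math

expected_mean = 0.000620653736   # value from the question's local run
actual_mean = 0.000620655250     # value observed on Azure in the question

# A relative tolerance of 1e-5 is looser than float32's ~7 significant
# digits, so values that agree to float32 precision pass.
assert math.isclose(actual_mean, expected_mean, rel_tol=1e-5, abs_tol=1e-9)

# String equality at 12 decimal places fails for the very same pair:
assert "%1.12f" % actual_mean != "%1.12f" % expected_mean
```

Inside the question's tf.test.TestCase, self.assertAllClose(mean, expected, rtol=1e-5) expresses the same check directly on tensors.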