How to suppress verbose Tensorflow logging?


Question

I'm unit-testing my Tensorflow code with nosetests, but it produces so much verbose output that it is useless.

The following test

import unittest
import tensorflow as tf

class MyTest(unittest.TestCase):

    def test_creation(self):
        self.assertEquals(True, False)

when run with nosetests produces a huge amount of useless logging:

FAIL: test_creation (tests.test_tf.MyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/cebrian/GIT/thesis-nilm/code/deepmodels/tests/test_tf.py", line 10, in test_creation
    self.assertEquals(True, False)
AssertionError: True != False
-------------------- >> begin captured logging << --------------------
tensorflow: Level 1: Registering Const (<function _ConstantShape at 0x7f4379131c80>) in shape functions.
tensorflow: Level 1: Registering Assert (<function no_outputs at 0x7f43791319b0>) in shape functions.
tensorflow: Level 1: Registering Print (<function _PrintGrad at 0x7f4378effd70>) in gradient.
tensorflow: Level 1: Registering Print (<function unchanged_shape at 0x7f4379131320>) in shape functions.
tensorflow: Level 1: Registering HistogramAccumulatorSummary (None) in gradient.
tensorflow: Level 1: Registering HistogramSummary (None) in gradient.
tensorflow: Level 1: Registering ImageSummary (None) in gradient.
tensorflow: Level 1: Registering AudioSummary (None) in gradient.
tensorflow: Level 1: Registering MergeSummary (None) in gradient.
tensorflow: Level 1: Registering ScalarSummary (None) in gradient.
tensorflow: Level 1: Registering ScalarSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering MergeSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering AudioSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering ImageSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering HistogramSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering HistogramAccumulatorSummary (<function _ScalarShape at 0x7f4378f042a8>) in shape functions.
tensorflow: Level 1: Registering Pack (<function _PackShape at 0x7f4378f047d0>) in shape functions.
tensorflow: Level 1: Registering Unpack (<function _UnpackShape at 0x7f4378f048c0>) in shape functions.
tensorflow: Level 1: Registering Concat (<function _ConcatShape at 0x7f4378f04938>) in shape functions.
tensorflow: Level 1: Registering ConcatOffset (<function _ConcatOffsetShape at 0x7f4378f049b0>) in shape functions.

......

whereas using tensorflow from the ipython console doesn't seem that verbose:

$ ipython
Python 2.7.11+ (default, Apr 17 2016, 14:00:29) 
Type "copyright", "credits" or "license" for more information.

IPython 4.2.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import tensorflow as tf
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally

In [2]:

How can I suppress the former logging when running nosetests?

Answer

2.0 Update (10/8/19): Setting TF_CPP_MIN_LOG_LEVEL should still work (see the v0.12+ update below), but there is currently an open issue (see issue #31870). If setting TF_CPP_MIN_LOG_LEVEL does not work for you (again, see below), try the following to set the log level:

import tensorflow as tf
tf.get_logger().setLevel('INFO')

In addition, see the documentation on tf.autograph.set_verbosity, which sets the verbosity of autograph log messages - for example:

# Can also be set using the AUTOGRAPH_VERBOSITY environment variable
tf.autograph.set_verbosity(1)
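As the comment in the snippet notes, the same setting can also be applied through the environment; a minimal sketch (pure Python, set before TensorFlow is imported):

```python
import os

# Equivalent to tf.autograph.set_verbosity(1), but via the environment;
# autograph reads AUTOGRAPH_VERBOSITY when TensorFlow is imported, so
# set it before the import statement.
os.environ['AUTOGRAPH_VERBOSITY'] = '1'
```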

v0.12+ Update (5/20/17), through TF 2.0+:

In TensorFlow 0.12+, per this issue, you can now control logging via the environment variable TF_CPP_MIN_LOG_LEVEL; it defaults to 0 (all logs shown) but can be set to one of the following values under the Level column.

  Level | Level for Humans | Level Description                  
 -------|------------------|------------------------------------ 
  0     | DEBUG            | [Default] Print all messages       
  1     | INFO             | Filter out INFO messages           
  2     | WARNING          | Filter out INFO & WARNING messages 
  3     | ERROR            | Filter out all messages      
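The table's semantics can be sketched as a small filter function (an illustration only; the real filtering happens inside TensorFlow's C++ runtime):

```python
# Severity ranks for TensorFlow's C++ runtime log messages.
SEVERITY = {'INFO': 1, 'WARNING': 2, 'ERROR': 3}

def is_shown(severity, min_log_level):
    """Return True if a message of the given severity survives a given
    TF_CPP_MIN_LOG_LEVEL setting (0 shows everything)."""
    return SEVERITY[severity] > min_log_level
```

With `min_log_level=3` even ERROR messages are filtered out, matching the last row of the table.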

See the following generic OS example using Python:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # or any {'0', '1', '2'}
import tensorflow as tf

To be thorough, you can also set the level for the Python tf_logging module, which is used in e.g. summary ops, TensorBoard, various estimators, etc.

# append to lines above
tf.logging.set_verbosity(tf.logging.ERROR)  # or any {DEBUG, INFO, WARN, ERROR, FATAL}
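Under the hood, the Python-side messages go through the standard `logging` module's logger named 'tensorflow', so the same effect can be sketched with the stdlib alone (the logger name is an assumption that holds for recent TF releases; no TensorFlow import is needed to demonstrate it):

```python
import logging

# Raise the level of the logger TensorFlow's Python code writes to;
# INFO and WARNING records are then filtered out, ERROR still passes.
tf_logger = logging.getLogger('tensorflow')
tf_logger.setLevel(logging.ERROR)
```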

For 1.14 you will receive warnings if you do not change to use the v1 API, as follows:

# append to lines above
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)  # or any {DEBUG, INFO, WARN, ERROR, FATAL}

For previous versions of TensorFlow or TF-Learn Logging (v0.11.x and lower):

See the page below for information on TensorFlow logging; with the new update, you're able to set the logging verbosity to DEBUG, INFO, WARN, ERROR, or FATAL. For example:

tf.logging.set_verbosity(tf.logging.ERROR)

The page additionally goes over monitors which can be used with TF-Learn models. Here is the page.

This doesn't block all logging, though (only TF-Learn). I have two solutions; one is a 'technically correct' solution (Linux) and the other involves rebuilding TensorFlow.

For the first:

script -c 'python [FILENAME].py' | grep -v 'I tensorflow/'
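The `grep -v 'I tensorflow/'` part simply drops every line the C++ runtime prints (they are prefixed `I tensorflow/...`); the same filter, sketched in Python for illustration:

```python
# Sample of mixed output: a C++ runtime line and a test-result line.
output = [
    "I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally",
    "FAIL: test_creation (tests.test_tf.MyTest)",
]

# Keep only the lines that do not start with the runtime prefix,
# mirroring grep -v 'I tensorflow/'.
kept = [line for line in output if not line.startswith("I tensorflow/")]
```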

For the other, please see this answer, which involves modifying the source and rebuilding TensorFlow.
