Logging requests being served by tensorflow serving model

Problem Description

I have built a model using TensorFlow Serving and ran it on a server using this command:

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9009 --model_name=ETA_DNN_Regressor --model_base_path=//apps/node-apps/tensorflow-models-repository/ETA

But now this screen is stagnant and gives no information about incoming requests and responses. I tried using the TF_CPP_MIN_VLOG_LEVEL=1 flag, but it produces a huge amount of output and still no logging/monitoring of incoming requests/responses.

Please suggest how to view those logs.

The second problem I am facing is how to run this process in the background and monitor it constantly. Suppose I close the console; the process should keep running, and how do I reconnect to that process's console again and see real-time traffic?

Any suggestions will be helpful.

Answer

When you run the command below, you are starting a TensorFlow Model Server process which serves the model on a port number (9009 here).

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9009 --model_name=ETA_DNN_Regressor --model_base_path=//apps/node-apps/tensorflow-models-repository/ETA

You are not displaying the logs here; the model server is simply running. That is why the screen is stagnant. You need to pass the flag -v=1 when you run the above command to display the logs on your console:

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server -v=1 --port=9009 --model_name='model_name' --model_base_path=model_path

Now, regarding logging/monitoring of incoming requests/responses: you cannot monitor incoming requests/responses when VLOG is set to 1. VLOGs are verbose logs. You need to use log level 3 to display all errors, warnings, and some informational messages related to processing times (INFO1 and STAT1). See the following link for further details on VLOGs: http://webhelp.esri.com/arcims/9.2/general/topics/log_verbose.htm

Now for your second problem. Instead of setting flags, I would suggest using the environment variable provided by TensorFlow: export TF_CPP_MIN_VLOG_LEVEL=3. Set the environment variable before you start the server. After that, run the command below to start your server and store the logs in a log file named my_log:

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9009 --model_name='model_name' --model_base_path=model_path &> my_log &

Even though you close your console, all the logs get stored as long as the model server keeps running. Hope this helps.
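Putting it together, a minimal sketch assuming a bash shell (nohup and tail are standard Unix tools, not part of TensorFlow Serving; the log file name my_log just follows the command above):

# set verbose logging before starting the server
export TF_CPP_MIN_VLOG_LEVEL=3

# start the server in the background; nohup keeps it running after the console is closed
nohup bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9009 --model_name='model_name' --model_base_path=model_path &> my_log &

# reconnect later from any console and watch the traffic in real time
tail -f my_log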
