Reading the stdout from slave nodes with ipcluster

Question

I have started the cluster using

ipcluster start --n=8

and then used

from IPython.parallel import Client
c = Client()
dview = c[:]
e = [i for i in c]

I'm running processes on the slave nodes (e[0]-e[7]) which take a lot of time and I'd like them to send progress reports to the master so I can keep an eye on how far through they are.

There are two ways I can think of to do this, but so far I haven't been able to implement either of them, despite hours of trawling through question pages.

Either I want the nodes to push some data back to the master without being prompted, i.e. within the long process that runs on the nodes I implement a function which passes its progress to the master at regular intervals.

Or I could redirect the stdout of the nodes to that of the master and then just keep track of the progress using print. This is what I've been working on so far. Each node has its own stdout, so print doesn't do anything if run remotely. I've tried pushing sys.stdout to the nodes, but this just closes it.

I can't believe I'm the only person who wants to do this, so maybe I'm missing something very simple. How can I keep track of long processes happening remotely using IPython?

Answer

stdout is already captured, logged, and tracked, and arrives at Clients as it comes, before the result is complete.

IPython ships with an example script that monitors stdout/err of all engines, which can easily be tweaked to only monitor a subset of this information, etc.

In the Client itself, you can check the metadata dict for stdout/err (Client.metadata[msg_id].stdout) before results are done. Use Client.spin() to flush any incoming messages off of the zeromq sockets, to ensure this data is up-to-date.
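
For example, a minimal sketch of that polling loop might look like the following, reusing the Client setup from the question. The function some_long_task and the one-second poll interval are placeholders for illustration, not part of the original answer:

import time
from IPython.parallel import Client

c = Client()
dview = c[:]

# submit the long-running work asynchronously so the client stays free to poll
ar = dview.apply_async(some_long_task)  # some_long_task is a hypothetical task function

while not ar.ready():
    c.spin()  # flush any incoming messages off the zeromq sockets
    for msg_id in ar.msg_ids:
        out = c.metadata[msg_id].stdout  # partial stdout captured from that engine so far
        if out:
            print("%s: %s" % (msg_id, out.splitlines()[-1]))  # latest progress line
    time.sleep(1)  # poll interval, chosen arbitrarily here

print(ar.get())  # collect the final results once everything has finished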

If you want stdout to update frequently, make sure you call sys.stdout.flush() to guarantee that the stream is actually published at that point, rather than relying on implicit flushes, which may not happen until the work completes.
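
To illustrate, the hypothetical some_long_task used in the sketch above could report its progress like this, flushing explicitly after each print (the body is just a stand-in for real work):

def some_long_task(n=100):
    # runs on an engine; anything printed here is captured and forwarded to the client
    import sys
    import time
    for i in range(n):
        time.sleep(1)  # stand-in for one chunk of real work
        print("progress: %d/%d" % (i + 1, n))
        sys.stdout.flush()  # publish the stdout message now rather than when the task ends
    return n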
