Python to emulate remote tail -f?

Question

We have several application servers, and a central monitoring server.

We are currently running ssh with "tail -f" from the monitoring server to stream several text logfiles in realtime from the app servers.

The issue, apart from the brittleness of the whole approach, is that killing the ssh process can sometimes leave zombie tail processes behind. We've mucked around with using -t to create pseudo-terminals, but it still sometimes leaves the zombie processes around, and -t is apparently also causing issues elsewhere with the job-scheduling product we're using.

As a cheap-and-dirty solution until we can get proper centralised logging (Logstash and RabbitMQ, hopefully), I'm hoping to write a simple Python wrapper that will start ssh and "tail -f", still capture the output, but store the PID to a textfile on disk so we can kill the appropriate tail process later if need be.

I at first tried using subprocess.Popen, but then I hit issues with actually getting the "tail -f" output back in realtime (which then needs to be redirected to a file) - apparently there are going to be a host of blocking/buffer issues.
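One common way around those buffering problems is to ask for a line-buffered pipe and iterate over stdout as lines arrive. A minimal sketch of that approach, demonstrated with a local command standing in for the ssh + "tail -f" invocation (the actual remote argv is site-specific, and the remote side may still block-buffer on its end):

```python
import subprocess
import sys

def stream_lines(argv):
    """Run a command and yield its stdout line by line as it arrives.

    bufsize=1 with text mode gives line buffering on the local pipe;
    delays beyond that usually come from buffering on the remote side.
    """
    proc = subprocess.Popen(
        argv,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        bufsize=1,          # line-buffered (only meaningful in text mode)
        text=True,
    )
    try:
        for line in proc.stdout:
            yield line.rstrip("\n")
    finally:
        proc.terminate()
        proc.wait()

# Local stand-in for ["ssh", "app-server", "tail", "-f", "/path/to/log"]:
demo = list(stream_lines([sys.executable, "-c", "print('alpha'); print('beta')"]))
```

Each line is yielded as soon as it crosses the pipe, so the caller can print it or append it to a local file immediately.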

A few sources seemed to recommend using pexpect, or pxssh, or something like that. Ideally I'd like to use just Python and its included libraries, if possible - however, if a library is really the only way to do this, then I'm open to that.

Is there a nice easy way of getting Python to start up ssh with "tail -f", get the output in realtime printed to local STDOUT here (so I can redirect to a local file), and also saving the PID to a file to kill later? Or even if I don't use ssh with tail -f, some way of still streaming a remote file in (near) realtime that includes saving the PID to a file?
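A sketch of the wrapper described above, using only the standard library. The host and logfile names are hypothetical, and note the caveat: the PID written to disk is that of the local ssh process, so whether the remote tail dies with it depends on the pty issues discussed earlier.

```python
import os
import signal
import subprocess
import sys
import tempfile

def start_tail(argv, pidfile):
    """Start a command, record its PID on disk, and return the Popen."""
    proc = subprocess.Popen(argv)  # inherits our stdout, so it can be redirected
    with open(pidfile, "w") as fh:
        fh.write(str(proc.pid))
    return proc

def kill_from_pidfile(pidfile):
    """Read a PID back from disk and send it SIGTERM."""
    with open(pidfile) as fh:
        pid = int(fh.read().strip())
    os.kill(pid, signal.SIGTERM)

# Real usage would look something like (hypothetical host and path):
#   start_tail(["ssh", "app1", "tail", "-f", "/var/log/app.log"], "/tmp/tail.pid")
# Demonstrated here with a local long-running process instead:
pidfile = os.path.join(tempfile.mkdtemp(), "tail.pid")
proc = start_tail([sys.executable, "-c", "import time; time.sleep(60)"], pidfile)
kill_from_pidfile(pidfile)
proc.wait()
```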

Cheers, Victor

Just to clarify - we want the tail process to die when we kill the SSH process.

We want to start ssh and "tail -f" from the monitoring server, then when we Ctrl-C that, the tail process on the remote box should die as well - we don't want it to stay behind. Normally ssh with -t should fix it, but it isn't fully reliable, for reasons I don't understand, and it doesn't play nicely with our job scheduling.

Hence, using screen to keep the process alive at the other end is not what we want.

Answer

The paramiko module supports connecting via ssh from Python.

http://www.lag.net/paramiko/

pysftp has some examples of using it, and the execute-command method might be what you're looking for. It will create a file-like object for the command you execute. I can't say whether it gives you live data, though.

http://code.google.com/p/pysftp/
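For completeness, a hedged sketch of what streaming "tail -f" via paramiko might look like. The import is deferred so the rest of a script can load without paramiko installed; requesting a pty means that closing the channel should HUP the remote tail (an assumption about typical sshd behaviour), which is what addresses the zombie problem:

```python
def stream_remote_tail(host, username, logfile):
    """Yield lines from `tail -f logfile` on a remote host via paramiko.

    Deferred import: paramiko is a third-party dependency
    (pip install paramiko), so only this function requires it.
    """
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username)
    channel = client.get_transport().open_session()
    channel.get_pty()  # closing a pty-backed channel HUPs the remote tail
    channel.exec_command("tail -f %s" % logfile)
    try:
        buf = b""
        while True:
            data = channel.recv(4096)
            if not data:          # channel closed by the remote end
                break
            buf += data
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                yield line.decode("utf-8", "replace").rstrip("\r")
    finally:
        channel.close()
        client.close()
```

Each yielded line arrives as the remote tail produces it, so the caller can print it to local STDOUT or append it to a local file in near-realtime.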
