Data integrity question when collecting STDOUTs from multiple remote hosts over SSH


Question

Suppose you run the following commands:


ssh $host1 'while [ 1 ]; do sleep 1; echo "Hello from $HOSTNAME"; done' > /tmp/output &
ssh $host2 'while [ 1 ]; do sleep 1; echo "Hello from $HOSTNAME"; done' >> /tmp/output &
ssh $host3 'while [ 1 ]; do sleep 1; echo "Hello from $HOSTNAME"; done' >> /tmp/output &

Then the output looks like:


Hello from host1
Hello from host2
Hello from host3
Hello from host1
...

But what if I change it to


ssh $host1 'while [ 1 ]; do sleep 1; cat /some/large/file1.txt; done' > /tmp/output &
ssh $host2 'while [ 1 ]; do sleep 1; cat /some/large/file2.txt; done' >> /tmp/output &
ssh $host3 'while [ 1 ]; do sleep 1; cat /some/large/file3.txt; done' >> /tmp/output &

so that the stdout from each host won't fit into a single buffer? Would the data integrity of file[1-3].txt, though not the order, be maintained in this case? Is there a possibility that a fragment of one file slips into the middle of another file, like this?


[file1_fragment1] [file2_fragment1] [file1_fragment2] [file1_fragment3] [file3_fragment1] ...


Answer

I would say the possibility of that happening is pretty much 100% ;-) assuming the time taken to cat one file over the network is long.
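
This is easy to reproduce locally, without SSH (a hypothetical demonstration, assuming GNU coreutils; /tmp/demo is an arbitrary scratch path):


rm -f /tmp/demo
yes AAAAAAAA | head -n 200000 >> /tmp/demo &   # writer 1: ~1.8 MB of A-lines
yes BBBBBBBB | head -n 200000 >> /tmp/demo &   # writer 2: ~1.8 MB of B-lines
wait
uniq -c /tmp/demo | head    # prints alternating runs of A-lines and B-lines

Each buffered flush is appended atomically, but nothing keeps one writer's flushes adjacent to each other, so the file ends up as alternating runs from the two writers.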

The data will be written to /tmp/output on the local system in approximately the same order in which it is received. The shell doesn't hold back data from ssh command #2 or #3 until there's a break in #1, and besides, it has no idea where each iteration of file 1 ends.
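
If the goal is to keep each file contiguous, one common workaround (a sketch based on the question's setup, not part of the original answer) is to give each host its own output file and concatenate afterwards:


ssh "$host1" 'cat /some/large/file1.txt' > /tmp/output.1 &
ssh "$host2" 'cat /some/large/file2.txt' > /tmp/output.2 &
ssh "$host3" 'cat /some/large/file3.txt' > /tmp/output.3 &
wait    # let all three transfers finish
cat /tmp/output.1 /tmp/output.2 /tmp/output.3 > /tmp/output

Since only one process ever writes to each intermediate file, no interleaving can occur, and the final cat restores a deterministic order.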
