Putting a remote file into hadoop without copying it to local disk
Problem description
I am writing a shell script to put data into hadoop as soon as it is generated. I can ssh to my master node, copy the files to a folder there and then put them into hadoop. I am looking for a shell command that avoids copying the file to the master node's local disk first. To better explain what I need, here is what I have so far:
1) copy the file to the master node's local disk:
scp test.txt username@masternode:/folderName/
I have already set up the SSH connection using keys, so no password is needed for this.
2) I can use ssh to remotely execute the hadoop put command:
ssh username@masternode "hadoop dfs -put /folderName/test.txt hadoopFolderName/"
What I am looking for is how to pipe/combine these two steps into one and skip the local copy of the file on the master node's local disk. In other words, I want to pipe several commands together so that the file goes straight into hadoop without ever being written to the master node's disk.
Thanks
Recommended answer
Try this (untested):
cat test.txt | ssh username@masternode "hadoop dfs -put - hadoopFoldername/"
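One thing to note: when -put reads from stdin there is no source file name for hadoop to reuse, so depending on the Hadoop version you may need to give the destination as a full file path rather than just a folder. Below is a minimal wrapper sketch for the script described in the question (untested; username, masternode and hadoopFolderName are the same placeholders as above):

#!/bin/sh
# Stream a freshly generated file straight into HDFS via the master node,
# without writing a copy to the master node's local disk.
f="$1"
cat "$f" | ssh username@masternode "hadoop dfs -put - hadoopFolderName/$(basename "$f")"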
I've used similar tricks to copy directories around:
tar cf - . | ssh remote "(cd /destination && tar xvf -)"
This sends the output of the local tar into the input of the remote tar.
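The same pattern works in the other direction as well. For example (again just a sketch, with remote and /source as placeholders), to pull a remote directory into the current local directory:

ssh remote "(cd /source && tar cf - .)" | tar xvf -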