How to put files to a specific node in HDFS?

This article looks at whether you can put files onto specific nodes in HDFS. The question and answer below may be a useful reference if you are facing the same problem.

Problem description


Is it possible to tell HDFS where to store particular files?

Use case

I've just loaded batch #1 of files into HDFS and want to run a job/application on this data. However, I also have batch #2 that is still to be loaded. It would be nice if I could run the job/application on the first batch on, say, nodes 1 to 10, and load the new data onto, say, nodes 11 to 20, completely in parallel.

Initially I thought that NameNode federation (Hadoop 2.x) does exactly that, but it looks like federation only splits namespace, while DataNodes still provide blocks for all connected NameNodes.

So, is there a way to control the distribution of data in HDFS? And does it make sense at all?

Solution

Technically, you can, but I wouldn't.

If you want full control over where the data goes, you can extend BlockPlacementPolicy (see how does hdfs choose a datanode to store). This won't be easy to do and I don't recommend it.
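For reference, a minimal sketch of how a custom policy would be plugged in (assuming Hadoop 2.x; this is not from the original answer): the NameNode picks its placement policy from the dfs.block.replicator.classname property in hdfs-site.xml, which defaults to BlockPlacementPolicyDefault. The class name below is a hypothetical placeholder for your own BlockPlacementPolicy subclass; it has to be on the NameNode's classpath, and changing the property requires a NameNode restart.

<!-- hdfs-site.xml on the NameNode: selects the BlockPlacementPolicy implementation
     used when choosing DataNodes for new blocks.
     org.example.hdfs.NodeGroupPlacementPolicy is a made-up name standing in for
     your own subclass of BlockPlacementPolicy. -->
<property>
  <name>dfs.block.replicator.classname</name>
  <value>org.example.hdfs.NodeGroupPlacementPolicy</value>
</property>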

You can probably take steps to minimize the amount of traffic between your two sets of nodes with some clever setup to use rack-awareness to your advantage.
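To make the rack-awareness idea concrete (a sketch under assumptions, not part of the original answer): you can declare the two groups of machines as two different "racks" with a topology script, so that HDFS and the scheduler know about the grouping and prefer rack-local reads and task placement. In core-site.xml, net.topology.script.file.name points to a script that prints one rack path per IP/hostname Hadoop passes to it; the node01..node20 host names and the rack labels below are invented for the example.

#!/bin/bash
# Hypothetical topology script, referenced by net.topology.script.file.name in core-site.xml.
# Hadoop invokes it with one or more IPs/hostnames as arguments and expects one rack path
# per argument on stdout. Nodes 1-10 and 11-20 are mapped to two separate "racks".
for host in "$@"; do
  case "$host" in
    node0[1-9]|node10) echo "/batch1-rack" ;;
    node1[1-9]|node20) echo "/batch2-rack" ;;
    *)                 echo "/default-rack" ;;
  esac
done

Keep in mind that the default placement policy still writes off-rack replicas on purpose (that is part of its fault-tolerance model), so this reduces cross-group traffic rather than eliminating it; it does not pin a file to one group of nodes.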

That wraps up this article on how to put files to a specific node in HDFS. We hope the answer above is helpful, and thanks for supporting IT屋!
