HDFS federation


Problem description



I have a few basic questions regarding HDFS Federation.

Is it possible to read a file created on one name node from another name node in the cluster federation?

Does the current version of Hadoop support this feature?

Solution

Let me explain how NameNode federation works, as per the Apache website:

NameNode:

In order to scale the name service horizontally, federation uses multiple independent Namenodes/namespaces.

The Namenodes are federated; the Namenodes are independent and do not require coordination with each other.

The Datanodes are used as common storage for blocks by all the Namenodes. Each Datanode registers with all the Namenodes in the cluster. Datanodes send periodic heartbeats and block reports. They also handle commands from the Namenodes.

In summary:

Name nodes are mutually exclusive and do not require communication between them. Data nodes can be shared across multiple name nodes.

To answer your question: it's not possible. If the data is written to one name node, you must contact that name node alone to fetch the data; you cannot ask another name node.
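
Here is a minimal sketch of that point (my own illustration, not part of the original answer), assuming a federated cluster with two hypothetical NameNodes, ns1-nn and ns2-nn, and a hypothetical file path. A file written through the first NameNode exists only in that NameNode's namespace, so a client has to address that NameNode to read it.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FederatedReadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path file = new Path("/user/demo/sample.txt");   // hypothetical file

        // The file was written through ns1-nn, so it exists only in that
        // NameNode's namespace.
        FileSystem fs1 = FileSystem.get(URI.create("hdfs://ns1-nn:8020"), conf);
        System.out.println("ns1 sees it: " + fs1.exists(file));   // true

        // The other federated NameNode knows nothing about the file, because
        // the namespaces are independent and the NameNodes never coordinate.
        FileSystem fs2 = FileSystem.get(URI.create("hdfs://ns2-nn:8020"), conf);
        System.out.println("ns2 sees it: " + fs2.exists(file));   // false
    }
}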

Regarding your updated comments on data replication,

When the replication factor is three, HDFS's placement policy is to put one replica on one node in the local rack, another on a different node in the local rack, and the last on a different node in a different rack, as per the official documentation.

You can rely on this behaviour to read the data from a replica in a different rack if the local rack fails. But note that you are still reading data through the same federated NameNode, not through a different one.

One federated NameNode can't read data from another federated NameNode, but they can share the same set of DataNodes for read and write operations.
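
To illustrate the shared-DataNode point, here is a sketch (an assumption on my part, not from the original answer) of the client-side equivalent of the hdfs-site.xml keys that define two nameservices; the names ns1/ns2 and hosts ns1-nn/ns2-nn are hypothetical. A DataNode started with this configuration registers with both NameNodes and stores blocks for both namespaces, while the NameNodes themselves never talk to each other.

import org.apache.hadoop.conf.Configuration;

public class FederationConfSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Two independent NameNodes / namespaces in the same cluster.
        conf.set("dfs.nameservices", "ns1,ns2");
        conf.set("dfs.namenode.rpc-address.ns1", "ns1-nn:8020");
        conf.set("dfs.namenode.rpc-address.ns2", "ns2-nn:8020");

        // A DataNode reading this configuration registers with both
        // NameNodes and serves block storage for both namespaces.
        System.out.println(conf.get("dfs.nameservices"));
    }
}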

EDIT:

Within each federated nameservice, you can have automatic failover of the NameNode: if the active NameNode fails, the standby NameNode takes over its responsibilities.
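
For completeness, a minimal, hypothetical sketch of the client-side settings for HA within one federated nameservice (the logical name ns1 and hosts nn1-host/nn2-host are assumptions). The logical URI hdfs://ns1 resolves to whichever NameNode is currently active.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HaClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("dfs.nameservices", "ns1");
        conf.set("dfs.ha.namenodes.ns1", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.ns1.nn1", "nn1-host:8020");
        conf.set("dfs.namenode.rpc-address.ns1.nn2", "nn2-host:8020");
        // The client retries against the standby if the active NameNode fails.
        // (Server-side automatic failover additionally needs ZooKeeper
        // failover controllers, not shown here.)
        conf.set("dfs.client.failover.proxy.provider.ns1",
                 "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

        // The logical URI resolves to the active NameNode of the nameservice.
        FileSystem fs = FileSystem.get(URI.create("hdfs://ns1"), conf);
        System.out.println(fs.getUri());
    }
}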

Refer to the SE post below for more details:

How does Hadoop Namenode failover process works?
