httpfs error Operation category READ is not supported in state standby
Problem description
I am working on hadoop apache 2.7.1 and I have a cluster that consists of 3 nodes
nn1
nn2
dn1
nn1 is the dfs.default.name, so it is the master name node.
I have installed httpfs and started it, of course after restarting all the services. When nn1 is active and nn2 is standby, I can send this request
http://nn1:14000/webhdfs/v1/aloosh/oula.txt?op=open&user.name=root
from my browser, and a dialog to open or save this file appears. But when I kill the name node running on nn1 and start it again, then because of high availability nn1 becomes standby and nn2 becomes active.
So httpfs should still work here, even if nn1 becomes standby, but sending the same request now
http://nn1:14000/webhdfs/v1/aloosh/oula.txt?op=open&user.name=root
gives me the error
{"RemoteException":{"message":"Operation category READ is not supported in state standby","exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException"}}
Shouldn't httpfs get past nn1's standby status and fetch the file? Is that because of a wrong configuration, or is there another reason?
My core-site.xml is
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
Answer
It looks like HttpFs is not High Availability aware yet. This could be due to the missing configuration required for clients to connect to the current active NameNode.
Ensure the fs.defaultFS property in core-site.xml is configured with the correct nameservice ID.
If hdfs-site.xml has
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
then core-site.xml should have
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
Also configure the name of the Java class that the DFS Client will use to determine which NameNode is currently active and serving client requests.
Add this property to hdfs-site.xml:
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
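With both files in place, the pairing between the two settings can be sanity-checked offline. A minimal sketch, assuming the standard Hadoop *-site.xml layout and the example nameservice ID mycluster; the file paths are placeholders to adjust for a real installation:

```python
# Sanity-check the HA client settings described in this answer:
# fs.defaultFS must point at the nameservice ID from dfs.nameservices,
# and a matching failover proxy provider property must exist.
import xml.etree.ElementTree as ET

def read_props(path):
    """Read a Hadoop *-site.xml file into a {name: value} dict."""
    props = {}
    for prop in ET.parse(path).getroot().iter("property"):
        props[prop.findtext("name")] = prop.findtext("value")
    return props

def check_ha_config(core_site_path, hdfs_site_path):
    """Return a list of problems; an empty list means the pairing looks right."""
    core = read_props(core_site_path)
    hdfs = read_props(hdfs_site_path)
    ns = hdfs.get("dfs.nameservices")  # e.g. "mycluster"
    problems = []
    if ns is None:
        problems.append("dfs.nameservices is not set in hdfs-site.xml")
    if core.get("fs.defaultFS") != f"hdfs://{ns}":
        problems.append("fs.defaultFS does not point at the nameservice ID")
    if f"dfs.client.failover.proxy.provider.{ns}" not in hdfs:
        problems.append("failover proxy provider is missing for the nameservice")
    return problems
```

Run against files containing exactly the properties shown above, it should report no problems; a non-empty list points at the property to fix.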
Restart the Namenodes and HttpFs after adding the properties in all nodes.
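On the client side, the standby RemoteException can at least be recognized programmatically, which makes it possible to retry a request against the other node. A minimal sketch; the JSON shape is taken from the error shown in the question, and the retry loop itself is left out:

```python
# Recognize the "standby" RemoteException that HttpFs relays from a
# standby NameNode, so a caller can fall through to the other node.
import json

# Exact message from the error shown in the question.
STANDBY_MSG = "Operation category READ is not supported in state standby"

def is_standby_error(body):
    """Return True if an HttpFs JSON response body is the standby RemoteException."""
    try:
        data = json.loads(body)
    except ValueError:
        return False
    if not isinstance(data, dict):
        return False
    exc = data.get("RemoteException", {})
    return STANDBY_MSG in exc.get("message", "")
```

A caller could loop over the HttpFs hosts (nn1 and nn2 in the question) and move on whenever is_standby_error returns True for a response body, though the failover proxy configuration above remains the proper fix.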