hadoop Protocol message tag had invalid wire type


Problem description



I set up a Hadoop 2.6 cluster using two nodes with 8 cores each on Ubuntu 12.04. sbin/start-dfs.sh and sbin/start-yarn.sh both succeed, and I can see the following after running jps on the master node.

22437 DataNode
22988 ResourceManager
24668 Jps
22748 SecondaryNameNode
23244 NodeManager

The jps output on the slave node is

19693 DataNode
19966 NodeManager

I then run the PI example.

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 30 100

This gives me the following error log:

java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "Master-R5-Node/xxx.ww.y.zz"; destination host is: "Master-R5-Node":54310; 
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)

The problem seems to be with the HDFS file system, since trying the command bin/hdfs dfs -mkdir /user fails with a similar exception.

java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "Master-R5-Node/xxx.ww.y.zz"; destination host is: "Master-R5-Node":54310;

where xxx.ww.y.zz is the IP address of Master-R5-Node.

I have checked and followed all the recommendations on ConnectionRefused from the Apache wiki and from this site.

Despite a week-long effort, I cannot get it fixed.

Thanks.

Solution

There are many possible causes of the problem I faced, but I finally fixed it using some of the following steps.

  1. Make sure that you have the needed permissions on /hadoop and the HDFS temporary files. (You have to figure out where those live in your particular setup.)
  2. Remove the port number from fs.defaultFS in $HADOOP_CONF_DIR/core-site.xml. It should look like this:

 <configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://my.master.ip.address/</value>
        <description>NameNode URI</description>
    </property>
 </configuration>
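If you prefer to script this change rather than edit the file by hand, a sed one-liner along these lines strips the port from the URI. This is only a sketch: the /tmp/core-site.xml path and the sample contents below are hypothetical stand-ins for your real $HADOOP_CONF_DIR/core-site.xml, and the URI and port are the ones from the question.

```shell
# Hypothetical copy of core-site.xml; substitute your real $HADOOP_CONF_DIR/core-site.xml.
cat > /tmp/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://my.master.ip.address:54310/</value>
  </property>
</configuration>
EOF

# Strip the :port suffix from the hdfs:// URI in place.
sed -i 's#\(hdfs://[^:<]*\):[0-9]*#\1#' /tmp/core-site.xml

# The URI should now carry no explicit port.
grep -o 'hdfs://[^<]*' /tmp/core-site.xml   # hdfs://my.master.ip.address/
```

Remember to make the same change in the copy of core-site.xml on every node.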

  3. Add the following two properties to $HADOOP_CONF_DIR/hdfs-site.xml:

 <property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>false</value>
 </property>

 <property>
    <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
    <value>false</value>
 </property>
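A quick way to confirm the two properties actually ended up in the file is a grep over the config. Again a sketch: /tmp/hdfs-site.xml below is a hypothetical stand-in for your real $HADOOP_CONF_DIR/hdfs-site.xml.

```shell
# Hypothetical copy of hdfs-site.xml; substitute your real $HADOOP_CONF_DIR/hdfs-site.xml.
cat > /tmp/hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
    <value>false</value>
  </property>
</configuration>
EOF

# Both dfs.* property names should be present.
grep -c '<name>dfs\.' /tmp/hdfs-site.xml   # 2
```

On a live cluster, `hdfs getconf -confKey dfs.namenode.datanode.registration.ip-hostname-check` should then report the effective value; restart HDFS after changing the files so the new settings take effect.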

Voila! You should now be up and running!

