What is the meaning of EOF exceptions in hadoop namenode connections from hbase/filesystem?

This is both a general question about Java EOF exceptions and a question about Hadoop's EOF exception as it relates to jar interoperability. Comments and answers on either topic are acceptable.

Background

I've noticed some threads discussing a cryptic exception that is ultimately thrown by a "readInt" method. This exception seems to have a generic meaning that is independent of hadoop, but ultimately it is caused by interoperability problems between Hadoop jars.

In my case, I'm getting it when I try to create a new FileSystem object in Hadoop, in Java.
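
For reference, a minimal sketch of the kind of client code that triggers this trace, written against the Hadoop 0.20/1.x API that the trace corresponds to. The namenode URI and port are placeholders; the question's actual sb.HadoopRemote class is not shown.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class HadoopRemote {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder address: point the client at the namenode's IPC port.
            conf.set("fs.default.name", "hdfs://10.0.1.37:9000");
            // FileSystem.get() opens the RPC connection to the namenode;
            // this is the call that fails with the EOFException below.
            FileSystem fs = FileSystem.get(conf);
        }
    }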

Question

My question is: what is happening, and why does reading an integer throw an EOF exception? What "file" is this EOF exception referring to, and why would such an exception be thrown if two jars are not capable of interoperating?

Secondarily, I would also like to know how to fix this error so I can connect to, and read/write, Hadoop's filesystem remotely using the hdfs protocol with the Java API....

java.io.IOException: Call to /10.0.1.37:50070 failed on local exception: java.io.EOFException
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1139)
    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy0.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:111)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:213)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:180)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1514)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1548)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1530)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:228)
    at sb.HadoopRemote.main(HadoopRemote.java:35)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:375)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:819)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:720)

Solution

Regarding hadoop: I fixed the error! You need to make sure core-site.xml is serving 0.0.0.0 instead of 127.0.0.1 (localhost), so the namenode listens on all interfaces rather than only on the loopback address.
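
For illustration, the relevant entry in core-site.xml would look something like this on a Hadoop version of this vintage. The port, 9000, is a placeholder for whatever IPC port your namenode actually uses.

    <!-- core-site.xml: bind the default filesystem to all interfaces
         so remote clients can reach the namenode's IPC port. -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://0.0.0.0:9000</value>
      </property>
    </configuration>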

If you get the EOF exception, it means that the port is not accessible externally on that IP, so there is no data to read between the hadoop client / server IPC: the server never sends a response, and the client hits end-of-stream instead.
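
That also answers the generic Java half of the question: no actual file is involved. The stack trace ends in DataInputStream.readInt, which needs four bytes and throws EOFException whenever the underlying stream ends first; here, that stream is the socket the IPC client reads its response from. A minimal, Hadoop-free demonstration:

    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.EOFException;

    public class ReadIntEof {
        public static void main(String[] args) throws Exception {
            // Only two bytes available, but readInt() needs four.
            DataInputStream in = new DataInputStream(
                    new ByteArrayInputStream(new byte[] { 1, 2 }));
            try {
                in.readInt();
            } catch (EOFException e) {
                // Thrown because the stream ended before a full int arrived,
                // analogous to the truncated IPC response in the trace above.
                System.out.println("stream ended before 4 bytes were read");
            }
        }
    }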
