Exception "org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length" from Java


Problem description

I am trying to connect to a remote HDFS from a Java program running in Eclipse on my desktop. I am able to connect, but I get this exception while trying to read data:

Caused by: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length

Can someone please help?

I have very basic code for reading test data. The error comes from hdfs.open():

// Imports required by this snippet:
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

FileSystem hdfs = null;
String uriPath = "hdfs://" + Constants.HOST + ":" + Constants.PORT + "/test/hello_world.txt";
String hadoopBase = "hdfs://" + Constants.HOST + ":" + Constants.PORT;
Configuration conf = new Configuration();
conf.set("fs.default.name", hadoopBase);
URI uri;
InputStream inputStream = null;
try {
    uri = new URI(uriPath);
    hdfs = FileSystem.get(uri, conf);
    Path path = new Path(uri);
    // The exception is thrown here, when the client asks the NameNode for block locations.
    inputStream = hdfs.open(path);
    IOUtils.copyBytes(inputStream, System.out, 4096, false);
} catch (URISyntaxException | IOException e) {
    e.printStackTrace();
} finally {
    // Close the stream before the FileSystem, and guard against hdfs being null.
    IOUtils.closeStream(inputStream);
    if (hdfs != null) {
        try {
            hdfs.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Here is the complete exception:

java.io.IOException: Failed on local exception: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; 
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:785)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1485)
at org.apache.hadoop.ipc.Client.call(Client.java:1427)
at org.apache.hadoop.ipc.Client.call(Client.java:1337)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy10.getBlockLocations(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
at com.sun.proxy.$Proxy11.getBlockLocations(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:826)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:815)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:804)
at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:319)
at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:281)
at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:270)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1115)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:325)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:321)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:333)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:786)
at DataUtil.readData(DataUtil.java:29)
at main(Main.java:24)
Caused by: org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length
at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1800)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1155)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1052)

Recommended Answer

Check your core-site.xml:

<property>
    <name>fs.default.name</name>
    <value>hdfs://host:port</value>
</property>

This port can be 9000 or 8020. Make sure that you are using the same port in your code or command.
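
For reference, here is a minimal client-side sketch of pointing the client at the same address that core-site.xml declares. The host and port below (namenode.example.com, 8020) are placeholders, not values from the original question; substitute whatever your cluster's core-site.xml actually contains. Using the NameNode web UI port instead of the RPC port is a common way to end up with this exception.

import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsReadCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder address: must match the fs.default.name / fs.defaultFS value
        // in the cluster's core-site.xml (typically port 8020 or 9000).
        String nameNodeRpc = "hdfs://namenode.example.com:8020";

        Configuration conf = new Configuration();
        // fs.defaultFS is the current key; fs.default.name (used in the question's code)
        // is its deprecated alias. The port must be the NameNode RPC port, not the web UI port.
        conf.set("fs.defaultFS", nameNodeRpc);

        // try-with-resources closes the stream and the FileSystem in the right order.
        try (FileSystem fs = FileSystem.get(URI.create(nameNodeRpc), conf);
             InputStream in = fs.open(new Path("/test/hello_world.txt"))) {
            IOUtils.copyBytes(in, System.out, 4096, false);
        }
    }
}

If it still fails, it is worth confirming the address directly on the cluster (for example with hdfs getconf -confKey fs.defaultFS) rather than guessing between 9000 and 8020.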

