Error when trying to write to hdfs: Server IPC version 9 cannot communicate with client version 4


Problem description

I am trying to write a file to HDFS using Scala, and I keep getting the following error:

Caused by: org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
at org.apache.hadoop.ipc.Client.call(Client.java:1113)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
at org.apache.hadoop.hdfs.DFSClient.createNamenode(DFSClient.java:183)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:281)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:245)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1446)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263)
at bcomposes.twitter.Util$.<init>(TwitterStream.scala:39)
at bcomposes.twitter.Util$.<clinit>(TwitterStream.scala)
at bcomposes.twitter.StatusStreamer$.main(TwitterStream.scala:17)
at bcomposes.twitter.StatusStreamer.main(TwitterStream.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)

I installed Hadoop following this tutorial. The code below is what I use to write a sample file to HDFS:

import java.io.{BufferedWriter, OutputStreamWriter}
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val configuration = new Configuration()
// Connect to the NameNode at 192.168.11.153:54310
val hdfs = FileSystem.get(new URI("hdfs://192.168.11.153:54310"), configuration)
val file = new Path("hdfs://192.168.11.153:54310/s2013/batch/table.html")
// Delete any existing copy so the write starts fresh
if (hdfs.exists(file)) { hdfs.delete(file, true) }
val os = hdfs.create(file)
val br = new BufferedWriter(new OutputStreamWriter(os, "UTF-8"))
br.write("Hello World")
br.close()
hdfs.close()

The Hadoop version is 2.4.0, and the hadoop library version I use is 1.2.1. What should I change to make this work?

Solution

The hadoop and spark versions should be in sync: this error means a Hadoop 1.x client library (IPC version 4) is trying to talk to a Hadoop 2.x server (IPC version 9). (In my case, I am working with spark-1.2.0 and hadoop 2.2.0.)
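
One quick way to see which Hadoop version your client classpath actually carries (as opposed to what the cluster runs) is the VersionInfo utility that ships with Hadoop:

// Prints the version of the Hadoop client library found on the classpath
println(org.apache.hadoop.util.VersionInfo.getVersion)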

STEP 1 - Go to $SPARK_HOME

STEP 2 - Simply build Spark with mvn, passing the version of the hadoop client you want:

mvn -Pyarn -Phadoop-2.2 -Dhadoop.version=2.2.0 -DskipTests clean package
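
If you are unsure which version the cluster actually runs, the standard hadoop CLI reports it, and the -Dhadoop.version value above should match its output:

hadoop version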

STEP 3 - The Spark project's build definition should also declare the proper Spark and Hadoop versions, as in this build.sbt:

name := "smartad-spark-songplaycount"

version := "1.0"

scalaVersion := "2.10.4"

//libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.1"
libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.2.0"

libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.2.0"

libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs" % "2.2.0"

resolvers += "Akka Repository" at "http://repo.akka.io/releases/"

References

Building apache spark with mvn
