Spark HBase Join Error: object not serializable class: org.apache.hadoop.hbase.client.Result


Problem Description


I have data across two HBase tables and need to get the joined result from them.

What is the best way to get the joined result? I tried joining using RDDs, but I am getting the following error:

object not serializable (class: org.apache.hadoop.hbase.client.Result)

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.Result
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat
    import org.apache.hadoop.hbase.util.Bytes
    import org.apache.spark.storage.StorageLevel

    // sc is the SparkContext (e.g. the one provided by spark-shell)
    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set("hbase.zookeeper.quorum", "localhost")
    hbaseConf.set(TableInputFormat.INPUT_TABLE, "table1")

    // Scan the first table as (rowkey, Result) pairs
    val table1RDD = sc.newAPIHadoopRDD(hbaseConf, classOf[TableInputFormat],
      classOf[ImmutableBytesWritable], classOf[Result]).persist(StorageLevel.MEMORY_AND_DISK)

    // Re-key the first table on the value of cf:col1
    val table1Data = table1RDD.map { case (rowkey: ImmutableBytesWritable, values: Result) =>
      (Bytes.toString(values.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col1"))), values)
    }.persist(StorageLevel.MEMORY_AND_DISK)

    //-------------//

    hbaseConf.set(TableInputFormat.INPUT_TABLE, "interface")
    val table2RDD = sc.newAPIHadoopRDD(hbaseConf, classOf[TableInputFormat],
      classOf[ImmutableBytesWritable], classOf[Result]).persist(StorageLevel.MEMORY_AND_DISK)

    // Re-key the second table on the value of cf1:col1
    val table2Data = table2RDD.map { case (rowkey: ImmutableBytesWritable, values: Result) =>
      (Bytes.toString(values.getValue(Bytes.toBytes("cf1"), Bytes.toBytes("col1"))), values)
    }.persist(StorageLevel.MEMORY_AND_DISK)

    table2Data.foreach { case (key: String, values: Result) => println("---> key is " + key) }

    // Got the table data //

    val joinedRDD = table1Data.join(table2Data).persist(StorageLevel.MEMORY_AND_DISK)
    joinedRDD.foreach { case (key: String, results: (Result, Result)) =>
      println(" key is " + key)
      println(" value is ")
    }

StackTrace:

16/02/09 11:21:21 ERROR TaskSetManager: Task 0.0 in stage 6.0 (TID 6) had a not serializable result: org.apache.hadoop.hbase.client.Result
Serialization stack:
    - object not serializable (class: org.apache.hadoop.hbase.client.Result, value: keyvalues={
<My Data>
}); not retrying
16/02/09 11:21:21 INFO TaskSchedulerImpl: Removed TaskSet 6.0, whose tasks have all completed, from pool 
16/02/09 11:21:21 INFO DAGScheduler: Job 5 failed: foreach at LoopBacks.scala:92, took 0.103408 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0 in stage 5.0 (TID 5) had a not serializable result: org.apache.hadoop.hbase.client.Result
Serialization stack:

Solution

I solved this problem by using Spark Kryo serialization.

I added the following code:

conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer") conf.registerKryoClasses(Array(classOf[org.apache.hadoop.hbase.client.Result]))

That solved the problem.

This could be the solution for other similar problems as well.
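
For reference, here is a minimal sketch of where this configuration fits, assuming a standalone driver program; the app name is a placeholder, the master is left to spark-submit, and the key point is that Kryo has to be set on the SparkConf before the SparkContext is created:

    import org.apache.hadoop.hbase.client.Result
    import org.apache.spark.{SparkConf, SparkContext}

    // Kryo must be configured before the SparkContext is created,
    // so that Result objects can be serialized during the join's shuffle.
    val conf = new SparkConf()
      .setAppName("HBaseJoin") // placeholder app name
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .registerKryoClasses(Array(classOf[Result]))

    val sc = new SparkContext(conf)

Registering the class is not strictly required once Kryo is the serializer, but it keeps the serialized records smaller and matches the snippet above.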
