java.lang.ClassCastException: org.apache.hadoop.hbase.client.Result cannot be cast to org.apache.hadoop.hbase.client.Mutation


Problem description

I am getting this error while transferring values from one HBase table to another:

INFO mapreduce.Job: Task Id : attempt_1410946588060_0019_r_000000_2, Status : FAILED
Error: java.lang.ClassCastException: org.apache.hadoop.hbase.client.Result cannot be cast to org.apache.hadoop.hbase.client.Mutation
        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:87)
        at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:576)
        at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
        at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
        at org.apache.hadoop.mapreduce.Reducer.reduce(Reducer.java:150)
        at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:645)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:405)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
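
The cast that fails is the first frame of the trace: TableOutputFormat$TableRecordWriter.write. When a job writes to an HBase table, every value the reducer hands to context.write must be a Mutation, i.e. a Put or a Delete. A simplified sketch of what the record writer does (the shape of the HBase code, not a verbatim copy):

    // Simplified sketch of TableOutputFormat$TableRecordWriter.write
    // (generics elided): the reduce output value is treated as a
    // Mutation, so a Result arriving here fails with the exception above.
    public void write(Object key, Mutation value) throws IOException {
        if (value instanceof Put) {
            table.put((Put) value);        // upsert the row
        } else if (value instanceof Delete) {
            table.delete((Delete) value);  // delete the row
        } else {
            throw new IOException("Pass a Delete or a Put");
        }
    }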

My driver class:

    Configuration conf = HBaseConfiguration.create();

    // define the scan and the column families to scan
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("cf1"));

    // Job job = new Job(conf, "ExampleSummary");
    Job job = Job.getInstance(conf);
    job.setJarByClass(HBaseDriver.class);

    // define the input hbase table
    TableMapReduceUtil.initTableMapperJob(
        "test1",
        scan,
        HBaseMapper.class,
        ImmutableBytesWritable.class,
        Result.class,
        job);

    // define the output table
    TableMapReduceUtil.initTableReducerJob(
        "test2",
        HBaseReducer.class,
        job);

    job.waitForCompletion(true);
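
For reference, the overload of initTableMapperJob used here has the following shape; the fifth argument declares the class of the values the mapper will emit, and that is where the bug in this driver lives:

    // Signature of the TableMapReduceUtil helper called above:
    public static void initTableMapperJob(
        String table,                        // input HBase table name
        Scan scan,                           // scan with the families/columns to read
        Class<? extends TableMapper> mapper, // mapper class
        Class<?> outputKeyClass,             // map output KEY class
        Class<?> outputValueClass,           // map output VALUE class (Result.class above)
        Job job) throws IOException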

My mapper:

    public void map(ImmutableBytesWritable rowKey, Result columns, Context context)
            throws IOException, InterruptedException {
        try {
            // get the row key and convert it to a string
            String inKey = new String(rowKey.get());
            // the new key keeps only the date part
            String oKey = inKey.split("#")[0];
            // read the sales column as bytes, then convert to a string
            // (it was stored as a string from the hbase shell)
            byte[] bSales = columns.getValue(Bytes.toBytes("cf1"), Bytes.toBytes("sales"));
            String sSales = new String(bSales);
            Integer sales = new Integer(sSales);
            // emit the date as key and the sales value
            context.write(new ImmutableBytesWritable(oKey.getBytes()), new IntWritable(sales));
        } catch (RuntimeException e) {
            e.printStackTrace();
        }
    }
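
Note what the mapper actually emits: an ImmutableBytesWritable key holding the date prefix of a row key (for an invented key like 2014-09-05#store1, the emitted key would be 2014-09-05), and an IntWritable value. Those two types, not Result, are what the driver must register as the map output key and value classes.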

My reducer:

    public void reduce(ImmutableBytesWritable key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        try {
            int sum = 0;
            // loop through the sales values and add them to the sum
            for (IntWritable sales : values) {
                Integer intSales = new Integer(sales.toString());
                sum += intSales;
            }

            // create an hbase put with the date as the row key
            Put insHBase = new Put(key.get());
            // insert the sum value into hbase
            insHBase.add(Bytes.toBytes("cf1"), Bytes.toBytes("sum"), Bytes.toBytes(sum));
            // write the data to the hbase table
            context.write(null, insHBase);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
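
The reducer side is already consistent with what HBase expects: it writes a Put, and Put is a Mutation. Assuming class declarations along these lines (the question never shows them, so the generics here are an assumption):

    // Assumed declarations matching the map/reduce methods above.
    // TableMapper fixes the input types to (ImmutableBytesWritable, Result)
    // and lets you choose the map output types:
    public class HBaseMapper extends TableMapper<ImmutableBytesWritable, IntWritable> { /* map() above */ }

    // TableReducer hard-codes the job's output value type to Mutation,
    // which is why emitting a Put from reduce() is legal:
    public class HBaseReducer extends TableReducer<ImmutableBytesWritable, IntWritable, ImmutableBytesWritable> { /* reduce() above */ }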

Solution

I have found the solution. I just had to change this:

    TableMapReduceUtil.initTableMapperJob(
        "test1",
        scan,
        HBaseMapper.class,
        ImmutableBytesWritable.class,
        Result.class,
        job);

to this:

    TableMapReduceUtil.initTableMapperJob(
        "test1",
        scan,
        HBaseMapper.class,
        ImmutableBytesWritable.class,
        IntWritable.class,
        job);
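
This works because the fifth argument is the map output value class, and the mapper emits IntWritable values; Result is only the mapper's input type. Once the declared class matches what the mapper actually writes, the reduce side receives IntWritable values and hands TableOutputFormat the Put objects it expects. With Result.class declared instead, values of the wrong type reach the output format, which can only write Mutations, hence the ClassCastException.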
