Running Pig query over data stored in Hive


Problem description

I would like to know how to run Pig queries over data stored in Hive format. I have configured Hive to store compressed data (using this tutorial: http://wiki.apache.org/hadoop/Hive/CompressedStorage).

Before that, I simply used the normal Pig load function with Hive's delimiter (^A). But now Hive stores the data in compressed sequence files. Which load function should I use?
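
For reference, this is roughly what I had before compression was enabled (a sketch; the three-column schema is just an example borrowed from the table further below):

-- Plain-text rows with fields separated by Hive's default
-- Ctrl-A (\u0001) delimiter; the schema is illustrative.
a = LOAD '/user/hive/warehouse/table' USING PigStorage('\u0001') AS (ts:int, user_id:int, url:chararray);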

Note that I don't need tight integration like the one mentioned here (Using Hive with Pig); I just want to know which load function to use to read the compressed sequence files that Hive generates.

Thanks for all the answers.

Solution

Here's what I found out: using HiveColumnarLoader makes sense if you store the data as an RCFile. To load a table with it, you first need to register a few jars:

register /srv/pigs/piggybank.jar
register /usr/lib/hive/lib/hive-exec-0.5.0.jar
register /usr/lib/hive/lib/hive-common-0.5.0.jar

a = LOAD '/user/hive/warehouse/table' USING org.apache.pig.piggybank.storage.HiveColumnarLoader('ts int, user_id int, url string');
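The loader takes the table schema as its constructor string and should expose the columns under those names, so downstream statements can project them directly; a quick sanity check (assuming the schema above):

-- Project two of the declared columns and inspect a small sample;
-- field names come from the schema string passed to the loader.
b = FOREACH a GENERATE ts, url;
c = LIMIT b 10;
DUMP c;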

To load data from a sequence file you have to use PiggyBank (as in the previous example). The SequenceFileLoader from PiggyBank should handle compressed files:

register /srv/pigs/piggybank.jar
DEFINE SequenceFileLoader org.apache.pig.piggybank.storage.SequenceFileLoader();
a = LOAD '/user/hive/warehouse/table' USING SequenceFileLoader AS (int, int);
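SequenceFileLoader produces one (key, value) tuple per record, so it can help to name the two fields in the AS clause; a variant of the same load (the field names and int types are assumptions about what the table stores):

-- Same load with named fields; adjust the types to your data.
a = LOAD '/user/hive/warehouse/table' USING SequenceFileLoader AS (key:int, val:int);
b = FOREACH a GENERATE val;
DUMP b;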

This doesn't work with Pig 0.7, because it is unable to read the BytesWritable type and cast it to a Pig type, and you get this exception:

2011-07-01 10:30:08,589 WARN org.apache.pig.piggybank.storage.SequenceFileLoader: Unable to translate key class org.apache.hadoop.io.BytesWritable to a Pig datatype
2011-07-01 10:30:08,625 WARN org.apache.hadoop.mapred.Child: Error running child
org.apache.pig.backend.BackendException: ERROR 0: Unable to translate class org.apache.hadoop.io.BytesWritable to a Pig datatype
    at org.apache.pig.piggybank.storage.SequenceFileLoader.setKeyType(SequenceFileLoader.java:78)
    at org.apache.pig.piggybank.storage.SequenceFileLoader.getNext(SequenceFileLoader.java:132)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:142)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:448)
    at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:315)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:217)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1063)
    at org.apache.hadoop.mapred.Child.main(Child.java:211)

How to compile piggybank is described here: Unable to build piggybank -> /home/build/ivy/lib does not exist
