Apache PIG, ELEPHANTBIRDJSON Loader
Problem Description
I'm trying to parse the input below (there are 2 records in this input) using the Elephantbird JSON loader:
[{"node_disk_lnum_1":36,"node_disk_xfers_in_rate_sum":136.40000000000001,"node_disk_bytes_in_rate_22": 187392.0, "node_disk_lnum_7": 13}]
[{"node_disk_lnum_1": 36, "node_disk_xfers_in_rate_sum": 105.2,"node_disk_bytes_in_rate_22": 123084.8, "node_disk_lnum_7":13}]
Here is my syntax:
register '/home/data/Desktop/elephant-bird-pig-4.1.jar';

a = LOAD '/pig/tc1.log' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad') as (json:map[]);

b = FOREACH a GENERATE
    flatten(json#'node_disk_lnum_1') AS node_disk_lnum_1,
    flatten(json#'node_disk_xfers_in_rate_sum') AS node_disk_xfers_in_rate_sum,
    flatten(json#'node_disk_bytes_in_rate_22') AS node_disk_bytes_in_rate_22,
    flatten(json#'node_disk_lnum_7') AS node_disk_lnum_7;

DESCRIBE b;
Result of DESCRIBE b:
b: {node_disk_lnum_1: bytearray, node_disk_xfers_in_rate_sum: bytearray, node_disk_bytes_in_rate_22: bytearray, node_disk_lnum_7: bytearray}
c = FOREACH b GENERATE node_disk_lnum_1;
DESCRIBE c;
c: {node_disk_lnum_1: bytearray}
DUMP c;
Expected Result:
36, 136.40000000000001, 187392.0, 13
36, 105.2, 123084.8, 13
It throws the error below:
2017-02-06 01:05:49,337 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
2017-02-06 01:05:49,386 [main] INFO org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
2017-02-06 01:05:49,387 [main] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
2017-02-06 01:05:49,390 [main] INFO org.apache.pig.newplan.logical.rules.ColumnPruneVisitor - Map key required for a: $0->[node_disk_lnum_1, node_disk_xfers_in_rate_sum, node_disk_bytes_in_rate_22, node_disk_lnum_7]
2017-02-06 01:05:49,395 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2017-02-06 01:05:49,398 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2017-02-06 01:05:49,398 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2017-02-06 01:05:49,425 [main] INFO org.apache.pig.tools.pigstats.mapreduce.MRScriptState - Pig script settings are added to the job
2017-02-06 01:05:49,426 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2017-02-06 01:05:49,428 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2998: Unhandled internal error. com/twitter/elephantbird/util/HadoopCompat
Please help; what am I missing?
Solution

You do not have any nested data in your JSON, so remove the '-nestedLoad' argument:
a = LOAD '/pig/tc1.log' USING com.twitter.elephantbird.pig.load.JsonLoader() as (json:map[]);
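For reference, a minimal sketch of the corrected script end to end. The explicit casts are an assumption (not part of the original answer) to turn the bytearray fields shown by DESCRIBE into the typed values in the expected output; the extra register line is also an assumption, since the ERROR 2998 trace references com/twitter/elephantbird/util/HadoopCompat, which usually indicates the elephant-bird-hadoop-compat jar is missing from the classpath:

register '/home/data/Desktop/elephant-bird-pig-4.1.jar';
-- assumption: the HadoopCompat class lives in this companion jar; adjust the path/version to your setup
register '/home/data/Desktop/elephant-bird-hadoop-compat-4.1.jar';

-- no '-nestedLoad': the records contain no nested data
a = LOAD '/pig/tc1.log' USING com.twitter.elephantbird.pig.load.JsonLoader() as (json:map[]);

-- cast the bytearray map values to typed fields (casts are an assumption, chosen to match the expected result)
b = FOREACH a GENERATE
    (int)    json#'node_disk_lnum_1'            AS node_disk_lnum_1,
    (double) json#'node_disk_xfers_in_rate_sum' AS node_disk_xfers_in_rate_sum,
    (double) json#'node_disk_bytes_in_rate_22'  AS node_disk_bytes_in_rate_22,
    (int)    json#'node_disk_lnum_7'            AS node_disk_lnum_7;

DUMP b;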