How to load a file with a JSON array per line in Pig Latin


Question

An existing script creates text files with an array of JSON objects per line, e.g.,

[{"foo":1,"bar":2},{"foo":3,"bar":4}]
[{"foo":5,"bar":6},{"foo":7,"bar":8},{"foo":9,"bar":0}]
…

I would like to load this data in Pig, exploding the arrays and processing each individual object.

I have looked at using the JsonLoader in Twitter's Elephant Bird, to no avail. It doesn't complain about the JSON, but I get "Successfully read 0 records" when running the following:

register '/tmp/elephant-bird/core/target/elephant-bird-core-4.3-SNAPSHOT.jar';
register '/tmp/elephant-bird/hadoop-compat/target/elephant-bird-hadoop-compat-4.3-SNAPSHOT.jar';
register '/tmp/elephant-bird/pig/target/elephant-bird-pig-4.3-SNAPSHOT.jar';
register '/usr/local/lib/json-simple-1.1.1.jar';

a = load '/path/to/file.json' using com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad=true');
dump a;

I have also tried loading the file as normal, treating each line as containing a single-column chararray, and then trying to parse that as JSON, but I can't find a pre-existing UDF which seems to do the trick.

Any ideas?

Answer

Like Donald said, you should use a UDF here. At Xplenty we wrote JsonStringToBag to complement Elephant Bird's JsonStringToMap.
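
For reference, a minimal sketch of how the two UDFs might be combined in a Pig script. The package and class name com.twitter.elephantbird.pig.piggybank.JsonStringToBag, its return type (a bag of maps, one per JSON object), and the jar paths are assumptions carried over from the question and the Xplenty fork, not verified here:

register '/tmp/elephant-bird/core/target/elephant-bird-core-4.3-SNAPSHOT.jar';
register '/tmp/elephant-bird/hadoop-compat/target/elephant-bird-hadoop-compat-4.3-SNAPSHOT.jar';
register '/tmp/elephant-bird/pig/target/elephant-bird-pig-4.3-SNAPSHOT.jar';
register '/usr/local/lib/json-simple-1.1.1.jar';

-- Load each line as a single chararray instead of letting a JSON loader parse it.
lines = load '/path/to/file.json' as (line:chararray);

-- Assumed behaviour: JsonStringToBag turns the JSON array on each line into a bag
-- of maps (mirroring what JsonStringToMap produces for a single object), so
-- flattening it yields one row per array element.
objects = foreach lines generate
    flatten(com.twitter.elephantbird.pig.piggybank.JsonStringToBag(line)) as m;

-- Pull individual fields out of each object for further processing.
result = foreach objects generate (int)m#'foo' as foo, (int)m#'bar' as bar;

dump result;

The point of the sketch is that the file is loaded as plain text and the JSON parsing happens inside the foreach, so no loader ever has to understand the top-level array on each line.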

