Pig: Hadoop jobs Fail
Problem Description
I have a pig script that queries data from a CSV file.
The script has been tested locally with small and large .csv files.
On a small cluster, it starts processing the script and fails after completing 40% of the call.
The error is just: Failed to read data from "path of the file".
What I infer is that the script could read the file, but there is some connection drop or a message loss.
But I only get the error mentioned above.
Recommended Answer
An answer to the general problem would be to change the error levels in the configuration files, adding these two lines to mapred-site.xml:
log4j.logger.org.apache.hadoop = error,A
log4j.logger.org.apache.pig = error,A
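Note that the two lines above are log4j properties syntax and assume an appender named A is defined elsewhere; settings in this form normally live in log4j.properties in the Hadoop conf directory rather than in the XML-formatted mapred-site.xml. If you want to stay inside mapred-site.xml, a hedged alternative on Hadoop 2.x is to adjust the task log levels through regular properties (the DEBUG value here is an illustrative assumption, chosen to surface more detail from failing tasks):
<property>
  <name>mapreduce.map.log.level</name>
  <value>DEBUG</value> <!-- log level for map tasks; default is INFO -->
</property>
<property>
  <name>mapreduce.reduce.log.level</name>
  <value>DEBUG</value> <!-- log level for reduce tasks; default is INFO -->
</property>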
In my case, it was an OutOfMemory exception.
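If your tasks are also dying with OutOfMemory, the usual remedy is to give them a larger heap in mapred-site.xml. A minimal sketch using the Hadoop 2.x property names (on Hadoop 1.x the rough equivalent is mapred.child.java.opts); the 2048 MB container size and 1638 MB heap are illustrative assumptions, not values from the original answer:
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value> <!-- container size for each map task (illustrative) -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1638m</value> <!-- JVM heap, kept below the container size -->
</property>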