Python Hadoop Streaming Error "ERROR streaming.StreamJob: Job not Successful!" and Stack trace: ExitCodeException exitCode=134

Problem description

I am trying to run a Python script on a Hadoop cluster using Hadoop Streaming for sentiment analysis. The same script runs properly on my local machine and produces output.
To run it on the local machine, I use this command:

$ cat /home/MB/analytics/Data/input/* | ./new_mapper.py

and to run it on the Hadoop cluster, I use the command below:

$ hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.5.0-mr1-cdh5.2.0.jar -mapper "python $PWD/new_mapper.py" -reducer "$PWD/new_reducer.py" -input /user/hduser/Test_04012015_Data/input/* -output /user/hduser/python-mr/out-mr-out

The sample code of my script is:

#!/usr/bin/env python
import re
import sys

# classifier, feature_select, referenceSets and testSets are expected to be
# defined/loaded elsewhere in the full script; only the mapper loop is shown here.

def main(argv):
    i = 0
    for line in sys.stdin:
        # each input record is a comma-separated line; column 7 holds the text
        line = line.split(',')
        t_text = re.sub(r'[?|$|.|!|,|!|?|;]', r'', line[7])
        words = re.findall(r"[\w']+", t_text.rstrip())
        predicted = classifier.classify(feature_select(words))
        i = i + 1
        referenceSets[predicted].add(i)
        testSets[predicted].add(i)
        print line[7] + '\t' + predicted

if __name__ == "__main__":
    main(sys.argv)

The stack trace of the exception is:

    15/04/22 12:55:14 INFO mapreduce.Job: Task Id : attempt_1429611942931_0010_m_000001_0, Status : FAILED
    Error: java.io.IOException: Stream closed at java.lang.ProcessBuilder$NullOutputStream.write(ProcessBuilder.java:434)
    ...

    Exit code: 134
    Exception message: /bin/bash: line 1:  1691 Aborted 
(core dumped) /usr/lib/jvm/java-7-oracle-cloudera/bin/java
-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Djava.net.preferIPv4Stack=true -Xmx525955249
-Djava.io.tmpdir=/yarn/nm/usercache/hduser/appcache/application_1429611942931_0010/container_1429611942931_0010_01_000016/tmp
-Dlog4j.configuration=container-log4j.properties
-Dyarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1429611942931_0010/container_1429611942931_0010_01_000016 -Dyarn.app.container.log.filesize=0 
-Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 192.168.0.122 48725 attempt_1429611942931_0010_m_000006_1 16 > /var/log/hadoop-yarn/container/application_1429611942931_0010/container_1429611942931_0010_01_000016/stdout 2> /var/log/hadoop-yarn/container/application_1429611942931_0010/container_1429611942931_0010_01_000016/stderr
    ....

    15/04/22 12:55:47 ERROR streaming.StreamJob: Job not Successful!
    Streaming Command Failed!

I tried to look at the logs, but Hue only shows me this error. Please suggest what is going wrong.

Solution

It looks like you forgot to add the file new_mapper.py to your job.

Basically, your job tries to run the Python script new_mapper.py, but this script is missing on the node that runs your mapper.

You must add this file to your job, using the option -file <local_path_to_your_file>.
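For example, the submit command from the question might look like the following sketch once the scripts are shipped with -file (paths and file names are copied from the question; both scripts are invoked through python here so their execute bit does not matter, and they are referenced by file name because -file copies them into each task's working directory):

$ hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.5.0-mr1-cdh5.2.0.jar \
    -file $PWD/new_mapper.py  -mapper "python new_mapper.py" \
    -file $PWD/new_reducer.py -reducer "python new_reducer.py" \
    -input /user/hduser/Test_04012015_Data/input/* \
    -output /user/hduser/python-mr/out-mr-out

On newer Hadoop releases the generic -files option (a comma-separated list of local files) serves the same purpose as repeating -file.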

See documentation and example here: https://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-core/HadoopStreaming.html#Streaming_Command_Options
