Oozie suppresses logging from shell job action?


Question


I have a simple workflow (see below) which runs a shell script. The shell script runs a pyspark script, which moves a file from a local folder to HDFS.


When I run the shell script by itself, it works perfectly; logs are redirected to a file by `> spark.txt 2>&1` right in the shell script.
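The redirection behaves the same whether the script runs by hand or under a launcher; a minimal, standalone demonstration (the file name `spark.txt` is taken from the question, the echoed lines are made up):

```shell
# '>' sends stdout to spark.txt, and '2>&1' then points stderr at the
# same destination, so both streams end up in the one file.
{ echo "from stdout"; echo "from stderr" 1>&2; } > spark.txt 2>&1
cat spark.txt
```

Both lines land in `spark.txt`, and nothing reaches the terminal until the `cat`.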


But when I submit the Oozie job with the following workflow, the output from the shell seems to be suppressed. I tried to redirect all possible Oozie logs with `-verbose -log > oozie.txt 2>&1`, but it didn't help.


The workflow finishes successfully (status SUCCEEDED, no error log), but I see the folder is not copied to HDFS; when I run the script alone (not through Oozie), everything is fine.

<action name="forceLoadFromLocal2hdfs">
<shell xmlns="uri:oozie:shell-action:0.1">
  <job-tracker>${jobTracker}</job-tracker>
  <name-node>${nameNode}</name-node>
  <configuration>
    <property>
      <name>mapred.job.queue.name</name>
      <value>${queueName}</value>
    </property>
  </configuration>
  <exec>driver-script.sh</exec>
  <argument>s</argument>
  <argument>script.py</argument>
  <!-- arguments for py script -->
  <argument>hdfsPath</argument>
  <argument>localPath</argument>
  <file>driver-script.sh#driver-script.sh</file>
</shell>
<ok to="end"/>
<error to="killAction"/>
</action>
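If the goal is only to surface what the script prints, one option (a sketch, not part of the original question) is the shell action's `<capture-output/>` element, which makes `key=value` lines written to stdout available to later actions:

```xml
<!-- Sketch: the same action with <capture-output/> added. Lines the script
     writes to stdout in Java-properties ("key=value") form then become
     readable as ${wf:actionData('forceLoadFromLocal2hdfs')['key']}. -->
<action name="forceLoadFromLocal2hdfs">
  <shell xmlns="uri:oozie:shell-action:0.1">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <exec>driver-script.sh</exec>
    <argument>s</argument>
    <argument>script.py</argument>
    <file>driver-script.sh#driver-script.sh</file>
    <capture-output/>
  </shell>
  <ok to="end"/>
  <error to="killAction"/>
</action>
```

Note that captured output is size-limited, so this suits status flags rather than full logs.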

Many thanks!


Answer

For me, the shell action's output turned up in the YARN container logs, retrieved with:

yarn logs -applicationId [application_xxxxxx_xxxx]

