Oozie shell script action


Question

I am exploring the capabilities of Oozie for managing Hadoop workflows. I am trying to set up a shell action which invokes some hive commands. My shell script hive.sh looks like:

#!/bin/bash
# Run the HQL file shipped alongside this workflow
hive -f hivescript

Here the hive script (which has been tested independently) creates some tables and so on. My question is where to keep hivescript and how to reference it from the shell script.

I've tried two approaches: first using a local path, like hive -f /local/path/to/file, and second using a relative path as above, hive -f hivescript. In the latter case I keep hivescript in the Oozie app path directory (alongside hive.sh and workflow.xml) and ship it to the distributed cache via workflow.xml.

With both methods I get the error message "Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]" on the Oozie web console. Additionally, I've tried using HDFS paths in the shell script, and as far as I know that does not work.

My job.properties file:

nameNode=hdfs://sandbox:8020
jobTracker=hdfs://sandbox:50300   
queueName=default
oozie.libpath=${nameNode}/user/oozie/share/lib
oozie.use.system.libpath=true
oozieProjectRoot=${nameNode}/user/sandbox/poc1
appPath=${oozieProjectRoot}/testwf
oozie.wf.application.path=${appPath}

And workflow.xml:

<action name="shell-node">
    <shell xmlns="uri:oozie:shell-action:0.1">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <property>
                <name>mapred.job.queue.name</name>
                <value>${queueName}</value>
            </property>
        </configuration>
        <exec>${appPath}/hive.sh</exec>
        <file>${appPath}/hive.sh</file>
        <file>${appPath}/hive_pill</file>
    </shell>
    <ok to="end"/>
    <error to="end"/>
</action>

<end name="end"/>

My objective is to use Oozie to call a hive script through a shell script; any suggestions are appreciated.

Answer

One thing that has always been tricky about Oozie workflows is the execution of bash scripts. Hadoop was created to be massively parallel, so the architecture behaves very differently than you might expect.

When an Oozie workflow executes a shell action, it receives resources from your job tracker or YARN on any of the nodes in your cluster. This means that using a local path for your file will not work, since the local copy exists only on your edge node. If the job happens to spawn on your edge node it would work, but any other time it would fail, and that distribution is random.

To get around this, I found it best to keep the files I needed (including the sh scripts) in HDFS, either in a lib space or in the same location as my workflow.

Here is a good way to approach what you are trying to achieve.

<shell xmlns="uri:oozie:shell-action:0.1">
    <exec>hive.sh</exec>
    <file>/user/lib/hive.sh#hive.sh</file>
    <file>ETL_file1.hql#hivescript</file>
</shell>
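
For context, here is a minimal sketch of how that action could sit inside a complete workflow definition. The workflow and action names, the kill node, and its message are illustrative assumptions, not part of the original answer:

<workflow-app name="hive-shell-wf" xmlns="uri:oozie:workflow:0.4">
    <start to="hive-shell"/>
    <action name="hive-shell">
        <shell xmlns="uri:oozie:shell-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <exec>hive.sh</exec>
            <file>/user/lib/hive.sh#hive.sh</file>
            <file>ETL_file1.hql#hivescript</file>
        </shell>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Shell action failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>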

One thing you will notice is that the exec is just hive.sh, since we are assuming the file will be moved to the base directory where the shell action runs.

To make sure that last note holds, you must include the file's HDFS path; this forces Oozie to distribute that file with the action. In your case, the hive script launcher should be coded only once and simply fed different files. Since we have a one-to-many relationship, hive.sh should be kept in a lib space and not distributed with every workflow.

Finally, you see the line:

<file>ETL_file1.hql#hivescript</file>

This line does two things. Before the #, we have the location of the file. It is just the file name, since we should distribute our distinct hive files with our workflows:

user/directory/workflow.xml
user/directory/ETL_file1.hql

and the node running the sh will have it distributed automagically. Lastly, the part after the # is the name we assign to it inside the sh script. This lets you reuse the same script over and over and simply feed it different files.
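
To make that concrete, here is a minimal sketch of the reusable hive.sh launcher; the body is an assumption, but it matches the pattern above, where hivescript is the symlink name from the <file> entry:

#!/bin/bash
# "hivescript" is the symlink Oozie creates in the action's working
# directory from the <file>ETL_file1.hql#hivescript</file> entry, so the
# same launcher works no matter which HQL file a workflow ships with it.
hive -f hivescript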

A note on HDFS directories:

If the file is nested inside the same directory as the workflow, then you only need to specify the child path:

user/directory/workflow.xml
user/directory/hive/ETL_file1.hql

yields:

<file>hive/ETL_file1.hql#hivescript</file>

But if the path is outside of the workflow directory, you will need the full path:

user/directory/workflow.xml
user/lib/hive.sh

yields:

<file>/user/lib/hive.sh#hive.sh</file>
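
As a setup step, the files referenced above must already exist at those HDFS locations. One way to upload them (reusing the example paths, which are assumptions from the listings above) might be:

hdfs dfs -mkdir -p /user/lib /user/directory
hdfs dfs -put -f hive.sh /user/lib/
hdfs dfs -put -f ETL_file1.hql /user/directory/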

Hope this helps.
