Unable to get Hadoop job information through Java client


Problem description

I am using Hadoop 1.2.1 and trying to print job details through the Java client, but it prints nothing. Here is my Java code:

    import java.net.InetSocketAddress;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobID;
    import org.apache.hadoop.mapred.JobStatus;

    // Load the cluster configuration files explicitly.
    Configuration configuration = new Configuration();
    configuration.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
    configuration.addResource(new Path("/usr/local/hadoop/conf/hdfs-site.xml"));
    configuration.addResource(new Path("/usr/local/hadoop/conf/mapred-site.xml"));

    // Connect to the JobTracker and list all jobs it knows about.
    InetSocketAddress jobtracker = new InetSocketAddress("localhost", 54311);
    JobClient jobClient = new JobClient(jobtracker, configuration);
    jobClient.setConf(configuration);
    JobStatus[] jobs = jobClient.getAllJobs();
    System.out.println(jobs.length); // it is printing 0
    for (int i = 0; i < jobs.length; i++) {
        JobStatus js = jobs[i];
        JobID jobId = js.getJobID();
        System.out.println(jobId);
    }

But from the job tracker history I can see three jobs (here is the screenshot). Can anybody tell me where I am going wrong? I just want to print all the job details.

Here are my configuration files:

core-site.xml

<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/data/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system.  A URI whose
scheme and authority determine the FileSystem implementation.  The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class.  The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>

hdfs-site.xml

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.  The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time.
</description>
</property>
</configuration>

mapred-site.xml

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.
</description>
</property>
</configuration>


Recommended answer

    jobClient.displayTasks(jobID, "map", "completed");

where the job ID is

    JobID jobID = new JobID(jobIdentifier, jobNumber);

and the map task reports can be fetched with

    TaskReport[] taskReportList = jobClient.getMapTaskReports(jobID);
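Putting the answer's pieces together, a minimal sketch might look like the following. It requires a running Hadoop 1.x JobTracker on localhost:54311, and the identifier string `"201402110639"` and job number `1` are placeholder assumptions; take them from a real job ID of the form `job_<identifier>_<number>` shown in your JobTracker UI.

```java
import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.TaskReport;

public class JobDetails {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        InetSocketAddress jobtracker = new InetSocketAddress("localhost", 54311);
        JobClient jobClient = new JobClient(jobtracker, configuration);

        // Build the JobID from its two parts: the JobTracker start-time
        // identifier and the job number. Both values are placeholders here;
        // copy them from a real job ID such as job_<identifier>_<number>.
        JobID jobID = new JobID("201402110639", 1);

        // Print the completed map tasks of that job to stdout.
        jobClient.displayTasks(jobID, "map", "completed");

        // Or inspect the map task reports programmatically.
        TaskReport[] taskReportList = jobClient.getMapTaskReports(jobID);
        for (TaskReport report : taskReportList) {
            System.out.println(report.getTaskID() + " " + report.getProgress());
        }
    }
}
```

Note that this uses the old `org.apache.hadoop.mapred` API, matching the question's Hadoop 1.2.1 setup.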

