When running a Hadoop example, I encountered ".staging/job_1541144755485_0002/job.splitmetainfo does not exist". What can I do?


Problem description


My configuration is as follows:
I used two machines for the Hadoop experiment: pc720 (10.10.1.1) and pc719 (10.10.1.2).
The JDK (version 1.8.0_181) was installed via apt-get. Hadoop 2.7.1 was downloaded from https://archive.apache.org/dist/hadoop/common/hadoop-2.7.1/ and put in /opt/.


Step1:
I configured /etc/bash.bashrc, adding:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64  
export PATH=${JAVA_HOME}/bin:${PATH}  
export HADOOP_HOME=/opt/hadoop-2.7.1  
export PATH=${HADOOP_HOME}/bin:${PATH}  
export PATH=${HADOOP_HOME}/sbin:${PATH}

Then I ran "source /etc/profile".
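(A quick sanity check at this point, not part of the original steps: the exports were added to /etc/bash.bashrc but /etc/profile was sourced, so it is worth confirming the variables actually took effect in the current shell.)

# source the file that was actually edited, then verify
source /etc/bash.bashrc
echo $JAVA_HOME     # should print /usr/lib/jvm/java-8-openjdk-amd64
echo $HADOOP_HOME   # should print /opt/hadoop-2.7.1
hadoop version      # should report Hadoop 2.7.1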


Step2:
I configured the XML files:

slaves:

10.10.1.2
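(Not stated in the original post, but implied by this setup: start-all.sh launches the DataNode and NodeManager on the hosts in the slaves file over SSH, so passwordless SSH from pc720 to pc719 is assumed to be in place, e.g.:)

# on pc720: generate a key and copy it to the worker (assumed prerequisite)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id root@10.10.1.2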

core-site.xml

<property>
        <name>fs.defautFS</name>
        <value>hdfs://10.10.1.1:9000</value>
</property>

<property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
</property>
<property>
        <name>hadoop.tmp.dir</name>
        <value>file:/root/hadoop_store/tmp</value>
</property>

hdfs-site.xml

<property>
    <name>dfs:replication</name>
    <value>2</value>
</property>

<property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/root/hadoop_store/hdfs/namenode</value>
</property>

<property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/root/hadoop_store/hdfs/datanode</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>10.10.1.1:9001</value>
</property>

<property>
        <name>dfs.namenode.rpc-address</name>
        <value>10.10.1.1:8080</value>
</property>
<property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
</property>

mapred-site.xml

<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
</property>

<property>
        <name>mapreduce.jobtracker.address</name>
        <value>10.10.1.1:9002</value>
</property>

<property>
        <name>mapreduce.jobhistory.address</name>
        <value>10.10.1.1:10020</value>
</property>

<property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>10.10.1.1:19888</value>
</property>

<property>
        <name>mapred.acls.enabled</name>
        <value>true</value>
</property>

yarn-site.xml

<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>

<property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<property>
        <name>yarn.resourcemanager.hostname</name>
        <value>10.10.1.1</value>
</property>

<property>
        <name>yarn.resourcemanager.address</name>
        <value>${yarn.resourcemanager.hostname}:8032</value>
</property>

<property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>${yarn.resourcemanager.hostname}:8030</value>
</property>

<property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>${yarn.resourcemanager.hostname}:8088</value>
</property>

<property>
        <name>yarn.resourcemanager.webapp.https.address</name>
        <value>${yarn.resourcemanager.hostname}:8090</value>
</property>

<property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>${yarn.resourcemanager.hostname}:8031</value>
</property>

<property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>${yarn.resourcemanager.hostname}:8033</value>
</property>

<property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>8182</value>
</property>

<property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>2.1</value>
</property>
<property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2048</value>
</property>

<property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
</property>


Step3:
In /root/, I created several directories:

mkdir hadoop_store
mkdir hadoop_store/hdfs
mkdir hadoop_store/tmp
mkdir hadoop_store/hdfs/datanode
mkdir hadoop_store/hdfs/namenode
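(As a side note, the same tree can be created in one command; and since the slaves file puts the DataNode on pc719, the datanode directory presumably has to exist on that machine as well.)

# create the whole tree in one go; -p creates missing parents
mkdir -p /root/hadoop_store/tmp /root/hadoop_store/hdfs/namenode /root/hadoop_store/hdfs/datanode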


Then I changed into /opt/hadoop-2.7.1/bin and ran:

./hdfs namenode -format
cd ..
cd sbin/
./start-all.sh
./mr-jobhistory-daemon.sh start historyserver

After running jps, pc720 showed: [the jps output was posted as a screenshot and is not preserved here]

pc719 showed: [screenshot not preserved]
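(For reference, this is what a healthy start-up would typically look like for this configuration; the original screenshots are lost, so the daemon lists below are an assumption, not the poster's actual output. jps prefixes each name with a process id.)

# expected on pc720 (master): HDFS/YARN master daemons plus the history server
NameNode
SecondaryNameNode
ResourceManager
JobHistoryServer
Jps

# expected on pc719 (worker listed in slaves)
DataNode
NodeManager
Jps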


At this point I thought my Hadoop 2.7.1 had been successfully installed and configured. But then the problems came.


I changed into /opt/hadoop-2.7.1/share/hadoop/mapreduce/, which contains: [directory listing was posted as a screenshot and is not preserved here]

Then I ran hadoop jar hadoop-mapreduce-examples-2.7.1.jar pi 2 2

The log shows:

Number of Maps  = 2
Samples per Map = 2
Wrote input for Map #0
Wrote input for Map #1
Starting Job
18/11/02 08:14:48 INFO client.RMProxy: Connecting to ResourceManager at /10.10.1.1:8032
18/11/02 08:14:48 INFO input.FileInputFormat: Total input paths to process : 2
18/11/02 08:14:48 INFO mapreduce.JobSubmitter: number of splits:2
18/11/02 08:14:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1541144755485_0002
18/11/02 08:14:49 INFO impl.YarnClientImpl: Submitted application application_1541144755485_0002
18/11/02 08:14:49 INFO mapreduce.Job: The url to track the job: http://node-0-link-0:8088/proxy/application_1541144755485_0002/
18/11/02 08:14:49 INFO mapreduce.Job: Running job: job_1541144755485_0002
18/11/02 08:14:53 INFO mapreduce.Job: Job job_1541144755485_0002 running in uber mode : false
18/11/02 08:14:53 INFO mapreduce.Job:  map 0% reduce 0%
18/11/02 08:14:53 INFO mapreduce.Job: Job job_1541144755485_0002 failed with state FAILED due to: Application application_1541144755485_0002 failed 2 times due to AM Container for appattempt_1541144755485_0002_000002 exited with  exitCode: -1000
For more detailed output, check application tracking page:http://node-0-link-0:8088/cluster/app/application_1541144755485_0002Then, click on links to logs of each attempt.
Diagnostics: File file:/tmp/hadoop-yarn/staging/root/.staging/job_1541144755485_0002/job.splitmetainfo does not exist
java.io.FileNotFoundException: File file:/tmp/hadoop-yarn/staging/root/.staging/job_1541144755485_0002/job.splitmetainfo does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Failing this attempt. Failing the application.
18/11/02 08:14:53 INFO mapreduce.Job: Counters: 0
Job Finished in 5.08 seconds
java.io.FileNotFoundException: File file:/opt/hadoop-2.7.1/share/hadoop/mapreduce/QuasiMonteCarlo_1541168087724_1532373667/out/reduce-out does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1752)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1776)
    at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
    at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)


I have tried many solutions, but none of them seemed to help. This problem has confused me for about a week. Are there any mistakes in my configuration? What can I do? Please help me.
Thanks!

Recommended answer

Running hadoop dfs -ls /tmp shows:

Found 17 items
drwxrwxrwt   - root root       4096 2018-11-01 22:33 /tmp/.ICE-unix
drwxrwxrwt   - root root       4096 2018-11-01 22:33 /tmp/.Test-unix
drwxrwxrwt   - root root       4096 2018-11-01 22:33 /tmp/.X11-unix
drwxrwxrwt   - root root       4096 2018-11-01 22:33 /tmp/.XIM-unix
drwxrwxrwt   - root root       4096 2018-11-01 22:33 /tmp/.font-unix
drwxr-xr-x   - root root       4096 2018-11-04 23:45 /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08
drwxr-xr-x   - root root       4096 2018-11-04 23:45 /tmp/Jetty_10_10_1_1_8088_cluster____.kglkoh
drwxr-xr-x   - root root       4096 2018-11-04 23:46 /tmp/Jetty_node.0.link.0_19888_jobhistory____.ckp60y
drwxr-xr-x   - root root       4096 2018-11-04 23:45 /tmp/Jetty_node.0.link.0_9001_secondary____.h648yx
drwxr-xr-x   - root root       4096 2018-11-02 01:13 /tmp/hadoop-root
-rw-r--r--   1 root root          6 2018-11-04 23:45 /tmp/hadoop-root-namenode.pid
-rw-r--r--   1 root root          6 2018-11-04 23:45 /tmp/hadoop-root-secondarynamenode.pid
drwxr-xr-x   - root root       4096 2018-11-02 01:14 /tmp/hadoop-yarn
drwxr-xr-x   - root root       4096 2018-11-05 00:02 /tmp/hsperfdata_root
-rw-r--r--   1 root root          6 2018-11-04 23:46 /tmp/mapred-root-historyserver.pid
-rw-r--r--   1 root root        388 2018-11-01 22:33 /tmp/ntp.conf.new
-rw-r--r--   1 root root          6 
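This listing is the contents of the local /tmp directory (note .ICE-unix, the daemon .pid files, ntp.conf.new), so hadoop dfs is resolving paths against the local filesystem rather than HDFS. That matches the failure above, where the staging directory was file:/tmp/hadoop-yarn/staging/... instead of an hdfs:// path. The likely cause is the misspelled property name in core-site.xml: fs.defautFS should be fs.defaultFS, so the setting is silently ignored and the default filesystem stays file:///. A corrected entry would look like this (address and port taken from the configuration above):

<!-- core-site.xml: the property name must be fs.defaultFS (note the missing
     "l" in the original); an unrecognized property name is silently ignored -->
<property>
        <name>fs.defaultFS</name>
        <value>hdfs://10.10.1.1:9000</value>
</property>

Similarly, dfs:replication in hdfs-site.xml should be dfs.replication (property names use dots, not colons). After fixing the names, restart the daemons and verify that hdfs dfs -ls / now lists HDFS contents rather than the local /tmp. (hdfs dfs is the current form of the deprecated hadoop dfs command.)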

