Hadoop error on Windows: java.lang.UnsatisfiedLinkError


Problem Description

I am new to Hadoop and am trying to execute my first MapReduce job, wordcount. However, whenever I try to run it, I get the following error:

java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(II[BI[BIILjava/lang/String;JZ)V
        at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native Method)
        at org.apache.hadoop.util.NativeCrc32.calculateChunkedSumsByteArray(NativeCrc32.java:86)
        at org.apache.hadoop.util.DataChecksum.calculateChunkedSums(DataChecksum.java:430)
        at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:202)
        at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:163)
        at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:144)
        at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2217)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
        at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
        at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:237)
        at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:254)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:112)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:366)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:338)
        at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905)
        at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1873)
        at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1838)
        at org.apache.hadoop.mapreduce.JobResourceUploader.copyJar(JobResourceUploader.java:246)
        at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:166)
        at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:98)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:191)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1297)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1294)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1294)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1315)
        at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
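
(For context, a trace like this typically comes from running the bundled examples jar; the invocation below is illustrative, not necessarily the exact command used, and the jar version and HDFS paths are assumptions:)

hadoop jar %HADOOP_HOME%\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.6.1.jar wordcount /input /output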

Also, when I do

hadoop checknative -a

it shows me the following details:

Native library checking:
hadoop:  true C:\hadoop-2.6.1\bin\hadoop.dll
zlib:    false
snappy:  false
lz4:     true revision:43
bzip2:   false
openssl: false org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl()Z
winutils: true C:\hadoop-2.6.1\bin\winutils.exe
15/10/19 15:18:24 INFO util.ExitUtil: Exiting with status 1

Is there any way I can resolve this issue?

Solution

I had the same problem. After removing hadoop.dll from the system32 directory and setting the HADOOP_HOME environment variable, it worked.
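
Presumably the copy in system32 was stale and shadowing the one in %HADOOP_HOME%\bin, since Windows searches system32 before PATH directories when loading DLLs. A minimal sketch of the fix in a command prompt, assuming the C:\hadoop-2.6.1 layout from the checknative output above (adjust to your own install):

rem Remove the stale copy that shadows %HADOOP_HOME%\bin\hadoop.dll
del C:\Windows\System32\hadoop.dll
rem Point HADOOP_HOME at the install matching your Hadoop version
setx HADOOP_HOME "C:\hadoop-2.6.1"
rem Keep the matching bin directory (hadoop.dll, winutils.exe) on PATH
setx PATH "%PATH%;C:\hadoop-2.6.1\bin"

Note that setx only affects new sessions, so open a fresh command prompt before rerunning the job.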

Alternatively, you can add a JVM argument like -Djava.library.path=<hadoop home>/lib/native.
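
A sketch of that alternative on Windows, assuming you launch jobs through the hadoop script (which forwards HADOOP_OPTS to the JVM). Note that Windows builds usually keep hadoop.dll in %HADOOP_HOME%\bin rather than lib\native, so point the property at whichever directory actually contains it; the jar name and paths below are illustrative:

rem Append the native library path to the options passed to java
set HADOOP_OPTS=%HADOOP_OPTS% -Djava.library.path=C:\hadoop-2.6.1\bin
hadoop jar %HADOOP_HOME%\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.6.1.jar wordcount /input /output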
