Error while running Map reduce on Hadoop 2.6.0 on Windows


Problem Description



I've setup a single node Hadoop 2.6.0 cluster on my Windows 8.1 using this tutorial - https://wiki.apache.org/hadoop/Hadoop2OnWindows.

All daemons are up and running. I'm able to access hdfs using hadoop fs -ls / but I've not loaded anything, so there is nothing to show up as of now.

But when I run a simple map reduce program, I get the error below:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(II[BI[BIILjava/lang/String;JZ)V
at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native Method)
at org.apache.hadoop.util.NativeCrc32.calculateChunkedSumsByteArray(NativeCrc32.java:86)
at org.apache.hadoop.util.DataChecksum.calculateChunkedSums(DataChecksum.java:430)
at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:202)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:163)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:144)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.close(ChecksumFileSystem.java:400)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at org.apache.hadoop.mapreduce.split.JobSplitWriter.createSplitFiles(JobSplitWriter.java:80)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:603)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
at wordcount.Wordcount.main(Wordcount.java:62)

I also see an error from the hadoop fs -put command.

Any advice would be of great help.

Solution

org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray is part of hadoop.dll:

JNIEXPORT void JNICALL Java_org_apache_hadoop_util_NativeCrc32_nativeComputeChunkedSumsByteArray
  (JNIEnv *env, jclass clazz,
    jint bytes_per_checksum, jint j_crc_type,
    jarray j_sums, jint sums_offset,
    jarray j_data, jint data_offset, jint data_len,
    jstring j_filename, jlong base_pos, jboolean verify)
{
  ...

The UnsatisfiedLinkError indicates that you did not deploy hadoop.dll to %HADOOP_HOME%\bin, or that the process loaded a wrong dll from somewhere else. Make sure the correct dll is placed in %HADOOP_HOME%\bin, and verify that it is the one actually loaded (use Process Explorer).
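To see which copy of hadoop.dll the JVM would pick up, you can walk java.library.path yourself. This is a hypothetical helper (the class name `DllPathCheck` is not part of Hadoop): `System.loadLibrary("hadoop")` searches the directories on java.library.path in order, which on Windows is seeded from PATH, so a stale copy earlier on the PATH shadows the one in %HADOOP_HOME%\bin.

```java
import java.io.File;

// Hypothetical diagnostic class; not part of Hadoop.
public class DllPathCheck {
    public static void main(String[] args) {
        // System.loadLibrary("hadoop") uses the FIRST hadoop.dll found
        // while scanning java.library.path left to right.
        String[] dirs = System.getProperty("java.library.path")
                              .split(File.pathSeparator);
        for (String dir : dirs) {
            File candidate = new File(dir, "hadoop.dll");
            System.out.println((candidate.exists() ? "[FOUND] " : "[  -  ] ")
                               + candidate.getPath());
        }
    }
}
```

If a `[FOUND]` entry appears in a directory that precedes %HADOOP_HOME%\bin, that is the dll your process is loading.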

You should also see the NativeCodeLoader output in the log:

  private static boolean nativeCodeLoaded = false;

  static {
    // Try to load native hadoop library and set fallback flag appropriately
    if(LOG.isDebugEnabled()) {
      LOG.debug("Trying to load the custom-built native-hadoop library...");
    }
    try {
      System.loadLibrary("hadoop");
      LOG.debug("Loaded the native-hadoop library");
      nativeCodeLoaded = true;
    } catch (Throwable t) {
      // Ignore failure to load
      if(LOG.isDebugEnabled()) {
        LOG.debug("Failed to load native-hadoop with error: " + t);
        LOG.debug("java.library.path=" +
            System.getProperty("java.library.path"));
      }
    }

    if (!nativeCodeLoaded) {
      LOG.warn("Unable to load native-hadoop library for your platform... " +
               "using builtin-java classes where applicable");
    }
  }
Enable DEBUG level for this component and you should see "Loaded the native-hadoop library" (since your code behaves as if hadoop.dll was loaded). The most likely problem is that the wrong dll is loaded because it is found first on the PATH.
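One way to get that DEBUG output (and silence the "No appenders could be found" warnings from the question) is a minimal log4j.properties on the client classpath. This is only a sketch; the console appender and pattern below are one reasonable choice, not the only valid configuration:

```properties
# Minimal log4j.properties (must be on the job client's classpath).
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} - %m%n

# DEBUG only for the native-library loader, so the log stays readable.
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=DEBUG
```

With this in place, rerunning the job should print either "Loaded the native-hadoop library" or the "Failed to load native-hadoop" line together with java.library.path, which tells you exactly where the dll was (or was not) picked up.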
