Hadoop C++ HDFS test running Exception


Problem description

I'm working with Hadoop 2.2.0 and trying to run this hdfs_test.cpp application:

#include "hdfs.h"

#include <fcntl.h>   /* O_WRONLY, O_CREAT */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv) {
    /* Connect to the HDFS instance configured as the default filesystem. */
    hdfsFS fs = hdfsConnect("default", 0);

    const char* writePath = "/tmp/testfile.txt";
    hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);
    if (!writeFile) {
        fprintf(stderr, "Failed to open %s for writing!\n", writePath);
        exit(-1);
    }

    /* Write the string (including its trailing NUL) and flush it out. */
    const char* buffer = "Hello, World!";
    tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer, strlen(buffer)+1);
    if (hdfsFlush(fs, writeFile)) {
        fprintf(stderr, "Failed to 'flush' %s\n", writePath);
        exit(-1);
    }

    hdfsCloseFile(fs, writeFile);
    hdfsDisconnect(fs);
    return 0;
}
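
For reference, compiling and linking a libhdfs program on Hadoop 2.x typically looks something like the following; the exact include, native-library, and JVM paths are assumptions based on a default $HADOOP_HOME and $JAVA_HOME layout and may differ on your system:

g++ hdfs_test.cpp -o hdfs_test \
    -I$HADOOP_HOME/include \
    -L$HADOOP_HOME/lib/native -lhdfs \
    -L$JAVA_HOME/jre/lib/amd64/server -ljvm

# Both shared libraries must also be resolvable at runtime:
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_HOME/jre/lib/amd64/server:$LD_LIBRARY_PATH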

I compiled it, but when I run it with ./hdfs_test I get this:

loadFileSystems error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsBuilderConnect(forceNewInstance=0, nn=default, port=0, kerbTicketCachePath=(NULL), userName=(NULL)) error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsOpenFile(/tmp/testfile.txt): constructNewObjectOfPath error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
Failed to open /tmp/testfile.txt for writing!

Maybe it's a problem with the classpath. My $HADOOP_HOME is /usr/local/hadoop and this is my current CLASSPATH variable:

echo $CLASSPATH
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar

Any help is appreciated. Thanks.

Recommended answer

I've faced problems with using wildcards in the classpath when using JNI-based programs. Try the direct-jar-in-classpath approach, such as the one generated in this sample code of mine at https://github.com/QwertyManiac/cdh4-libhdfs-example/blob/master/exec.sh#L3, and I believe it should work instead. The whole contained example at https://github.com/QwertyManiac/cdh4-libhdfs-example does work presently.
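
To illustrate that approach with a minimal sketch: the JVM started through JNI does not expand wildcard (*) classpath entries the way the java launcher does, so each jar has to be listed explicitly. Assuming the jars live under $HADOOP_HOME/share/hadoop, something like this could build and export such a classpath before running the binary:

CLASSPATH=$HADOOP_HOME/etc/hadoop
for jar in $(find $HADOOP_HOME/share/hadoop -name '*.jar'); do
    CLASSPATH=$CLASSPATH:$jar
done
export CLASSPATH
./hdfs_test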

Also see http://stackoverflow.com/a/9322747/1660002

