hadoop hdfs points to file:/// not hdfs://

Problem Description

So I installed Hadoop via Cloudera Manager (cdh3u5) on CentOS 5. When I run the command

hadoop fs -ls /

I expected to see the contents of hdfs://localhost.localdomain:8020/

However, it returned file:/// instead.

Needless to say, I can still access HDFS explicitly through

hadoop fs -ls hdfs://localhost.localdomain:8020/

But when it came to installing other applications such as Accumulo, Accumulo would automatically detect the Hadoop filesystem as file:///.

The question is: has anyone run into this issue, and how did you resolve it?

I had a look at "HDFS thrift server returns content of local FS, not HDFS", which was a similar issue, but it did not solve this problem. Also, I do not get this issue with Cloudera Manager cdh4.

Recommended Answer

By default, Hadoop uses local mode. You probably need to set fs.default.name to hdfs://localhost.localdomain:8020/ in $HADOOP_HOME/conf/core-site.xml.

To do this, add the following to core-site.xml:

 <property>
  <name>fs.default.name</name>
  <value>hdfs://localhost.localdomain:8020/</value>
</property>

The reason Accumulo is confused is that it uses the same default configuration to figure out where HDFS is... and that defaults to file:///.
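
You can reproduce that fallback with the stock Hadoop client by hiding its configuration: with no core-site.xml on the classpath, fs.default.name keeps its built-in default of file:/// and the listing shows the local root. A minimal sketch, assuming the standard HADOOP_CONF_DIR mechanism and a hypothetical empty directory /tmp/empty-conf:

mkdir /tmp/empty-conf
HADOOP_CONF_DIR=/tmp/empty-conf hadoop fs -ls /

The second command lists the local filesystem root, which is exactly what Accumulo sees when the property is missing.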
