Where is the configuration file for HDFS in Hadoop 2.2.0?


Problem description


I'm studying Hadoop and currently I'm trying to set up a Hadoop 2.2.0 single node. I downloaded the latest distribution, uncompressed it, and now I'm trying to set up the Hadoop Distributed File System (HDFS).

Now, I'm trying to follow the Hadoop instructions available here, but I'm quite lost.

In the left sidebar, there are references to the following files:

  • core-default.xml
  • hdfs-default.xml
  • mapred-default.xml
  • yarn-default.xml

But what about those files?

I found /etc/hadoop/hdfs-site.xml, but it is empty!

I found /share/doc/hadoop/hadoop-project-dist/hadoop-common/core-default.xml but it is just a piece of doc!

So, what files do I have to modify to configure HDFS? Where are the default values read from?

Thanks in advance for your help.

Solution

These files are all found in the hadoop/conf directory.

To set up HDFS, you have to configure core-site.xml and hdfs-site.xml.

HDFS works in two modes: distributed (multi-node cluster) and pseudo-distributed (cluster of one single machine).

For the pseudo-distributed mode you have to configure:

In core-site.xml:

<!-- namenode -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:8020</value>
</property>

In hdfs-site.xml:

<!-- storage directories for HDFS - the hadoop.tmp.dir property, whose default is /tmp/hadoop-${user.name} -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/your-dir/</value>
</property>
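Putting the two fragments together, here is a rough sketch that writes both config files; the scratch `./conf` directory and the `HADOOP_CONF_DIR` fallback are illustrative assumptions, and `/your-dir/` is the answer's placeholder that you should replace with a real path:

```shell
# Sketch: write the two pseudo-distributed config files.
# Assumption: HADOOP_CONF_DIR points at your Hadoop conf directory
# (falls back to a local ./conf scratch dir for illustration).
HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-./conf}"
mkdir -p "$HADOOP_CONF_DIR"

# core-site.xml: where the namenode listens.
cat > "$HADOOP_CONF_DIR/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <!-- namenode -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>
EOF

# hdfs-site.xml: base directory for HDFS storage.
cat > "$HADOOP_CONF_DIR/hdfs-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <!-- base for HDFS storage directories -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/your-dir/</value>
  </property>
</configuration>
EOF
```

Note that the fragments from the answer are wrapped in a top-level `<configuration>` element, which Hadoop requires in both files.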

Each property has its hardcoded default value.

Please remember to set up ssh password-less login for the hadoop user before starting HDFS.
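As a rough sketch (assuming OpenSSH with an sshd running locally), password-less login to localhost can be set up like this:

```shell
# Sketch: passphrase-less ssh to localhost for the user that runs HDFS.
# Assumption: an RSA key at the default path; generation is skipped if
# a key already exists, so an existing key is not overwritten.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Afterwards, `ssh localhost` should log in without a password prompt.
```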

P.S.

If you downloaded Hadoop from Apache, you can consider switching to a Hadoop distribution:

Cloudera's CDH, Hortonworks HDP, or MapR.

If you install Cloudera CDH or Hortonworks HDP you will find the files in /etc/hadoop/conf/.
