How can I increase HDFS capacity?


Question



How can I increase the configured capacity of my hadoop DFS from the default 50GB to 100GB?

My present setup is hadoop 1.2.1 running on a CentOS 6 machine with 120GB of its 450GB disk used. I have set hadoop up in pseudo-distributed mode with the /conf layout suggested by "Hadoop: The Definitive Guide", 3rd edition. hdfs-site.xml has only one configured property:

    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>

The following command gives no error feedback... it simply returns to the prompt.

    hadoop dfsadmin -setSpaceQuota 100g /tmp/hadoop-myUserID
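
For reference, the quota that this sets can be inspected with the standard Hadoop 1.x quota-reporting command:

    hadoop fs -count -q /tmp/hadoop-myUserID

Note that a space quota only caps how much may be written under that path; it does not raise the "configured capacity" that hadoop dfsadmin -report shows, which is determined by the size of the partition holding the DataNode's blocks.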

If I am in a regen loop (having executed

    rm -rf /tmp/hadoop-myUserId

in an attempt to "start from scratch"), this apparent success of setSpaceQuota occurs if and only if I have executed

    start-all.sh
    hadoop namenode -format

The failure of my dfs capacity configuration is shown by

    hadoop dfsadmin -report

which shows the same 50GB of configured capacity.

I would be willing to switch over to hadoop 2.2 (now the stable release) if that is currently the best way to get 100GB of configured HDFS capacity. It seems like there should be a configuration property for hdfs-site.xml that would allow me to use more of my free partition.

Solution

Set the location of HDFS to a partition with more free space. For hadoop-1.2.1 this can be done by setting hadoop.tmp.dir in hadoop-1.2.1/conf/core-site.xml:

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/myUserID/hdfs</value>
        <description>base location for other hdfs directories.</description>
      </property>
    </configuration>
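
If you would rather not relocate everything that lives under hadoop.tmp.dir, a more targeted alternative is to point just the NameNode and DataNode directories at the larger partition in hdfs-site.xml. This is a sketch assuming hadoop 1.x property names; the paths are illustrative:

    <configuration>
      <property>
        <name>dfs.name.dir</name>
        <value>/home/myUserID/hdfs/name</value>
      </property>
      <property>
        <name>dfs.data.dir</name>
        <value>/home/myUserID/hdfs/data</value>
      </property>
    </configuration>

By default both of these derive from hadoop.tmp.dir (${hadoop.tmp.dir}/dfs/name and ${hadoop.tmp.dir}/dfs/data), which is why moving hadoop.tmp.dir works.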

Running

    df

showed that my /home partition takes up most of my hard disk, minus roughly 50GB for my / (root) partition. The default location for HDFS is /tmp/hadoop-myUserId, which is on the / partition. This is where my initial 50GB HDFS size came from.
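
As a quick sanity check (mount points assumed to be / and /home, per the layout above), you can compare the two partitions directly:

    df -h / /home

The Size and Avail columns make it obvious which partition HDFS should live on.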

Creating a directory for HDFS and confirming which partition it lives on was accomplished by

    mkdir ~/hdfs
    # print the filesystem/device that ~/hdfs lives on (POSIX output, last line, first field)
    df -P ~/hdfs | tail -1 | cut -d' ' -f 1

Successful implementation was then accomplished by

    stop-all.sh
    start-dfs.sh
    # caution: -format wipes all existing HDFS metadata; only safe when starting from scratch
    hadoop namenode -format
    start-all.sh
    hadoop dfsadmin -report

which reports the size of the HDFS as the size of my /home partition.
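
You can also confirm that the new location is actually in use by checking that the DFS subdirectories were created under the new base directory (assuming hadoop 1.x's default layout under hadoop.tmp.dir):

    ls ~/hdfs/dfs

which should show the name and data subdirectories (and possibly namesecondary).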

Thank you jtravaglini for the comment/clue.
