spark-ec2 --ebs-vol-size not working


Problem description


When launching a spark cluster with spark-ec2, the --ebs-vol-size flag appears to have no effect. Setting it with 50 or 500 and then ssh'ing into the master node, a df -h shows about 10G of space on /.


How can I use spark-ec2 to create a larger EC2 virtual machine?
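For reference, a typical spark-ec2 launch invocation with the flag looks like the sketch below. The key pair name, identity file, slave count, and cluster name are placeholders, not values from the question; the command is only printed, since actually running it requires AWS credentials:

```shell
# Hypothetical spark-ec2 launch command; key pair, identity file,
# slave count, and cluster name are placeholder values.
LAUNCH_CMD="./spark-ec2 -k my-keypair -i my-keypair.pem -s 2 \
  --ebs-vol-size=50 launch my-spark-cluster"

# Print the command instead of executing it (needs AWS credentials).
echo "$LAUNCH_CMD"
```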

Recommended answer


A somewhat elaborate list of steps that worked for me is provided below:



  1. Launch a spark-ec2 cluster with --ebs-vol-size
  2. Shut down Hadoop on ephemeral-hdfs:


./ephemeral-hdfs/bin/stop-all.sh


  3. Start Hadoop on persistent-hdfs:


./persistent-hdfs/bin/start-all.sh


  4. Verify that the current size does not yet reflect the requested EBS volume size:


./persistent-hdfs/bin/hadoop dfsadmin -report


  5. Run the following commands (I recommend putting them into a script and running that):


./persistent-hdfs/bin/stop-all.sh


sed -i 's#vol/persistent-hdfs#vol0/persistent-hdfs#g' ~/persistent-hdfs/conf/core-site.xml
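The sed line above rewrites the persistent-hdfs data directory from the small /vol mount to /vol0, where the requested EBS volume is mounted. A minimal self-contained demonstration of the same substitution, run against a made-up stand-in fragment rather than a real core-site.xml:

```shell
# Stand-in config fragment using the same path pattern as the answer.
# The property shown here is illustrative, not the full core-site.xml.
cat > /tmp/core-site-demo.xml <<'EOF'
<property>
  <name>hadoop.tmp.dir</name>
  <value>/vol/persistent-hdfs</value>
</property>
EOF

# Same in-place substitution as in the answer: point the path at /vol0.
sed -i 's#vol/persistent-hdfs#vol0/persistent-hdfs#g' /tmp/core-site-demo.xml

cat /tmp/core-site-demo.xml
```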


./spark-ec2/copy-dir.sh ~/persistent-hdfs/conf/core-site.xml


./spark-ec2/copy-dir.sh ~/persistent-hdfs/conf/hdfs-site.xml


./persistent-hdfs/bin/hadoop namenode -format


./persistent-hdfs/bin/start-all.sh
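As suggested, these commands can be collected into one script. The sketch below only writes the script and syntax-checks it, since the commands themselves need a live spark-ec2 cluster; the paths assume the default spark-ec2 home-directory layout:

```shell
# Write the consolidated switch-over script; this creates the file
# but does not run it (the commands need a running spark-ec2 cluster).
cat > /tmp/switch-to-persistent-hdfs.sh <<'EOF'
#!/bin/bash
set -e
./persistent-hdfs/bin/stop-all.sh
sed -i 's#vol/persistent-hdfs#vol0/persistent-hdfs#g' ~/persistent-hdfs/conf/core-site.xml
./spark-ec2/copy-dir.sh ~/persistent-hdfs/conf/core-site.xml
./spark-ec2/copy-dir.sh ~/persistent-hdfs/conf/hdfs-site.xml
./persistent-hdfs/bin/hadoop namenode -format
./persistent-hdfs/bin/start-all.sh
EOF
chmod +x /tmp/switch-to-persistent-hdfs.sh

# Check the script for shell syntax errors without executing it.
bash -n /tmp/switch-to-persistent-hdfs.sh
```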


  6. Repeat step 4 to verify the new size.

Credit: brendancol

