How to configure a multi-node Apache Storm cluster


Question

I'm following http://jayatiatblogs.blogspot.com/2011/11/storm-installation.html & http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#sc_zkMulitServerSetup to set up an Apache Storm cluster on Ubuntu 14.04 LTS on AWS EC2.

My master node is 10.0.0.185. My slave nodes are 10.0.0.79, 10.0.0.124 & 10.0.0.84, with myid values of 1, 2 and 3 in their zookeeper-data directories respectively. I set up an Apache ZooKeeper ensemble consisting of all 3 slave nodes.

Below is the zoo.cfg for my slave nodes:

tickTime=2000
initLimit=10
syncLimit=5

dataDir=/home/ubuntu/zookeeper-data
clientPort=2181

server.1=10.0.0.79:2888:3888
server.2=10.0.0.124:2888:3888
server.3=10.0.0.84:2888:3888

autopurge.snapRetainCount=3
autopurge.purgeInterval=1
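For the ensemble above to form, each member also needs a myid file in its dataDir whose number matches its server.N entry in zoo.cfg. A minimal sketch for server.1 (the data path mirrors the config above; the zkServer.sh location is an assumption about where ZooKeeper was unpacked):

```shell
# On 10.0.0.79 (server.1) -- write 2 on 10.0.0.124 and 3 on 10.0.0.84
ZK_DATA=/home/ubuntu/zookeeper-data   # must match dataDir in zoo.cfg
mkdir -p "$ZK_DATA"
echo 1 > "$ZK_DATA/myid"

# Then start this ensemble member (install path is an assumption):
# /home/ubuntu/zookeeper/bin/zkServer.sh start
```

If the myid numbers do not match the server.N lines, the ensemble will not assemble correctly.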

Below is the storm.yaml for my slave nodes:

########### These MUST be filled in for a storm configuration
 storm.zookeeper.servers:
     - "10.0.0.79"
     - "10.0.0.124"
     - "10.0.0.84"
#     - "localhost"
 storm.zookeeper.port: 2181

# nimbus.host: "localhost"
 nimbus.host: "10.0.0.185"

 storm.local.dir: "/home/ubuntu/storm/data"
 java.library.path: "/usr/lib/jvm/java-7-oracle"

 supervisor.slots.ports:
     - 6700
     - 6701
     - 6702
     - 6703
     - 6704
#
# worker.childopts: "-Xmx768m"
# nimbus.childopts: "-Xmx512m"
# supervisor.childopts: "-Xmx256m"
#
# ##### These may optionally be filled in:
#
## List of custom serializations
# topology.kryo.register:
#     - org.mycompany.MyType
#     - org.mycompany.MyType2: org.mycompany.MyType2Serializer
#
## List of custom kryo decorators
# topology.kryo.decorators:
#     - org.mycompany.MyDecorator
#
## Locations of the drpc servers
# drpc.servers:
#     - "server1"
#     - "server2"

## Metrics Consumers
# topology.metrics.consumer.register:
#   - class: "backtype.storm.metric.LoggingMetricsConsumer"
#     parallelism.hint: 1
#   - class: "org.mycompany.MyMetricsConsumer"
#     parallelism.hint: 1
#     argument:
#       - endpoint: "metrics-collector.mycompany.org"

Below is the storm.yaml for my master node:

########### These MUST be filled in for a storm configuration
 storm.zookeeper.servers:
     - "10.0.0.79"
     - "10.0.0.124"
     - "10.0.0.84"
#     - "localhost"
#
 storm.zookeeper.port: 2181

 nimbus.host: "10.0.0.185"
# nimbus.thrift.port: 6627
# nimbus.task.launch.secs: 240

# supervisor.worker.start.timeout.secs: 240
# supervisor.worker.timeout.secs: 240

 ui.port: 8772

#  nimbus.childopts: "-Xmx1024m -Djava.net.preferIPv4Stack=true"

#  ui.childopts: "-Xmx768m -Djava.net.preferIPv4Stack=true"
#  supervisor.childopts: "-Djava.net.preferIPv4Stack=true"
#  worker.childopts: "-Xmx768m -Djava.net.preferIPv4Stack=true"

 storm.local.dir: "/home/ubuntu/storm/data"

 java.library.path: "/usr/lib/jvm/java-7-oracle"

# supervisor.slots.ports:
#     - 6700
#     - 6701
#     - 6702
#     - 6703
#     - 6704

# worker.childopts: "-Xmx768m"
# nimbus.childopts: "-Xmx512m"
# supervisor.childopts: "-Xmx256m"

# ##### These may optionally be filled in:
#
## List of custom serializations
# topology.kryo.register:
#     - org.mycompany.MyType
#     - org.mycompany.MyType2: org.mycompany.MyType2Serializer
#
## List of custom kryo decorators
# topology.kryo.decorators:
#     - org.mycompany.MyDecorator
#
## Locations of the drpc servers
# drpc.servers:
#     - "server1"
#     - "server2"

## Metrics Consumers
# topology.metrics.consumer.register:
#   - class: "backtype.storm.metric.LoggingMetricsConsumer"
#     parallelism.hint: 1
#   - class: "org.mycompany.MyMetricsConsumer"
#     parallelism.hint: 1
#     argument:
#       - endpoint: "metrics-collector.mycompany.org"

I start ZooKeeper on all my slave nodes, then start storm nimbus on my master node, then start storm supervisor on all my slave nodes. However, when I look at the Storm UI, there is only 1 supervisor with a total of 5 slots in the cluster summary, and only 1 entry in the supervisor summary. Why is that?
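The start sequence described above, as shell commands (the `storm` and `zkServer.sh` locations are assumptions; both scripts live in the bin/ directory of their respective distributions):

```shell
# On each slave node: start the ZooKeeper ensemble member first
# /home/ubuntu/zookeeper/bin/zkServer.sh start

# On the master node (10.0.0.185):
# /home/ubuntu/storm/bin/storm nimbus &
# /home/ubuntu/storm/bin/storm ui &        # serves on ui.port (8772 above)

# On each slave node:
# /home/ubuntu/storm/bin/storm supervisor &
```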

How many slave nodes are actually working if I submit a topology in this case?

Why are there not 3 supervisors with a total of 15 slots?

What should I do in order to have 3 supervisors?

When I check supervisor.log on the slave nodes, the cause appears to be the following:

2015-05-29T09:21:24.185+0000 b.s.d.supervisor [INFO] 5019754f-cae1-4000-beb4-fa016bd1a43d still hasn't started
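One way to see how many supervisors actually registered is to query ZooKeeper directly; Storm keeps its cluster state under /storm by default (the znode path assumes the default storm.zookeeper.root):

```shell
# From any node that can reach the ensemble (zkCli.sh path is an assumption):
# /home/ubuntu/zookeeper/bin/zkCli.sh -server 10.0.0.79:2181

# Inside the zkCli shell:
# ls /storm/supervisors
# Expect one znode per registered supervisor id; if supervisors end up
# sharing an id, only one entry appears here, matching the UI summary.
```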

Answer

What you are doing is correct, and it works.

The only thing you should change is your storm.local.dir. It is currently the same on the master and the slave nodes; change the storm.local.dir path on the nimbus and supervisor nodes so they do not use the same local path. When they use the same local path, the nimbus and the supervisors end up sharing the same id. They come up, but you don't see 8 slots; only 4 slots show up as workers.

Change storm.local.dir (/home/ubuntu/storm/data) and don't use the same path on the supervisor and nimbus nodes.
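A sketch of applying this fix on each supervisor node, assuming the diagnosis above: stop the supervisor, clear or relocate its local dir so a fresh supervisor id is generated, then restart (the alternate directory name is an example, not a required value):

```shell
# On each slave node, after stopping the supervisor process:
# rm -rf /home/ubuntu/storm/data/*    # drops the stale supervisor id

# Or give each role its own directory in storm.yaml, e.g.:
#   storm.local.dir: "/home/ubuntu/storm/supervisor-data"   # example path

# Then restart the supervisor (storm script path is an assumption):
# /home/ubuntu/storm/bin/storm supervisor &
```

After restarting, each supervisor should register with its own id and the cluster summary should show 3 supervisors with 15 slots.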
