How do I dynamically upgrade a worker's CPU/RAM/disk in Dataproc?


Problem description


I created a cluster with the default settings (4 vCPUs, 15 GB RAM) in Google Dataproc. After running several Pig jobs, the cluster had 2-3 unhealthy nodes. So I upgraded the worker VMs' vCPUs (4 to 8 vCPUs), RAM (15 GB to 30 GB), and disk. But the Hadoop web interface showed that the worker nodes' hardware hadn't changed; it still reported the original vCPU/RAM/disk amounts.


How can I dynamically upgrade a worker's CPU/RAM/disk in Dataproc?

Thanks.

Recommended answer


Dataproc has no support for upgrading workers on a running cluster. To upgrade, we suggest recreating the cluster. You can, however, add extra workers to a running cluster via the `gcloud dataproc clusters update` command.
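As a sketch of the two paths described above (the cluster name, region, worker count, and machine type below are placeholders, not values from the question):

```shell
# Scaling OUT a running cluster is supported: add primary workers in place.
gcloud dataproc clusters update my-cluster \
    --region=us-central1 \
    --num-workers=4

# Scaling UP (bigger vCPU/RAM/disk per worker) is not supported in place;
# recreate the cluster with a larger worker machine type instead.
gcloud dataproc clusters delete my-cluster --region=us-central1
gcloud dataproc clusters create my-cluster \
    --region=us-central1 \
    --worker-machine-type=n1-standard-8 \
    --num-workers=2
```

The distinction matters because Dataproc derives the Hadoop/Spark memory and vCPU settings from the machine type at cluster-creation time, which is why resizing the VMs underneath a running cluster is not reflected in the Hadoop web interface.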


It is possible to upgrade the worker machine type by stopping each worker instance, changing its machine type, and restarting it. However, a number of Hadoop/Spark properties must also be changed to accommodate the different container sizes.
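For illustration only: the properties that typically have to be raised after such a manual upgrade are the YARN NodeManager resource limits in `yarn-site.xml` on each worker. The values below are an assumption for an 8-vCPU / 30 GB machine (leaving headroom for the OS and daemons), not settings taken from the question:

```xml
<!-- /etc/hadoop/conf/yarn-site.xml: illustrative values for 8 vCPUs / 30 GB RAM -->
<property>
  <!-- Total memory YARN may allocate to containers on this node -->
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>24576</value>
</property>
<property>
  <!-- Total vCPUs YARN may allocate to containers on this node -->
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>
<property>
  <!-- Largest single container the scheduler will grant -->
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>24576</value>
</property>
```

Until these are updated and the NodeManagers restarted, YARN (and hence the Hadoop web interface) keeps advertising the capacity computed for the original, smaller machine type.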

