How to keep Google Dataproc master running?


Problem description

I created a cluster on Dataproc and it works great. However, after the cluster has been idle for a while (~90 minutes), the master node automatically stops. This happens to every cluster I create. I see there is a similar question here: Keep running Dataproc Master node

It looks like an initialization-action problem. However, that post does not give me enough information to fix the issue. Below is the command I used to create the cluster:

 gcloud dataproc clusters create $CLUSTER_NAME \
    --project $PROJECT \
    --bucket $BUCKET \
    --region $REGION \
    --zone $ZONE \
    --master-machine-type $MASTER_MACHINE_TYPE \
    --master-boot-disk-size $MASTER_DISK_SIZE \
    --worker-boot-disk-size $WORKER_DISK_SIZE \
    --num-workers=$NUM_WORKERS \
    --initialization-actions gs://dataproc-initialization-actions/connectors/connectors.sh,gs://dataproc-initialization-actions/datalab/datalab.sh \
    --metadata gcs-connector-version=$GCS_CONNECTOR_VERSION \
    --metadata bigquery-connector-version=$BQ_CONNECTOR_VERSION \
    --scopes cloud-platform \
    --metadata JUPYTER_CONDA_PACKAGES=numpy:scipy:pandas:scikit-learn \
    --optional-components=ANACONDA,JUPYTER \
    --image-version=1.3
 

I need the BigQuery connector, the GCS connector, Jupyter, and Datalab for my cluster.

How can I keep my master node running? Thank you.
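
As an aside, a quick way to confirm the symptom is to check the cluster and the master VM directly (a sketch, assuming the default master instance name ${CLUSTER_NAME}-m and the same variables as above):

    # What Dataproc reports for the cluster.
    gcloud dataproc clusters describe $CLUSTER_NAME --region $REGION --format="value(status.state)"

    # The master VM itself; a stopped instance shows its status as TERMINATED.
    gcloud compute instances list --filter="name=${CLUSTER_NAME}-m" --format="table(name,status)"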

Solution

As summarized in the comment thread, this is indeed caused by Datalab's auto-shutdown feature. There are a couple of ways to change this behavior:

1. Upon first creating the Datalab-enabled Dataproc cluster, log in to Datalab and click on the "Idle timeout in about ..." text to disable it: https://cloud.google.com/datalab/docs/concepts/auto-shutdown#disabling_the_auto_shutdown_timer - the text will change to "Idle timeout is disabled".

2. Edit the initialization action to set the environment variable, as suggested by yelsayed:

    function run_datalab(){
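      # DATALAB_DISABLE_IDLE_TIMEOUT_PROCESS=true disables Datalab's idle-timeout
      # process, so it no longer shuts the VM down when the notebook sits idle.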
      if docker run -d --restart always --net=host -e "DATALAB_DISABLE_IDLE_TIMEOUT_PROCESS=true" \
          -v "${DATALAB_DIR}:/content/datalab" ${VOLUME_FLAGS} datalab-pyspark; then
        echo 'Cloud Datalab Jupyter server successfully deployed.'
      else
        err 'Failed to run Cloud Datalab'
      fi
    }
    

Then use your custom initialization action instead of the stock gs://dataproc-initialization-actions one (a rough sketch of that workflow follows below). It could also be worth filing a tracking issue in the GitHub repo for the Dataproc initialization actions, suggesting that the timeout be disabled by default or that an easy metadata-based option be provided. It's probably true that the auto-shutdown behavior isn't what's expected in default usage on a Dataproc cluster, since the master also performs roles other than running the Datalab service.
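
A minimal sketch of that workflow, assuming you stage the patched script in a bucket you control (the sed pattern and the gs://$BUCKET/init-actions/ path are illustrative, not the exact contents or layout of the stock script):

    # Copy the stock Datalab init action and inject the env var into its docker run line
    # (or edit run_datalab() by hand as shown above).
    gsutil cp gs://dataproc-initialization-actions/datalab/datalab.sh ./datalab.sh
    sed -i 's/docker run -d --restart always --net=host/& -e "DATALAB_DISABLE_IDLE_TIMEOUT_PROCESS=true"/' ./datalab.sh

    # Host the patched copy yourself and reference it when creating the cluster.
    gsutil cp ./datalab.sh gs://$BUCKET/init-actions/datalab.sh
    gcloud dataproc clusters create $CLUSTER_NAME \
        --project $PROJECT \
        --bucket $BUCKET \
        --region $REGION \
        --zone $ZONE \
        --initialization-actions gs://dataproc-initialization-actions/connectors/connectors.sh,gs://$BUCKET/init-actions/datalab.sh \
        --optional-components=ANACONDA,JUPYTER \
        --image-version=1.3
        # (keep the other flags from the question: machine types, disk sizes, workers, metadata, scopes, ...)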

