Running docker operator from Google Cloud Composer


Problem Description

As per the documentation, Google Cloud Composer Airflow worker nodes are served from a dedicated Kubernetes cluster:

I have a Docker-contained ETL step that I would like to run using Airflow, preferably on the same Kubernetes cluster that is hosting the workers, OR on a dedicated cluster.

What would be the best practice for starting a Docker operation from a Cloud Composer Airflow environment?

A pragmatic solution would be ❤️

Recommended Answer

Google Cloud Composer has just recently been released into General Availability, and with that you are now able to use a KubernetesPodOperator to launch pods into the same GKE cluster that the managed Airflow uses.

Make sure your Composer environment is at least version 1.0.0.

Example operator:

import datetime

from airflow import models
from airflow.contrib.operators import kubernetes_pod_operator

YESTERDAY = datetime.datetime.now() - datetime.timedelta(days=1)

with models.DAG(
        dag_id='composer_sample_kubernetes_pod',
        schedule_interval=datetime.timedelta(days=1),
        start_date=YESTERDAY) as dag:
    # Only name, namespace, image, and task_id are required to create a
    # KubernetesPodOperator. In Cloud Composer, currently the operator defaults
    # to using the config file found at `/home/airflow/composer_kube_config` if
    # no `config_file` parameter is specified. By default it will contain the
    # credentials for Cloud Composer's Google Kubernetes Engine cluster that is
    # created upon environment creation.
    kubernetes_min_pod = kubernetes_pod_operator.KubernetesPodOperator(
        # The ID specified for the task.
        task_id='pod-ex-minimum',
        # Name of the task you want to run, used to generate the Pod ID.
        name='pod-ex-minimum',
        # The namespace to run within Kubernetes; the default namespace is
        # `default`. Pods launched here can starve the Airflow workers and
        # scheduler of resources within the Cloud Composer environment; the
        # recommended solution is to increase the number of nodes in order
        # to satisfy the computing requirements. Alternatively, launching pods
        # into a custom namespace avoids fighting over resources.
        namespace='default',
        # Docker image specified. Defaults to hub.docker.com, but any fully
        # qualified URL will point to a custom repository. Supports private
        # gcr.io images if the Composer environment is under the same
        # project-id as the gcr.io images.
        image='gcr.io/gcp-runtimes/ubuntu_16_0_4')
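For the ETL use case in the question, the same operator can run your own container image with arguments and environment variables. The sketch below is a minimal, hedged example: the image name, arguments, and environment variable are hypothetical placeholders, not values from the original answer, and it assumes your image is pushed to gcr.io in the same project as the Composer environment.

```python
import datetime

from airflow import models
from airflow.contrib.operators import kubernetes_pod_operator

YESTERDAY = datetime.datetime.now() - datetime.timedelta(days=1)

with models.DAG(
        dag_id='etl_pod_example',
        schedule_interval=datetime.timedelta(days=1),
        start_date=YESTERDAY) as dag:
    etl_step = kubernetes_pod_operator.KubernetesPodOperator(
        task_id='etl-step',
        name='etl-step',
        namespace='default',
        # Hypothetical private image; resolvable because it lives in gcr.io
        # under the same project as the Composer environment.
        image='gcr.io/my-project/my-etl:latest',
        # Arguments passed to the container's entrypoint; `{{ ds }}` is the
        # Airflow-templated execution date.
        arguments=['--date', '{{ ds }}'],
        # Environment variables visible inside the container (hypothetical).
        env_vars={'TARGET_DATASET': 'analytics'},
        # Fail the task if the pod cannot be scheduled within two minutes.
        startup_timeout_seconds=120)
```

Because the pod runs in the Composer GKE cluster by default, heavy ETL workloads can compete with the Airflow workers for resources; as the comments in the answer's example note, adding nodes or using a separate namespace (or node pool) mitigates this.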


