Terraform: How to create a Kubernetes cluster on Google Cloud (GKE) with namespaces?

Problem description

I'm after an example that would do the following:

  1. Create a Kubernetes cluster on GKE via Terraform's google_container_cluster
  2. ... and then create namespaces in it, presumably via kubernetes_namespace

The thing I'm not sure about is how to connect the newly created cluster and the namespace definition. For example, when adding google_container_node_pool, I can do something like cluster = "${google_container_cluster.hosting.name}" but I don't see anything similar for kubernetes_namespace.

Recommended answer

In theory it is possible to reference resources from the GCP provider in the K8S (or any other) provider in the same way you'd reference resources or data sources within the context of a single provider.

provider "google" {
  region = "us-west1"
}

data "google_compute_zones" "available" {}

resource "google_container_cluster" "primary" {
  name = "the-only-marcellus-wallace"
  zone = "${data.google_compute_zones.available.names[0]}"
  initial_node_count = 3

  additional_zones = [
    "${data.google_compute_zones.available.names[1]}"
  ]

  master_auth {
    username = "mr.yoda"
    password = "adoy.rm"
  }

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring"
    ]
  }
}

provider "kubernetes" {
  host = "https://${google_container_cluster.primary.endpoint}"
  username = "${google_container_cluster.primary.master_auth.0.username}"
  password = "${google_container_cluster.primary.master_auth.0.password}"
  client_certificate = "${base64decode(google_container_cluster.primary.master_auth.0.client_certificate)}"
  client_key = "${base64decode(google_container_cluster.primary.master_auth.0.client_key)}"
  cluster_ca_certificate = "${base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}

resource "kubernetes_namespace" "n" {
  metadata {
    name = "blablah"
  }
}

However in practice it may not work as expected due to a known core bug breaking cross-provider dependencies, see https://github.com/hashicorp/terraform/issues/12393 and https://github.com/hashicorp/terraform/issues/4149 respectively.

Alternative solutions are:

  1. Use a 2-staged apply and target the GKE cluster first, then anything else that depends on it, i.e. terraform apply -target=google_container_cluster.primary and then terraform apply (see the sketch right after this list).
  2. Separate out the GKE cluster config from the K8S configs, give them completely isolated workflows, and connect them via remote state.
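
For the first option, the workflow would look roughly like this (a minimal sketch; the resource address assumes the config shown above):

# Stage 1: create/update only the GKE cluster (and whatever it depends on)
terraform apply -target=google_container_cluster.primary

# Stage 2: apply the rest of the config, including the kubernetes_namespace
terraform apply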

/terraform-gke/main.tf

# Store the state remotely in GCS so the K8S workspace can read the outputs below
terraform {
  backend "gcs" {
    bucket  = "tf-state-prod"
    prefix  = "terraform/state"
  }
}

provider "google" {
  region = "us-west1"
}

data "google_compute_zones" "available" {}

resource "google_container_cluster" "primary" {
  name = "the-only-marcellus-wallace"
  zone = "${data.google_compute_zones.available.names[0]}"
  initial_node_count = 3

  additional_zones = [
    "${data.google_compute_zones.available.names[1]}"
  ]

  master_auth {
    username = "mr.yoda"
    password = "adoy.rm"
  }

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring"
    ]
  }
}

output "gke_host" {
  value = "https://${google_container_cluster.primary.endpoint}"
}

output "gke_username" {
  value = "${google_container_cluster.primary.master_auth.0.username}"
}

output "gke_password" {
  value = "${google_container_cluster.primary.master_auth.0.password}"
}

output "gke_client_certificate" {
  value = "${base64decode(google_container_cluster.primary.master_auth.0.client_certificate)}"
}

output "gke_client_key" {
  value = "${base64decode(google_container_cluster.primary.master_auth.0.client_key)}"
}

output "gke_cluster_ca_certificate" {
  value = "${base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)}"
}

Here we're exposing all the necessary configuration via outputs and use a backend to store the state, along with these outputs, in a remote location, GCS in this case. This enables us to reference it in the config below.

/terraform-k8s/main.tf

data "terraform_remote_state" "foo" {
  backend = "gcs"
  config {
    bucket  = "tf-state-prod"
    prefix  = "terraform/state"
  }
}

provider "kubernetes" {
  host = "https://${data.terraform_remote_state.foo.gke_host}"
  username = "${data.terraform_remote_state.foo.gke_username}"
  password = "${data.terraform_remote_state.foo.gke_password}"
  client_certificate = "${base64decode(data.terraform_remote_state.foo.gke_client_certificate)}"
  client_key = "${base64decode(data.terraform_remote_state.foo.gke_client_key)}"
  cluster_ca_certificate = "${base64decode(data.terraform_remote_state.foo.gke_cluster_ca_certificate)}"
}

resource "kubernetes_namespace" "n" {
  metadata {
    name = "blablah"
  }
}

What may or may not be obvious here is that the cluster has to be created/updated before creating/updating any K8S resources (if such an update relies on updates of the cluster).
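
With the split layout this ordering falls out of the workflow naturally: apply the GKE workspace first, then the K8S one. A rough sketch, assuming the directory names used above:

# 1. Create/update the cluster and publish its outputs to the remote state in GCS
cd terraform-gke
terraform init && terraform apply

# 2. Read that remote state and manage the K8S resources (e.g. the namespace)
cd ../terraform-k8s
terraform init && terraform apply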

Taking the 2nd approach is generally advisable either way (even when/if the bug wasn't a factor and cross-provider references worked), as it reduces the blast radius and defines much clearer responsibility. It's (IMO) common for such deployments to have one person/team responsible for managing the cluster and a different one for managing the K8S resources.

There may certainly be overlaps though - e.g. ops wanting to deploy logging & monitoring infrastructure on top of a fresh GKE cluster, so cross-provider dependencies aim to satisfy such use cases. For that reason I'd recommend subscribing to the GH issues mentioned above.
