Terraform: Error: Kubernetes cluster unreachable: invalid configuration

Problem description

After deleting the Kubernetes cluster with "terraform destroy", I can't create it again.

"terraform apply"表示返回以下错误消息:

"terraform apply" returns the following error message:

Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

Here is the Terraform configuration:

terraform {
  backend "s3" {
    bucket = "skyglass-msur"
    key    = "terraform/backend"
    region = "us-east-1"
  }
}

locals {
  env_name         = "staging"
  aws_region       = "us-east-1"
  k8s_cluster_name = "ms-cluster"
}

variable "mysql_password" {
  type        = string
  description = "Expected to be retrieved from environment variable TF_VAR_mysql_password"
}

provider "aws" {
  region = local.aws_region
}

data "aws_eks_cluster" "msur" {
  name = module.aws-kubernetes-cluster.eks_cluster_id
}

module "aws-network" {
  source = "github.com/skyglass-microservices/module-aws-network"

  env_name              = local.env_name
  vpc_name              = "msur-VPC"
  cluster_name          = local.k8s_cluster_name
  aws_region            = local.aws_region
  main_vpc_cidr         = "10.10.0.0/16"
  public_subnet_a_cidr  = "10.10.0.0/18"
  public_subnet_b_cidr  = "10.10.64.0/18"
  private_subnet_a_cidr = "10.10.128.0/18"
  private_subnet_b_cidr = "10.10.192.0/18"
}

module "aws-kubernetes-cluster" {
  source = "github.com/skyglass-microservices/module-aws-kubernetes"

  ms_namespace       = "microservices"
  env_name           = local.env_name
  aws_region         = local.aws_region
  cluster_name       = local.k8s_cluster_name
  vpc_id             = module.aws-network.vpc_id
  cluster_subnet_ids = module.aws-network.subnet_ids

  nodegroup_subnet_ids     = module.aws-network.private_subnet_ids
  nodegroup_disk_size      = "20"
  nodegroup_instance_types = ["t3.medium"]
  nodegroup_desired_size   = 1
  nodegroup_min_size       = 1
  nodegroup_max_size       = 5
}

# Create namespace
# Use kubernetes provider to work with the kubernetes cluster API
provider "kubernetes" {
  # load_config_file       = false
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.msur.certificate_authority[0].data)
  host                   = data.aws_eks_cluster.msur.endpoint
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws-iam-authenticator"
    args        = ["token", "-i", "${data.aws_eks_cluster.msur.name}"]
  }
}

# Create a namespace for microservice pods
resource "kubernetes_namespace" "ms-namespace" {
  metadata {
    name = "microservices"
  }
}
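
Note: the exec block above uses the client.authentication.k8s.io/v1alpha1 API, which was later removed (Kubernetes dropped it in 1.24). On newer clusters the same provider is typically configured against v1beta1 with the AWS CLI's "eks get-token" instead of aws-iam-authenticator. A minimal sketch of that variant, reusing the data source above and assuming AWS CLI v2 is installed:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.msur.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.msur.certificate_authority[0].data)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # "aws eks get-token" prints an ExecCredential the provider consumes
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.msur.name]
  }
}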

P.S. There seems to be an issue with the Terraform Kubernetes provider on 0.14.7.

I couldn't use "load_config_file" = false with this version, so I had to comment it out, which seems to be the cause of this issue.

P.P.S. It could also be an issue with an outdated cluster_ca_certificate that Terraform tries to use: deleting that certificate might be enough, although I'm not sure where it is stored.

Answer

Before doing something radical like manipulating the state directly, try setting the KUBE_CONFIG_PATH environment variable:

export KUBE_CONFIG_PATH=/path/to/.kube/config

After this, rerun the "plan" or "apply" command. This fixed the issue for me.
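
KUBE_CONFIG_PATH simply points the Kubernetes provider at an existing kubeconfig, so it stops failing with the empty default configuration. If an environment variable is awkward (for example in CI), the equivalent can be set in the provider block itself; a minimal sketch assuming the default kubeconfig location:

provider "kubernetes" {
  # Same effect as exporting KUBE_CONFIG_PATH before running terraform
  config_path = "~/.kube/config"
}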
