Pod execution role is not found in auth config or does not have all required permissions. How can I debug?


Problem description

I want to be able to deploy AWS EKS using Fargate. I have successfully made the deployment work with a node_group. However, when I shifted to using Fargate, it seems that the pods are all stuck in the pending state.

I am provisioning using Terraform (not necessarily looking for a Terraform answer). This is how I create my EKS cluster:

module "eks_cluster" {
  source                            = "terraform-aws-modules/eks/aws"
  version                           = "13.2.1"
  cluster_name                      = "${var.project_name}-${var.env_name}"
  cluster_version                   = var.cluster_version
  vpc_id                            = var.vpc_id
  cluster_enabled_log_types         = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  enable_irsa                       = true
  subnets                           = concat(var.private_subnet_ids, var.public_subnet_ids)
  create_fargate_pod_execution_role = true
  write_kubeconfig                  = false
  fargate_pod_execution_role_name   = "${var.project_name}-role"
  # Assigning worker groups
  node_groups = {
    my_nodes = {
      desired_capacity = 1
      max_capacity     = 1
      min_capacity     = 1
      instance_type    = var.nodes_instance_type
      subnets          = var.private_subnet_ids
    }
  }
}

And this is how I provision the Fargate profile:

# Create EKS Fargate profile
resource "aws_eks_fargate_profile" "fargate_profile" {
  cluster_name           = module.eks_cluster.cluster_id
  fargate_profile_name   = "${var.project_name}-fargate-profile-${var.env_name}"
  pod_execution_role_arn = aws_iam_role.fargate_iam_role.arn
  subnet_ids             = var.private_subnet_ids

  selector {
    namespace = var.project_name
  }
}

And this is how I create and attach the required policies:

# Create IAM role for the Fargate profile
resource "aws_iam_role" "fargate_iam_role" {
  name                  = "${var.project_name}-fargate-role-${var.env_name}"
  force_detach_policies = true
  assume_role_policy    = jsonencode({
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = {
        Service = "eks-fargate-pods.amazonaws.com"
      }
    }]
    Version   = "2012-10-17"
  })
}

# Attach IAM Policy for Fargate
resource "aws_iam_role_policy_attachment" "fargate_pod_execution" {
  role       = aws_iam_role.fargate_iam_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
}

What I have tried that did not seem to work

Running kubectl describe pod I get:

Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  14s   fargate-scheduler  Misconfigured Fargate Profile: fargate profile fargate-airflow-fargate-profile-dev blocked for new launches due to: Pod execution role is not found in auth config or does not have all required permissions for launching fargate pods.

Other things I have tried but without success

I have tried mapping the role via the module's feature like:

module "eks_cluster" {
  source                            = "terraform-aws-modules/eks/aws"
  version                           = "13.2.1"
  cluster_name                      = "${var.project_name}-${var.env_name}"
  cluster_version                   = var.cluster_version
  vpc_id                            = var.vpc_id
  cluster_enabled_log_types         = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  enable_irsa                       = true
  subnets                           = concat(var.private_subnet_ids, var.public_subnet_ids)
  create_fargate_pod_execution_role = true
  write_kubeconfig                  = false
  fargate_pod_execution_role_name   = "${var.project_name}-role"
  # Assigning worker groups
  node_groups = {
    my_nodes = {
      desired_capacity = 1
      max_capacity     = 1
      min_capacity     = 1
      instance_type    = var.nodes_instance_type
      subnets          = var.private_subnet_ids
    }
  }
# Trying to map role
  map_roles = [
    {
      rolearn  = aws_eks_fargate_profile.airflow.arn
      username = aws_eks_fargate_profile.airflow.fargate_profile_name
      groups   = ["system:*"]
    }
  ]
}

But my attempt was not successful. How can I debug this issue? And what is the cause behind it?

Accepted answer

Okay, I see your problems. I just fixed mine, too, though I used different methods.

In your eks_cluster module, you already tell the module to create the role and give it a name, so there's no need to create a separate role resource later. The module should handle it for you, including populating the aws-auth configmap within Kubernetes.

In your aws_eks_fargate_profile resource, you should use the role provided by the module, i.e. pod_execution_role_arn = module.eks_cluster.fargate_profile_arns[0].
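A minimal sketch of the corrected profile, assuming the fargate_profile_arns output suggested above is what terraform-aws-modules/eks v13.2.1 exposes for the module-managed pod execution role (depending on the module version, the output may instead be named fargate_iam_role_arn, so check terraform output or the module's outputs.tf):

# Corrected Fargate profile: reference the role the module already
# created instead of a separately managed aws_iam_role.
resource "aws_eks_fargate_profile" "fargate_profile" {
  cluster_name         = module.eks_cluster.cluster_id
  fargate_profile_name = "${var.project_name}-fargate-profile-${var.env_name}"

  # Use the module-managed role so its ARN matches the aws-auth entry
  # the module wrote; a self-managed role never gets mapped and causes
  # the "Pod execution role is not found in auth config" error.
  pod_execution_role_arn = module.eks_cluster.fargate_profile_arns[0]
  subnet_ids             = var.private_subnet_ids

  selector {
    namespace = var.project_name
  }
}

With this in place, the standalone aws_iam_role.fargate_iam_role and its policy attachment can be removed from the configuration.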

I believe fixing those up should solve your issue for the first configuration attempt.

For your second attempt, the map_roles input is for IAM roles, but you're supplying info about Fargate profiles. You want to do one of two things:

  1. Disable the module creating your roles (create_fargate_pod_execution_role and fargate_pod_execution_role_name), instead create your own IAM role as you did in the first configuration, and supply that info to map_roles.
  2. Remove map_roles and reference the module-generated IAM role in your Fargate profile, similar to the solution for your first configuration.

If any of this is confusing, please let me know. It seems you're really close!
