Kubernetes reports "pod didn't trigger scale-up (it wouldn't fit if a new node is added)" even though it would?

Question

I don't understand why I'm receiving this error. A new node should definitely be able to accommodate the pod: I'm only requesting 768Mi of memory and 450m of CPU, and the instance group that would be autoscaled is of type n1-highcpu-2 (2 vCPU, 1.8 GB).
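(Those totals come from summing the three containers' requests shown below: CPU 250m + 100m + 100m = 450m, and memory 512Mi + 128Mi + 128Mi = 768Mi, which should fit comfortably on a fresh n1-highcpu-2 even after system reservations.)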

How can I diagnose this further?

kubectl describe pod:

Name:           initial-projectinitialabcrad-697b74b449-848bl
Namespace:      production
Node:           <none>
Labels:         app=initial-projectinitialabcrad
                appType=abcrad-api
                pod-template-hash=2536306005
Annotations:    <none>
Status:         Pending
IP:             
Controlled By:  ReplicaSet/initial-projectinitialabcrad-697b74b449
Containers:
  app:
    Image:      gcr.io/example-project-abcsub/projectinitial-abcrad-app:production_6b0b3ddabc68d031e9f7874a6ea49ee9902207bc
    Port:       <none>
    Host Port:  <none>
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:     250m
      memory:  512Mi
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-srv8k (ro)
  nginx:
    Image:      gcr.io/example-project-abcsub/projectinitial-abcrad-nginx:production_6b0b3ddabc68d031e9f7874a6ea49ee9902207bc
    Port:       80/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:        100m
      memory:     128Mi
    Readiness:    http-get http://:80/api/v1/ping delay=5s timeout=10s period=10s #success=1 #failure=3
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-srv8k (ro)
  cloudsql-proxy:
    Image:      gcr.io/cloudsql-docker/gce-proxy:1.11
    Port:       3306/TCP
    Host Port:  0/TCP
    Command:
      /cloud_sql_proxy
      -instances=example-project-abcsub:us-central1:abcfn-staging=tcp:0.0.0.0:3306
      -credential_file=/secrets/cloudsql/credentials.json
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:        100m
      memory:     128Mi
    Mounts:
      /secrets/cloudsql from cloudsql-instance-credentials (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-srv8k (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  cloudsql-instance-credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cloudsql-instance-credentials
    Optional:    false
  default-token-srv8k:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-srv8k
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason             Age                  From                Message
  ----     ------             ----                 ----                -------
  Normal   NotTriggerScaleUp  4m (x29706 over 3d)  cluster-autoscaler  pod didn't trigger scale-up (it wouldn't fit if a new node is added)
  Warning  FailedScheduling   4m (x18965 over 3d)  default-scheduler   0/4 nodes are available: 3 Insufficient memory, 4 Insufficient cpu.
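
For anyone needing more detail than the pod's events provide, one option (a sketch, assuming the cluster-autoscaler writes its status ConfigMap, which is the default unless --write-status-configmap=false) is to inspect that ConfigMap in kube-system:

# Per-node-group autoscaler status, including the outcome of recent scale-up attempts
kubectl -n kube-system get configmap cluster-autoscaler-status -o yaml

# Events attached to the ConfigMap often name the pods the autoscaler considered unschedulable
kubectl -n kube-system describe configmap cluster-autoscaler-status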

Answer

It's not the hardware requests; it's caused by the pod affinity rule I had defined. Because the rule is a hard requirement (requiredDuringSchedulingIgnoredDuringExecution) and its label selector is, by default, evaluated against pods in the pod's own namespace, a freshly added node, which runs no such pods yet, can never satisfy it. That is why the autoscaler reports "it wouldn't fit if a new node is added":

podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: appType
        operator: NotIn
        values:
        - example-api
    topologyKey: kubernetes.io/hostname
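
For what it's worth, the intent here (keeping this pod away from appType=example-api pods) is conventionally expressed as pod anti-affinity with In rather than pod affinity with NotIn, and a required anti-affinity rule is trivially satisfiable on an empty node, so it does not block scale-up the same way. A minimal sketch of that rewrite, assuming the goal really is just to avoid co-scheduling with example-api:

podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      # Repel only pods explicitly labeled appType=example-api
      - key: appType
        operator: In
        values:
        - example-api
    # Spread at the node level, same as the original rule
    topologyKey: kubernetes.io/hostname

If the constraint only needs to be best-effort, preferredDuringSchedulingIgnoredDuringExecution with a weight is a softer alternative that never leaves a pod unschedulable.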
