Kubernetes HPA not downscaling as expected


Question

What happened: I've configured an HPA with these details:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: api-horizontalautoscaler
  namespace: develop
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: api-deployment
  minReplicas: 1
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 400Mi

What I expected to happen: The pods scaled up to 3 when we put some load on them and the average memory exceeded 400, which was expected. Now the average memory has gone back down to roughly 300, and the pods still haven't scaled down even though they have been below the target for a couple of hours now.
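For context, the Kubernetes documentation gives the HPA replica calculation as desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue), skipped when the ratio is within a tolerance band around 1.0 (0.1 is the documented default for `--horizontal-pod-autoscaler-tolerance`, but treat it as an assumption for any given cluster). A minimal sketch of that formula:

```python
import math

def desired_replicas(current_replicas: int, current_value: float,
                     target_value: float, tolerance: float = 0.1) -> int:
    """Sketch of the documented HPA formula:
    desiredReplicas = ceil(currentReplicas * currentValue / targetValue),
    with no change when the ratio is within the tolerance band around 1.0."""
    ratio = current_value / target_value
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # within tolerance: leave the count unchanged
    return math.ceil(current_replicas * ratio)

# 3 replicas averaging ~300Mi against the 400Mi target from the question:
print(desired_replicas(3, 300, 400))  # ceil(3 * 0.75) = ceil(2.25) = 3
```

Note the ceiling: with 3 replicas and a 400Mi target, the computed count stays at 3 until the average falls below roughly 267Mi (400 × 2⁄3), which may account for part of the behaviour seen here independently of the issues discussed in the answer below.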

One day later:

I expected the pods to scale down when the memory fell below 400.

Environment:

  • Kubernetes version (kubectl version):
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.9", GitCommit:"3e4f6a92de5f259ef313ad876bb008897f6a98f0", GitTreeState:"clean", BuildDate:"2019-08-05T09:22:00Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.10", GitCommit:"37d169313237cb4ceb2cc4bef300f2ae3053c1a2", GitTreeState:"clean", BuildDate:"2019-08-19T10:44:49Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}

  • OS (e.g. cat /etc/os-release):
    > cat /etc/os-release
    NAME="Ubuntu"
    VERSION="18.04.3 LTS (Bionic Beaver)"
  • Kernel (e.g. uname -a): x86_64 x86_64 x86_64 GNU/Linux

I would really like to know why this is happening. I will be happy to provide any information that's needed.

Thanks!

Answer

There are two things to look at. From the Kubernetes documentation:

The beta version, which includes support for scaling on memory and custom metrics, can be found in autoscaling/v2beta2. The new fields introduced in autoscaling/v2beta2 are preserved as annotations when working with autoscaling/v1.

autoscaling/v2beta2 was introduced in K8s 1.12, so despite the fact that you are using 1.13 (which is six minor versions old by now), it should work fine (upgrading to a newer version is nevertheless recommended). Try changing your apiVersion: to autoscaling/v2beta2.
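As a sketch, the question's HPA rewritten for autoscaling/v2beta2 might look like the manifest below. Note that v2beta2 moves the per-metric target into a target block; the scaleTargetRef apiVersion is also updated to apps/v1 here, since extensions/v1beta1 Deployments are deprecated (confirm this matches how the Deployment is actually defined in your cluster):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: api-horizontalautoscaler
  namespace: develop
spec:
  scaleTargetRef:
    apiVersion: apps/v1        # replaces the deprecated extensions/v1beta1
    kind: Deployment
    name: api-deployment
  minReplicas: 1
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: memory
      target:                  # v2beta2 replaces targetAverageValue with this block
        type: AverageValue
        averageValue: 400Mi
```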

--horizontal-pod-autoscaler-downscale-stabilization: The value for this option is a duration that specifies how long the autoscaler has to wait before another downscale operation can be performed after the current one has completed. The default value is 5 minutes (5m0s).

After changing the API version as suggested above, check the value of this particular flag.
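The flag is set on the kube-controller-manager process. On a kubeadm-provisioned cluster it can typically be inspected or adjusted in the controller manager's static pod manifest; the path below is the usual kubeadm location, which is an assumption about your setup:

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (typical kubeadm path)
spec:
  containers:
  - command:
    - kube-controller-manager
    # how long to wait after the last downscale before downscaling again
    # (5m0s is the default; shown here only to make the setting explicit)
    - --horizontal-pod-autoscaler-downscale-stabilization=5m0s
```

The kubelet restarts the controller manager automatically when a static pod manifest changes, so editing the file is usually enough to apply the new value.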
