How to Change Kubernetes Node Status from "Ready" to "NotReady" by changing CPU Utilization or memory utilization or Disk Pressure?

Problem description

I have a Kubernetes cluster set up with 1 master and 1 worker node. For testing purposes, I increased CPU utilization and memory utilization up to 100%, but the node is still not getting the "NotReady" status. I am testing the pressure conditions: how do I change the MemoryPressure status flag to true, or DiskPressure or PIDPressure to true?

Here are my master node conditions:

Conditions:

  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 27 Nov 2019 14:36:29 +0000   Wed, 27 Nov 2019 14:36:29 +0000   WeaveIsUp                    Weave pod has set this
  MemoryPressure       False   Thu, 28 Nov 2019 07:36:46 +0000   Fri, 22 Nov 2019 13:30:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Thu, 28 Nov 2019 07:36:46 +0000   Fri, 22 Nov 2019 13:30:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Thu, 28 Nov 2019 07:36:46 +0000   Fri, 22 Nov 2019 13:30:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Thu, 28 Nov 2019 07:36:46 +0000   Fri, 22 Nov 2019 13:30:48 +0000   KubeletReady                 kubelet is posting ready status

Here is the pod info:

Non-terminated Pods:         (8 in total)
  Namespace                  Name                                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                                                                   ------------  ----------  ---------------  -------------  ---
  kube-system                coredns-5644d7b6d9-dm8v7                                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     22d
  kube-system                coredns-5644d7b6d9-mz5rm                                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     22d
  kube-system                etcd-ip-172-31-28-186.us-east-2.compute.internal                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22d
  kube-system                kube-apiserver-ip-172-31-28-186.us-east-2.compute.internal             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22d
  kube-system                kube-controller-manager-ip-172-31-28-186.us-east-2.compute.internal    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22d
  kube-system                kube-proxy-cw8vv                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22d
  kube-system                kube-scheduler-ip-172-31-28-186.us-east-2.compute.internal             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22d
  kube-system                weave-net-ct9zb                                                        20m (1%)      0 (0%)      0 (0%)           0 (0%)         22d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                770m (38%)  0 (0%)
  memory             140Mi (1%)  340Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)

And here is the worker node:

Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Thu, 28 Nov 2019 07:00:08 +0000   Thu, 28 Nov 2019 07:00:08 +0000   WeaveIsUp                    Weave pod has set this
  MemoryPressure       False   Thu, 28 Nov 2019 07:39:03 +0000   Thu, 28 Nov 2019 07:00:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Thu, 28 Nov 2019 07:39:03 +0000   Thu, 28 Nov 2019 07:00:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Thu, 28 Nov 2019 07:39:03 +0000   Thu, 28 Nov 2019 07:00:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Thu, 28 Nov 2019 07:39:03 +0000   Thu, 28 Nov 2019 07:00:00 +0000   KubeletReady                 kubelet is posting ready status

Answer

There are several ways of making a node go into NotReady status, but not through Pods. When a Pod starts to consume too much memory, the kubelet will simply kill that Pod, precisely to protect the node.
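
As a minimal sketch of that behaviour (the image, names, and numbers here are illustrative, not part of the original answer), a Pod whose container exceeds its memory limit simply gets OOMKilled while the node itself stays Ready:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: memory-hog               # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: stress
    image: polinux/stress        # assumed publicly available stress image
    command: ["stress", "--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]
    resources:
      requests:
        memory: "50Mi"
      limits:
        memory: "100Mi"          # the container is OOM-killed once it exceeds this limit
EOF

kubectl get pod memory-hog       # STATUS ends up OOMKilled; the node stays Ready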

I guess you want to test what happens when a node goes down; in that case, you want to drain it. In other words, to simulate node issues, you should run:

kubectl drain NODE

Still, check kubectl drain --help to see what happens under which circumstances.
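
As a rough sketch (the node name is a placeholder), draining a node and later putting it back into service could look like this; note that a drained node is marked SchedulingDisabled rather than NotReady:

kubectl drain <node-name> --ignore-daemonsets --delete-local-data
kubectl get nodes                   # the drained node shows Ready,SchedulingDisabled
kubectl uncordon <node-name>        # make the node schedulable again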

EDIT

I actually tried this: I accessed the node and ran stress on it directly, and this is what happened within 20 seconds:

root@gke-klusta-lemmy-3ce02acd-djhm:/# stress --cpu 16 --io 8 --vm 8 --vm-bytes 2G

Checking the node:

$ kubectl get no -w | grep gke-klusta-lemmy-3ce02acd-djhm
gke-klusta-lemmy-3ce02acd-djhm   Ready    <none>   15d     v1.13.11-gke.14
gke-klusta-lemmy-3ce02acd-djhm   Ready   <none>   15d   v1.13.11-gke.14
gke-klusta-lemmy-3ce02acd-djhm   NotReady   <none>   15d   v1.13.11-gke.14
gke-klusta-lemmy-3ce02acd-djhm   NotReady   <none>   15d   v1.13.11-gke.14
gke-klusta-lemmy-3ce02acd-djhm   NotReady   <none>   15d   v1.13.11-gke.14

I am running pretty weak nodes: 1 CPU @ 4 GB RAM.
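
If you want to see exactly which condition flips during such a test (the node name is a placeholder), the node conditions can be dumped directly, for example:

kubectl get node <node-name> -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'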
