How to simulate nodeNotReady for a node in Kubernetes
Question
My ceph cluster is running on AWS with a 3-master, 3-worker configuration. When I do kubectl get nodes it shows me all the nodes in the Ready state.

Is there any way I can manually simulate a NotReady error for a node?
Answer
- If the purpose is to not schedule any new pods, you can use

kubectl cordon NODE_NAME

This will add the unschedulable taint to the node and prevent new pods from being scheduled there.

- If the purpose is to evict existing pods as well, you can use

kubectl drain NODE_NAME
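As a sketch of the two steps above (the node name `worker-1` is a placeholder, and the exact drain flags vary slightly by kubectl version):

```shell
# Mark the node unschedulable; existing pods keep running,
# but no new pods will be scheduled onto it.
kubectl cordon worker-1

# Additionally evict the existing pods (DaemonSet pods are not
# evicted, and pods using emptyDir storage need an extra flag).
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data

# Verify: the node now shows "Ready,SchedulingDisabled".
kubectl get node worker-1

# Undo when finished.
kubectl uncordon worker-1
```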
In general you can play with taints and tolerations to achieve goals like the above, and you can do much more with them!
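For example, a minimal illustration of the taint/toleration mechanism (the taint key `dedicated` and value `experimental` are made up for this sketch): a taint repels pods from a node, and a matching toleration lets a specific pod opt back in.

```yaml
# First taint the node so that only tolerating pods may run there:
#   kubectl taint node NODE_NAME dedicated=experimental:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  containers:
  - name: app
    image: nginx
  # Without this toleration, the scheduler would skip the tainted node.
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "experimental"
    effect: "NoSchedule"
```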
Now, the NotReady status comes from the taint node.kubernetes.io/not-ready. In version 1.13 the TaintBasedEvictions feature was promoted to beta and enabled by default, hence this taint is added automatically by the NodeController. Therefore, if you manually set that taint with

kubectl taint node NODE_NAME node.kubernetes.io/not-ready=:NoExecute

the NodeController will reset it automatically! So to absolutely see the NotReady status, the best way is to remove the networking on a particular node. Lastly, if you only want the same eviction effect without touching networking, you can use a taint of your own, like this:

kubectl taint node NODE_NAME dedicated/not-ready=:NoExecute
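A sketch of the "actually break the node" approach mentioned above, assuming SSH access to the node and a systemd-managed kubelet (stopping the kubelet is one common way to cut the node's status reporting, alongside blocking its network):

```shell
# On the node itself: stop the kubelet so it stops reporting status.
sudo systemctl stop kubelet

# After the node-monitor grace period (40s by default), the control
# plane marks the node NotReady and the NodeController adds the
# node.kubernetes.io/not-ready taint itself.
kubectl get nodes
kubectl describe node NODE_NAME | grep -i taint

# Restore the node afterwards.
sudo systemctl start kubelet
```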