Unable to delete all pods in Kubernetes - Clear/restart Kubernetes
Question
I am trying to delete/remove all the pods running in my environment. When I issue
docker ps
I get the below output. This is a sample screenshot. As you can see, they are all Kubernetes containers. I would like to delete/remove all of the pods.
I tried all the below approaches, but the pods keep reappearing again and again:
sudo kubectl delete --all pods --namespace=default/kube-public #returns "no resources found" for both default and kube-public namespaces
sudo kubectl delete --all pods --namespace=kube-system # shows "pod xxx deleted"
sudo kubectl get deployments # returns no resources found
Apart from the above, I also tried using docker stop
and docker rm
with the container IDs, but they respawn. I want to clear all of them. I would like to start from the beginning.
I have logged out and logged back in multiple times, but I still see those items.
Can you help me delete all the pods? I expect the output of "docker ps" to not contain any Kubernetes-related items like those shown above.
Answer
sudo kubectl get deployments # returns no resources found
Deployments are not the only Controllers in Kubernetes that can manage your Pods. There are many others: StatefulSet, ReplicaSet,
etc. (see the Controllers documentation for details).
In short, a Controller is responsible for ensuring all Pods it manages are running, and creates them if necessary - when you delete all Pods, the associated Controller will realise they are missing and simply re-create them to ensure it matches its specification.
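To see which Controller keeps re-creating a given Pod, you can inspect its ownerReferences. A minimal sketch, assuming a Pod named mypod in the default namespace (replace with one of your respawning Pods):

```shell
# Print the kind and name of the controller that owns the Pod,
# e.g. "ReplicaSet/myapp-5d4b9c" or "DaemonSet/kube-proxy"
kubectl get pod mypod -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
```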
If you want to effectively delete all Pods, you should delete all the related Controllers (or update them to set their replicas to 0), such as:
# do NOT run this in the kube-system namespace, it may corrupt your cluster
# You can also specify --namespace xxx to delete in a specific namespace
kubectl delete deployment --all # Deployments manage ReplicaSets; deleting a Deployment also deletes its ReplicaSets and their Pods
kubectl delete statefulset --all
kubectl delete daemonset --all
kubectl delete job --all
kubectl delete cronjob --all
kubectl delete replicationcontroller --all # there should not be any ReplicationControllers, as Deployments should be used instead
# Then delete what you find
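Alternatively, if you would rather keep the Controllers but stop their Pods, you can scale the replicas down to zero instead of deleting. A sketch, assuming the workloads live in the default namespace:

```shell
# Scale every Deployment in the default namespace down to zero replicas;
# the Deployment objects remain and can be scaled back up later
kubectl scale deployment --all --replicas=0 --namespace=default

# The same works for StatefulSets
kubectl scale statefulset --all --replicas=0 --namespace=default
```

Note that DaemonSets have no replica count and must be deleted (or given a non-matching nodeSelector) to stop their Pods.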
Edit: as mentioned in P Emkambaram's answer, you can also delete the entire namespace with kubectl delete namespace mynamespace
(don't delete kube-system
of course), but it may also delete other components in the namespace, such as Services.
Important notes:
- You should also take care when deleting Pods or objects in the kube-system
namespace, which are related to the internal plumbing of the cluster itself.
- You should not delete Kubernetes components directly by deleting their underlying containers with docker
commands; this may have unexpected effects. Use kubectl
instead.
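Once the Controllers are gone, you can verify that nothing user-managed is left. A sketch (the kube-system Pods will, and should, remain):

```shell
# List every remaining workload resource across all namespaces;
# after the cleanup, only kube-system components should still appear
kubectl get pods,deployments,statefulsets,daemonsets,jobs,cronjobs --all-namespaces
```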