Helm install or upgrade release failed on Kubernetes cluster: the server could not find the requested resource or UPGRADE FAILED: no deployed releases

Question

I use Helm to deploy charts on my Kubernetes cluster, but since a day ago I can no longer deploy a new chart or upgrade an existing one.

Indeed, each time I use Helm I get an error message telling me that it is not possible to install or upgrade resources.

If I run helm install --name foo . -f values.yaml --namespace foo-namespace, I get this output:

Error: release foo failed: the server could not find the requested resource

If I run helm upgrade --install foo . -f values.yaml --namespace foo-namespace or helm upgrade foo . -f values.yaml --namespace foo-namespace, I get this error:

Error: UPGRADE FAILED: "foo" has no deployed releases

I don't really understand why.

Here is my Helm version:

Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}

On my Kubernetes cluster I have Tiller deployed with the same version. When I run kubectl describe pods tiller-deploy-84b... -n kube-system:

Name:               tiller-deploy-84b8...
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               k8s-worker-1/167.114.249.216
Start Time:         Tue, 26 Feb 2019 10:50:21 +0100
Labels:             app=helm
                    name=tiller
                    pod-template-hash=84b...
Annotations:        <none>
Status:             Running
IP:                 <IP_NUMBER>
Controlled By:      ReplicaSet/tiller-deploy-84b8...
Containers:
  tiller:
    Container ID:   docker://0302f9957d5d83db22...
    Image:          gcr.io/kubernetes-helm/tiller:v2.12.3
    Image ID:       docker-pullable://gcr.io/kubernetes-helm/tiller@sha256:cab750b402d24d...
    Ports:          44134/TCP, 44135/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Tue, 26 Feb 2019 10:50:28 +0100
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:    kube-system
      TILLER_HISTORY_MAX:  0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from helm-token-... (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  helm-token-...:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  helm-token-...
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                   Message
  ----    ------     ----  ----                   -------
  Normal  Scheduled  26m   default-scheduler      Successfully assigned kube-system/tiller-deploy-84b86cbc59-kxjqv to worker-1
  Normal  Pulling    26m   kubelet, k8s-worker-1  pulling image "gcr.io/kubernetes-helm/tiller:v2.12.3"
  Normal  Pulled     26m   kubelet, k8s-worker-1  Successfully pulled image "gcr.io/kubernetes-helm/tiller:v2.12.3"
  Normal  Created    26m   kubelet, k8s-worker-1  Created container
  Normal  Started    26m   kubelet, k8s-worker-1  Started container

Has anyone faced the same issue?

Update:

This is the folder structure of my chart, named foo:

> templates/
  > deployment.yaml 
  > ingress.yaml
  > service.yaml
> .helmignore
> Chart.yaml 
> values.yaml
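
For completeness, a minimal Chart.yaml consistent with this layout would look something like the following (the description and version shown here are illustrative, not the actual file):

apiVersion: v1
name: foo
description: A custom chart pulling images from a private registry
version: 0.1.0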

I have already tried to delete the failed release using helm del --purge foo, but the same errors occurred.

To be more precise, the chart foo is in fact a custom chart using my own private registry. The imagePullSecrets are set up as usual.

I have run these two commands, helm upgrade foo . -f values.yaml --namespace foo-namespace --force and helm upgrade --install foo . -f values.yaml --namespace foo-namespace --force, and I still get an error:

UPGRADE FAILED
ROLLING BACK
Error: failed to create resource: the server could not find the requested resource
Error: UPGRADE FAILED: failed to create resource: the server could not find the requested resource

Notice that foo-namespace already exists, so the error does not come from the namespace name or the namespace itself. Indeed, if I run helm list, I can see that the foo release is in a FAILED status.
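
For reference, the failed revision also shows up in the release history (output trimmed and illustrative):

$ helm history foo
REVISION  UPDATED                   STATUS  CHART      DESCRIPTION
1         Tue Feb 26 11:03:53 2019  FAILED  foo-0.1.0  release foo failed: the server could not find the requested resource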

Answer

Tiller stores all releases as ConfigMaps in Tiller's namespace (kube-system in your case). Try to find the broken release and delete its ConfigMap using these commands:

$ kubectl get cm --all-namespaces -l OWNER=TILLER
NAMESPACE     NAME               DATA   AGE
kube-system   nginx-ingress.v1   1      22h

$ kubectl delete cm  nginx-ingress.v1 -n kube-system
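
In your case the broken release is foo. Tiller's v2 storage convention also attaches NAME and STATUS labels to each release ConfigMap, so if you are unsure which one belongs to the broken release you can narrow the search:

# list only the ConfigMaps recording the foo release
$ kubectl get cm -n kube-system -l OWNER=TILLER,NAME=foo
# or list every release record Tiller marked as FAILED
$ kubectl get cm -n kube-system -l OWNER=TILLER,STATUS=FAILED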

Next, delete all of the release's objects (Deployments, Services, Ingresses, etc.) manually, then reinstall the release using Helm again.
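
Deleting them one by one works, but if the chart's templates attach a release label to each object (many charts set release: <name> on their resources; check your deployment.yaml, service.yaml, and ingress.yaml), a label selector is quicker. A sketch under that assumption:

# remove the leftover foo objects in one pass (assumes the templates label them with release: foo)
$ kubectl delete deployment,service,ingress -n foo-namespace -l release=foo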

If that didn't help, you may try downloading a newer release of Helm (v2.14.3 at the time of writing) and updating/reinstalling Tiller.
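
With the newer client installed, Tiller itself can be upgraded in place with helm init --upgrade, which re-points the tiller-deploy Deployment at the image matching the client:

$ helm version --client                # confirm the new client version
$ helm init --upgrade                  # upgrade the Tiller Deployment to match
$ kubectl rollout status deployment tiller-deploy -n kube-system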
