Cancel or undo deletion of Persistent Volumes in kubernetes cluster
Problem description
Accidentally tried to delete all PVs in the cluster, but thankfully they still have PVCs bound to them, so all PVs are stuck in Status: Terminating.
How can I get the PVs out of the "Terminating" status and back to a healthy state where they are "Bound" to their PVCs and fully working?
The key here is that I don't want to lose any data, and I want to make sure the volumes stay functional and are not at risk of being terminated if the claim goes away.
Here are some details from a kubectl describe on the PV.
$ kubectl describe pv persistent-vol-1
Finalizers: [kubernetes.io/pv-protection foregroundDeletion]
Status: Terminating (lasts 1h)
Claim: ns/application
Reclaim Policy: Delete
Here is the describe on the claim.
$ kubectl describe pvc application
Name: application
Namespace: ns
StorageClass: standard
Status: Bound
Volume: persistent-vol-1
Answer
It is, in fact, possible to save the data from your PersistentVolume even with Status: Terminating and the ReclaimPolicy set to the default (Delete). We have done so on GKE; not sure about AWS or Azure, but I guess they are similar.
We had the same problem, and I will post our solution here in case somebody else runs into an issue like this.
Your PersistentVolumes will not be terminated as long as there is a pod, deployment or, to be more specific, a PersistentVolumeClaim using them.
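You can see this protection mechanism directly on the objects: the finalizers block immediate deletion, which is why the PV sits in Terminating instead of disappearing. A quick check, assuming the PV/PVC names from the question (requires access to the cluster):

```shell
# "kubernetes.io/pv-protection" stays on the PV while a PVC is bound to it;
# the deletion only completes once the finalizer list is empty.
kubectl get pv persistent-vol-1 -o jsonpath='{.metadata.finalizers}{"\n"}'

# The PVC has an analogous "kubernetes.io/pvc-protection" finalizer
# while any pod is still using the claim.
kubectl get pvc application -n ns -o jsonpath='{.metadata.finalizers}{"\n"}'
```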
The steps we took to remedy our broken state:
Once you are in a situation like the OP's, the first thing you want to do is create a snapshot of your PersistentVolumes.
In the GKE console, go to Compute Engine -> Disks, find your volume there (use kubectl get pv | grep pvc-name), and create a snapshot of the volume.
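If you prefer the CLI over the console, the same snapshot can be taken with gcloud; the disk, snapshot and zone names below are placeholders you need to substitute:

```shell
# Find the GCE disk name backing the PV (the pdName field).
kubectl get pv persistent-vol-1 -o jsonpath='{.spec.gcePersistentDisk.pdName}{"\n"}'

# Snapshot that disk before touching anything else.
gcloud compute disks snapshot name-of-disk \
  --snapshot-names=name-of-snapshot \
  --zone=your-zone
```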
Create a disk from the snapshot: gcloud compute disks create name-of-disk --size=10 --source-snapshot=name-of-snapshot --type=pd-standard --zone=your-zone
At this point, stop the services using the volume, then delete the volume and the volume claim.
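A minimal sketch of that teardown, assuming the namespace and claim names from the question and a hypothetical deployment called application mounting the volume:

```shell
# Scale down whatever mounts the volume so the PVC is released.
kubectl scale deployment application --replicas=0 -n ns

# Delete the claim first, then the (already Terminating) volume;
# both deletions complete once their protection finalizers clear.
kubectl delete pvc application -n ns
kubectl delete pv persistent-vol-1
```

The data is safe at this point because the snapshot (and the disk created from it) exists independently of the Kubernetes objects.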
Recreate the volume manually with the data from the disk:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: name-of-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  gcePersistentDisk:
    fsType: ext4
    pdName: name-of-disk
  persistentVolumeReclaimPolicy: Retain
Now just update your volume claim to target that specific volume; note the last line of the yaml file:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: my-namespace
  labels:
    app: my-app
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: name-of-pv
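Applying both manifests and confirming that the claim binds again could look like the following (the pv.yaml/pvc.yaml file names are assumptions):

```shell
kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml

# The PV should report STATUS=Bound with CLAIM=my-namespace/my-pvc,
# since volumeName pre-binds the claim to exactly this volume.
kubectl get pv name-of-pv
kubectl get pvc my-pvc -n my-namespace
```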