Why does a GCE volume mount in a kubernetes pod cause a delay?

Problem Description

I have a kubernetes pod to which I attach a GCE persistent volume using a persistent volume claim. (For the even worse issue without a volume claim, see: Mounting a gcePersistentDisk kubernetes volume is very slow)

When there is no volume attached, the pod starts in no time (2 seconds at most). But when the pod has a GCE persistent volume mount, it takes somewhere between 20 and 60 seconds to reach the Running state. I tested with different disk sizes (10, 200, 500 GiB) and multiple pod creations, and the disk size does not appear to be correlated with the delay.

This delay does not only happen at initial creation; it also occurs when rolling updates are performed with the replication controllers or when the code crashes at runtime.

Below are the kubernetes specifications:

The replication controller

{
    "apiVersion": "v1",
    "kind": "ReplicationController",
    "metadata": {
        "name": "a1"
    },
    "spec": {
        "replicas": 1,
        "template": {
            "metadata": {
                "labels": {
                    "app": "a1"
                }
            },
            "spec": {
                "containers": [
                    {
                        "name": "a1-setup",
                        "image": "nginx",
                        "ports": [
                            {
                                "containerPort": 80
                            },
                            {
                                "containerPort": 443
                            }
                        ]
                    }
                ]
            }
        }
    }
}
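
Note that the replication controller as posted does not show the claim being referenced from the pod template. For context, here is a minimal sketch of how the claim would typically be wired into the pod spec; the volume name mypd and the mount path /usr/share/nginx/html are illustrative and not taken from the original question:

{
    "spec": {
        "containers": [
            {
                "name": "a1-setup",
                "image": "nginx",
                "volumeMounts": [
                    {
                        "name": "mypd",
                        "mountPath": "/usr/share/nginx/html"
                    }
                ]
            }
        ],
        "volumes": [
            {
                "name": "mypd",
                "persistentVolumeClaim": {
                    "claimName": "myclaim"
                }
            }
        ]
    }
}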

The volume claim

{
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "myclaim"
    },
    "spec": {
        "accessModes": [
            "ReadWriteOnce"
        ],
        "resources": {
            "requests": {
                "storage": "10Gi"
            }
        }
    }
}

And the volume

{
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {
        "name": "mydisk",
        "labels": {
             "name": "mydisk"
        }
    },
    "spec": {
        "capacity": {
            "storage": "10Gi"
        },
        "accessModes": [
            "ReadWriteOnce"
        ],
        "gcePersistentDisk": {
            "pdName": "a1-drive",
            "fsType": "ext4"
        }
    }
}

Solution

GCE (along with AWS and OpenStack) must first attach a disk/volume to the node before it can be mounted and exposed to your pod. The time required for attachment is dependent on the cloud provider.

In the case of pods created by a ReplicationController, there is an additional detach operation that has to happen. The same disk cannot be attached to more than one node (at least not in read/write mode). Detaching and pod cleanup happen in a different thread from attaching. To be specific, the Kubelet running on a node has to reconcile the pods it currently has (and the sum of their volumes) with the volumes currently present on the node. Orphaned volumes are unmounted and detached. If your pod was scheduled on a different node, it must wait until the original node detaches the volume.

The cluster eventually reaches the correct state, but it might take time for each component to get there. This is your wait time.
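
To illustrate the cross-node case described above (this is not part of the original answer, only a sketch of the mechanism): if the replacement pod is constrained to the node that already has the disk attached, for example with a nodeSelector on the pod template, the detach-from-one-node/attach-to-another round trip is avoided. The hostname value below is purely illustrative:

{
    "spec": {
        "nodeSelector": {
            "kubernetes.io/hostname": "gke-node-1"
        },
        "containers": [
            {
                "name": "a1-setup",
                "image": "nginx"
            }
        ]
    }
}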
