Container keeps crashing for Pod in minikube after the creation of PV and PVC


Problem description

I have a REST application integrated with Kubernetes for testing REST queries. When I execute a POST query on the client side, the status of the automatically created job remains PENDING indefinitely. The same happens with the Pod, which is also created automatically.

When I looked deeper into the events in the dashboard, I saw that it attaches the volume but is unable to mount it, giving this error:

Unable to mount volumes for pod "ingestion-88dhg_default(4a8dd589-e3d3-4424-bc11-27d51822d85b)": timeout expired waiting for volumes to attach or mount for pod "default"/"ingestion-88dhg". list of unmounted volumes=[cdiworkspace-volume]. list of unattached volumes=[cdiworkspace-volume default-token-qz2nb]

I have defined the persistent volume and persistent volume claim manually using the code below, but they are not connected to any pods. Should I do that?
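For reference, this is roughly how I understand a pod would normally reference the claim. The pod name and mount path here are just placeholders; only the claimName is taken from my actual PVC:

```yaml
# Hypothetical pod spec: mounting the existing PVC by name.
apiVersion: v1
kind: Pod
metadata:
  name: ingestion-example      # placeholder name
spec:
  containers:
  - name: ingestion
    image: back:latest
    volumeMounts:
    - mountPath: /workspace    # example mount path
      name: workspace-volume
  volumes:
  - name: workspace-volume
    persistentVolumeClaim:
      claimName: cdiworkspace  # must match the PVC's metadata.name
```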

PV

{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "cdiworkspace",
    "selfLink": "/api/v1/persistentvolumes/cdiworkspace",
    "uid": "92252f76-fe51-4225-9b63-4d6228d9e5ea",
    "resourceVersion": "100026",
    "creationTimestamp": "2019-07-10T09:49:04Z",
    "annotations": {
      "pv.kubernetes.io/bound-by-controller": "yes"
    },
    "finalizers": [
      "kubernetes.io/pv-protection"
    ]
  },
  "spec": {
    "capacity": {
      "storage": "10Gi"
    },
    "fc": {
      "targetWWNs": [
        "50060e801049cfd1"
      ],
      "lun": 0
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "claimRef": {
      "kind": "PersistentVolumeClaim",
      "namespace": "default",
      "name": "cdiworkspace",
      "uid": "0ce96c77-9e0d-4b1f-88bb-ad8b84072000",
      "apiVersion": "v1",
      "resourceVersion": "98688"
    },
    "persistentVolumeReclaimPolicy": "Retain",
    "storageClassName": "standard",
    "volumeMode": "Block"
  },
  "status": {
    "phase": "Bound"
  }
}

PVC

{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "cdiworkspace",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/persistentvolumeclaims/cdiworkspace",
    "uid": "0ce96c77-9e0d-4b1f-88bb-ad8b84072000",
    "resourceVersion": "100028",
    "creationTimestamp": "2019-07-10T09:32:16Z",
    "annotations": {
      "pv.kubernetes.io/bind-completed": "yes",
      "pv.kubernetes.io/bound-by-controller": "yes",
      "volume.beta.kubernetes.io/storage-provisioner": "k8s.io/minikube-hostpath"
    },
    "finalizers": [
      "kubernetes.io/pvc-protection"
    ]
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "10Gi"
      }
    },
    "volumeName": "cdiworkspace",
    "storageClassName": "standard",
    "volumeMode": "Block"
  },
  "status": {
    "phase": "Bound",
    "accessModes": [
      "ReadWriteOnce"
    ],
    "capacity": {
      "storage": "10Gi"
    }
  }
}

Result of journalctl -xe _SYSTEMD_UNIT=kubelet.service

Jul 01 09:47:26 rehan-B85M-HD3 kubelet[22759]: E0701 09:47:26.979098   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:47:40 rehan-B85M-HD3 kubelet[22759]: E0701 09:47:40.979722   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:47:55 rehan-B85M-HD3 kubelet[22759]: E0701 09:47:55.978806   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:48:08 rehan-B85M-HD3 kubelet[22759]: E0701 09:48:08.979375   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:48:23 rehan-B85M-HD3 kubelet[22759]: E0701 09:48:23.979463   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:48:37 rehan-B85M-HD3 kubelet[22759]: E0701 09:48:37.979005   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:48:48 rehan-B85M-HD3 kubelet[22759]: E0701 09:48:48.977686   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:02 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:02.979125   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:17 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:17.979408   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:28 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:28.977499   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:41 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:41.977771   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:53 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:53.978605   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:05 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:05.980251   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:16 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:16.979292   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:31 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:31.978346   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:42 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:42.979302   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:55 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:55.978043   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:51:08 rehan-B85M-HD3 kubelet[22759]: E0701 09:51:08.977540   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:51:24 rehan-B85M-HD3 kubelet[22759]: E0701 09:51:24.190929   22759 remote_image.go:113] PullImage "friendly/myplanet:0.0.1-SNAPSHOT" from image service failed: rpc error: code = Unknown desc = E
Jul 01 09:51:24 rehan-B85M-HD3 kubelet[22759]: E0701 09:51:24.190971   22759 kuberuntime_image.go:51] Pull image "friendly/myplanet:0.0.1-SNAPSHOT" failed: rpc error: code = Unknown desc = Error response 
Jul 01 09:51:24 rehan-B85M-HD3 kubelet[22759]: E0701 09:51:24.191024   22759 kuberuntime_manager.go:775] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon:

Deployment YAML

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: back
  template:
    metadata:
      labels:
        app: back
    spec:
      containers:
      - name: back
        image: back:latest
        ports:
        - containerPort: 8081
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: back
      volumes:
      - name: back
        hostPath:
          # directory location on host
          path: /back
          # this field is optional
          type: Directory

Dockerfile

FROM python:3.7-stretch

COPY . /code

WORKDIR /code

CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"

RUN pip install -r requirements.txt

ENTRYPOINT ["python", "ingestion.py"]

Python file 1

import os
import shutil
import logging

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(name)s - %(message)s')
logger = logging.getLogger("ingestion")

import requests

import datahub

scihub_username = os.environ["scihub_username"]
scihub_password = os.environ["scihub_password"]
result_url = "http://" + os.environ["CDINRW_BASE_URL"] + "/jobs/" + os.environ["CDINRW_JOB_ID"] + "/results"

logger.info("Searching the Copernicus Open Access Hub")
scenes = datahub.search(username=scihub_username,
                        password=scihub_password,
                        producttype=os.getenv("producttype"),
                        platformname=os.getenv("platformname"),
                        days_back=os.getenv("days_back", 2),
                        footprint=os.getenv("footprint"),
                        max_cloud_cover_percentage=os.getenv("max_cloud_cover_percentage"),
                        start_date = os.getenv("start_date"),
                        end_date = os.getenv("end_date"))

logger.info("Found {} relevant scenes".format(len(scenes)))

job_results = []
for scene in scenes:
    # do not download a scene that has already been ingested
    if os.path.exists(os.path.join("/out_data", scene["title"]+".SAFE")):
        logger.info("The scene {} already exists in /out_data and will not be downloaded again.".format(scene["title"]))
        filename = scene["title"]+".SAFE"
    else:
        logger.info("Starting the download of scene {}".format(scene["title"]))
        filename = datahub.download(scene, "/tmp", scihub_username, scihub_password, unpack=True)
        logger.info("The download was successful.")
        shutil.move(filename, "/out_data")
    result_message = {"description": "test",
                      "type": "Raster",
                      "format": "SAFE",
                      "filename": os.path.basename(filename)}
    job_results.append(result_message)

res = requests.put(result_url, json=job_results, timeout=60)
res.raise_for_status()

Python file 2

import logging
import os
import urllib.parse
import zipfile

import requests

# constructing URLs for querying the data hub
_BASE_URL = "https://scihub.copernicus.eu/dhus/"
SITE = {}
SITE["SEARCH"] = _BASE_URL + "search?format=xml&sortedby=beginposition&order=desc&rows=100&start={offset}&q="
_PRODUCT_URL = _BASE_URL + "odata/v1/Products('{uuid}')/"
SITE["CHECKSUM"] = _PRODUCT_URL + "Checksum/Value/$value"
SITE["SAFEZIP"] = _PRODUCT_URL + "$value"

logger = logging.getLogger(__name__)

def _build_search_url(producttype=None, platformname=None, days_back=2, footprint=None, max_cloud_cover_percentage=None, start_date=None, end_date=None):
    search_terms = []
    if producttype:
        search_terms.append("producttype:{}".format(producttype))
    if platformname:
        search_terms.append("platformname:{}".format(platformname))
    if start_date and end_date:
        search_terms.append(
            "beginPosition:[{}+TO+{}]".format(start_date, end_date))
    elif days_back:
        search_terms.append(
            "beginPosition:[NOW-{}DAYS+TO+NOW]".format(days_back))
    if footprint:
        search_terms.append("footprint:%22Intersects({})%22".format(
            footprint.replace(" ", "+")))
    if max_cloud_cover_percentage:
        search_terms.append("cloudcoverpercentage:[0+TO+{}]".format(max_cloud_cover_percentage))
    url = SITE["SEARCH"] + "+AND+".join(search_terms)
    return url


def _unpack(zip_file, directory, remove_after=False):
    with zipfile.ZipFile(zip_file) as zf:
        # This assumes that the zipfile only contains the .SAFE directory at root level
        safe_path = zf.namelist()[0]
        zf.extractall(path=directory)
    if remove_after:
        os.remove(zip_file)
    return os.path.normpath(os.path.join(directory, safe_path))


def search(username, password, producttype=None, platformname=None ,days_back=2, footprint=None, max_cloud_cover_percentage=None, start_date=None, end_date=None):
    """ Search the Copernicus SciHub

    Parameters
    ----------
    username : str
      user name for the Copernicus SciHub
    password : str
      password for the Copernicus SciHub
    producttype : str, optional
      product type to filter for in the query (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    platformname : str, optional 
      plattform name to filter for in the query (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    days_back : int, optional
      number of days before today that will be searched. Default are the last 2 days. If start and end date are set the days_back parameter is ignored
    footprint : str, optional
      well-known-text representation of the footprint
    max_cloud_cover_percentage: str, optional
      percentage of cloud cover per scene. Can only be used in combination with Sentinel-2 imagery. 
      (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    start_date: str, optional
        start point of the search extent has to be used in combination with end_date
    end_date: str, optional
        end_point of the search extent has to be used in combination with start_date

    Returns
    -------
    list
      a list of scenes that match the search parameters
    """

    import xml.etree.cElementTree as ET
    scenes = []
    search_url = _build_search_url(producttype, platformname, days_back, footprint, max_cloud_cover_percentage, start_date, end_date)
    logger.info("Search URL: {}".format(search_url))
    offset = 0
    rowsBreak = 5000
    name_space = {"atom": "http://www.w3.org/2005/Atom",
                  "opensearch": "http://a9.com/-/spec/opensearch/1.1/"}
    while offset < rowsBreak:  # Next pagination page:
        response = requests.get(search_url.format(offset=offset), auth=(username, password))
        root = ET.fromstring(response.content)
        if offset == 0:
            rowsBreak = int(
                root.find("opensearch:totalResults", name_space).text)
        for e in root.iterfind("atom:entry", name_space):
            uuid = e.find("atom:id", name_space).text
            title = e.find("atom:title", name_space).text
            begin_position = e.find(
                "atom:date[@name='beginposition']", name_space).text
            end_position = e.find(
                "atom:date[@name='endposition']", name_space).text
            footprint = e.find("atom:str[@name='footprint']", name_space).text
            scenes.append({
                "id": uuid,
                "title": title,
                "begin_position": begin_position,
                "end_position": end_position,
                "footprint": footprint})
        # Ultimate DHuS pagination page size limit (rows per page).
        offset += 100
    return scenes


def download(scene, directory, username, password, unpack=True):
    """ Download a Sentinel scene based on its uuid

    Parameters
    ----------
    scene : dict
        the scene to be downloaded
    path : str
        the path where the file will be downloaded to
    username : str
        username for the Copernicus SciHub
    password : str
        password for the Copernicus SciHub
    unpack: boolean, optional
        flag that defines whether the downloaded product should be unpacked after download. defaults to true

    Raises
    ------
    ValueError
        if the size of the downloaded file does not match the Content-Length header
    ValueError
        if the checksum of the downloaded file does not match the checksum provided by the Copernicus SciHub

    Returns
    -------
    str
        path to the downloaded file
    """

    import hashlib
    md5hash = hashlib.md5()
    md5sum = requests.get(SITE["CHECKSUM"].format(
        uuid=scene["id"]), auth=(username, password)).text

    download_path = os.path.join(directory, scene["title"] + ".zip")
    # overwrite if path already exists
    if os.path.exists(download_path):
        os.remove(download_path)
    url = SITE["SAFEZIP"].format(uuid=scene["id"])
    rsp = requests.get(url, auth=(username, password), stream=True)
    cl = rsp.headers.get("Content-Length")
    size = int(cl) if cl else -1
    # Actually fetch now:
    with open(download_path, "wb") as f:  # Do not read as a whole into memory:
        written = 0
        for block in rsp.iter_content(8192):
            f.write(block)
            written += len(block)
            md5hash.update(block)
    written = os.path.getsize(download_path)
    if size > -1 and written != size:
        raise ValueError("{}: size mismatch, {} bytes written but expected {} bytes to write!".format(
            download_path, written, size))
    elif md5sum:
        calculated = md5hash.hexdigest()
        expected = md5sum.lower()
        if calculated != expected:
            raise ValueError("{}: MD5 mismatch, calculated {} but expected {}!".format(
                download_path, calculated, expected))
    if unpack:
        return _unpack(download_path, directory, remove_after=False)
    else:
        return download_path

How do I mount the volume to the pod correctly and automatically? I don't want to manually create a pod for every REST service and assign a volume to it.

Solution

I went through the pod's logs again and realized that the parameters required by Python file 1 were not being provided, and that was what caused the container to crash. I tested this by supplying all the missing parameters flagged in the logs as environment variables in the deployment.yaml, which now looks like this:

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: back
  template:
    metadata:
      creationTimestamp: 
      labels:
        app: back
    spec:
      containers:
      - name: back
        image: back:latest
        imagePullPolicy: Never
        env:
        - name: scihub_username
          value: test
        - name: scihub_password
          value: test
        - name: CDINRW_BASE_URL
          value: 10.1.40.11:8081/swagger-ui.html
        - name: CDINRW_JOB_ID
          value: 3fa85f64-5717-4562-b3fc-2c963f66afa6
        ports:
        - containerPort: 8081
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: test-volume
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /back
          # this field is optional
          type: Directory

This started downloading the data and solved the problem for the moment. However, this is not how I want it to run: it should be triggered by a REST API that supplies all the parameters and starts and stops the container. I will create a separate question for that and link it below for anyone to follow up.
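For anyone hitting the same thing: the crash comes from ingestion.py reading os.environ[...] for variables that were never set, which raises a bare KeyError at startup and puts the container into a crash loop. A minimal sketch of failing fast with a readable log line instead (the variable names are taken from the script above; the helper names are my own):

```python
import os
import sys

# Variables that ingestion.py reads with os.environ[...]; any one of them
# missing raises KeyError at startup, which is what crashed the container.
REQUIRED = ["scihub_username", "scihub_password", "CDINRW_BASE_URL", "CDINRW_JOB_ID"]

def missing_env(environ=None):
    """Return the names of required variables that are not set."""
    if environ is None:
        environ = os.environ
    return [name for name in REQUIRED if name not in environ]

def check_env_or_exit():
    """Exit with one readable message instead of a KeyError traceback."""
    missing = missing_env()
    if missing:
        sys.exit("Missing required environment variables: " + ", ".join(missing))
```

Calling check_env_or_exit() at the top of the script makes the reason show up directly in kubectl logs.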

i have a REST application integrated with kubernetes for testing REST queries. Now when i execute a POST query on my client side the status of the job which is automatically created remains PENDING indefinitely. The same happens with the POD which is also created automatically

When i looked deeper into the events in dashboard, it attaches the volume but is unable to mount the volume and gives this error :

Unable to mount volumes for pod "ingestion-88dhg_default(4a8dd589-e3d3-4424-bc11-27d51822d85b)": timeout expired waiting for volumes to attach or mount for pod "default"/"ingestion-88dhg". list of unmounted volumes=[cdiworkspace-volume]. list of unattached volumes=[cdiworkspace-volume default-token-qz2nb]

i have defined the persistent volume and persistent volume claim manually using following codes but did not connect to any pods. Should i do that?

PV

{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "cdiworkspace",
    "selfLink": "/api/v1/persistentvolumes/cdiworkspace",
    "uid": "92252f76-fe51-4225-9b63-4d6228d9e5ea",
    "resourceVersion": "100026",
    "creationTimestamp": "2019-07-10T09:49:04Z",
    "annotations": {
      "pv.kubernetes.io/bound-by-controller": "yes"
    },
    "finalizers": [
      "kubernetes.io/pv-protection"
    ]
  },
  "spec": {
    "capacity": {
      "storage": "10Gi"
    },
    "fc": {
      "targetWWNs": [
        "50060e801049cfd1"
      ],
      "lun": 0
    },
    "accessModes": [
      "ReadWriteOnce"
    ],
    "claimRef": {
      "kind": "PersistentVolumeClaim",
      "namespace": "default",
      "name": "cdiworkspace",
      "uid": "0ce96c77-9e0d-4b1f-88bb-ad8b84072000",
      "apiVersion": "v1",
      "resourceVersion": "98688"
    },
    "persistentVolumeReclaimPolicy": "Retain",
    "storageClassName": "standard",
    "volumeMode": "Block"
  },
  "status": {
    "phase": "Bound"
  }
}

PVC

{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "cdiworkspace",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/persistentvolumeclaims/cdiworkspace",
    "uid": "0ce96c77-9e0d-4b1f-88bb-ad8b84072000",
    "resourceVersion": "100028",
    "creationTimestamp": "2019-07-10T09:32:16Z",
    "annotations": {
      "pv.kubernetes.io/bind-completed": "yes",
      "pv.kubernetes.io/bound-by-controller": "yes",
      "volume.beta.kubernetes.io/storage-provisioner": "k8s.io/minikube-hostpath"
    },
    "finalizers": [
      "kubernetes.io/pvc-protection"
    ]
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "10Gi"
      }
    },
    "volumeName": "cdiworkspace",
    "storageClassName": "standard",
    "volumeMode": "Block"
  },
  "status": {
    "phase": "Bound",
    "accessModes": [
      "ReadWriteOnce"
    ],
    "capacity": {
      "storage": "10Gi"
    }
  }
}

Result of journalctl -xe _SYSTEMD_UNIT=kubelet.service

Jul 01 09:47:26 rehan-B85M-HD3 kubelet[22759]: E0701 09:47:26.979098   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:47:40 rehan-B85M-HD3 kubelet[22759]: E0701 09:47:40.979722   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:47:55 rehan-B85M-HD3 kubelet[22759]: E0701 09:47:55.978806   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:48:08 rehan-B85M-HD3 kubelet[22759]: E0701 09:48:08.979375   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:48:23 rehan-B85M-HD3 kubelet[22759]: E0701 09:48:23.979463   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:48:37 rehan-B85M-HD3 kubelet[22759]: E0701 09:48:37.979005   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:48:48 rehan-B85M-HD3 kubelet[22759]: E0701 09:48:48.977686   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:02 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:02.979125   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:17 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:17.979408   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:28 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:28.977499   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:41 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:41.977771   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:49:53 rehan-B85M-HD3 kubelet[22759]: E0701 09:49:53.978605   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:05 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:05.980251   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:16 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:16.979292   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:31 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:31.978346   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:42 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:42.979302   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:50:55 rehan-B85M-HD3 kubelet[22759]: E0701 09:50:55.978043   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:51:08 rehan-B85M-HD3 kubelet[22759]: E0701 09:51:08.977540   22759 pod_workers.go:190] Error syncing pod 6577b694-f18d-4d7b-9a75-82dc17c908ca ("myplanet-d976447c6-dsfx9_default(6577b694-f18d-4d7
Jul 01 09:51:24 rehan-B85M-HD3 kubelet[22759]: E0701 09:51:24.190929   22759 remote_image.go:113] PullImage "friendly/myplanet:0.0.1-SNAPSHOT" from image service failed: rpc error: code = Unknown desc = E
Jul 01 09:51:24 rehan-B85M-HD3 kubelet[22759]: E0701 09:51:24.190971   22759 kuberuntime_image.go:51] Pull image "friendly/myplanet:0.0.1-SNAPSHOT" failed: rpc error: code = Unknown desc = Error response 
Jul 01 09:51:24 rehan-B85M-HD3 kubelet[22759]: E0701 09:51:24.191024   22759 kuberuntime_manager.go:775] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon:

Deployment Yaml

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: back
  template:
    metadata:
      labels:
        app: back
    spec:
      containers:
      - name: back
        image: back:latest
        ports:
        - containerPort: 8081
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: back
      volumes:
      - name: back
        hostPath:
          # directory location on host
          path: /back
          # this field is optional
          type: Directory

Dockerfile

FROM python:3.7-stretch

COPY . /code

WORKDIR /code

CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"

RUN pip install -r requirements.txt

ENTRYPOINT ["python", "ingestion.py"]

pyython file1

import os
import shutil
import logging

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(name)s - %(message)s')
logger = logging.getLogger("ingestion")

import requests

import datahub

scihub_username = os.environ["scihub_username"]
scihub_password = os.environ["scihub_password"]
result_url = "http://" + os.environ["CDINRW_BASE_URL"] + "/jobs/" + os.environ["CDINRW_JOB_ID"] + "/results"

logger.info("Searching the Copernicus Open Access Hub")
scenes = datahub.search(username=scihub_username,
                        password=scihub_password,
                        producttype=os.getenv("producttype"),
                        platformname=os.getenv("platformname"),
                        days_back=os.getenv("days_back", 2),
                        footprint=os.getenv("footprint"),
                        max_cloud_cover_percentage=os.getenv("max_cloud_cover_percentage"),
                        start_date = os.getenv("start_date"),
                        end_date = os.getenv("end_date"))

logger.info("Found {} relevant scenes".format(len(scenes)))

job_results = []
for scene in scenes:
    # do not donwload a scene that has already been ingested
    if os.path.exists(os.path.join("/out_data", scene["title"]+".SAFE")):
        logger.info("The scene {} already exists in /out_data and will not be downloaded again.".format(scene["title"]))
        filename = scene["title"]+".SAFE"
    else:
        logger.info("Starting the download of scene {}".format(scene["title"]))
        filename = datahub.download(scene, "/tmp", scihub_username, scihub_password, unpack=True)
        logger.info("The download was successful.")
        shutil.move(filename, "/out_data")
    result_message = {"description": "test",
                      "type": "Raster",
                      "format": "SAFE",
                      "filename": os.path.basename(filename)}
    job_results.append(result_message)

res = requests.put(result_url, json=job_results, timeout=60)
res.raise_for_status()

**python file 2 **

import logging
import os
import urllib.parse
import zipfile

import requests

# constructing URLs for querying the data hub
_BASE_URL = "https://scihub.copernicus.eu/dhus/"
SITE = {}
SITE["SEARCH"] = _BASE_URL + "search?format=xml&sortedby=beginposition&order=desc&rows=100&start={offset}&q="
_PRODUCT_URL = _BASE_URL + "odata/v1/Products('{uuid}')/"
SITE["CHECKSUM"] = _PRODUCT_URL + "Checksum/Value/$value"
SITE["SAFEZIP"] = _PRODUCT_URL + "$value"

logger = logging.getLogger(__name__)

def _build_search_url(producttype=None, platformname=None, days_back=2, footprint=None, max_cloud_cover_percentage=None, start_date=None, end_date=None):
    search_terms = []
    if producttype:
        search_terms.append("producttype:{}".format(producttype))
    if platformname:
        search_terms.append("platformname:{}".format(platformname))
    if start_date and end_date:
        search_terms.append(
            "beginPosition:[{}+TO+{}]".format(start_date, end_date))
    elif days_back:
        search_terms.append(
            "beginPosition:[NOW-{}DAYS+TO+NOW]".format(days_back))
    if footprint:
        search_terms.append("footprint:%22Intersects({})%22".format(
            footprint.replace(" ", "+")))
    if max_cloud_cover_percentage:
        search_terms.append("cloudcoverpercentage:[0+TO+{}]".format(max_cloud_cover_percentage))
    url = SITE["SEARCH"] + "+AND+".join(search_terms)
    return url


def _unpack(zip_file, directory, remove_after=False):
    with zipfile.ZipFile(zip_file) as zf:
        # This assumes that the zipfile only contains the .SAFE directory at root level
        safe_path = zf.namelist()[0]
        zf.extractall(path=directory)
    if remove_after:
        os.remove(zip_file)
    return os.path.normpath(os.path.join(directory, safe_path))


def search(username, password, producttype=None, platformname=None ,days_back=2, footprint=None, max_cloud_cover_percentage=None, start_date=None, end_date=None):
    """ Search the Copernicus SciHub

    Parameters
    ----------
    username : str
      user name for the Copernicus SciHub
    password : str
      password for the Copernicus SciHub
    producttype : str, optional
      product type to filter for in the query (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    platformname : str, optional 
      plattform name to filter for in the query (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    days_back : int, optional
      number of days before today that will be searched. Default are the last 2 days. If start and end date are set the days_back parameter is ignored
    footprint : str, optional
      well-known-text representation of the footprint
    max_cloud_cover_percentage: str, optional
      percentage of cloud cover per scene. Can only be used in combination with Sentinel-2 imagery. 
      (see https://scihub.copernicus.eu/userguide/FullTextSearch#Search_Keywords for allowed values)
    start_date: str, optional
        start point of the search extent has to be used in combination with end_date
    end_date: str, optional
        end_point of the search extent has to be used in combination with start_date

    Returns
    -------
    list
      a list of scenes that match the search parameters
    """

    import xml.etree.cElementTree as ET
    scenes = []
    search_url = _build_search_url(producttype, platformname, days_back, footprint, max_cloud_cover_percentage, start_date, end_date)
    logger.info("Search URL: {}".format(search_url))
    offset = 0
    rowsBreak = 5000
    name_space = {"atom": "http://www.w3.org/2005/Atom",
                  "opensearch": "http://a9.com/-/spec/opensearch/1.1/"}
    while offset < rowsBreak:  # Next pagination page:
        response = requests.get(search_url.format(offset=offset), auth=(username, password))
        root = ET.fromstring(response.content)
        if offset == 0:
            rowsBreak = int(
                root.find("opensearch:totalResults", name_space).text)
        for e in root.iterfind("atom:entry", name_space):
            uuid = e.find("atom:id", name_space).text
            title = e.find("atom:title", name_space).text
            begin_position = e.find(
                "atom:date[@name='beginposition']", name_space).text
            end_position = e.find(
                "atom:date[@name='endposition']", name_space).text
            footprint = e.find("atom:str[@name='footprint']", name_space).text
            scenes.append({
                "id": uuid,
                "title": title,
                "begin_position": begin_position,
                "end_position": end_position,
                "footprint": footprint})
        # DHuS limits each page to 100 rows, so advance the offset by that page size.
        offset += 100
    return scenes
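The namespace-aware `ElementTree` lookups in the loop above are easy to get wrong, so here is a minimal self-contained sketch of the same parsing pattern against a fabricated Atom snippet (the real SciHub response contains many more fields per entry):

```python
import xml.etree.ElementTree as ET

# Fabricated minimal Atom feed, for illustration only.
FEED = """<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">
  <opensearch:totalResults>1</opensearch:totalResults>
  <entry>
    <id>uuid-1234</id>
    <title>S2A_MSIL1C_example</title>
    <str name="footprint">POLYGON((0 0,1 0,1 1,0 0))</str>
  </entry>
</feed>"""

# The prefixes ("atom", "opensearch") are local aliases; only the URIs
# must match the namespaces declared in the feed.
name_space = {"atom": "http://www.w3.org/2005/Atom",
              "opensearch": "http://a9.com/-/spec/opensearch/1.1/"}

root = ET.fromstring(FEED)
total = int(root.find("opensearch:totalResults", name_space).text)
entries = [{"id": e.find("atom:id", name_space).text,
            "title": e.find("atom:title", name_space).text,
            # [@name='...'] selects the child element by attribute value.
            "footprint": e.find("atom:str[@name='footprint']", name_space).text}
           for e in root.iterfind("atom:entry", name_space)]
```

Note that unprefixed elements like `<entry>` fall under the feed's default namespace, which is why they must still be queried with the `atom:` alias.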


def download(scene, directory, username, password, unpack=True):
    """ Download a Sentinel scene based on its uuid

    Parameters
    ----------
    scene : dict
        the scene to be downloaded
    directory : str
        the directory the file will be downloaded to
    username : str
        username for the Copernicus SciHub
    password : str
        password for the Copernicus SciHub
    unpack : boolean, optional
        whether the downloaded product should be unpacked after download. Defaults to True

    Raises
    ------
    ValueError
        if the size of the downloaded file does not match the Content-Length header
    ValueError
        if the checksum of the downloaded file does not match the checksum provided by the Copernicus SciHub

    Returns
    -------
    str
        path to the downloaded file
    """

    import hashlib
    md5hash = hashlib.md5()
    md5sum = requests.get(SITE["CHECKSUM"].format(
        uuid=scene["id"]), auth=(username, password)).text

    download_path = os.path.join(directory, scene["title"] + ".zip")
    # overwrite if path already exists
    if os.path.exists(download_path):
        os.remove(download_path)
    url = SITE["SAFEZIP"].format(uuid=scene["id"])
    rsp = requests.get(url, auth=(username, password), stream=True)
    cl = rsp.headers.get("Content-Length")
    size = int(cl) if cl else -1
    # Stream the response to disk in chunks instead of reading the whole
    # archive into memory; hash and count bytes in the same loop.
    with open(download_path, "wb") as f:
        written = 0
        for block in rsp.iter_content(8192):
            f.write(block)
            written += len(block)
            md5hash.update(block)
    if size > -1 and written != size:
        raise ValueError("{}: size mismatch, {} bytes written but expected {} bytes to write!".format(
            download_path, written, size))
    if md5sum:
        calculated = md5hash.hexdigest()
        expected = md5sum.lower()
        if calculated != expected:
            raise ValueError("{}: MD5 mismatch, calculated {} but expected {}!".format(
                download_path, calculated, expected))
    if unpack:
        return _unpack(download_path, directory, remove_after=False)
    else:
        return download_path
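The size and checksum verification in `download` can be exercised without the network. The following is a sketch with a hypothetical helper (`write_and_hash` is not part of the module above) that mirrors the same chunked write-count-hash loop against an in-memory stream:

```python
import hashlib
import io
import os
import tempfile

def write_and_hash(stream, path, chunk_size=8192):
    """Write a stream to disk in chunks while hashing it.

    Hypothetical helper mirroring download(): the file is never held in
    memory as a whole, and the byte count and MD5 digest come out of the
    same loop, so no second os.path.getsize() pass is needed.
    """
    md5 = hashlib.md5()
    written = 0
    with open(path, "wb") as f:
        for block in iter(lambda: stream.read(chunk_size), b""):
            f.write(block)
            written += len(block)
            md5.update(block)
    return written, md5.hexdigest()

payload = b"example payload" * 1000
tmp_path = os.path.join(tempfile.mkdtemp(), "scene.zip")
written, digest = write_and_hash(io.BytesIO(payload), tmp_path)
assert written == len(payload)
assert digest == hashlib.md5(payload).hexdigest()
```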

How can I mount the volume properly and automatically onto the pod? I do not want to create the pods manually for each REST service and assign volumes to them.

Solution

I went through the logs of the pod again and realized that the parameters required by the Python file were not being provided, which was causing the container to crash. I tested this by supplying all the missing parameters pointed out in the logs through the deployment.yaml for the pod, which now looked like this:

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: back
  template:
    metadata:
      labels:
        app: back
    spec:
      containers:
      - name: back
        image: back:latest
        imagePullPolicy: Never
        env:
        - name: scihub_username
          value: test
        - name: scihub_password
          value: test
        - name: CDINRW_BASE_URL
          value: 10.1.40.11:8081/swagger-ui.html
        - name: CDINRW_JOB_ID
          value: 3fa85f64-5717-4562-b3fc-2c963f66afa6
        ports:
        - containerPort: 8081
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: test-volume
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /back
          # this field is optional
          type: Directory

This started downloading the data and solved the problem for now. However, this is not how I want it to run: it should be triggered through a REST API that provides all the parameters and starts and stops the container. I'll create a separate question for that and link it below for anyone to follow.
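For reference, binding the pod to the manually created claim instead of a `hostPath` would only change the `volumes` section of the deployment. This is a sketch assuming the `cdiworkspace` PVC from the question is Bound; note that the PV in the question declares `volumeMode: Block`, which cannot be mounted as a directory via `volumeMounts` and would instead have to be attached through `volumeDevices`:

```yaml
      volumes:
      - name: test-volume
        persistentVolumeClaim:
          claimName: cdiworkspace
```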
