Cron Jobs in Kubernetes - connect to existing Pod, execute script


Question


I'm certain I'm missing something obvious. I have looked through the documentation for ScheduledJobs / CronJobs on Kubernetes, but I cannot find a way to do the following on a schedule:

  1. Connect to an existing Pod
  2. Execute a script
  3. Disconnect


I have alternative methods of doing this, but they don't feel right.


  1. Schedule a cron task for: kubectl exec -it $(kubectl get pods --selector=some-selector | head -1) /path/to/script


  2. Create one deployment that has a "Cron Pod" which also houses the application, and many "Non Cron Pods" which are just the application. The Cron Pod would use a different image (one with cron tasks scheduled).
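As an aside, the first workaround would need small adjustments to actually run under cron; a hedged sketch of a crontab entry (assuming `kubectl` is installed and configured on the cron host, with the selector, schedule, and script path as placeholders):

```shell
# Hypothetical crontab entry, running hourly.
# `-o jsonpath` avoids the header row that `kubectl get pods | head -1` would capture,
# and `-it` is dropped because cron provides no TTY to attach.
0 * * * * kubectl exec "$(kubectl get pods -l some-selector -o jsonpath='{.items[0].metadata.name}')" -- /path/to/script
```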


I would prefer to use the Kubernetes ScheduledJobs if possible to prevent the same Job running multiple times at once and also because it strikes me as the more appropriate way of doing it.


Is there a way to do this by ScheduledJobs / CronJobs?

http://kubernetes.io/docs/user-guide/cron-jobs/

Answer


As far as I'm aware there is no "official" way to do this the way you want, and I believe that is by design. Pods are supposed to be ephemeral and horizontally scalable, and Jobs are designed to exit. Having a cron job "attach" to an existing pod doesn't fit that model. The scheduler would have no idea whether the job completed.


Instead, a Job can bring up an instance of your application specifically for running the Job and then take it down once the Job is complete. To do this you can use the same image for the Job as for your Deployment, but use a different "entrypoint" by setting command:.
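To actually run such a Job on a schedule, it can be wrapped in a CronJob (the successor to ScheduledJob). A minimal sketch, using the same placeholder names as the manifests below and assuming the `batch/v1` CronJob API available in current Kubernetes versions; `concurrencyPolicy: Forbid` prevents overlapping runs, which addresses the concern about the same Job running multiple times at once:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: APP-CRON-JOB        # placeholder name
spec:
  schedule: "0 * * * *"     # example: run hourly
  concurrencyPolicy: Forbid # skip a run if the previous one is still going
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - image: APP:IMAGE
            name: APP-JOB
            command:
            - app-job
          restartPolicy: Never
```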


If the job needs access to data created by your application then that data will need to be persisted outside the application/Pod. You could do this a few ways, but the obvious ways would be a database or a persistent volume. For example, using a database would look something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: APP
spec:
  selector:
    matchLabels:
      app: THAT
  template:
    metadata:
      labels:
        name: THIS
        app: THAT
    spec:
      containers:
        - image: APP:IMAGE
          name: APP
          command:
          - app-start
          env:
            - name: DB_HOST
              value: "127.0.0.1"
            - name: DB_DATABASE
              value: "app_db"


And a Job that connects to the same database, but with a different "entrypoint":

apiVersion: batch/v1
kind: Job
metadata:
  name: APP-JOB
spec:
  template:
    metadata:
      name: APP-JOB
      labels:
        app: THAT
    spec:
      containers:
      - image: APP:IMAGE
        name: APP-JOB
        command:
        - app-job
        env:
          - name: DB_HOST
            value: "127.0.0.1"
          - name: DB_DATABASE
            value: "app_db"
      restartPolicy: Never


Or the persistent volume approach would look something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: APP
spec:
  selector:
    matchLabels:
      app: THAT
  template:
    metadata:
      labels:
        name: THIS
        app: THAT
    spec:
      containers:
        - image: APP:IMAGE
          name: APP
          command:
          - app-start
          volumeMounts:
          - mountPath: "/var/www/html"
            name: APP-VOLUME
      volumes:
        - name: APP-VOLUME
          persistentVolumeClaim:
            claimName: APP-CLAIM

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: APP-VOLUME
  labels:
    service: app   # label so the claim's selector below can match this volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: NFS-SERVER   # placeholder: address of your NFS server (required)
    path: /app

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: APP-CLAIM
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      service: app


With a job like this, attaching to the same volume:

apiVersion: batch/v1
kind: Job
metadata:
  name: APP-JOB
spec:
  template:
    metadata:
      name: APP-JOB
      labels:
        app: THAT
    spec:
      containers:
      - image: APP:IMAGE
        name: APP-JOB
        command:
        - app-job
        volumeMounts:
        - mountPath: "/var/www/html"
          name: APP-VOLUME
      restartPolicy: Never
      volumes:
        - name: APP-VOLUME
          persistentVolumeClaim:
            claimName: APP-CLAIM

