Cron Jobs in Kubernetes - connect to existing Pod, execute script


Problem Description

I'm certain I'm missing something obvious. I have looked through the documentation for ScheduledJobs / CronJobs on Kubernetes, but I cannot find a way to do the following on a schedule:

  1. Connect to an existing Pod
  2. Execute a script
  3. Disconnect

I have alternative methods of doing this, but they don't feel right.

  1. Schedule a cron task for: kubectl exec -it $(kubectl get pods --selector=some-selector | head -1) /path/to/script

  2. Create one deployment that has a "Cron Pod" which also houses the application, and many "Non Cron Pods" which are just the application. The Cron Pod would use a different image (one with cron tasks scheduled).
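For reference, the first alternative could be sketched as a crontab entry on a host where kubectl is configured against the cluster (the selector and script path are the question's placeholders, and the schedule here is purely illustrative; note that the -t flag is dropped, since cron runs without a TTY):

```shell
# Hypothetical crontab entry; assumes kubectl is installed and configured.
# -t (TTY allocation) is dropped because cron provides no terminal.
0 2 * * * kubectl exec $(kubectl get pods --selector=some-selector -o name | head -1) -- /path/to/script
```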

I would prefer to use the Kubernetes ScheduledJobs if possible to prevent the same Job running multiple times at once and also because it strikes me as the more appropriate way of doing it.

Is there a way to do this by ScheduledJobs / CronJobs?

http://kubernetes.io/docs/user-guide/cron-jobs/

Recommended Answer

As far as I'm aware there is no "official" way to do this the way you want, and I believe that is by design. Pods are supposed to be ephemeral and horizontally scalable, and Jobs are designed to exit. Having a cron job "attach" to an existing pod doesn't fit that model. The scheduler would have no idea whether the job completed.

Instead, a Job can bring up an instance of your application specifically for running the Job, and then take it down once the Job is complete. To do this you can use the same image for the Job as for your Deployment, but use a different "Entrypoint" by setting command:.

If the job needs access to data created by your application, then that data will need to be persisted outside the application/Pod. You could do this a few ways, but the obvious ones would be a database or a persistent volume. For example, using a database would look something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: APP
spec:
  selector:
    matchLabels:
      app: THAT
  template:
    metadata:
      labels:
        name: THIS
        app: THAT
    spec:
      containers:
        - image: APP:IMAGE
          name: APP
          command:
          - app-start
          env:
            - name: DB_HOST
              value: "127.0.0.1"
            - name: DB_DATABASE
              value: "app_db"

And a Job that connects to the same database, but with a different "Entrypoint":

apiVersion: batch/v1
kind: Job
metadata:
  name: APP-JOB
spec:
  template:
    metadata:
      name: APP-JOB
      labels:
        app: THAT
    spec:
      restartPolicy: Never
      containers:
      - image: APP:IMAGE
        name: APP-JOB
        command:
        - app-job
        env:
          - name: DB_HOST
            value: "127.0.0.1"
          - name: DB_DATABASE
            value: "app_db"
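Since the question is specifically about scheduling, the Job above can itself be wrapped in a CronJob (ScheduledJob on older clusters). A minimal sketch, reusing the same hypothetical image and command; the schedule shown is an assumption for illustration:

```yaml
apiVersion: batch/v1            # batch/v1beta1 on clusters older than 1.21
kind: CronJob
metadata:
  name: APP-CRON
spec:
  schedule: "0 2 * * *"         # hypothetical schedule: daily at 02:00
  concurrencyPolicy: Forbid     # never run the same job multiple times at once
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - image: APP:IMAGE
            name: APP-JOB
            command:
            - app-job
            env:
            - name: DB_HOST
              value: "127.0.0.1"
            - name: DB_DATABASE
              value: "app_db"
```

Here concurrencyPolicy: Forbid addresses the requirement of preventing the same Job from running multiple times at once.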

Or the persistent volume approach would look something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: APP
spec:
  selector:
    matchLabels:
      app: THAT
  template:
    metadata:
      labels:
        name: THIS
        app: THAT
    spec:
      containers:
        - image: APP:IMAGE
          name: APP
          command:
          - app-start
          volumeMounts:
          - mountPath: "/var/www/html"
            name: APP-VOLUME
      volumes:
        - name: APP-VOLUME
          persistentVolumeClaim:
            claimName: APP-CLAIM

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: APP-VOLUME
  labels:
    service: app
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: NFS-SERVER   # placeholder NFS server address; required field
    path: /app

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: APP-CLAIM
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      service: app

With a job like this, attaching to the same volume:

apiVersion: batch/v1
kind: Job
metadata:
  name: APP-JOB
spec:
  template:
    metadata:
      name: APP-JOB
      labels:
        app: THAT
    spec:
      restartPolicy: Never
      containers:
      - image: APP:IMAGE
        name: APP-JOB
        command:
        - app-job
        volumeMounts:
        - mountPath: "/var/www/html"
          name: APP-VOLUME
      volumes:
        - name: APP-VOLUME
          persistentVolumeClaim:
            claimName: APP-CLAIM
