How to run the same job multiple times in parallel with Jenkins?


Question

I'm testing Jenkins to see if it will fit our build and testing framework. Jenkins and its available plugins fit most of our needs, except that I can't seem to find help on how to do one particular type of task.

We are creating applications for embedded devices. We have hundreds of tests that need to be run on these devices. If we run all the tests on one device after a build, it will take several hours to get the results. However, if we run the tests on 100 devices in parallel, we can get results in a much shorter time.

All the tests have a very similar starting point: a test script is called with the IP address of the device to run the tests on and a user name/password. The script performs the necessary tests on the device and reports back pass/fail for each test item.

I think the long/painful way of doing this is to write 100 jobs in Jenkins, each running a different test script directly (with the above parameters), and run them in parallel using the available plugins. However, maintaining all these jobs would be very difficult in the long run.

So a better way would be to create a job (let's call it child_tester) that takes parameters such as the test script name, the device's IP address, user name/password, etc. Then use another job (let's call it mother_tester) to call the child_tester job 100 times with different IP addresses and run those builds in parallel. I would need some way of accumulating the test results from each individual run of child_tester and reporting them back to mother_tester.
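The mother/child arrangement described above could be sketched in Build Flow DSL. This is only an illustration: the job name `child_tester`, the parameter names, and the IP addresses are placeholders taken from the question, not an existing configuration.

```groovy
// Sketch only: "child_tester", TEST_SCRIPT, and DEVICE_IP are hypothetical
// names from the question, not a working setup.
def deviceIps = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  // ...up to 100 device IPs

parallel(
    deviceIps.collect { ip ->
        // one closure per device; each schedules its own child_tester build
        return { build("child_tester", TEST_SCRIPT: "device_suite", DEVICE_IP: ip) }
    }
)
```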

My question is: is there a plugin, or any other way, to accomplish this in Jenkins? I have looked into the plugins called "Build Flow", "Parallel Test Executor", and "Parameterized Trigger", but they don't seem to fit my needs.

Answer

I understand you've looked into the Build Flow plugin, but I'm not sure why you dismissed it. Perhaps you can point out the holes in my proposal.

Assuming you have enough executors in your system to run jobs in parallel, I think the Build Flow plugin and the Build Flow Test Aggregator plugin can do what you want.

  • The Build Flow plugin supports running jobs in parallel. I don't see any reason why Build Flow could not schedule your "child" job to run in parallel with different parameters.

  • The Build Flow Test Aggregator grabs test results from the scheduled builds of a Build Flow job, so your "child" job will need to publish its own test results.

  • You will need to configure your "child" job so that it can run in parallel, by checking "Execute concurrent builds if necessary" in the job configuration.

  • Whatever set of slaves provides the connection to the embedded devices will need enough executors to run your jobs in parallel.

Update: with this simple Build Flow definition:

parallel (
  { build("dbacher flow child", VALUE: 1) },
  { build("dbacher flow child", VALUE: 2) },
  { build("dbacher flow child", VALUE: 3) },
  { build("dbacher flow child", VALUE: 4) }
)

I get the output:

parallel {
    Schedule job dbacher flow child
    Schedule job dbacher flow child
    Schedule job dbacher flow child
    Schedule job dbacher flow child
    Build dbacher flow child #5 started
    Build dbacher flow child #6 started
    Build dbacher flow child #7 started
    Build dbacher flow child #8 started
    dbacher flow child #6 completed 
    dbacher flow child #7 completed 
    dbacher flow child #5 completed 
    dbacher flow child #8 completed 
}

The job history shows that all four jobs were scheduled within seconds of each other. But the job's build step contains an artificial delay (sleep) that would prevent any single build from completing that quickly.

Update 2: Here is an example of generating the list of parallel tasks dynamically from another data structure:

// create a closure for the deploy job for each server 
def paramValues = (1..4)
def testJobs = [] 
for (param in paramValues) { 
  def jobParams = [VALUE: param] 
  def testJob = { 
    // call build 
    build(jobParams, "dbacher flow child") 
  } 
  println jobParams
  testJobs.add(testJob) 
} 

parallel(testJobs)

The list passed to parallel is a list of closures that call build with unique parameters. I had to make sure to define the job parameters outside of the closure body to ensure the builds would be scheduled separately.
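The loop above can also be written with collect, which builds the closures in one expression and naturally gives each iteration its own copy of the parameters. This is a sketch equivalent to the loop version, using the same job name:

```groovy
// Equivalent sketch using collect: each iteration gets its own
// jobParams binding, so each closure schedules a separate build.
def testJobs = (1..4).collect { value ->
    def jobParams = [VALUE: value]
    return { build(jobParams, "dbacher flow child") }
}

parallel(testJobs)
```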

I got the syntax for building the list dynamically from another answer.
