Share data between Gatling scenarios


Question

I have a scenario that, with the help of a CSV file containing usernames and passwords, obtains session IDs and saves them with saveAs.

I want to be able to use those session IDs in a following scenario that performs a few actions which need them. In addition, I would also like to correlate the session IDs with their usernames.

So essentially, I am trying to run the login operations (obtaining the session IDs) sequentially, before the rest of the operations. Is that possible in Gatling? If so, how do I pass data between scenarios?

Answer

I realize this question is old, but I came across it while researching a similar problem of my own, and thought I would share the solution I reached in case others run into similar issues. My situation was not entirely the same, but the core of my problem was passing data between two scenarios running in parallel, so I hope the answer may have some value for others in the future, even though it technically only answers half of the original question.

The following setUp shows the general idea of how the two scenarios run together, where the second scenario starts with a delay to ensure that data has already been generated in scenario 1:

setUp(
  scenario1.inject(constantUsersPerSecond(0.5) during (10 minutes)),
  scenario2.inject(nothingFor(3 minutes), constantUsersPerSecond(0.1) during (7 minutes))
).protocols(httpProtocol)
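As a sanity check on these profiles (the rates and durations are taken directly from the setUp above): scenario 1 produces far more iterations than scenario 2 consumes, which keeps the shared data from running dry. A small sketch of the arithmetic:

```scala
object InjectionMath {
  // constantUsersPerSecond(rate) during (d) injects rate * d-in-seconds users in total
  val scenario1Users = 0.5 * 10 * 60 // 300 producing users over 10 minutes
  val scenario2Users = 0.1 * 7 * 60  // 42 consuming users over the final 7 minutes
}
```

With a 3-minute head start and roughly 7 producers per consumer, scenario 2 should rarely find the shared data store empty.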

Simply merging the two scenarios would have been possible, but I kept them defined in two separate classes, both because of their size (each consisted of a long chain of exec steps) and because they needed to run in parallel with different injection profiles. The data needed in scenario 2 is generated in scenario 1 and stored in its session.

In order to pass data from one scenario to the other, I created an object that does absolutely nothing besides holding a single LinkedBlockingDeque. I settled on this collection type to hopefully avoid any concurrency issues when running tests under high load.

import java.util.concurrent.LinkedBlockingDeque

object DataDequeHolder {
  val DataDeque = new LinkedBlockingDeque[String]()
}
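For reference, LinkedBlockingDeque (from java.util.concurrent) is a thread-safe double-ended queue: offerLast appends at the tail and takeFirst removes from the head, blocking while the deque is empty. A minimal plain-Scala sketch of this hand-off, independent of Gatling (the names here are illustrative):

```scala
import java.util.concurrent.LinkedBlockingDeque

object DequeSketch {
  val deque = new LinkedBlockingDeque[String]()

  // Producer side (scenario 1): append one value per completed iteration.
  def produce(sessionId: String): Unit = deque.offerLast(sessionId)

  // Consumer side (scenario 2): take from the head in FIFO order;
  // takeFirst blocks until a value is available.
  def consume(): String = deque.takeFirst()
}
```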

In scenario 1, I then saved values to this deque at the end of each successful loop through the scenario:

val saveData = exec { session =>
  // Push the generated value onto the shared deque, then return the
  // session unchanged (an exec function must hand the session back).
  DataDequeHolder.DataDeque.offerLast(session("data").as[String])
  session
}

val scenario1 = scenario("Scenario 1")
.exec(
  step1,
  step2,
  .....
  stepX,
  saveData
)

And finally, in scenario 2, I created a custom feeder that retrieves the data from the LinkedBlockingDeque, and used this feeder as you would any other feeder:

class DataFeeder extends Feeder[String] {
  // Feeder[String] is Gatling's alias for Iterator[Map[String, String]]
  override def hasNext: Boolean = !DataDequeHolder.DataDeque.isEmpty
  // takeFirst removes from the head of the deque, blocking if it is empty
  override def next(): Map[String, String] = Map("data" -> DataDequeHolder.DataDeque.takeFirst())
}
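Since Feeder[T] in Gatling's Scala DSL is just a type alias for Iterator[Map[String, T]], the feeder can be sketched and exercised with plain Scala (DataDeque below stands in for the shared holder object):

```scala
import java.util.concurrent.LinkedBlockingDeque

object FeederSketch {
  val DataDeque = new LinkedBlockingDeque[String]()

  // Equivalent of the custom feeder: hand out one Map per virtual user.
  val feeder: Iterator[Map[String, String]] = new Iterator[Map[String, String]] {
    def hasNext: Boolean = !DataDeque.isEmpty
    def next(): Map[String, String] = Map("data" -> DataDeque.takeFirst())
  }
}
```

Note that hasNext and next are not checked atomically as a pair, but since takeFirst blocks rather than fails on an empty deque, a race between two consumers only delays one of them. A feeder whose hasNext returns false makes Gatling stop injecting, which is why the 3-minute delay in the setUp matters.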

val scenario2 = scenario("Scenario 2")
.feed(new DataFeeder())
.exec(
  step1,
  step2,
  .....
  stepX
)

This has so far proved to be a reliable way to pass the data between the two scenarios without running into concurrency issues. It is worth noting, however, that I have not run this under high load, as my backend runs some very heavy operations and is not intended to serve thousands of concurrent users. I do not know how well this would work for systems under high load.
