Locust.io: Controlling the requests per second parameter

Problem description

I have been trying to load test my API server using Locust.io on EC2 compute-optimized instances. Locust provides an easy-to-configure option for setting the wait time between consecutive requests and the number of concurrent users. In theory, rps = #_users / wait_time. However, while testing, this relationship breaks down at a fairly low threshold of #_users (around 1,200 users in my experiment). The variables hatch_rate and #_of_slaves, including in a distributed test setting, had little to no effect on the RPS.
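As a sanity check on that model, here is an illustrative back-of-the-envelope calculation using the figures quoted in this question:

```python
# Throughput predicted by rps = #_users / wait_time, using the numbers
# from the experiment below: 450 ms average wait, ~1200 users at the cap.
# The 6 ms mean response time is small enough to ignore here.
users = 1200
wait_time_s = 0.450

expected_rps = users / wait_time_s
print(f"expected RPS: {expected_rps:.0f}")  # ~2667 requests/second
```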

Experiment info

The test was done on a C3.4x AWS EC2 compute node (AMI image) with 16 vCPUs, general-purpose SSD storage, and 30 GB of RAM. During the test, CPU utilization peaked at 60% (depending on the hatch rate, which controls the number of concurrent processes spawned), staying under 30% on average.

Locust.io

Setup: uses pyzmq, with each vCPU core set up as a slave. A single POST request, with a request body of ~20 bytes and a response body of ~25 bytes. Request failure rate: < 1%, with a mean response time of 6 ms.

Variables: time between consecutive requests set to 450 ms (min: 100 ms, max: 1000 ms), hatch rate at a comfortable 30 per second, and RPS measured by varying #_users.
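A minimal locustfile matching this setup might look like the following. This is a sketch against the pre-1.0 Locust API (HttpLocust, min_wait/max_wait), which is what the slave and hatch-rate terminology here implies; the endpoint path and payload are placeholders, not taken from the question.

```python
from locust import HttpLocust, TaskSet, task

class ApiBehavior(TaskSet):
    @task
    def post_event(self):
        # Single POST with a ~20-byte request body, as described above.
        # "/api/event" is a placeholder endpoint.
        self.client.post("/api/event", data="x" * 20)

class ApiUser(HttpLocust):
    task_set = ApiBehavior
    min_wait = 100   # minimum wait between consecutive requests, in ms
    max_wait = 1000  # maximum wait between consecutive requests, in ms
```

With one slave per vCPU core, this would be started as one `locust -f locustfile.py --master` process plus sixteen `locust -f locustfile.py --slave` processes.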

The RPS follows the predicted equation for up to 1,000 users. Increasing #_users beyond that yields diminishing returns, with a cap reached at roughly 1,200 users. #_users isn't the only independent variable here; changing the wait time affects the RPS as well. However, changing the experiment setup to a 32-core instance (c3.8x) or to 56 cores (in a distributed setup) doesn't affect the RPS at all.

So really, what is the way to control the RPS? Is there something obvious I am missing here?

Answer

(One of the Locust authors here)

First, why do you want to control the RPS? One of the core ideas behind Locust is to describe user behavior and let that generate load (requests, in your case). The question Locust is designed to answer is: how many concurrent users can my application support?

I know it is tempting to go after a certain RPS number, and sometimes I "cheat" as well by striving for an arbitrary RPS number.

But to answer your question: are you sure your Locusts don't end up in a deadlock? That is, that they complete a certain number of requests and then become idle because they have no other task to perform? It's hard to tell what's happening without seeing the test code.
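For illustration, here is a contrived task (hypothetical, not taken from the question) that would produce exactly that symptom: the simulated user fires a bounded burst of requests and then blocks, never yielding back to the task scheduler.

```python
import gevent
from locust import HttpLocust, TaskSet, task

class StallingBehavior(TaskSet):
    @task
    def post_then_stall(self):
        # Complete a fixed number of requests...
        for _ in range(100):
            self.client.post("/api/event", data="x" * 20)  # placeholder endpoint
        # ...then block forever. This simulated user stops contributing
        # to the RPS while still counting toward #_users.
        while True:
            gevent.sleep(60)

class StalledUser(HttpLocust):
    task_set = StallingBehavior
    min_wait = 100
    max_wait = 1000
```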

Distributed mode is recommended for larger production setups, and most real-world load tests I've run have been on multiple but smaller instances. But it shouldn't matter if you aren't maxing out the CPU. Are you sure you aren't saturating a single CPU core? I'm not sure what OS you're running, but if it's Linux, what is your load value?
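If the load value isn't handy, the snippet below is a quick way to check it (a sketch assuming a Linux or other Unix host, since `os.getloadavg()` is not available on Windows). Keep in mind that an aggregate load well below the core count can still hide one pegged core, so it's worth watching per-core utilization too, e.g. with `top` (press 1) or `mpstat -P ALL`.

```python
import multiprocessing
import os

# 1-minute load average relative to core count: a value near or above
# 1.0 per core means the machine as a whole is saturated.
load_1m, load_5m, load_15m = os.getloadavg()
cores = multiprocessing.cpu_count()
print(f"1-min load: {load_1m:.2f} on {cores} cores "
      f"({load_1m / cores:.0%} of aggregate capacity)")
```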
