Locust.io: Controlling the requests per second parameter

Problem description

I have been trying to load test my API server using Locust.io on EC2 compute optimized instances. It provides an easy-to-configure option for setting the consecutive request wait time and the number of concurrent users. In theory, rps = #_users / wait_time. However, while testing, this rule breaks down above a fairly low threshold of #_users (in my experiment, around 1200 users). The variables hatch_rate and #_of_slaves (including in a distributed test setting) had little to no effect on the rps.
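
The relationship above can be sketched numerically (a minimal illustration; the 450 ms wait time is the value given in the experiment details below):

```python
# Minimal sketch of the theoretical relationship: each simulated user
# issues one request, then waits `wait_time_s` seconds, so each user
# contributes 1 / wait_time_s requests per second.
def theoretical_rps(num_users, wait_time_s):
    return num_users / wait_time_s

print(theoretical_rps(1000, 0.450))  # ~2222 rps: still in the linear regime
print(theoretical_rps(1200, 0.450))  # ~2667 rps: around where the cap appears
```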

Experiment info

The test was done on a C3.4x AWS EC2 compute node (AMI image) with 16 vCPUs, General Purpose SSD storage and 30GB RAM. During the test, CPU utilization peaked at 60% (depending on the hatch rate, which controls the number of concurrent processes spawned), staying under 30% on average.

Locust.io

Setup: uses pyzmq, set up with each vCPU core as a slave. Single POST request setup with a request body of ~20 bytes and a response body of ~25 bytes. Request failure rate: < 1%, with a mean response time of 6 ms.

Variables: time between consecutive requests set to 450 ms (min: 100 ms, max: 1000 ms), hatch rate at a comfortable 30 per second, and RPS measured by varying #_users.
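
For reference, these settings map onto a locustfile roughly like this. This is a sketch using the pre-1.0 Locust API that the question's terminology (hatch rate, slaves) implies; the endpoint path and payload are placeholders, not taken from the original post:

```python
from locust import HttpLocust, TaskSet, task

class ApiBehavior(TaskSet):
    @task
    def post_small_payload(self):
        # ~20-byte request body as in the experiment; "/endpoint" is a placeholder
        self.client.post("/endpoint", data="x" * 20)

class ApiUser(HttpLocust):
    task_set = ApiBehavior
    min_wait = 100   # minimum ms between consecutive requests
    max_wait = 1000  # maximum ms between consecutive requests
```

In the distributed setup described above, this would be run with `locust --master` plus one `locust --slave` process per vCPU core.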

The RPS follows the predicted equation for up to 1000 users. Increasing #_users beyond that has diminishing returns, with a cap reached at roughly 1200 users. #_users isn't the only independent variable here; changing the wait time affects the RPS as well. However, changing the experiment setup to a 32-core instance (c3.8xlarge) or 56 cores (in a distributed setup) doesn't affect the RPS at all.

So really, what is the way to control the RPS? Is there something obvious I am missing here?

Answer

(one of the Locust authors here)

First, why do you want to control the RPS? One of the core ideas behind Locust is to describe user behavior and let that generate load (requests in your case). The question Locust is designed to answer is: How many concurrent users can my application support?

I know it is tempting to go after a certain RPS number and sometimes I "cheat" as well by striving for an arbitrary RPS number.

But to answer your question: are you sure your Locusts don't end up in a deadlock? As in, they complete a certain number of requests and then become idle because they have no other task to perform? It's hard to tell what's happening without seeing the test code.

Distributed mode is recommended for larger production setups and most real-world load tests I've run have been on multiple but smaller instances. But it shouldn't matter if you are not maxing out the CPU. Are you sure you are not saturating a single CPU core? Not sure what OS you are running but if Linux, what is your load value?
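
One quick way to answer the load question yourself: on Linux, compare the 1-minute load average against the core count. This is a simple check, not something from the original answer:

```python
import os

# 1-, 5- and 15-minute load averages, as reported by `uptime`
load1, load5, load15 = os.getloadavg()
cores = os.cpu_count()

# On a 16-vCPU c3.4xlarge, a low ratio here combined with one process
# pinned at 100% in `top` would point to a single saturated core.
print(f"1-min load: {load1:.2f} on {cores} cores "
      f"(~{load1 / cores:.0%} of total capacity)")
```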
