How to send 4000+ requests in exactly 1 second?


Problem Description

I have an HTTP GET request. I need to send the request to the application server more than 4000 times in exactly 1 second.

I'm sending these requests using JMeter, and I have captured a packet trace for every test using a sniffer tool (Wireshark, formerly known as Ethereal).

I have tried to achieve this from one machine, from multiple machines in parallel, and even in distributed mode.

Actually, the JMeter results are not my concern here. The point of this test is to see, in the sniffer trace, that 4000 requests hit the server within one second.

Using the following JMeter test plan, I saw almost 2500 requests within 1 second in the Ethereal trace.

Number of Threads = 4000
Ramp-Up Period = 0 (though it is deprecated)
Loop Count = 1

When I used 2500 threads, I got almost 2200 requests hitting the server within one second in the Ethereal trace.

The server's response to these requests is not my concern here. I just want to make sure that the 4000 requests sent by JMeter hit the application server within one second.

Case 1: (4000 threads)

Number of Threads = 4000
Ramp-Up Period = 0
Loop Count = 1

Output of Case 1:

JMeter (View Results in Table): 2.225 seconds to start 4000 requests.

Ethereal trace: 4.12 seconds for 4000 requests to hit the server.

Case 2: (3000 threads)

JMeter (View Results in Table): 1.83 seconds to start 3000 requests.

Ethereal trace: 1.57 seconds for 3000 requests to hit the server.

Case 3: (2500 threads)

JMeter (View Results in Table): 1.36 seconds to start 2500 requests.

Ethereal trace: 2.37 seconds for 2500 requests to hit the server.

Case 4: (2000 threads)

JMeter (View Results in Table): 0.938 seconds to start 2000 requests.

Ethereal trace: 1.031 seconds for 2000 requests to hit the server.

I have run these tests from only one machine.
No listeners added.
Non-GUI mode.
No assertions in my scripts.
Heap size: 8 GB (a sample command follows below)
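
For reference, a non-GUI run with an enlarged heap like the one described above is typically started along these lines (the plan and result file names here are placeholders, and the heap can also be set by editing the HEAP variable in the jmeter startup script):

# non-GUI run with an 8 GB heap; file names are examples only
JVM_ARGS="-Xms8g -Xmx8g" ./jmeter -n -t flood_4000.jmx -l results.jtl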

So, I don't understand why my JMeter results and the Ethereal traces differ from each other. I've also tried using a Synchronizing Timer to achieve this scenario.
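
For what it's worth, a Synchronizing Timer is normally added as a child of the HTTP Request sampler and configured roughly as follows (a sketch only, not a verified setup for this exact scenario):

Synchronizing Timer:
Number of Simulated Users to Group by = 4000
Timeout in milliseconds = 0

With these values each thread blocks at the timer until 4000 threads are waiting, and they are then released together. Note that the grouping happens per JMeter instance, so in distributed mode each slave would release its own group independently.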

Since 4000 threads is too heavy a load for one machine, maybe I have to test this in distributed mode. I've also tried distributed mode (1 master, 2 slaves). Maybe my script is wrong.
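
For comparison, a basic distributed run is usually started by launching jmeter-server on each slave and then pointing the master at them; the host addresses below are placeholders:

# on each slave machine
./jmeter-server

# on the master
./jmeter -n -t flood_4000.jmx -R 192.168.0.11,192.168.0.12 -l results.jtl

Keep in mind that every slave executes the full thread group from the plan, so with 2 slaves a plan with 2000 threads produces roughly 4000 threads in total.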

Is it possible to see in the ethereal trace that my 4000 requests hit the server in 1 second?

What will be the JMeter script to achieve this scenario in distributed mode?

Recommended Answer

How about starting with whether the server is configured correctly to avoid such load in the first place. Requests can be of any type. If they are static requests, then work to ensure that the absolute minimum number of them hit your origin server, through caching policies or architecture, such as:

  • If you have returning users but no CDN, make sure your caching policy stores content on the client and expires it on your build schedule. This avoids repeat requests from returning visitors (a header sketch follows this list).
  • If you have no returning users and no CDN, make sure your caching policy is set to at least 120% of the maximum page-to-page delay visible in your logs for a given user set.
  • If you have a CDN, make sure all static request headers, 301 & 404 headers are set to allow your CDN to cache your requests, expiring with your new-build push schedule.
  • If you have no CDN, consider a model where you place all static resources on a dedicated server where everything is marked for a high level of caching at the client. You can also use varnish or squid as a caching proxy to take the load off that server.
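
As an illustration of the client-side caching the list above refers to, a static asset could be served with a response header along these lines (the max-age value is only an example and would be tuned to your build or release schedule):

Cache-Control: public, max-age=86400

With a header like this, a returning visitor's browser serves the asset from its local cache and the repeat request never reaches the origin server.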

Ultimately I would suspect a design issue at play with this high a consistent request level. 4000 requests per second becomes 14,400,000 requests per hour and 345,600,000 per 24-hour period.

On a process basis, I would also suggest a minimum of three load generators: two for primary load and one for a control virtual user, a single virtual user|thread running your business process. In your current all-on-one-load-generator model you have no control element to determine the overhead imposed by the potential overload of your load generator. The use of the control element will help you determine whether your load generator is imposing a skew in your driving of load. Essentially, you have resource exhaustion which is acting as a speed brake on your load generator. Go for a deliberate underload philosophy on your load generators. Adding another load generator is cheaper than the expense of political capital when someone attacks your test for lack of a control element and you then need to re-run your test. It is also far less expensive than chasing an engineering ghost which appears as a slow system but which is really an overloaded load generator.
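
If it helps, the three-generator layout sketched above could look something like this from the master, with the control business process kept in its own single-thread plan (host names and plan names are placeholders):

# primary load from two dedicated generators
./jmeter -n -t flood_4000.jmx -R loadgen1.example.com,loadgen2.example.com -l load.jtl

# single virtual user control run from a third generator
./jmeter -n -t control_single_user.jmx -R controlgen.example.com -l control.jtl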
