How to send 4000+ requests in exactly 1 second?


Problem description



I have an HTTP GET request that I need to send to the application server more than 4000 times in exactly 1 second.

I'm sending these requests using JMeter. I have taken an ethereal trace for each test using a sniffer tool (Wireshark).

I have tried to achieve this from one machine, multiple machines (parallel) and even distributed mode.

Actually, JMeter results are not my concern here. The concern of this test is to see, in the sniffer tool, that 4000 requests hit the server in one second.

I found almost 2500 requests in 1 second in the ethereal trace while using the following JMeter test plan.

Number of Threads = 4000
Ramp-Up Period = 0 (though it is deprecated)
Loop Count = 1
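For reference, that Thread Group corresponds to roughly this fragment in the .jmx file (layout assumed from a stock JMeter install; attribute details may vary by version):

```xml
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="4000 users">
  <stringProp name="ThreadGroup.num_threads">4000</stringProp>
  <stringProp name="ThreadGroup.ramp_time">0</stringProp>
  <elementProp name="ThreadGroup.main_controller" elementType="LoopController">
    <boolProp name="LoopController.continue_forever">false</boolProp>
    <stringProp name="LoopController.loops">1</stringProp>
  </elementProp>
</ThreadGroup>
```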

When I used 2500 threads, almost 2200 requests hit the server in one second in the ethereal trace.

The server's response to these requests is not my concern here. I just want to make sure that the 4000 requests sent by JMeter hit the application server within one second.

UPDATE:

Case 1: (4000 Threads)

Number of Threads= 4000
Ramp-Up Period = 0
Loop count= 1

Output for Case 1:

JMeter (View Results in Table): 2.225 seconds to start 4000 requests.

Ethereal trace: 4.12 seconds for 4000 requests to hit the server.

Case 2: (3000 Threads)

JMeter (View Results in Table): 1.83 seconds to start 3000 requests.

Ethereal trace: 1.57 seconds for 3000 requests to hit the server.

Case 3: (2500 Threads)

JMeter (View Results in Table): 1.36 seconds to start 2500 requests.

Ethereal trace: 2.37 seconds for 2500 requests to hit the server.

Case 4: (2000 Threads)

JMeter (View Results in Table): 0.938 seconds to start 2000 requests.

Ethereal trace: 1.031 seconds for 2000 requests to hit the server.

I ran these tests from only one machine.
No listeners added.
Non-Gui mode.
No assertions in my scripts.
Heap size: 8GB

So, I don't understand why my JMeter results and ethereal traces differ from each other. I have also tried a Synchronizing Timer to achieve this scenario.
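A Synchronizing Timer holds threads until the group size is reached and then releases them all at once; in the .jmx it looks roughly like this (sketch based on recent JMeter versions where groupSize/timeout are bean properties):

```xml
<SyncTimer guiclass="TestBeanGUI" testclass="SyncTimer" testname="Synchronizing Timer">
  <intProp name="groupSize">4000</intProp>
  <longProp name="timeoutInMs">10000</longProp>
</SyncTimer>
```

Note that releasing 4000 threads simultaneously still does not guarantee 4000 packets arrive at the server within the same second; TCP connection setup and client-side CPU contention spread the arrivals out.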

Since 4000 threads is too heavy for one machine, maybe I have to test this in distributed mode. I have also tried distributed mode (1 master, 2 slaves); maybe my script is wrong.

Is it possible to see in the ethereal trace that my 4000 requests hit the server in 1 second?
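One way to check this objectively, rather than eyeballing the trace, is to export arrival timestamps (e.g. `tshark -r capture.pcap -T fields -e frame.time_epoch`) and compute the maximum number of requests in any sliding one-second window. A sketch, with an invented timestamp list for illustration:

```python
from bisect import bisect_right

def max_requests_per_second(timestamps):
    """Maximum number of arrivals falling in any sliding 1-second window."""
    ts = sorted(timestamps)
    # For each arrival at time t, count arrivals in the window [t, t + 1).
    return max(bisect_right(ts, t + 1.0 - 1e-9) - i for i, t in enumerate(ts))

# Hypothetical arrival times in seconds: 4 of these 5 fall within one second.
arrivals = [0.0, 0.2, 0.5, 0.9, 1.5]
print(max_requests_per_second(arrivals))  # 4
```

If this peak never reaches 4000 for the exported trace, the requests are being spread out somewhere between the thread start and the wire.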

What will be the JMeter script to achieve this scenario in distributed mode?
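For distributed mode, the same .jmx plan is used; only the invocation changes. The standard pattern (host names invented here) is:

```shell
# On each load generator (slave):
jmeter-server

# On the controller (master), non-GUI mode, fanning the plan out to the remotes:
jmeter -n -t plan.jmx -R slave1,slave2 -l results.jtl
```

Keep in mind that each remote runs the plan's full thread count, so with 2 slaves a Thread Group of 2000 threads yields 4000 concurrent users in total.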

Solution

How about starting with whether the server is configured correctly to avoid such load? Requests can be of any type. If they are static requests, then work to ensure that the absolute minimum number of them hit your origin server, through caching policy or architecture. For example:

  • If you have returning users and no CDN, make sure your cache policy stores at the client, expiring with your build schedule. This avoids repeat requests from returning visitors.
  • If you have no returning users and no CDN, make sure your cache policy is set to at least 120% of the maximum page-to-page delay visible in your logs for a given user set.
  • If you have a CDN, make sure all static request headers, 301 & 404 headers are set so your CDN can cache the requests, expiring with your new build push schedule.
  • If you do not have a CDN, consider a model where you place all static resources on a dedicated server, where everything on that server is marked for caching at the client at a high level. You can also front that one server with Varnish or Squid as a caching proxy to take the load.
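The "120% of the maximum page-to-page delay" rule from the list above can be computed straight from access-log timestamps. A sketch, with hypothetical per-user request times (seconds):

```python
def cache_ttl_seconds(requests_by_user, factor=1.2):
    """TTL = factor * the largest gap between consecutive page views
    observed for any single user in the logs."""
    max_gap = 0.0
    for times in requests_by_user.values():
        ts = sorted(times)
        for earlier, later in zip(ts, ts[1:]):
            max_gap = max(max_gap, later - earlier)
    return factor * max_gap

# Invented log data: two users, largest page-to-page gap is 300 s.
logs = {"user_a": [0, 60, 360], "user_b": [10, 100]}
print(round(cache_ttl_seconds(logs)))  # 360
```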

Ultimately, I would suspect a design issue at play with this high a consistent request level. 4000 requests per second becomes 14,400,000 requests per hour and 345,600,000 per 24-hour period.
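The arithmetic behind that claim is straightforward to check:

```python
rps = 4000
per_hour = rps * 3600    # 4000 req/s * 3600 s = 14,400,000 requests per hour
per_day = per_hour * 24  # * 24 h = 345,600,000 requests per 24-hour period
print(per_hour, per_day)
```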

On a process basis, I would also suggest a minimum of three load generators: two for primary load and one for a control virtual user, a single virtual user/thread running your business process. In your current all-on-one-generator model you have no control element to determine the overhead imposed by a potentially overloaded load generator. The control element helps you determine whether there is a generator-imposed skew in your driving of load; essentially, resource exhaustion acts as a speed brake on your load generator. Go for a deliberate-underload philosophy on your load generators. Adding another load generator is cheaper than the political capital spent when someone attacks your test for lacking a control element and you have to re-run it. It is also far less expensive than chasing an engineering ghost that looks like a slow system but is really an overloaded load generator.
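The control-user idea reduces to a simple comparison: if the heavily loaded generators report latencies far above what the lightly loaded control generator observes for the same business process, the slowdown lives in the generators, not the server. A toy sketch (all numbers and names invented):

```python
def generator_skew(load_gen_latency_ms, control_latency_ms, tolerance=1.5):
    """True if the loaded generators report latencies well above what the
    uncontended control generator sees -> suspect generator overload."""
    return load_gen_latency_ms > tolerance * control_latency_ms

# Invented measurements: generators report 900 ms, control user sees 200 ms,
# so the extra 700 ms is generator-side queuing, not server time.
print(generator_skew(900, 200))  # True: the generators, not the server, are slow
print(generator_skew(220, 200))  # False: both agree the server takes ~200 ms
```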
