php curl localhost is slow when making concurrent requests


Question

I have an interesting issue which I am not sure what the root cause is. I have a server and two virtual hosts A and B with ports running on 80 and 81 respectively. I have written a simple PHP code on A which looks like this:

<?php

echo "from A server\n";

And another simple PHP code on B:

<?php

echo "B server:\n";

// create curl resource 
$ch = curl_init(); 

// set url 
curl_setopt($ch, CURLOPT_URL, "localhost:81/a.php"); 

//return the transfer as a string 
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); 

// $output contains the output string 
$output = curl_exec($ch); 

// close curl resource to free up system resources 
curl_close($ch);

echo $output;

When making concurrent requests using ab, I get the following results:

ab -n 10 -c 5 http://192.168.10.173/b.php
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.10.173 (be patient).....done


Server Software:        nginx/1.10.0
Server Hostname:        192.168.10.173
Server Port:            80

Document Path:          /b.php
Document Length:        26 bytes

Concurrency Level:      5
Time taken for tests:   2.680 seconds
Complete requests:      10
Failed requests:        0
Total transferred:      1720 bytes
HTML transferred:       260 bytes
Requests per second:    3.73 [#/sec] (mean)
Time per request:       1340.197 [ms] (mean)
Time per request:       268.039 [ms] (mean, across all concurrent requests)
Transfer rate:          0.63 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       1
Processing:     2 1339 1408.8   2676    2676
Waiting:        2 1339 1408.6   2676    2676
Total:          3 1340 1408.8   2676    2677

Percentage of the requests served within a certain time (ms)
  50%   2676
  66%   2676
  75%   2676
  80%   2676
  90%   2677
  95%   2677
  98%   2677
  99%   2677
 100%   2677 (longest request)

But making 1000 requests with concurrency level 1 is extremely fast:

$ ab -n 1000 -c 1 http://192.168.10.173/b.php
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.10.173 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        nginx/1.10.0
Server Hostname:        192.168.10.173
Server Port:            80

Document Path:          /b.php
Document Length:        26 bytes

Concurrency Level:      1
Time taken for tests:   1.659 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      172000 bytes
HTML transferred:       26000 bytes
Requests per second:    602.86 [#/sec] (mean)
Time per request:       1.659 [ms] (mean)
Time per request:       1.659 [ms] (mean, across all concurrent requests)
Transfer rate:          101.26 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       1
Processing:     1    1  10.3      1     201
Waiting:        1    1  10.3      1     201
Total:          1    2  10.3      1     201

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      1
  80%      1
  90%      1
  95%      1
  98%      1
  99%      2
 100%    201 (longest request)

Can anyone explain why this happens? I really want to know the root cause. Is it an issue with curl? It doesn't feel like a network bottleneck or an open-files issue, since the concurrency is only 5. By the way, I also tried the same thing with guzzlehttp, but the result was the same. I run ab on my laptop, and the server is on the same local network. Also, it certainly has nothing to do with network bandwidth, because the requests between hosts A and B are made over localhost.

I have modified the code so that testing is more flexible:

<?php

require 'vendor/autoload.php';

use GuzzleHttp\Client;

$opt = 1;
$url = 'http://localhost:81/a.php';

switch ($opt) {
    case 1:
        // create curl resource
        $ch = curl_init();

        // set url
        curl_setopt($ch, CURLOPT_URL, $url);

        //return the transfer as a string
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

        // $output contains the output string
        $output = curl_exec($ch);

        curl_close($ch);

        echo $output;
        break;
    case 2:
        $client = new Client();
        $response = $client->request('GET', $url);
        echo $response->getBody();
        break;
    case 3:
        echo file_get_contents($url);
        break;
    default:
        echo "no opt";
}

echo "app server:\n";

I tried file_get_contents, but there is no obvious difference when switching to it. At concurrency 1, all methods are fine, but they all start degrading as concurrency increases.
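Since every client method degrades the same way once concurrency rises, one server-side thing worth checking is how many php-fpm workers are actually alive during the test. This is only a diagnostic sketch; the `pool` grep pattern assumes a typical php-fpm process title and may need adjusting for your setup:

```shell
# Count running php-fpm pool workers; if this number is smaller than
# ab's -c value, the extra requests queue up behind busy workers.
ps aux | grep '[p]hp-fpm: pool' | wc -l
```

Running this in a second terminal while ab is in flight shows whether the pool ever grows to match the offered concurrency.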

I think I found something related to this issue, so I posted another question: concurrent curl could not resolve host. This might be the root cause, but I don't have an answer yet.

After trying for so long, I think this is definitely related to name resolution. Here is a PHP script that can run at concurrency level 500:

<?php

require 'vendor/autoload.php';

use GuzzleHttp\Client;

$opt = 1;
$url = 'http://localhost:81/a.php';

switch ($opt) {
    case 1:
        // create curl resource
        $ch = curl_init();

        // set url
        curl_setopt($ch, CURLOPT_URL, $url);

        //return the transfer as a string
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

        curl_setopt($ch, CURLOPT_PROXY, 'localhost');

        // $output contains the output string
        $output = curl_exec($ch);

        curl_close($ch);

        echo $output;
        break;
    case 2:
        $client = new Client();
        $response = $client->request('GET', $url, ['proxy' => 'localhost']);
        echo $response->getBody();
        break;
    case 3:
        echo file_get_contents($url);
        break;
    default:
        echo "no opt";
}

echo "app server:\n";

What really matters is curl_setopt($ch, CURLOPT_PROXY, 'localhost'); and $response = $client->request('GET', $url, ['proxy' => 'localhost']);. They tell curl to use localhost as a proxy.

And here is the ab test result:

ab -n 1000 -c 500 http://192.168.10.173/b.php
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.10.173 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        nginx/1.10.0
Server Hostname:        192.168.10.173
Server Port:            80

Document Path:          /b.php
Document Length:        182 bytes

Concurrency Level:      500
Time taken for tests:   0.251 seconds
Complete requests:      1000
Failed requests:        184
   (Connect: 0, Receive: 0, Length: 184, Exceptions: 0)
Non-2xx responses:      816
Total transferred:      308960 bytes
HTML transferred:       150720 bytes
Requests per second:    3985.59 [#/sec] (mean)
Time per request:       125.452 [ms] (mean)
Time per request:       0.251 [ms] (mean, across all concurrent requests)
Transfer rate:          1202.53 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    6   4.9      5      14
Processing:     9   38  42.8     22     212
Waiting:        8   38  42.9     22     212
Total:         11   44  44.4     31     214

Percentage of the requests served within a certain time (ms)
  50%     31
  66%     37
  75%     37
  80%     38
  90%    122
  95%    135
  98%    207
  99%    211
 100%    214 (longest request)

But still, why does name resolution fail at concurrency level 5 when not using localhost as a proxy?

The virtual host setting is very simple and clean, and almost everything is at its default configuration. I do not use iptables on this server, nor have I configured anything special.

server {
    listen 81 default_server;
    listen [::]:81 default_server;

    root /var/www/html;

    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }
}


Found something interesting! If you run another ab test within about 3 seconds of the first one, the second test is pretty quick.

Without using localhost as a proxy:

ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 2.8 seconds to finish.
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.008 seconds only.

Using localhost as a proxy:

ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.006 seconds.
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.006 seconds.

I think this still means the issue is name resolution. But why?
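One way to probe the name-resolution theory (a generic diagnostic sketch, not specific to this server) is to look at the addresses "localhost" maps to. On many systems it resolves to both ::1 and 127.0.0.1, and if the IPv6 address is tried first while the service only answers on IPv4, the first connect attempt can stall before falling back:

```shell
# Show every address "localhost" resolves to, in resolution order.
getent hosts localhost
```

If ::1 appears, forcing IPv4 with curl's -4 flag (or CURLOPT_IPRESOLVE with CURL_IPRESOLVE_V4 in PHP) is a quick way to test whether the slowdown follows the IPv6 attempt.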

Assumption: nginx is not listening on localhost:81.

I tried adding listen 127.0.0.1:81; to the nginx config, but it had no effect.

I realize I made some mistakes with the curl proxy; that workaround does not actually work! I will update with other details later.

Solved. It is not related to the proxy or anything else; the root cause is pm.start_servers in php-fpm's www.conf.

Answer

Ok, after so many days of trying to solve this issue, I finally found out why, and it's not name resolution. I can't believe it took so many days to track down the root cause: the value of pm.start_servers in php-fpm's www.conf. Initially I had set pm.start_servers to 3, which is why the ab test against localhost always got worse beyond concurrency level 3. php-cli has no such limit on the number of PHP processes, so php-cli always performed great. After increasing pm.start_servers to 5, the ab test result is as fast as with php-cli. If this is why your php-fpm is slow, you should also think about adjusting pm.min_spare_servers, pm.max_spare_servers, pm.max_children and related settings.
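For reference, these are the pool directives involved. The values below are only an illustrative sketch (the path is inferred from the php7.0-fpm socket used earlier); the right numbers depend on available memory and the concurrency you expect:

```ini
; /etc/php/7.0/fpm/pool.d/www.conf (path assumed for PHP 7.0 on Debian/Ubuntu)
; With pm = dynamic, start_servers must lie between min_spare_servers and
; max_spare_servers, and all of them must stay below max_children.
pm = dynamic
pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
```

After editing, reload the pool (e.g. systemctl reload php7.0-fpm) and rerun the ab test to confirm the queueing is gone.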
