nodejs response speed and nginx


Question

Just started testing nodejs, and wanted to get some help in understanding the following behavior.

Example #1:

var http = require('http');
http.createServer(function(req, res){
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('foo');
}).listen(1001, '0.0.0.0');

Example #2:

var http = require('http');
http.createServer(function(req, res){
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.write('foo');
    res.end('bar');
}).listen(1001, '0.0.0.0');

When testing response time in Chrome:

example #1 - 6-10ms
example #2 - 200-220ms

But, if I test both examples through nginx proxy_pass:

server{
    listen 1011;
    location / {
        proxy_pass http://127.0.0.1:1001;
    }
}

I get this:

example #1 - 4-8ms
example #2 - 4-8ms

I am not an expert on either nodejs or nginx; can someone explain this?

nodejs - v.0.8.1
nginx - v.1.2.2

Thanks to Hippo, I ran the tests with ab on my server, with and without nginx, and got the opposite results.

I also added proxy_cache off to the nginx config:

server{
    listen 1011;
    location / {
        proxy_pass http://127.0.0.1:1001;
        proxy_cache off;
    }
}

Example #1, direct:

ab -n 1000 -c 50 http://127.0.0.1:1001/



    Server Software:        
    Server Hostname:        127.0.0.1
    Server Port:            1001

    Document Path:          /
    Document Length:        65 bytes

    Concurrency Level:      50
    Time taken for tests:   1.018 seconds
    Complete requests:      1000
    Failed requests:        0
    Write errors:           0
    Total transferred:      166000 bytes
    HTML transferred:       65000 bytes
    Requests per second:    981.96 [#/sec] (mean)
    Time per request:       50.919 [ms] (mean)
    Time per request:       1.018 [ms] (mean, across all concurrent requests)
    Transfer rate:          159.18 [Kbytes/sec] received

    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    0   0.6      0       3
    Processing:     0   50  44.9     19     183
    Waiting:        0   49  44.8     17     183
    Total:          1   50  44.7     19     183

Example #1, via nginx:

ab -n 1000 -c 50 http://127.0.0.1:1011/



    Server Software:        nginx/1.2.2
    Server Hostname:        127.0.0.1
    Server Port:            1011

    Document Path:          /
    Document Length:        65 bytes

    Concurrency Level:      50
    Time taken for tests:   1.609 seconds
    Complete requests:      1000
    Failed requests:        0
    Write errors:           0
    Total transferred:      187000 bytes
    HTML transferred:       65000 bytes
    Requests per second:    621.40 [#/sec] (mean)
    Time per request:       80.463 [ms] (mean)
    Time per request:       1.609 [ms] (mean, across all concurrent requests)
    Transfer rate:          113.48 [Kbytes/sec] received

    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    0   0.6      0       3
    Processing:     2   77  44.9     96     288
    Waiting:        2   77  44.8     96     288
    Total:          3   78  44.7     96     288

Example #2, direct:

ab -n 1000 -c 50 http://127.0.0.1:1001/



    Server Software:        
    Server Hostname:        127.0.0.1
    Server Port:            1001

    Document Path:          /
    Document Length:        76 bytes

    Concurrency Level:      50
    Time taken for tests:   1.257 seconds
    Complete requests:      1000
    Failed requests:        0
    Write errors:           0
    Total transferred:      177000 bytes
    HTML transferred:       76000 bytes
    Requests per second:    795.47 [#/sec] (mean)
    Time per request:       62.856 [ms] (mean)
    Time per request:       1.257 [ms] (mean, across all concurrent requests)
    Transfer rate:          137.50 [Kbytes/sec] received

    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    0   0.3      0       2
    Processing:     0   60  47.8     88     193
    Waiting:        0   60  47.8     87     193
    Total:          0   61  47.7     88     193

Example #2, via nginx:

ab -n 1000 -c 50 http://127.0.0.1:1011/



    Server Software:        nginx/1.2.2
    Server Hostname:        127.0.0.1
    Server Port:            1011

    Document Path:          /
    Document Length:        76 bytes

    Concurrency Level:      50
    Time taken for tests:   1.754 seconds
    Complete requests:      1000
    Failed requests:        0
    Write errors:           0
    Total transferred:      198000 bytes
    HTML transferred:       76000 bytes
    Requests per second:    570.03 [#/sec] (mean)
    Time per request:       87.715 [ms] (mean)
    Time per request:       1.754 [ms] (mean, across all concurrent requests)
    Transfer rate:          110.22 [Kbytes/sec] received

    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    0   0.4      0       2
    Processing:     1   87  42.1     98     222
    Waiting:        1   86  42.3     98     222
    Total:          1   87  42.0     98     222


Now the results look more logical, but there is still a strange delay when calling res.write().

I guess it was (sure looks like) a stupid question, but I still get a huge difference in response time in the browser with this server configuration (CentOS 6) on this particular server (a VPS).

On my home computer (Ubuntu 12), albeit with older versions, testing from localhost everything works fine.


Answer

Peeking into http.js reveals that case #1 has special handling in nodejs itself, some kind of a shortcut optimization, I guess:

var hot = this._headerSent === false &&
          typeof(data) === 'string' &&
          data.length > 0 &&
          this.output.length === 0 &&
          this.connection &&
          this.connection.writable &&
          this.connection._httpMessage === this;

if (hot) {
  // Hot path. They're doing
  //   res.writeHead();
  //   res.end(blah);
  // HACKY.

  if (this.chunkedEncoding) {
    var l = Buffer.byteLength(data, encoding).toString(16);
    ret = this.connection.write(this._header + l + CRLF +
                                data + '\r\n0\r\n' +
                                this._trailer + '\r\n', encoding);
  } else {
    ret = this.connection.write(this._header + data, encoding);
  }
  this._headerSent = true;

} else if (data) {
  // Normal body write.
  ret = this.write(data, encoding);
}

if (!hot) {
  if (this.chunkedEncoding) {
    ret = this._send('0\r\n' + this._trailer + '\r\n'); // Last chunk.
  } else {
    // Force a flush, HACK.
    ret = this._send('');
  }
}

this.finished = true;
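To make the hot path concrete, here is a small sketch (my reconstruction, assuming HTTP/1.1 chunked transfer encoding as in the branch above) of what reaches the socket in each case:

```javascript
var CRLF = '\r\n';

// One chunked-encoding frame, built the same way as in the hot path above:
// hex byte length, CRLF, the data, CRLF.
function chunk(data) {
  return Buffer.byteLength(data).toString(16) + CRLF + data + CRLF;
}

var lastChunk = '0' + CRLF + CRLF; // terminating zero-length chunk

// Example #1 (hot path): connection.write() is called once, so the header,
// the single data chunk and the terminator share one socket write.
var hotBody = chunk('foo') + lastChunk;

// Example #2 (not hot): res.write('foo'), then res.end('bar') -> three
// separate socket writes, each a chance to stall on Nagle / delayed ACK.
var separateWrites = [chunk('foo'), chunk('bar'), lastChunk];

console.log(JSON.stringify(hotBody));
console.log(JSON.stringify(separateWrites));
```

Through nginx the difference plausibly disappears because proxy buffering (on by default) absorbs the upstream chunk writes and forwards the response to the browser in larger pieces.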
