Nginx slow static file serving (slower than node?)
Question
I have a Node.js app server sitting behind an Nginx configuration that has been working well. Anticipating some growth in load, I figured I would set up another nginx to serve the static files on the Node.js app server. So, essentially, I have set up an nginx reverse proxy in front of nginx & Node.js.
When I reload nginx and have it start serving requests on the Nginx <-> Nginx route, I notice a significant slowdown. A request that took about 3 seconds via Nginx <-> Node.js takes ~15 seconds via Nginx <-> Nginx!
I'm new to Nginx and have spent the better part of a day on this, so I finally decided to post for some community help. Thanks!
The front-facing nginx's nginx.conf:
http {
    # Main settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    client_header_timeout 1m;
    client_body_timeout 1m;
    client_header_buffer_size 2k;
    client_body_buffer_size 256k;
    client_max_body_size 256m;
    large_client_header_buffers 4 8k;
    send_timeout 30;
    keepalive_timeout 60 60;
    reset_timedout_connection on;
    server_tokens off;
    server_name_in_redirect off;
    server_names_hash_max_size 512;
    server_names_hash_bucket_size 512;

    # Log format
    log_format main '$remote_addr - $remote_user [$time_local] $request '
                    '"$status" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    log_format bytes '$body_bytes_sent';
    access_log /var/log/nginx/access.log main;

    # Mime settings
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Compression
    gzip on;
    gzip_comp_level 9;
    gzip_min_length 512;
    gzip_buffers 8 64k;
    gzip_types text/plain text/css text/javascript
               application/x-javascript application/javascript;
    gzip_proxied any;

    # Proxy settings
    #proxy_redirect of;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass_header Set-Cookie;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffers 32 4k;
    real_ip_header CF-Connecting-IP;

    # SSL PCI Compliance
    # - removed for brevity

    # Error pages
    # - removed for brevity

    # Cache
    proxy_cache_path /var/cache/nginx levels=2 keys_zone=cache:10m inactive=60m max_size=512m;
    proxy_cache_key "$host$request_uri $cookie_user";
    proxy_temp_path /var/cache/nginx/temp;
    proxy_ignore_headers Expires Cache-Control;
    proxy_cache_use_stale error timeout invalid_header http_502;
    proxy_cache_valid any 3d;
    proxy_http_version 1.1; # recommended with keepalive connections

    # WebSocket proxying - from http://nginx.org/en/docs/http/websocket.html
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    map $http_cookie $no_cache {
        default 0;
        ~SESS 1;
        ~wordpress_logged_in 1;
    }

    upstream backend {
        # my 'backend' server IP address (local network)
        server xx.xxx.xxx.xx:80;
    }

    # Wildcard include
    include /etc/nginx/conf.d/*.conf;
}
The front-facing nginx's server block, which forwards static files to the nginx behind it (on another box):
server {
    listen 80 default;
    access_log /var/log/nginx/nginx.log main;

    # pass static assets on to the app server nginx on port 80
    location ~* (/min/|/audio/|/fonts/|/images/|/js/|/styles/|/templates/|/test/|/publicfile/) {
        proxy_pass http://backend;
    }
}
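One thing worth noting about the front-facing config: proxy_cache_path declares a cache zone in nginx.conf, but nothing ever switches it on, so every static request still makes a round trip to the backend. A hedged sketch of how that zone could be enabled in the static-asset location (the proxy_cache / proxy_cache_bypass / proxy_no_cache lines are my addition, not part of the original config):

```nginx
location ~* (/min/|/audio/|/fonts/|/images/|/js/|/styles/|/templates/|/test/|/publicfile/) {
    proxy_pass http://backend;
    # Serve repeat hits from the local "cache" zone declared in nginx.conf,
    # skipping the cache for logged-in users via the $no_cache map.
    proxy_cache cache;
    proxy_cache_bypass $no_cache;
    proxy_no_cache $no_cache;
}
```

With this in place, only cache misses would traverse the Nginx <-> Nginx hop, which sidesteps the double-proxy latency for hot assets.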
And finally, the "backend" server:
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    sendfile_max_chunk 32;
    # server_tokens off;
    # server_names_hash_bucket_size 64;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    server {
        root /home/admin/app/.tmp/public;
        listen 80 default;
        access_log /var/log/nginx/app-static-assets.log;

        location /publicfile {
            alias /home/admin/APP-UPLOADS;
        }
    }
}
Answer
@keenanLawrence mentioned the sendfile_max_chunk directive in the comments above.

After setting sendfile_max_chunk to 512k, I saw a significant improvement in the speed at which nginx delivered my static files (from disk).
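The change described is a one-line edit in the backend nginx's http block (the one that reads files from disk). A minimal sketch, assuming it replaces the existing directive there:

```nginx
http {
    sendfile on;
    tcp_nopush on;
    # Cap each sendfile() call at 512 KiB instead of the 32 *bytes*
    # configured above (a size with no unit suffix is bytes in nginx).
    sendfile_max_chunk 512k;
}
```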
I experimented with 8k, 32k, 128k, and finally 512k; the optimal chunk size seems to differ per configuration, depending on the content being served, the threads available, and the server request load.
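To see why the original value hurts so much: the backend config sets sendfile_max_chunk 32;, and since an nginx size with no unit suffix is in bytes, that caps every sendfile() call at 32 bytes. A rough back-of-envelope sketch (plain arithmetic, not the author's code) of how many sendfile() calls a hypothetical 1 MiB asset needs at each chunk size tried:

```python
# Estimate sendfile() syscalls needed to push one file, for several
# sendfile_max_chunk values (nginx sizes without a suffix are bytes).
FILE_SIZE = 1 * 1024 * 1024  # a hypothetical 1 MiB static asset

def sendfile_calls(file_size: int, max_chunk: int) -> int:
    """Ceiling division: each sendfile() call writes at most max_chunk bytes."""
    return -(-file_size // max_chunk)

for label, chunk in [("32", 32), ("8k", 8 * 1024), ("32k", 32 * 1024),
                     ("128k", 128 * 1024), ("512k", 512 * 1024)]:
    print(f"sendfile_max_chunk {label:>4}: {sendfile_calls(FILE_SIZE, chunk):>6} calls")
# sendfile_max_chunk 32 needs 32768 calls for 1 MiB; 512k needs only 2.
```

Four orders of magnitude more syscalls per file is consistent with the slowdown the question describes.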
I also noticed another significant performance gain when I changed worker_processes auto; to worker_processes 2;, going from one worker_process per CPU to just 2. In my case this was more efficient, because my Node.js app servers were also running on the same machine, and they were performing operations on the CPUs too.
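The worker change is likewise a one-line edit, in the main (top-level) context of nginx.conf, outside the http block. A sketch, assuming a box where Node.js processes compete for the same cores:

```nginx
# Top-level (main) context, outside the http block.
# 'auto' starts one worker per CPU core; pinning this to 2 leaves the
# remaining cores free for the co-located Node.js processes.
worker_processes 2;
```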