How to solve nginx - no live upstreams while connecting to upstream client?

Problem Description

Currently I am running a load test with JMeter against our system, which is built on Grails 3 and runs on Tomcat. After sending 20k requests per second I get "no live upstreams while connecting to upstream client" in the nginx error log. Our application is multi-tenant, so I need to drive a high load. Here is my nginx configuration.

worker_processes  16;
worker_rlimit_nofile 262144;
error_log  /var/log/nginx/error.log;

events {
    worker_connections  24576;
    use epoll;
    multi_accept on;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  600;
    keepalive_requests 100000;
    access_log off;
    server_names_hash_max_size  4096;
    underscores_in_headers  on;
    client_max_body_size 8192m;
    log_format vhost '$remote_addr - $remote_user [$time_local] $status "$request" $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"';

    proxy_connect_timeout      120;
    proxy_send_timeout         120;
    proxy_read_timeout         120;


    gzip  on;
    gzip_types text/plain application/xml text/css text/js text/xml application/x-javascript text/javascript application/json application/xml+rss image application/javascript;
    gzip_min_length  1000;
    gzip_static on;
    gzip_vary on;
    gzip_buffers 16 8k;
    gzip_comp_level 6;
    gzip_proxied any;
    gzip_disable "msie6";

    proxy_intercept_errors on;
    recursive_error_pages on;

    ssl_prefer_server_ciphers On;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES256-SHA:RC4-SHA;
    include /etc/nginx/conf.d/*.conf;
}

How should I configure nginx for this level of concurrent load?

Recommended Answer

For me, the issue was with my proxy_pass entry. I had:

location / {
    ...
    proxy_pass    http://localhost:5001;
}

This caused the upstream request to use either the IPv4 loopback address or the IPv6 loopback address, but every now and again it would use the localhost name without a port number, resulting in the upstream error seen in the logs below.

[27/Sep/2018:16:23:37 +0100] <request IP> - - - <requested URI>  to: [::1]:5001: GET /api/hc response_status 200
[27/Sep/2018:16:24:37 +0100] <request IP> - - - <requested URI>  to: 127.0.0.1:5001: GET /api/hc response_status 200
[27/Sep/2018:16:25:38 +0100] <request IP> - - - <requested URI>  to: localhost: GET /api/hc response_status 502
[27/Sep/2018:16:26:37 +0100] <request IP> - - - <requested URI>  to: 127.0.0.1:5001: GET /api/hc response_status 200
[27/Sep/2018:16:27:37 +0100] <request IP> - - - <requested URI>  to: [::1]:5001: GET /api/hc response_status 200

As you can see, I get a 502 status for "localhost:".
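
The alternation itself comes from localhost typically resolving to both address families. A typical /etc/hosts carries entries like the following (illustrative, not taken from the original post); when proxy_pass names a host that resolves to several addresses, nginx balances requests across all of them:

127.0.0.1   localhost
::1         localhost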

Changing my proxy_pass to 127.0.0.1:5001 means that all requests now use IPv4 with an explicit port.
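
A minimal sketch of the corrected block, assuming the rest of the location block from the snippet above is unchanged (the ... stands for the directives omitted there):

location / {
    ...
    proxy_pass    http://127.0.0.1:5001;
}

An equivalent approach is to point proxy_pass at a named upstream block that lists 127.0.0.1:5001 explicitly; either way the address is unambiguous, so nginx no longer alternates between the IPv4 and IPv6 loopback.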

This StackOverflow response was a big help in finding the issue, as it detailed changing the log format so the problem became visible.
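
For reference, a log_format along the following lines reproduces the log entries shown above. This is a sketch reconstructed from those entries, not necessarily the exact format from the linked answer; $upstream_addr and $upstream_status are standard nginx upstream-module variables, and the name upstreamlog is arbitrary:

# $upstream_addr records which backend address nginx actually connected to
# (127.0.0.1:5001, [::1]:5001, or bare "localhost" in the failing case)
log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request response_status $upstream_status';
access_log /var/log/nginx/access.log upstreamlog;

Note that the configuration in the question sets access_log off; at the http level, so the upstream log needs to be enabled in the relevant server or location block while debugging.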
