Horizontal scaling: routing user-generated subdomains between servers
I maintain a web application that is outgrowing a single VPS. The architecture consists of a large number of small users, each with their own subdomain. Users do not interact. Load means I have to move some users, and all new users, to another installation of the web application on a separate server.
Currently, every user subdomain is served by the same virtualhost, where a single PHP front controller displays the appropriate content based on the hostname. A single wildcard DNS record for *.mydomain.com points to the current server.
What is my best option for routing different user subdomains to different servers?
My thoughts:
- A new top-level domain for every server. user.s1.mydomain.com, user.s2.mydomain.com and so on (inelegant and leaks information)
- Run my own DNS server to route users between servers (extra point of failure, unfamiliar technology)
- A central front controller / balancer that reverse-proxies every request to the appropriate server (extra point of failure, potentially limited connections)
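For the third option specifically, one hypothetical way the central frontend could pin individual subdomains to specific backends is nginx's map directive (the subdomain names and backend addresses below are made-up examples, not part of the original setup):

```nginx
# Hypothetical per-subdomain routing table. "map" must live at http level.
map $host $user_backend {
    default             10.0.0.1:80;   # unassigned users stay on server 1
    alice.mydomain.com  10.0.0.2:80;   # users already moved to server 2
    bob.mydomain.com    10.0.0.2:80;
}

server {
    listen 80;
    server_name *.mydomain.com;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://$user_backend;   # variable proxy_pass picks the mapped backend
    }
}
```

New users land on the default backend, and moving a user is a one-line change plus a reload, with no DNS involvement.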
At that point in scaling out the application, I'd go with a central front load balancer. Nginx should handle any amount of load that a single server can serve dynamically. We have nginx as a front end for six dynamic servers and one static-content server, and there are no bottlenecks in sight on nginx.
At your scale, set up nginx to handle all static content itself, and reverse-proxy dynamic content to as many boxes as needed. The setup for a simple proxy pass is close to:
upstream upstream_regular_backend {
    fair;    # note: "fair" comes from the third-party nginx-upstream-fair module
    server 10.0.0.1:80;
    server 10.0.0.2:80;
}

server {
    listen 0.0.0.0:80;
    server_name example.com;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    location / {
        proxy_pass http://upstream_regular_backend;
    }
}
For serving static content and passing back all the rest, something like:
server {
    listen 0.0.0.0:80;
    server_name example.com;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    index index.php;
    root /some/dir/;    # static files are served directly from here
    location ~ \.php {
        proxy_pass http://upstream_regular_backend;
    }
}
Naturally, if you are not using PHP, tweak the configuration accordingly.
In the upstream definition, "fair;" will load-balance backends based on response time. For caching reasons, you may want to use "ip_hash;" instead, as it will always route requests from a given client to the same server.
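A minimal sketch of the same upstream using ip_hash (unlike fair, ip_hash ships with stock nginx):

```nginx
upstream upstream_regular_backend {
    ip_hash;    # hash the client IP so each client always hits the same backend
    server 10.0.0.1:80;
    server 10.0.0.2:80;
}
```

This keeps per-user caches warm on one backend, at the cost of less even load distribution than response-time balancing.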
Our setup is a bit further down the road. We have nginx load-balancers proxying a Varnish cache, which in turn proxies the dynamic content servers.
If you are worried about nginx being a single point of failure, set up a secondary server ready to assume the frontend's IP in case it fails.
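One common way to implement that failover is a floating IP managed via VRRP, for example with keepalived (keepalived is my suggestion here, not something the answer prescribes). A hypothetical minimal configuration for the primary; the secondary would use state BACKUP and a lower priority:

```nginx
vrrp_instance frontend_vip {
    state MASTER
    interface eth0           # interface carrying the shared IP; adjust to your box
    virtual_router_id 51
    priority 100             # higher than the backup's priority
    advert_int 1
    virtual_ipaddress {
        203.0.113.10         # the shared frontend IP; made-up example address
    }
}
```

The wildcard DNS record then points at the floating IP, so a frontend failure moves the IP to the standby without any DNS change.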