AWS Route 53 - Domain name routing to different ports of an Application Load Balancer

Problem Description

We are implementing a microservices architecture in AWS. We have several EC2 instances which have the microservices deployed on different ports. We also have an internet-facing Application Load Balancer, which routes to different services based on the port.

e.g.:
xxxx-xx.xx.elb.amazonaws.com:8080/ goes to microservice 1
xxxx-xx.xx.elb.amazonaws.com:8090/ goes to microservice 2

We need to have a domain name instead of the ELB hostname, and the port should not be exposed through the domain name either. Almost all the resources I found regarding Route 53 use an alias, which does the following:

xx.xxxx.co.id -> xxxx-xx.xx.elb.amazonaws.com or
xx.xxxx.co.id -> 111.111.111.11 (static ip)

1) Do we need separate domains for each microservice?

2) How can we use an alias to point domains to a specific port of the ELB?

3) Is it possible to use this setup if the domains are from a provider other than AWS?

Solution

Important Update

Since this answer was originally written, the Application Load Balancer (ALB) has introduced the capability to route requests to a specific target group based on the Host header of the incoming request.

The incoming host header can now be used to route requests to specific instances and ports.

Additionally, ALB introduced SNI support, allowing you to associate multiple TLS (SSL) certificates with a single balancer, and the correct certificate will be automatically selected based on the SNI presented by the client when TLS is negotiated. Multi-domain and wildcard certs from AWS Certificate Manager also work with ALB.

Based on these factors, no separate ports or different listeners are needed -- simply assign hostnames and/or path prefixes for each service, and map those patterns to the appropriate target group of instances.
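
For example, a host-header rule can be created with the AWS SDK. The sketch below uses Python and boto3; the listener and target group ARNs, hostnames, and path prefix are placeholders rather than values from the question:

import boto3

elbv2 = boto3.client("elbv2")

# Send requests whose Host header is foo.api.example.com to service 1's target group.
# The targets behind that group can listen on any port (e.g. 8080).
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:region:account:listener/app/my-alb/xxxx/yyyy",
    Priority=10,
    Conditions=[{"Field": "host-header", "Values": ["foo.api.example.com"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": "arn:aws:elasticloadbalancing:region:account:targetgroup/svc1/zzzz"}],
)

# A rule can also combine a host match with a path prefix; the conditions are ANDed.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:region:account:listener/app/my-alb/xxxx/yyyy",
    Priority=20,
    Conditions=[
        {"Field": "host-header", "Values": ["bar.api.example.com"]},
        {"Field": "path-pattern", "Values": ["/api/bar*"]},
    ],
    Actions=[{"Type": "forward", "TargetGroupArn": "arn:aws:elasticloadbalancing:region:account:targetgroup/svc2/wwww"}],
)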

The original answer is no longer accurate, but is included below.


1.) Do we need separate domains for each microservice?

No, this won't help you. ALB does not interpret the hostname attached to the incoming request.

Separate hostnames in the same domain won't directly accomplish your objective, either.

2.) How can we use an alias to point domains to a specific port of the ELB?

Domains do not point to ports. Hostnames do not point to ports. DNS is only used for address resolution. This is true everywhere on the Internet.

3.) Is it possible to use this setup if the domains are from a provider other than AWS?

This is not a limitation of AWS. DNS simply does not work this way.

A service endpoint is unaware of the DNS records that point to it. The DNS entry itself is strictly used for discovering an IP address that can be used to access the endpoint. After that, the endpoint does not actually know anything about the DNS, and there is no way to tell the browser, via DNS, to use a different port.

For HTTP, the implicit port is 80. For HTTPS, it is 443. Unless a port is provided in the URL, these are the only usable ports.
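
To illustrate, here is a small Python sketch (example.com is just a stand-in hostname): name resolution returns only addresses, and the port comes from the URL, either explicitly or implied by the scheme.

import socket
from urllib.parse import urlsplit

# DNS lookup: only IP addresses come back; an A/AAAA record carries no port information.
addresses = {info[4][0] for info in socket.getaddrinfo("example.com", None)}
print(addresses)

# The port is part of the URL, or defaults to 80/443 based on the scheme.
for url in ("http://example.com/", "https://example.com/", "http://example.com:8080/"):
    parts = urlsplit(url)
    port = parts.port or (443 if parts.scheme == "https" else 80)
    print(url, "->", port)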

However, in HTTP and HTTPS, each request is accompanied by a Host: header, sent by the web browser with each request. This is the hostname in the address bar.

To differentiate between requests for different hostnames arriving at a device (such as ELB/ALB), the device at the endpoint must interpret the incoming host header and route the request to a back-end system providing that service.

ALB does not currently support this capability.

ALB does, however, support choosing endpoints based on a path prefix. So microservices.example.com/api/foo could route to one set of services, while microservices.example.com/api/bar could route to another.

But ALB does not directly support routing by host header.


In my infrastructure, we use a combination of ELB or ALB, but the instances behind the load balancer are not the applications. Instead, they are instances that run HAProxy load balancer software, and route the requests to the backend.

A brief example of the important configuration elements looks like this:

# Frontend: inspect the Host header and hand the request to the matching backend
frontend main
  use_backend svc1 if { hdr(Host) -i foo.example.com }
  use_backend svc2 if { hdr(Host) -i bar.example.com }

# Backend: a pool of one or more application instances for a given service
backend svc1
  server foo-a 192.168.2.24:8080
  server foo-b 192.168.12.18:8080

backend svc2
  ....

The ELB terminates the SSL and selects a proxy at random; the proxy checks the Host: header and selects a backend (a group of 1 or more instances) to which the request will be routed. It is a thin layer between the ELB and the application, which handles the request routing by examining the host header or any other characteristic of the request.

This is one solution, but it is a somewhat advanced configuration, depending on your expertise.


If you are looking for an out-of-the-box, serverless, AWS-centric solution, then the answer is actually found in CloudFront. Yes, it's a CDN, but it has several other applications, including as a reverse proxy.

  • For each service, choose a hostname from your domain to assign to that service, foo.api.example.com or bar.api.example.com.

  • For each service, create a CloudFront distribution.

  • Configure the Alternate Domain Name of each distribution to use that service's assigned hostname.

  • Set the Origin Domain Name to the ELB hostname.

  • Set the Origin HTTP Port to the service's specific port on the ALB, e.g. 8090.

  • Configure the default Cache Behavior to forward any headers you need. If you don't need the caching capability of CloudFront, choose Forward All Headers. Also enable forwarding of Query Strings and Cookies if needed.

  • In Route 53, create foo.api.example.com as an Alias to that specific CloudFront distribution's hostname, e.g. dxxxexample.cloudfront.net.
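
If you script that last step, it looks roughly like the sketch below (Python and boto3; the hosted zone ID and distribution domain name are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID used for all CloudFront alias targets):

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE12345",  # placeholder: the hosted zone for example.com
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "foo.api.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",  # CloudFront's global hosted zone ID
                    "DNSName": "dxxxexample.cloudfront.net",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)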

Your problem is solved.

You see what I did there?

For each hostname you configure, a dedicated CloudFront distribution receives the request on the standard ports (80/443) and -- based on which distribution the host header matches -- CloudFront routes the requests to the same ELB/ALB hostname, but with a custom port number.
