Service Fabric load-balancer tweak needed or not?


Problem Description

Most examples about Service Fabric show that after deployment, cluster endpoints appear magically, just as given in the service manifest, e.g. <cluster-url>:port/api/home

Some people on forums mention tweaking the load balancer to allow access to the port.

Why the difference in opinions? Which way is correct? When I tried, I was never able to access a deployed API endpoint in an Azure cluster (load balancer tweaked or not). OneBox worked, though.

Answer

The main detail most people forget when building SF applications is that they are building distributed applications: when you deploy a service to a cluster, you need a way to find it, and in some cases it can move around the cluster, so the solution must be able to account for this distribution.

It works locally because you have a single endpoint (localhost (127.0.0.1) > Service) and you will always find your application there.

On SF, you hit a domain that maps to a load balancer, which maps to a set of machines, and one of those machines might have your application running on it (domain > LB IP > Nodes > Service).
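To make the difference concrete, here is a minimal sketch contrasting the two situations. The cluster name, port, and route are hypothetical placeholders; the cluster call only succeeds if the load balancer actually forwards that port to a node that is running the service.

```python
# Hedged sketch: the OneBox URL always hits the one machine running the service,
# while the Azure URL goes domain > load balancer > some node, which may or may
# not host the service and may not have the port forwarded.
import requests

LOCAL_URL = "http://localhost:8080/api/home"                                  # OneBox / local cluster
CLUSTER_URL = "http://mycluster.westeurope.cloudapp.azure.com:8080/api/home"  # Azure cluster (placeholder name)

for url in (LOCAL_URL, CLUSTER_URL):
    try:
        print(url, "->", requests.get(url, timeout=5).status_code)
    except requests.RequestException as exc:
        print(url, "-> unreachable:", exc)
```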

The first thing you need to know is:

  • Your service might not run on all nodes (machines) behind a load balancer. When the load balancer sends a request to a node and it fails, the LB does not retry on another node; it forwards requests to random nodes and in most cases keeps open connections stuck to the same machine. If you need your service running on all nodes, set the instance count to -1, and you might see it working just by opening the ports on the LB.

Each NodeType has one load balancer in front of it, so always set a placement constraint on the service to prevent it from starting on another NodeType that is not exposed externally.

Every port opened by your application is opened on a per-node basis. If you need external access, the port must be opened in the load balancer manually or via script. The ports assigned by SF to your service are meant to be managed internally, to avoid port conflicts between services running on the same node; SF does not open the ports in the LB for external access.
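For the "via script" option, a sketch along these lines could add a rule and health probe to the cluster's Azure Load Balancer using the azure-mgmt-network SDK. The subscription, resource group, load balancer name, and port are placeholders, and exact model/field names can vary between SDK versions, so treat it as a starting point rather than a drop-in script:

```python
# Hedged sketch: open TCP port 8080 on the cluster's load balancer.
# Assumes azure-identity and azure-mgmt-network are installed and that the
# first frontend IP configuration / backend pool are the ones SF uses.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import LoadBalancingRule, Probe, SubResource

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
LB_NAME = "<cluster-load-balancer>"     # placeholder
PORT = 8080                             # the port your service listens on

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
lb = client.load_balancers.get(RESOURCE_GROUP, LB_NAME)

# Health probe so the LB stops sending traffic to nodes where the service is not answering.
lb.probes.append(Probe(name="ApiProbe", protocol="Tcp", port=PORT,
                       interval_in_seconds=15, number_of_probes=2))

# Forward the external port to the same port on the backend nodes.
lb.load_balancing_rules.append(LoadBalancingRule(
    name="ApiRule",
    protocol="Tcp",
    frontend_port=PORT,
    backend_port=PORT,
    frontend_ip_configuration=SubResource(id=lb.frontend_ip_configurations[0].id),
    backend_address_pool=SubResource(id=lb.backend_address_pools[0].id),
    probe=SubResource(id=f"{lb.id}/probes/ApiProbe"),
))

client.load_balancers.begin_create_or_update(RESOURCE_GROUP, LB_NAME, lb).result()
```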

There are many approaches to expose these services; you could also try:

  • Use a reverse proxy, like the built-in one, which will proxy calls to your services spread across the cluster, no matter where they are (see the sketch after this list).
  • Use NGINX as an API gateway or reverse proxy and configure it to call only specific services; in this case you need to provide it with the service addresses, so you would need to refresh the list when services start or stop.
  • Use Azure API Management to expose APIs hosted on SF
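As an illustration of the first option: when the built-in reverse proxy is enabled on the cluster (by default on port 19081), it resolves the service location for you, so the caller only needs the application and service names. "MyApp", "MyApiService" and "api/home" below are hypothetical placeholders:

```python
# Hedged sketch: call a stateless service through Service Fabric's built-in
# reverse proxy. URI format: http://<cluster>:19081/<AppName>/<ServiceName>/<route>
import requests

REVERSE_PROXY = "http://mycluster.westeurope.cloudapp.azure.com:19081"  # placeholder cluster
url = f"{REVERSE_PROXY}/MyApp/MyApiService/api/home"

response = requests.get(url, timeout=10)
print(response.status_code, response.text)
```

Note that for external callers the reverse proxy port itself still has to be reachable through the load balancer; services running inside the cluster can reach it on localhost:19081.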

