How to load-balance the workload of a service in .NET


Problem description

I am thinking of building an application using a Service Oriented Architecture (SOA).

This architecture is not as complex and messy as a microservices solution (I think), but I am facing similar design problems. Imagine I have services of type ServiceA that send work to services of type ServiceB. I guess that if I use a queue, load balancing will not be a problem (since consumers will take what they can handle from the queue). But queues tend to introduce awkward asynchrony into the code that takes extra effort to work around. So I was more inclined to use HTTP calls between services, taking advantage of C#'s efficient async/await feature. But this creates problems with sharing the workload and with detecting services that are saturated or dead.

So my questions are:

  1. Is there a queue that supports some sort of async/await feature, so that it works like an HTTP call that returns its result where you need it, rather than in a callback where you cannot continue your original execution flow?
  2. How do I load-balance the traffic between services and detect nodes that are not suitable for new assignments when using HTTP? I could probably design something myself from scratch, but there ought to be a standard way, library, or framework to do this by now. The best I found online was this, but it is built for microservices, so I am not sure whether I can use it without problems or overkill.

Update: I have now discovered this question, which also asks for awaitable queues: awaitable Task based queue ...and I have also discovered Kubernetes, Marathon, and the like.
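To make concrete what an "awaitable queue" could look like, here is a minimal in-process sketch built on TaskCompletionSource. All type and member names here are illustrative, and this only works within a single process; a real cross-service setup would need a durable message transport underneath the same pattern.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// A minimal in-process "awaitable queue": producers enqueue work items,
// and each item carries a TaskCompletionSource so the producer can await
// the result inline instead of handling it in a detached callback.
public class AwaitableWorkQueue<TRequest, TResponse>
{
    private readonly BlockingCollection<(TRequest Request, TaskCompletionSource<TResponse> Tcs)> _items
        = new BlockingCollection<(TRequest, TaskCompletionSource<TResponse>)>();

    // Producer side: enqueue a request and await the response where you need it.
    public Task<TResponse> RequestAsync(TRequest request)
    {
        var tcs = new TaskCompletionSource<TResponse>(TaskCreationOptions.RunContinuationsAsynchronously);
        _items.Add((request, tcs));
        return tcs.Task;
    }

    // Consumer side: pull items and complete them. Starting several consumers
    // gives you competing consumers on the same queue.
    public void StartConsumer(Func<TRequest, TResponse> handler)
    {
        Task.Run(() =>
        {
            foreach (var (request, tcs) in _items.GetConsumingEnumerable())
            {
                try { tcs.SetResult(handler(request)); }
                catch (Exception ex) { tcs.SetException(ex); }
            }
        });
    }
}

public static class Demo
{
    public static async Task Main()
    {
        var queue = new AwaitableWorkQueue<int, int>();
        queue.StartConsumer(x => x * 2);           // plays the role of a ServiceB worker
        var result = await queue.RequestAsync(21); // ServiceA awaits the result inline
        Console.WriteLine(result);                 // prints 42
    }
}
```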

Recommended answer

Regarding your first question: NServiceBus, a commercial framework for .NET that abstracts message transports and adds many features on top of them, has the exact feature you are looking for. It is called "callbacks", and the usage is as follows.

Assuming you have a Message to send to a backend service and a ResponseMessage that you expect back, in ServiceA you would do:

// Send the request over the queue and await the reply inline.
var message = new Message();
var response = await endpoint.Request<ResponseMessage>(message);
log.Info($"Callback received with response:{response.Result}");

Here, endpoint is an NServiceBus artifact that allows you to send and receive messages.

What this simple syntax does is put the Message in a queue and wait (asynchronously) until the message has been handled by a backend service and replied to. The response arrives as a message of type ResponseMessage on a queue.

In ServiceB, you would do:

public class Handler : IHandleMessages<Message>
{
  public Task Handle(Message message, IMessageHandlerContext context)
  {
    var responseMessage = new ResponseMessage
    {
        Result = "TheResult"
    };
    return context.Reply(responseMessage);
  }
}

This allows you to have multiple ServiceA nodes sending messages to multiple ServiceB nodes (competing consumers on a single queue). NServiceBus takes care of routing each response back to the ServiceA node that sent the corresponding request.
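Conceptually, a callbacks feature correlates each reply with its pending request. This is a simplified, hypothetical sketch of that mechanism (not NServiceBus's actual implementation): each outgoing request registers a TaskCompletionSource under a correlation id, and an incoming reply carrying that id completes the matching awaiter.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Simplified view of request/reply correlation over a message transport.
public class CallbackCorrelator<TResponse>
{
    private readonly ConcurrentDictionary<Guid, TaskCompletionSource<TResponse>> _pending
        = new ConcurrentDictionary<Guid, TaskCompletionSource<TResponse>>();

    // Sender side: register a pending reply before putting the message
    // (tagged with correlationId) on the transport; await the returned task.
    public Task<TResponse> Register(Guid correlationId)
    {
        var tcs = new TaskCompletionSource<TResponse>(TaskCreationOptions.RunContinuationsAsynchronously);
        _pending[correlationId] = tcs;
        return tcs.Task;
    }

    // Called when a reply message arrives from the reply queue.
    public void OnReply(Guid correlationId, TResponse response)
    {
        if (_pending.TryRemove(correlationId, out var tcs))
            tcs.SetResult(response);
        // Replies with unknown ids are dropped: if the original node has
        // restarted, its in-memory pending map is gone.
    }
}
```

The in-memory pending map is exactly why this pattern is fragile across node restarts, which is the caveat described below.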

Note that this has the disadvantage that if a ServiceA node goes down while waiting for a response, it will never receive that response. For this reason, the pattern is not recommended for most scenarios.

Regarding your second question, I would say a load balancer would do the job. For more complex scenarios, you could look at Service Fabric.
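As a rough illustration of what client-side load balancing involves, here is a minimal round-robin sketch with a crude failure quarantine. The URLs and cool-down policy are made up for the example; a real balancer (hardware, a reverse proxy, or platform-level balancing) would add proper health probes, retries, and backoff.

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Minimal client-side round-robin over a set of ServiceB nodes.
// A node whose request fails is skipped for a cool-down period,
// a crude form of detecting nodes unsuitable for new assignments.
public class RoundRobinBalancer
{
    private readonly string[] _nodes;
    private readonly DateTime[] _quarantinedUntil;
    private readonly HttpClient _http = new HttpClient();
    private int _next = -1;

    public RoundRobinBalancer(params string[] baseUrls)
    {
        _nodes = baseUrls;
        _quarantinedUntil = new DateTime[baseUrls.Length];
    }

    public async Task<string> GetAsync(string path)
    {
        for (int attempts = 0; attempts < _nodes.Length; attempts++)
        {
            // Pick the next node in rotation, skipping quarantined ones.
            int i = (int)((uint)Interlocked.Increment(ref _next) % _nodes.Length);
            if (DateTime.UtcNow < _quarantinedUntil[i]) continue;
            try
            {
                return await _http.GetStringAsync(_nodes[i] + path);
            }
            catch (HttpRequestException)
            {
                // Mark the node unhealthy for 30 seconds and try the next one.
                _quarantinedUntil[i] = DateTime.UtcNow.AddSeconds(30);
            }
        }
        throw new InvalidOperationException("No healthy ServiceB node available.");
    }
}
```

Usage (hypothetical node addresses): `var balancer = new RoundRobinBalancer("http://serviceb-1:5000", "http://serviceb-2:5000"); var body = await balancer.GetAsync("/work");`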

