Asynchronous model in gRPC C++

Question

My team is designing a scalable solution with a micro-services architecture, and we are planning to use gRPC as the transport communication between layers. We've decided to use the async gRPC model. The design that the example (greeter_async_server.cc) provides doesn't seem viable if I scale the number of RPC methods, because then I'd have to create a new class for every RPC method and create their objects in HandleRpcs() like this. Pastebin (short example code).

void HandleRpcs() {
    new CallDataForRPC1(&service_, cq_.get());
    new CallDataForRPC2(&service_, cq_.get());
    new CallDataForRPC3(&service_, cq_.get());
    // and so on...
}

It'll be hard-coded, and all flexibility will be lost.

I have around 300-400 RPC methods to implement, and having 300-400 classes will be cumbersome and inefficient when I have to handle more than 100K RPC requests/sec; this is a very bad design. I can't bear the overhead of creating objects this way on every single request. Can somebody kindly provide me a workaround for this? Can't async gRPC C++ be as simple as its sync counterpart?

Edit: To make the situation clearer, and for those who might be struggling to grasp the flow of this async example, I'm writing down what I've understood so far; please correct me if I'm wrong somewhere.

In async gRPC, we have to bind a unique tag to the completion queue each time, so that when we poll, the server can give it back to us when the particular RPC is hit by the client, and we can infer the type of the call from the returned unique tag.

service_->RequestRPC2(&ctx_, &request_, &responder_, cq_, cq_, this); Here we're using the address of the current object as the unique tag. This is like registering our RPC call on the completion queue. Then we poll in HandleRpcs() to see whether the client has hit the RPC; if so, cq_->Next(&tag, &ok) will fill in the tag. The polling code snippet:

while (true) {
    GPR_ASSERT(cq_->Next(&tag, &ok));
    GPR_ASSERT(ok);
    static_cast<CallData*>(tag)->Proceed();
}

Since the unique tag we registered on the queue was the address of the CallData object, we're able to call Proceed(). This was fine for one RPC, with its logic inside Proceed(). But with more RPCs we'd have all of them inside CallData, and then on polling we'd be calling the one and only Proceed(), which would contain the logic for (say) RPC1 (Postgres calls), RPC2 (MongoDB calls), and so on. This is like writing my whole program inside one function. So, to avoid this, I used a GenericCallData class with a virtual void Proceed() and made derived classes out of it, one class per RPC, each with its own logic inside its own Proceed(). This is a working solution, but I want to avoid writing many classes.
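
Roughly, that base-class approach looks like the minimal sketch below, following the CREATE/PROCESS/FINISH state machine of greeter_async_server.cc; MyService, RPC1Request and RPC1Reply are placeholders standing in for the real generated types:

class GenericCallData
{
public:
    virtual ~GenericCallData() = default;
    virtual void Proceed() = 0;  // Called when this tag pops out of cq_->Next().
};

class CallDataForRPC1 : public GenericCallData
{
public:
    CallDataForRPC1(MyService::AsyncService* service, grpc::ServerCompletionQueue* cq)
        : service_(service), cq_(cq), responder_(&ctx_), status_(CREATE)
    {
        Proceed();
    }

    void Proceed() override
    {
        if (status_ == CREATE) {
            status_ = PROCESS;
            // Register for RPC1, using 'this' as the unique tag.
            service_->RequestRPC1(&ctx_, &request_, &responder_, cq_, cq_, this);
        } else if (status_ == PROCESS) {
            // Spawn a fresh instance to serve the next incoming RPC1.
            new CallDataForRPC1(service_, cq_);
            // ... RPC1-specific logic (e.g. the Postgres calls) fills reply_ ...
            status_ = FINISH;
            responder_.Finish(reply_, grpc::Status::OK, this);
        } else {
            delete this;  // FINISH: the response has been sent.
        }
    }

private:
    MyService::AsyncService* service_;
    grpc::ServerCompletionQueue* cq_;
    grpc::ServerContext ctx_;
    RPC1Request request_;
    RPC1Reply reply_;
    grpc::ServerAsyncResponseWriter<RPC1Reply> responder_;
    enum CallStatus { CREATE, PROCESS, FINISH };
    CallStatus status_;
};

The poll loop then casts the tag to GenericCallData* instead of a concrete class and dispatches through the virtual Proceed().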

Another solution I tried was keeping all the RPC function logic out of Proceed(), in functions of their own, and maintaining a global std::map<long, std::function</*some params*/>>. So whenever I register an RPC with a unique tag on the queue, I store its corresponding logic function (which I hard-code into the statement, binding all the required parameters), with the unique tag as the key. On polling, when I get the tag back, I do a lookup in the map for this key and call the corresponding saved function. Now, there's one more hurdle; I have to do this inside the function logic:

// pseudo code
void function(reply, responder, context, service)
{
    // Register this RPC with another unique tag, so new incoming requests
    // of the same type can be served on the completion queue.
    service->RequestRPC1(/*params*/, new_unique_id);

    // Save this new_unique_id and the current function into the map,
    // so that when the tag is returned we can do the lookup.
    map.emplace(new_unique_id, function);

    // Now you're free to do your logic.
    // do your logic
}
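
The matching poll loop for this map-based approach would then look something like the sketch below; the handlers map, the tag scheme, and the assumption that all parameters were pre-bound at registration time are illustrative, not fixed parts of the design:

std::map<long, std::function<void()>> handlers;  // tag -> pre-bound RPC logic

void PollLoop(grpc::ServerCompletionQueue* cq)
{
    void* tag;
    bool ok;
    while (cq->Next(&tag, &ok)) {
        GPR_ASSERT(ok);
        long key = reinterpret_cast<long>(tag);
        auto handler = handlers.at(key);  // Look up the saved logic function.
        handlers.erase(key);              // Each tag is one-shot.
        handler();                        // Runs the logic and re-registers itself.
    }
}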

You can see that the code has now spread into another module, and it's per-RPC based. I hope that clears up the situation. I was wondering whether somebody has implemented this type of server in an easier way.

Answer

This post is pretty old by now, but I have not seen any answer or example regarding this, so I will show how I solved it for any other readers. I have around 30 RPC calls and was looking for a way to reduce the footprint when adding and removing RPC calls. It took me some iterations to figure out a good way to solve it.

So my interface for getting RPC requests from my (g)RPC library is a callback interface that the recipient needs to implement. The interface looks like this:

class IRpcRequestHandler
{
public:
    virtual ~IRpcRequestHandler() = default;
    virtual void onZigbeeOpenNetworkRequest(const smarthome::ZigbeeOpenNetworkRequest& req,
                                            smarthome::Response& res) = 0;
    virtual void onZigbeeTouchlinkDeviceRequest(const smarthome::ZigbeeTouchlinkDeviceRequest& req,
                                                smarthome::Response& res) = 0;
    ...
};

And some code for setting up/registering each RPC method after the gRPC server has started:

void ready() 
{
    SETUP_SMARTHOME_CALL("ZigbeeOpenNetwork", // Alias that is used for debug messages
                         smarthome::Command::AsyncService::RequestZigbeeOpenNetwork,  // Generated gRPC service method for async.
                         smarthome::ZigbeeOpenNetworkRequest, // Generated gRPC service request message
                         smarthome::Response, // Generated gRPC service response message
                         IRpcRequestHandler::onZigbeeOpenNetworkRequest); // The callback method to call when request has arrived.

    SETUP_SMARTHOME_CALL("ZigbeeTouchlinkDevice",
                         smarthome::Command::AsyncService::RequestZigbeeTouchlinkDevice,
                         smarthome::ZigbeeTouchlinkDeviceRequest,
                         smarthome::Response,
                         IRpcRequestHandler::onZigbeeTouchlinkDeviceRequest);
    ...
}

This is all that you need to care about when adding and removing RPC methods.

SETUP_SMARTHOME_CALL is a home-cooked macro which looks like this:

#define SETUP_SMARTHOME_CALL(ALIAS, SERVICE, REQ, RES, CALLBACK_FUNC) \
  new ServerCallData<REQ, RES>(                                       \
      ALIAS,                                                          \
      std::bind(&SERVICE,                                             \
                &mCommandService,                                     \
                std::placeholders::_1,                                \
                std::placeholders::_2,                                \
                std::placeholders::_3,                                \
                std::placeholders::_4,                                \
                std::placeholders::_5,                                \
                std::placeholders::_6),                               \
      mCompletionQueue.get(),                                         \
      std::bind(&CALLBACK_FUNC, requestHandler, std::placeholders::_1, std::placeholders::_2))

I think the ServerCallData class looks like the one from gRPC's examples, with a few modifications. ServerCallData is derived from a non-template class with an abstract function void proceed(bool ok) for the CompletionQueue::Next() handling. When a ServerCallData is created, it calls the SERVICE method to register itself on the CompletionQueue, and on every first proceed(ok) call it clones itself, which registers another instance. I can post some sample code for that as well if someone is interested.
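
Not my exact code, but here is a rough sketch of how the non-template base class and ServerCallData could fit the macro above; the interface name ICallData and the member layout are illustrative assumptions:

#include <functional>
#include <string>
#include <grpcpp/grpcpp.h>

class ICallData
{
public:
    virtual ~ICallData() = default;
    virtual void proceed(bool ok) = 0;  // Driven by CompletionQueue::Next().
};

template <typename Req, typename Res>
class ServerCallData : public ICallData
{
public:
    // Matches the bind in SETUP_SMARTHOME_CALL: the generated Request* method
    // takes (context, request, responder, cq, cq, tag).
    using RegisterFunc = std::function<void(grpc::ServerContext*, Req*,
                                            grpc::ServerAsyncResponseWriter<Res>*,
                                            grpc::CompletionQueue*,
                                            grpc::ServerCompletionQueue*, void*)>;
    using HandlerFunc = std::function<void(const Req&, Res&)>;

    ServerCallData(std::string alias, RegisterFunc registerFunc,
                   grpc::ServerCompletionQueue* cq, HandlerFunc handler)
        : alias_(std::move(alias)), registerFunc_(std::move(registerFunc)),
          cq_(cq), handler_(std::move(handler)), responder_(&ctx_)
    {
        registerFunc_(&ctx_, &request_, &responder_, cq_, cq_, this);
    }

    void proceed(bool ok) override
    {
        if (!ok) { delete this; return; }
        if (!started_) {
            started_ = true;
            // Clone: a fresh instance registers itself for the next request.
            new ServerCallData<Req, Res>(alias_, registerFunc_, cq_, handler_);
            handler_(request_, response_);  // Invoke the callback interface.
            responder_.Finish(response_, grpc::Status::OK, this);
        } else {
            delete this;  // Response sent; this call is done.
        }
    }

private:
    std::string alias_;
    RegisterFunc registerFunc_;
    grpc::ServerCompletionQueue* cq_;
    HandlerFunc handler_;
    grpc::ServerContext ctx_;
    Req request_;
    Res response_;
    grpc::ServerAsyncResponseWriter<Res> responder_;
    bool started_ = false;
};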
