gRPC Python thread_pool vs max_concurrent_rpcs


Question


When launching a Python grpc.server, what's the difference between maximum_concurrent_rpcs and the max_workers used in the thread pool? If I want maximum_concurrent_rpcs=1, should I still provide more than one thread to the thread pool?

In other words, should I match maximum_concurrent_rpcs to my max_workers, or should I provide more workers than max concurrent RPCs?

server = grpc.server(
    thread_pool=futures.ThreadPoolExecutor(max_workers=1),
    maximum_concurrent_rpcs=1,
)

Solution

If your server is already processing maximum_concurrent_rpcs requests concurrently and yet another request is received, that request will be rejected immediately.

If the ThreadPoolExecutor's max_workers is less than maximum_concurrent_rpcs, then once all the threads are busy processing requests, the next request will be queued and will be processed when a thread finishes its processing.
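
This queueing behaviour comes from concurrent.futures.ThreadPoolExecutor itself, which grpc.server uses to run handlers. A minimal sketch (no gRPC involved, with a short sleep standing in for an RPC handler) showing that a pool with max_workers=1 queues a second task until the first finishes:

```python
import time
from concurrent import futures

def handler(name, log):
    # Record when the task actually starts running on a worker thread.
    log.append(f"{name} started")
    time.sleep(0.2)  # stand-in for the RPC handler's work
    log.append(f"{name} finished")

log = []
with futures.ThreadPoolExecutor(max_workers=1) as pool:
    # Both tasks are accepted by the pool, but with only one worker
    # the second sits in the pool's queue until the first completes.
    f1 = pool.submit(handler, "task-1", log)
    f2 = pool.submit(handler, "task-2", log)
    f1.result()
    f2.result()

print(log)
# ['task-1 started', 'task-1 finished', 'task-2 started', 'task-2 finished']
```

Because there is a single worker, task-2 only starts after task-1 has finished, which is exactly what a gRPC request experiences when it is accepted (within maximum_concurrent_rpcs) but no thread is free.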

I had the same question. To answer this, I debugged a bit what happens with maximum_concurrent_rpcs. The debugging went to py36/lib/python3.6/site-packages/grpc/_server.py in my virtualenv. Search for concurrency_exceeded. The bottom line is that if the server is already processing maximum_concurrent_rpcs and another request arrives, it will be rejected:

# ...
elif concurrency_exceeded:
    return _reject_rpc(rpc_event, cygrpc.StatusCode.resource_exhausted,
                        b'Concurrent RPC limit exceeded!'), None
# ...

I tried it with the gRPC Python Quickstart example:

In the greeter_server.py I modified the SayHello() method:

import time  # needed for the artificial delay below

# ...
def SayHello(self, request, context):
    print("Request arrived, sleeping a bit...")
    time.sleep(10)  # hold the worker thread for 10 seconds
    return helloworld_pb2.HelloReply(message='Hello, %s!' % request.name)
# ...

and the serve() method:

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10), maximum_concurrent_rpcs=2)
    # ...

Then I opened 3 terminals and executed the client in them manually (as fast as I could), using python greeter_client.py:

As expected, for the first 2 clients, processing of the request started immediately (can be seen in the server's output), because there were plenty of threads available, but the 3rd client got rejected immediately (as expected) with StatusCode.RESOURCE_EXHAUSTED, Concurrent RPC limit exceeded!.

Now to test what happens when there are not enough threads given to ThreadPoolExecutor I modified the max_workers to be 1:

server = grpc.server(futures.ThreadPoolExecutor(max_workers=1), maximum_concurrent_rpcs=2)

I ran my 3 clients again at roughly the same time as previously.

The result was that the first one got served immediately. The second one had to wait 10 seconds (while the first one was being served) and was then served. The third one was rejected immediately.
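
The two experiments can be summed up as a simple rule: a request is rejected once maximum_concurrent_rpcs requests are already in flight, queued once all max_workers threads are busy, and served immediately otherwise. A hypothetical helper (not part of gRPC, just a model of the behaviour observed above) that predicts the outcome for N requests arriving at once:

```python
def classify_requests(n_requests, max_workers, maximum_concurrent_rpcs):
    """Predict what happens to each of n_requests arriving simultaneously.

    Requests beyond maximum_concurrent_rpcs are rejected outright
    (RESOURCE_EXHAUSTED); accepted requests beyond max_workers wait in
    the thread pool's queue; the rest start immediately.
    """
    outcomes = []
    for i in range(n_requests):  # i = number of requests already accepted
        if i >= maximum_concurrent_rpcs:
            outcomes.append("rejected")
        elif i >= max_workers:
            outcomes.append("queued")
        else:
            outcomes.append("served immediately")
    return outcomes

# First experiment: 10 threads, limit of 2 concurrent RPCs.
print(classify_requests(3, max_workers=10, maximum_concurrent_rpcs=2))
# ['served immediately', 'served immediately', 'rejected']

# Second experiment: 1 thread, limit of 2 concurrent RPCs.
print(classify_requests(3, max_workers=1, maximum_concurrent_rpcs=2))
# ['served immediately', 'queued', 'rejected']
```

Note that if maximum_concurrent_rpcs is left at its default (None), no request is ever rejected and everything beyond max_workers simply queues.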
