Why is time rising for more than one request to an asyncio server in Python?


Problem description

I wrote a server in Python using sockets. It should receive requests at the same time (in parallel) and respond to them in parallel. When I send more than one request to it, the response time increases more than I expected.

Server:

import datetime
import asyncio, timeit
import json, traceback
from asyncio import get_event_loop

requestslist = []
loop = asyncio.get_event_loop()

async def handleData(reader, writer):
    message = ''
    clientip = ''
    data = bytearray()
    print("Async HandleData", datetime.datetime.utcnow())


    try:
        start = timeit.default_timer()
        data = await reader.readuntil(separator=b'\r\n\r\n')
        msg = data.decode(encoding='utf-8')
        len_csharp_message = int(msg[msg.find('content-length:') + 15:msg.find(';dmnid'):])
        data = await reader.read(len_csharp_message)
        message = data.decode(encoding='utf-8')

        # peer address via the public API instead of the reader's private attributes
        clientip = writer.get_extra_info('peername')[0]
        clientport = writer.get_extra_info('peername')[1]
        print('\nData Received from:', clientip, ':', clientport)
        if (clientip, message) in requestslist:
            writer.close()

        else:
            requestslist.append((clientip, message))

            # adapter_result = parallel_members(message_dict, service, dmnid)
            adapter_result = '''[{"name": {"data": "data", "type": "str"}}]'''
            body = json.dumps(adapter_result, ensure_ascii=False)
            print(body)

            contentlen = len(bytes(str(body), 'utf-8'))
            header = bytes('Content-Length:{}'.format(contentlen), 'utf-8')
            # body is a str, so it must be encoded before concatenating with bytes
            result = header + bytes('\r\n\r\n{', 'utf-8') + bytes(body, 'utf-8') + bytes('}', 'utf-8')
            stop = timeit.default_timer()
            print('total_time:', stop - start)
            writer.write(result)
            writer.close()
        writer.close()
        # del writer
    except Exception as ex:
        writer.close()
        print(traceback.format_exc())
    finally:
        try:
            requestslist.remove((clientip, message))
        except:
            pass


def main(*args):
    print("ready")
    loop = get_event_loop()
    coro = asyncio.start_server(handleData, 'localhost', 4040, loop=loop, limit=204800000)
    srv = loop.run_until_complete(coro)
    loop.run_forever()


if __name__ == '__main__':
    main()

When I send a single request, it takes 0.016 sec, but for more requests this time increases.

CPU info: Intel Xeon X5650

Client:

import multiprocessing, subprocess
import time
from joblib import Parallel, delayed


def worker(file):
    subprocess.Popen(file, shell=False)


def call_parallel (index):
    print('begin ' , index)
    # pass the callable and its arguments, not the result of calling it
    p = multiprocessing.Process(target=worker, args=(index,))
    p.start()
    print('end ' , index)

path = r'python "/test-Client.py"'     # ## client address
files = [path, path, path, path, path, path, path, path, path, path, path, path]
Parallel(n_jobs=-1, backend="threading")(delayed(call_parallel)(i) for index,i  in  enumerate(files))

For this client, which sends 12 requests simultaneously, the total time per request is 0.15 sec.

I expect the time to stay the same for any number of requests.

Answer

What is a request

A single request (roughly speaking) consists of the following steps:

  1. Write the data to the network
  2. Waste time waiting for the answer
  3. Read the answer from the network

Steps №1/№3 are processed by your CPU very fast. Step №2 is a journey of bytes from your PC to some server (in another city, for example) and back over the wires: it usually takes much more time.

Asynchronous requests are not really "parallel" in terms of processing: it's still your single CPU core that can only process one thing at a time. But running multiple async requests allows you to use step №2 of one request to do steps №1/№3 of another request instead of just wasting that waiting time. That's why multiple async requests usually finish earlier than the same number of synchronous ones.
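A minimal sketch (not from the original answer) illustrating this, with asyncio.sleep standing in for the waiting of step №2; the function names and the 0.5 s delay are made up for illustration, and Python 3.7+ is assumed for asyncio.run:

import asyncio, timeit

async def fake_request(i):
    # stand-in for step №2: waiting costs time but no CPU
    await asyncio.sleep(0.5)
    return i

async def run_sequential(n):
    for i in range(n):
        await fake_request(i)

async def run_concurrent(n):
    await asyncio.gather(*(fake_request(i) for i in range(n)))

start = timeit.default_timer()
asyncio.run(run_sequential(10))      # ~5 s: the waits add up
print('sequential:', timeit.default_timer() - start)

start = timeit.default_timer()
asyncio.run(run_concurrent(10))      # ~0.5 s: the waits overlap
print('concurrent:', timeit.default_timer() - start)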

But when you run things locally, step №2 doesn't take much time: your PC and the server are the same machine, and the bytes don't go on a network journey. There is simply no waiting time in step №2 that could be used to start a new request; only your single CPU core works, processing one thing at a time.

You should test your requests against a server that answers with some delay to see the results you expect.
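A hedged sketch of such a test (not part of the original answer): the handler sleeps before answering, so step №2 actually costs time, and 12 concurrent requests finish in roughly one delay instead of twelve. Port 4041, the 0.5 s delay, and all names here are made up for illustration, and Python 3.7+ is assumed for asyncio.run:

import asyncio, timeit

DELAY = 0.5   # artificial per-request delay simulating the network journey (assumption)

async def delayed_handler(reader, writer):
    data = await reader.readline()
    await asyncio.sleep(DELAY)       # pretend the answer takes a while to arrive
    writer.write(data)
    await writer.drain()
    writer.close()

async def one_request(i):
    reader, writer = await asyncio.open_connection('localhost', 4041)
    writer.write(b'request %d\r\n' % i)
    await writer.drain()
    await reader.readline()
    writer.close()

async def main(n=12):
    server = await asyncio.start_server(delayed_handler, 'localhost', 4041)
    start = timeit.default_timer()
    await asyncio.gather(*(one_request(i) for i in range(n)))
    print('total for', n, 'concurrent requests:', timeit.default_timer() - start)
    server.close()
    await server.wait_closed()

asyncio.run(main())   # total stays close to DELAY, not n * DELAY

With the sleep removed, the same 12 requests are again limited by the single CPU core and the per-request time grows, which matches what you observed locally.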
