Python asyncio task got bad yield


Question


I am confused about how to play around with the asyncio module in Python 3.4. I have a searching API for a search engine, and I want each search request to run either in parallel or asynchronously, so that I don't have to wait for one search to finish before starting another.

Here is my high-level searching API to build some objects with the raw search results. The search engine itself is using some kind of asyncio mechanism, so I won't bother with that.

# No asyncio module used here now
class search(object):
  def __init__(self):
    ...
    self.s = some_search_engine()
    ...

  def searching(self, *args, **kwargs):
    ret = {}
    # do some raw searching according to args and kwargs and build the wrapped results
    ...
    return ret

To try to async the requests, I wrote the following test case to see how my code interacts with the asyncio module.

# Here is my testing script
@asyncio.coroutine
def handle(f, *args, **kwargs):
  r = yield from f(*args, **kwargs)
  return r

s = search()
loop = asyncio.get_event_loop()
loop.run_until_complete(handle(s.searching, arg1, arg2, ...))
loop.close()

When run with pytest, it raises a RuntimeError: Task got bad yield: {results from searching...} when it hits the line r = yield from ....

I also tried another way.

# same handle as above
def handle(..):
  ....
s = search()
loop = asyncio.get_event_loop()
tasks = [
        asyncio.async(handle(s.searching, arg11, arg12, ...)),
        asyncio.async(handle(s.searching, arg21, arg22, ...)),
        ...
        ]
loop.run_until_complete(asyncio.wait(tasks))
loop.close()

When I run this test case with pytest, it passes, but some weird exceptions from the search engine are raised, saying Future/Task exception was never retrieved.

Things I wish to ask:

  1. For my 1st try, is that the right way to use yield from, by returning the actual result from a function call?
  2. I think I need to add some sleep to my 2nd test case to wait for the tasks to finish, but how should I do that? And how can I get my function calls to return in my 2nd test case?
  3. Is creating an async handler to handle requests a good way to implement asyncio with an existing module?
  4. If the answer to question 2 is NO, does every client call to the search class need to include loop = get_event_loop() and this kind of stuff to async the requests?

Solution

The problem is that you can't just call existing synchronous code as if it was an asyncio.coroutine and get asynchronous behavior. When you call yield from searching(...), you're only going to get asynchronous behavior if searching itself is actually an asyncio.coroutine, or at least returns an asyncio.Future. Right now, searching is just a regular synchronous function, so calling yield from searching(...) is just going to throw an error, because it doesn't return a Future or coroutine.
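To see the failure in isolation, here is a minimal sketch (the function names are invented for illustration). It uses the modern async def/await syntax, which replaced @asyncio.coroutine and yield from on Python 3.5+; there the same mistake surfaces as a TypeError rather than the 3.4-era "Task got bad yield":

```python
import asyncio

def searching_sync(query):
    # A plain synchronous function, standing in for the asker's searching()
    return {"query": query, "hits": 3}

async def handle(f, *args):
    # `await` (the successor of `yield from`) needs an awaitable;
    # a plain dict is not one, so this line raises TypeError.
    return await f(*args)

try:
    asyncio.run(handle(searching_sync, "cats"))
except TypeError as e:
    print(e)  # object dict can't be used in 'await' expression
```

The error message differs across versions, but the cause is the same: the thing being yielded/awaited must be a Future or coroutine, not an ordinary return value.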

To get the behavior you want, you'll need to have an asynchronous version of searching in addition to a synchronous version (or just drop the synchronous version altogether if you don't need it). You have a few options to support both:

  1. Rewrite searching as an asyncio.coroutine so that it uses asyncio-compatible calls to do its I/O, rather than blocking I/O. This will make it work in an asyncio context, but it means you won't be able to call it directly in a synchronous context anymore. Instead, you'd need to also provide an alternative synchronous searching method that starts an asyncio event loop and calls return loop.run_until_complete(self.searching(...)). See this question for more details on that.
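    A hedged sketch of this option (the method body and return value are invented; modern async/await syntax is used, where on Python 3.4 you would write @asyncio.coroutine and yield from instead):

```python
import asyncio

class search(object):
    async def searching_async(self, query):
        # Real asyncio-compatible I/O goes here; sleep stands in for it.
        await asyncio.sleep(0.01)
        return {"query": query, "hits": 3}

    def searching(self, query):
        # Synchronous facade: runs an event loop just for this one call.
        loop = asyncio.new_event_loop()
        try:
            return loop.run_until_complete(self.searching_async(query))
        finally:
            loop.close()

s = search()
print(s.searching("cats"))  # {'query': 'cats', 'hits': 3}
```

    Async callers await s.searching_async(...) directly; synchronous callers keep using s.searching(...) and simply block.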
  2. Keep your synchronous implementation of searching, and provide an alternative asynchronous API that uses BaseEventLoop.run_in_executor to run your searching method in a background thread:

    class search(object):
      ...
      self.s = some_search_engine()
      ...
      def searching(self, *args, **kwargs):
        ret = {}
        ...
        return ret

      @asyncio.coroutine
      def searching_async(self, *args, **kwargs):
        # assuming searching doesn't take loop as an arg
        loop = kwargs.pop('loop', asyncio.get_event_loop())
        # Passing None tells asyncio to use the default ThreadPoolExecutor
        r = yield from loop.run_in_executor(None, self.searching, *args)
        return r

    Testing script:

    s = search()
    loop = asyncio.get_event_loop()
    loop.run_until_complete(s.searching_async(arg1, arg2, ...))
    loop.close()
    

    This way, you can keep your synchronous code as is, and at least provide methods that can be used in asyncio code without blocking the event loop. It's not as clean a solution as it would be if you actually used asynchronous I/O in your code, but it's better than nothing.
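    For completeness, here is a runnable, self-contained variant of this pattern in modern (3.7+) syntax; the time.sleep call and query names are illustrative stand-ins for real blocking search I/O:

```python
import asyncio
import time

class search(object):
    def searching(self, query):
        time.sleep(0.05)  # blocking work, runs in a worker thread below
        return {"query": query}

    async def searching_async(self, query):
        loop = asyncio.get_running_loop()
        # None = use asyncio's default ThreadPoolExecutor
        return await loop.run_in_executor(None, self.searching, query)

async def main():
    s = search()
    # Both blocking searches overlap instead of running back to back.
    return await asyncio.gather(s.searching_async("cats"),
                                s.searching_async("dogs"))

print(asyncio.run(main()))  # [{'query': 'cats'}, {'query': 'dogs'}]
```

    asyncio.gather also answers the "Future/Task exception was never retrieved" warning from the second test case: it collects each task's result (or exception) for you, so nothing goes unretrieved.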

  3. Provide two completely separate versions of searching, one that uses blocking I/O, and one that's asyncio-compatible. This gives ideal implementations for both contexts, but requires twice the work.
