How to do a non-blocking URL fetch in Python

Question

I am writing a GUI app in Pyglet that has to display tens to hundreds of thumbnails from the Internet. Right now I am using urllib.urlretrieve to grab them, but it blocks until each download is finished and only grabs one at a time.

I would prefer to download them in parallel and have each one display as soon as it's finished, without blocking the GUI at any point. What is the best way to do this?

I don't know much about threads, but it looks like the threading module might help? Or perhaps there is some easy way I've overlooked.

Answer

You'll probably benefit from the threading or multiprocessing modules. You don't actually need to create all those Thread-based classes yourself; there is a simpler method using Pool.map:

from multiprocessing import Pool
import urllib  # Python 2; on Python 3 use urllib.request instead

def fetch_url(url):
    # Fetch the URL contents and save them anywhere you need, then
    # return something meaningful (like a filename or error code),
    # if you wish. For example, saving under the URL's last path component:
    filename = url.split('/')[-1]
    urllib.urlretrieve(url, filename)
    return filename

pool = Pool(processes=4)
result = pool.map(fetch_url, image_url_list)
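
Note that pool.map itself blocks until the entire batch has finished, and a process pool is heavier than necessary for I/O-bound downloads. Since the question asks to display each thumbnail as soon as it finishes, a thread-backed pool with a per-task callback may be a better fit. A minimal sketch, not from the original answer: multiprocessing.dummy provides the same Pool API backed by threads, and on_fetched and the example URLs below are hypothetical stand-ins (in Pyglet you would typically push results onto a queue that the event loop polls, rather than touching the GUI from a worker thread):

from multiprocessing.dummy import Pool as ThreadPool  # same Pool API, backed by threads
import urllib  # Python 2; on Python 3 use urllib.request instead

def fetch_url(url):
    # Save the image under the URL's last path component and return the filename.
    filename = url.split('/')[-1]
    urllib.urlretrieve(url, filename)
    return filename

def on_fetched(filename):
    # Hypothetical handler: runs in a pool helper thread as soon as one
    # download finishes; hand the result to the GUI thread here (e.g. via a queue).
    print('finished: ' + filename)

image_url_list = ['http://example.com/a.png', 'http://example.com/b.png']  # placeholder URLs

pool = ThreadPool(4)
for url in image_url_list:
    pool.apply_async(fetch_url, (url,), callback=on_fetched)
pool.close()
pool.join()  # a real GUI app would not block here; the callbacks fire as downloads finish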
