Python PyQt/PySide QThread limiting


Problem description


I have a problem with thread limiting. I want to do it using QThread. So SpiderThread is a QThread object that crawls some URLs. But I want to limit the working threads to X threads at once. I did this earlier with a thread pool and QRunnable, but it's buggy in PySide when the number of URLs is big. So I have this simple code:

self.threads = []
for url in self.urls:
    th = SpiderThread(url)
    th.updateresultsSignal.connect(self.update_results)
    self.threads.append(th)
    th.start()


Does anyone have a working example of limiting threads using QThread?

Recommended answer


So you want to have at most X threads running at any given time? Then how about a URL queue shared by 10 threads:

from Queue import Queue  # Python 3: from queue import Queue

self.threads = []
queue = Queue()            # a synchronized queue shared by all workers
for url in self.urls:
    queue.put(url)
for i in range(10):        # start exactly 10 worker threads
    th = SpiderThread(queue)
    th.updateresultsSignal.connect(self.update_results)
    self.threads.append(th)
    th.start()


Then in the run() method of each thread, the thread gets a URL off the queue (removing it from the queue), and when it is done processing that URL, it gets a new one. In code:

import Queue  # Python 3: import queue as Queue

class SpiderThread(QThread):
    def __init__(self, queue):
        QThread.__init__(self)
        self.queue = queue

    def run(self):
        maxWait = 0.1  # seconds; Queue.get() takes its timeout in seconds
        while True:
            try:
                url = self.queue.get(True, maxWait)
            except Queue.Empty:
                break  # no more URLs, work completed!
            process(url)
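To tie the pieces together, here is a minimal, self-contained sketch of the pattern, assuming PySide 1 on Python 2 (PySide2/PySide6 and Python 3 work the same with adjusted imports). The names updateresultsSignal and update_results come from the question; fetch_url, Crawler, and check_done are hypothetical names introduced just for this illustration:

import sys
import Queue  # Python 3: import queue as Queue
from PySide.QtCore import QCoreApplication, QObject, QThread, Signal, Slot

def fetch_url(url):
    # hypothetical stand-in for the real crawling work
    return "fetched %s" % url

class SpiderThread(QThread):
    updateresultsSignal = Signal(str)

    def __init__(self, queue, parent=None):
        QThread.__init__(self, parent)
        self.queue = queue

    def run(self):
        while True:
            try:
                # a short timeout lets the thread exit once the queue drains
                url = self.queue.get(True, 0.1)
            except Queue.Empty:
                break  # no more URLs, work completed
            self.updateresultsSignal.emit(fetch_url(url))

class Crawler(QObject):
    def __init__(self, urls, max_threads=10, parent=None):
        QObject.__init__(self, parent)
        self.queue = Queue.Queue()
        for url in urls:
            self.queue.put(url)
        self.finished_count = 0
        self.threads = []
        for _ in range(max_threads):
            th = SpiderThread(self.queue)
            th.updateresultsSignal.connect(self.update_results)
            th.finished.connect(self.check_done)
            self.threads.append(th)
            th.start()

    @Slot(str)
    def update_results(self, result):
        print(result)

    @Slot()
    def check_done(self):
        # slots run in the main thread (queued connection), so no lock needed
        self.finished_count += 1
        if self.finished_count == len(self.threads):
            QCoreApplication.quit()

if __name__ == "__main__":
    app = QCoreApplication(sys.argv)
    crawler = Crawler(["http://example.com/page%d" % i for i in range(50)])
    sys.exit(app.exec_())

Because Crawler lives in the main thread, the cross-thread signals are delivered through the event loop, so update_results and check_done never run concurrently; counting finished signals avoids racing on QThread.isFinished().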

