PyCurl request hangs infinitely on perform


Problem Description

I have written a script to fetch scan results from Qualys to be run each week for the purpose of metrics gathering.

The first part of this script involves fetching a list of references for each of the scans that were run in the past week for further processing.

The problem is that, while this will work perfectly sometimes, other times the script will hang on the c.perform() line. This is manageable when running the script manually as it can just be re-run until it works. However, I am looking to run this as a scheduled task each week without any manual interaction.

Is there a foolproof way that I can detect if a hang has occurred and resend the PyCurl request until it works?

I have tried setting the c.TIMEOUT and c.CONNECTTIMEOUT options but these don't seem to be effective. Also, as no exception is thrown, simply putting it in a try-except block also won't fly.
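
As an aside, libcurl also exposes stall detection that is distinct from TIMEOUT/CONNECTTIMEOUT: the LOW_SPEED_LIMIT / LOW_SPEED_TIME pair aborts a transfer whose average speed stays below a threshold, which then raises pycurl.error. A minimal sketch, using an illustrative URL and threshold values rather than anything from the original (whether the abort fires depends on where the request stalls):

import certifi
import pycurl

c = pycurl.Curl()
c.setopt(c.URL, 'https://qualysapi.qualys.eu/api/2.0/fo/scan/')  # illustrative URL
c.setopt(c.CAINFO, certifi.where())
# Abort if the transfer averages under 1 byte/s for 30 consecutive seconds
c.setopt(c.LOW_SPEED_LIMIT, 1)
c.setopt(c.LOW_SPEED_TIME, 30)
try:
    c.perform()
except pycurl.error as e:
    # A stalled transfer surfaces as pycurl.error with the
    # "operation timed out" error code (28)
    print("Transfer aborted:", e)
finally:
    c.close()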

The function in question is as follows:

import datetime as DT
from io import BytesIO

import certifi
import pycurl

# Retrieve a list of all scans conducted in the past week
# Save this to refs_raw.txt
def getScanRefs(usr, pwd):

    print("getting scan references...")

    with open('refs_raw.txt','wb') as refsraw: 
        today = DT.date.today()
        week_ago = today - DT.timedelta(days=7)
        strtoday = str(today)
        strweek_ago = str(week_ago)

        c = pycurl.Curl()

        c.setopt(c.URL, 'https://qualysapi.qualys.eu/api/2.0/fo/scan/?action=list&launched_after_datetime=' + strweek_ago + '&launched_before_datetime=' + strtoday)
        c.setopt(c.HTTPHEADER, ['X-Requested-With: pycurl', 'Content-Type: text/xml'])
        c.setopt(c.USERPWD, usr + ':' + pwd)
        c.setopt(c.POST, 1)
        c.setopt(c.PROXY, 'companyproxy.net:8080')
        c.setopt(c.CAINFO, certifi.where())
        c.setopt(c.SSL_VERIFYPEER, 0)
        c.setopt(c.SSL_VERIFYHOST, 0)
        c.setopt(c.CONNECTTIMEOUT, 3)
        c.setopt(c.TIMEOUT, 3)

        refsbuffer = BytesIO()
        c.setopt(c.WRITEDATA, refsbuffer)
        c.perform()

        body = refsbuffer.getvalue()
        refsraw.write(body)
        c.close()

    print("Got em!")

Answer

I fixed the issue myself by launching a separate process using multiprocessing to launch the API call in a separate process, killing and restarting if it goes on for longer than 5 seconds. It's not very pretty but is cross-platform. For those looking for a solution that is more elegant but only works on *nix look into the signal library, specifically SIGALRM.

The code is below:

import datetime as DT
import multiprocessing
import time
from io import BytesIO

import certifi
import pycurl

# As this request for scan references sometimes hangs it will be run in a separate process here
# This will be terminated and relaunched if no response is received within 5 seconds
def performRequest(usr, pwd):
    today = DT.date.today()
    week_ago = today - DT.timedelta(days=7)
    strtoday = str(today)
    strweek_ago = str(week_ago)

    c = pycurl.Curl()

    c.setopt(c.URL, 'https://qualysapi.qualys.eu/api/2.0/fo/scan/?action=list&launched_after_datetime=' + strweek_ago + '&launched_before_datetime=' + strtoday)
    c.setopt(c.HTTPHEADER, ['X-Requested-With: pycurl', 'Content-Type: text/xml'])
    c.setopt(c.USERPWD, usr + ':' + pwd)
    c.setopt(c.POST, 1)
    c.setopt(c.PROXY, 'companyproxy.net:8080')
    c.setopt(c.CAINFO, certifi.where())
    c.setopt(c.SSL_VERIFYPEER, 0)
    c.setopt(c.SSL_VERIFYHOST, 0)

    refsBuffer = BytesIO()
    c.setopt(c.WRITEDATA, refsBuffer)
    c.perform()
    c.close()
    body = refsBuffer.getvalue()
    with open('refs_raw.txt', 'wb') as refsraw:
        refsraw.write(body)

# Retrieve a list of all scans conducted in the past week
# Save this to refs_raw.txt
def getScanRefs(usr, pwd):

    print("Getting scan references...") 

    # Occasionally the request hangs indefinitely. Launch it in a separate process and retry if no response arrives within 5 seconds
    success = False
    while not success:
        sendRequest = multiprocessing.Process(target=performRequest, args=(usr, pwd))
        sendRequest.start()

        for _ in range(5):
            print("...")
            time.sleep(1)

        if sendRequest.is_alive():
            print("Maximum allocated time reached... Resending request")
            sendRequest.terminate()
            sendRequest.join()  # reap the terminated process before retrying
        else:
            success = True

    print("Got em!")
