How can I implement an inactivity timeout on an http download


Question

I've been reading up on the various timeouts that are available on an http request and they all seem to act as hard deadlines on the total time of a request.

I am running an http download and I don't want to impose a hard timeout past the initial handshake, as I don't know anything about my users' connections and don't want to time out slow ones. What I would ideally like is to time out after a period of inactivity (when nothing has been downloaded for x seconds). Is there any way to do this as a built-in, or do I have to interrupt based on stat-ing the file?

The working code is a little hard to isolate, but I think these are the relevant parts. There is another loop that stats the file to provide progress, but I will need to refactor a bit to use this to interrupt the download:

// HttpsClientOnNetInterface returns an http client using the named network interface (via proxy if passed)
func HttpsClientOnNetInterface(interfaceIP []byte, httpsProxy *Proxy) (*http.Client, error) {

    log.Printf("Got IP addr : %s\n", string(interfaceIP))
    // create address for the dialer
    tcpAddr := &net.TCPAddr{
        IP: interfaceIP,
    }

    // create the dialer & transport
    netDialer := net.Dialer{
        LocalAddr: tcpAddr,
    }

    var proxyURL *url.URL
    var err error

    if httpsProxy != nil {
        proxyURL, err = url.Parse(httpsProxy.String())
        if err != nil {
            return nil, fmt.Errorf("Error parsing proxy connection string: %s", err)
        }
    }

    httpTransport := &http.Transport{
        Dial:  netDialer.Dial,
        Proxy: http.ProxyURL(proxyURL),
    }

    httpClient := &http.Client{
        Transport: httpTransport,
    }

    return httpClient, nil
}

/*
StartDownloadWithProgress will initiate a download from a remote url to a local file,
providing download progress information
*/
func StartDownloadWithProgress(interfaceIP []byte, httpsProxy *Proxy, srcURL, dstFilepath string) (*Download, error) {

    // start an http client on the selected net interface
    httpClient, err := HttpsClientOnNetInterface(interfaceIP, httpsProxy)
    if err != nil {
        return nil, err
    }

    // grab the header
    headResp, err := httpClient.Head(srcURL)
    if err != nil {
        log.Printf("error on head request (download size): %s", err)
        return nil, err
    }

    // pull out total size
    size, err := strconv.Atoi(headResp.Header.Get("Content-Length"))
    if err != nil {
        headResp.Body.Close()
        return nil, err
    }
    headResp.Body.Close()

    errChan := make(chan error)
    doneChan := make(chan struct{})

    // spawn the download process
    go func(httpClient *http.Client, srcURL, dstFilepath string, errChan chan error, doneChan chan struct{}) {
        resp, err := httpClient.Get(srcURL)
        if err != nil {
            errChan <- err
            return
        }
        defer resp.Body.Close()

        // create the file
        outFile, err := os.Create(dstFilepath)
        if err != nil {
            errChan <- err
            return
        }
        defer outFile.Close()

        log.Println("starting copy")
        // copy to file as the response arrives
        _, err = io.Copy(outFile, resp.Body)

        // return err
        if err != nil {
            log.Printf("\n Download Copy Error: %s \n", err.Error())
            errChan <- err
            return
        }

        doneChan <- struct{}{}

        return
    }(httpClient, srcURL, dstFilepath, errChan, doneChan)

    // return Download
    return (&Download{
        updateFrequency: time.Microsecond * 500,
        total:           size,
        errRecieve:      errChan,
        doneRecieve:     doneChan,
        filepath:        dstFilepath,
    }).Start(), nil
}

Update: Thanks to everyone who had input into this.

I've accepted JimB's answer as it seems like a perfectly viable approach that is more generalised than the solution I chose (and probably more useful to anyone who finds their way here).

In my case I already had a loop monitoring the file size, so I threw a named error when this did not change for x seconds. It was much easier for me to pick up on the named error through my existing error handling and retry the download from there.

I probably crash at least one goroutine in the background with my approach (I may fix this later with some signalling), but as this is a short-running application (it's an installer), this is acceptable (at least tolerable).

Answer

Doing the copy manually is not particularly difficult. If you're unsure how to implement it properly, it's only a couple dozen lines from the io package to copy and modify to suit your needs (I only removed the ErrShortWrite clause, because we can assume that the standard library io.Writer implementations are correct).

Here is a copy work-alike function that also takes a cancellation context and an idle timeout parameter. Every time there is a successful read, it signals the cancellation goroutine to continue and to start a new timer.

func idleTimeoutCopy(dst io.Writer, src io.Reader, timeout time.Duration,
    ctx context.Context, cancel context.CancelFunc) (written int64, err error) { 
    read := make(chan int)
    go func() {
        for {
            select {
            case <-ctx.Done():
                return
            case <-time.After(timeout):
                cancel()
            case <-read:
            }
        }
    }()

    buf := make([]byte, 32*1024)
    for {
        nr, er := src.Read(buf)
        if nr > 0 {
            read <- nr
            nw, ew := dst.Write(buf[0:nr])
            written += int64(nw)
            if ew != nil {
                err = ew
                break
            }
        }
        if er != nil {
            if er != io.EOF {
                err = er
            }
            break
        }
    }
    return written, err
}

While I used time.After for brevity, it's more efficient to reuse a Timer. This means taking care to use the correct reset pattern, as the return value of the Reset function is broken:

    t := time.NewTimer(timeout)
    for {
        select {
        case <-ctx.Done():
            return
        case <-t.C:
            cancel()
        case <-read:
            if !t.Stop() {
                <-t.C
            }
            t.Reset(timeout)
        }
    }

You could skip calling Stop altogether here, since in my opinion if the timer fires while Reset is being called, it was close enough to cancel anyway; but it's often good to keep the code idiomatic in case it is extended in the future.

