Detect aborted connection during Boost.Asio request


Question


Possible Duplicate:
How to check if socket is closed in Boost.Asio?

Is there an established way to determine whether the other end of a TCP connection is closed in the asio framework without sending any data?

Using Boost.Asio for a server process: if the client times out or otherwise disconnects before the server has responded to a request, the server doesn't find this out until it has finished the request and generated a response to send, at which point the send immediately generates a connection-aborted error.

For some long-running requests, this can lead to clients canceling and retrying over and over, piling up many instances of the same request running in parallel, making them take even longer and "snowballing" into an avalanche that makes the server unusable. Essentially hitting F5 over and over is a denial-of-service attack.

Unfortunately I can't start sending a response until the request is complete, so "streaming" the result out is not an option; I need to be able to check at key points during request processing and stop that processing if the client has given up.

Solution

The key to this problem is to avoid doing request processing in the receive handler. Previously, I was doing something like this:

async_receive(..., recv_handler)

void recv_handler(error) {
    if (!error) {
        parse input
        process input
        async_send(response, ...)
    }
}

Instead, the appropriate pattern is more like this:

async_receive(..., recv_handler)

void recv_handler(error) {
    if (error) {
        canceled_flag = true;
    } else {
        // start a processing event
        if (request_in_progress) {
            capture input from input buffer
            io_service.post(process_input)
        }
        // post another read request
        async_receive(..., recv_handler)
    }
}

void process_input() {
    while (!done && !canceled_flag) {
        process input
    }
    async_send(response, ...)
}

Obviously I have left out lots of detail, but the important part is to post the processing as a separate "event" in the io_service thread pool so that an additional receive can run concurrently. This allows the "connection aborted" message to be received while processing is in progress. Be aware, however, that this means the two threads must communicate, which requires some kind of synchronization, and that the input being processed must be kept separate from the input buffer the receive call reads into, since more data may arrive via the additional read call.
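To make the pattern concrete, here is a minimal C++ sketch of that structure. It assumes Boost.Asio 1.66+ (for asio::post and get_executor()) and an io_context run by at least two threads; the session class, its member names, and the placeholder processing loop are illustrative, not taken from the original answer:

#include <boost/asio.hpp>
#include <array>
#include <atomic>
#include <cstddef>
#include <memory>
#include <string>

namespace asio = boost::asio;
using boost::system::error_code;

class session : public std::enable_shared_from_this<session> {
public:
    explicit session(asio::ip::tcp::socket sock) : socket_(std::move(sock)) {}

    void start() { do_read(); }

private:
    void do_read() {
        auto self = shared_from_this();
        socket_.async_read_some(asio::buffer(read_buf_),
            [this, self](error_code ec, std::size_t n) {
                if (ec) {
                    // EOF or connection reset: the client has gone away.
                    canceled_ = true;
                    return;
                }
                // Capture input separately from the receive buffer so the
                // next read can reuse the buffer while processing runs.
                // (Needs a lock or strand in real code; see note below.)
                input_.append(read_buf_.data(), n);
                if (!processing_.exchange(true)) {
                    // Post processing as its own event (the answer's
                    // io_service.post); with a multi-threaded io_context
                    // it runs concurrently with further reads.
                    asio::post(socket_.get_executor(),
                               [this, self] { process_input(); });
                }
                // Keep a read pending so a "connection aborted" error is
                // seen while processing is still in progress.
                do_read();
            });
    }

    void process_input() {
        bool done = false;
        while (!done && !canceled_) {
            // ... application-specific work on input_, re-checking
            // canceled_ at key points during long-running requests ...
            done = true;  // placeholder so the sketch terminates
        }
        if (canceled_) return;  // client gave up: abandon the response
        auto self = shared_from_this();
        asio::async_write(socket_, asio::buffer(response_),
                          [self](error_code, std::size_t) {});
    }

    asio::ip::tcp::socket socket_;
    std::array<char, 4096> read_buf_;  // receive buffer, reused per read
    std::string input_;                // captured input awaiting processing
    std::string response_;
    std::atomic<bool> processing_{false};
    std::atomic<bool> canceled_{false};
};

The synchronization around input_ (a mutex, or funneling both sides through a strand) is deliberately elided here; as noted above, some form of it is required because the read handler and process_input can run on different threads.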

Edit:

I should also note that, should you receive more data while the processing is happening, you probably do not want to start another asynchronous processing call. It's possible that this later processing could finish first, and the results could be sent to the client out-of-order. Unless you're using UDP, that's likely a serious error.

Here's some pseudo-code:

async_read (=> read_complete)
read_complete
    store new data in queue
    if not currently processing
        if a full request is in the queue
            async_process (=> process_complete)
    else ignore data for now
    async_read (=> read_complete)
async_process (=> process_complete)
    process data
process_complete
    async_write_result (=> write_complete)
write_complete
    if a full request is in the queue
        async_process (=> process_complete)

So, if data is received while a request is in process, it's queued up but not processed. Once processing completes and the result is sent, then we may start processing again with the data that was received earlier.
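Rendered in C++ against the hypothetical session members from the earlier sketch (with the inline read handler replaced by a named on_read_complete), the state machine might look like the following; has_full_request, extract_full_request, and handle_request are made-up helpers standing in for the application's framing and request handling:

// Hypothetical helpers (application-specific):
//   bool has_full_request(const std::string& buf);
//   std::string extract_full_request(std::string& buf);  // pops one request
//   std::string handle_request(const std::string& req);

void on_read_complete(error_code ec, std::size_t n) {
    if (ec) { canceled_ = true; return; }
    input_.append(read_buf_.data(), n);              // store new data in queue
    if (!processing_ && has_full_request(input_)) {  // else: ignore for now
        processing_ = true;
        asio::post(socket_.get_executor(),
                   [self = shared_from_this()] { self->process_next(); });
    }
    do_read();                                       // always keep a read pending
}

void process_next() {
    std::string request = extract_full_request(input_);
    response_ = handle_request(request);  // member, so it outlives async_write
    asio::async_write(socket_, asio::buffer(response_),
        [self = shared_from_this()](error_code, std::size_t) {
            self->on_write_complete();
        });
}

void on_write_complete() {
    if (has_full_request(input_)) {
        // A complete request queued up while we were busy; handle it next,
        // so responses go out in the order the requests arrived.
        asio::post(socket_.get_executor(),
                   [self = shared_from_this()] { self->process_next(); });
    } else {
        processing_ = false;  // go idle until more data arrives
    }
}

Because processing_ only flips back to false after the write completes, at most one request is in flight at a time, which is what keeps the responses in order.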

This can be optimized a bit more by allowing processing to occur while the result of the previous request is being written, but that requires even more care to ensure that the results are written in the same order as the requests were received.

