boost::asio sending data faster than receiving over TCP. Or how to disable buffering

Problem Description

I have created a client/server program: the client starts an instance of the Writer class and the server starts an instance of the Reader class. The Writer then writes DATA_SIZE bytes of data asynchronously to the Reader every USLEEP milliseconds.

Every successive async_write request by the Writer is issued only after the "on write" handler from the previous request has been called.
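For context, the write loop described above boils down to something like the following sketch. The class and constant names only mirror the question's wording (they are not the code from the linked pastie), and it assumes a recent Boost.Asio with io_context and steady_timer (older releases use io_service and deadline_timer):

```cpp
#include <boost/asio.hpp>
#include <array>
#include <chrono>
#include <cstddef>

// Illustrative values mirroring the names used in the question.
constexpr std::size_t DATA_SIZE = 4096;
constexpr long USLEEP_MS = 10;

// Minimal sketch of the Writer loop: each async_write is issued from the
// completion handler of the previous one, paced by a timer.
class Writer {
public:
    Writer(boost::asio::io_context& io, boost::asio::ip::tcp::socket socket)
        : socket_(std::move(socket)), timer_(io)
    {
        buffer_.fill('x');
    }

    void start() { write_next(); }

private:
    void write_next()
    {
        boost::asio::async_write(
            socket_, boost::asio::buffer(buffer_),
            [this](const boost::system::error_code& ec, std::size_t /*bytes*/) {
                if (ec) return;                                   // stop on error
                timer_.expires_after(std::chrono::milliseconds(USLEEP_MS));
                timer_.async_wait([this](const boost::system::error_code& tec) {
                    if (!tec) write_next();                       // chain the next write
                });
            });
    }

    boost::asio::ip::tcp::socket socket_;
    boost::asio::steady_timer timer_;
    std::array<char, DATA_SIZE> buffer_;
};
```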

The problem is, if the Writer (client) is writing more data into the socket than the Reader (server) is capable of receiving, this seems to be the behaviour:

  • The Writer will start writing into (I think) the system buffer, and even though the data has not yet been received by the Reader, the "on write" handler will still be called without an error.

  • When the buffer is full, boost::asio won't fire the "on write" handler anymore until the buffer gets smaller.

  • In the meantime, the Reader is still receiving small chunks of data.

The fact that the Reader keeps receiving bytes after I close the Writer program seems to prove this theory correct.

What I need to achieve is to prevent this buffering, because the data needs to be "real time" (as much as possible).

I'm guessing I need to use some combination of the socket options that asio offers, like no_delay or send_buffer_size, but I'm just guessing here, as I haven't had any success experimenting with these.
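For reference, this is how those two options can be set on a connected asio TCP socket. A minimal sketch, the 8 KB value is an arbitrary example; note that no_delay only disables Nagle's algorithm and send_buffer_size only shrinks the kernel's send buffer, so neither removes buffering entirely:

```cpp
#include <boost/asio.hpp>

// Sketch: set the socket options mentioned above on an already-connected socket.
void tune_socket(boost::asio::ip::tcp::socket& socket)
{
    socket.set_option(boost::asio::ip::tcp::no_delay(true));
    socket.set_option(boost::asio::socket_base::send_buffer_size(8 * 1024));

    // Read the value back to see what the OS actually granted
    // (the kernel may round or clamp the requested buffer size).
    boost::asio::socket_base::send_buffer_size actual;
    socket.get_option(actual);
    // actual.value() now holds the effective send buffer size.
}
```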

I think the first solution one can think of is to use UDP instead of TCP. This will be the case anyway, as I'll need to switch to UDP for other reasons in the near future, but I would first like to find out how to do it with TCP, just for the sake of having it straight in my head in case I run into a similar problem some other day.

NOTE1: Before I started experimenting with asynchronous operations in the asio library, I had implemented this same scenario using threads, locks and asio::sockets, and did not experience such buffering at that time. I had to switch to the asynchronous API because asio does not seem to allow timed interruptions of synchronous calls.

NOTE2: Here is a working example that demonstrates the problem: http://pastie.org/3122025

EDIT: I've done one more test. In NOTE1 I mentioned that when I was using asio::iosockets I did not experience this buffering, so I wanted to be sure and created this test: http://pastie.org/3125452 It turns out that the buffering is there even with asio::iosockets, so there must have been something else that caused it to go smoothly, possibly a lower FPS.

Recommended Answer

TCP/IP is definitely geared towards maximizing throughput, as the intention of most network applications is to transfer data between hosts. In such scenarios it is expected that a transfer of N bytes will take T seconds, and it clearly doesn't matter if the receiver is a little slow to process the data. In fact, as you noticed, the TCP/IP protocol implements a sliding window which allows the sender to buffer some data so that it is always ready to be sent, but leaves the ultimate throttling control up to the receiver. The receiver can go at full speed, pace itself or even pause the transmission.

If you don't need throughput and instead want to guarantee that the data your sender is transmitting is as close to real time as possible, then what you need is to make sure the sender doesn't write the next packet until it receives an acknowledgement from the receiver that the previous data packet has been processed. So instead of blindly sending packet after packet until you are blocked, define a message structure for control messages to be sent from the receiver back to the sender.
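A minimal sketch of the sender side of such a scheme (the one-byte ACK format, packet size and class name are illustrative assumptions, not part of the original answer): the next async_write is started only after an acknowledgement byte has been read back from the receiver, who would echo it only once the previous packet has actually been processed.

```cpp
#include <boost/asio.hpp>
#include <array>
#include <cstddef>

constexpr std::size_t kPacketSize = 4096;  // illustrative packet size
constexpr char kAck = 0x06;                // illustrative ACK byte

// Sender that waits for an application-level ACK before sending the next packet.
class AckedSender {
public:
    explicit AckedSender(boost::asio::ip::tcp::socket socket)
        : socket_(std::move(socket))
    {
        packet_.fill('x');
    }

    void start() { send_packet(); }

private:
    void send_packet()
    {
        boost::asio::async_write(
            socket_, boost::asio::buffer(packet_),
            [this](const boost::system::error_code& ec, std::size_t) {
                if (!ec) wait_for_ack();
            });
    }

    void wait_for_ack()
    {
        boost::asio::async_read(
            socket_, boost::asio::buffer(&ack_, 1),
            [this](const boost::system::error_code& ec, std::size_t) {
                if (!ec && ack_ == kAck)
                    send_packet();   // receiver has processed the data: send the next packet
            });
    }

    boost::asio::ip::tcp::socket socket_;
    std::array<char, kPacketSize> packet_;
    char ack_ = 0;
};
```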

Obviously with this approach, your trade-off is that each sent packet is closer to the sender's real time, but you are limiting how much data you can transfer while slightly increasing the total bandwidth used by your protocol (i.e. the additional control messages). Also keep in mind that "close to real-time" is relative, because you will still face delays in the network as well as limits on the receiver's ability to process the data. So you might also take a look at the design constraints of your specific application to determine how "close" you really need to be.

If you need to be very close, but at the same time you don't care if packets are lost because old packet data is superseded by new data, then UDP/IP might be a better alternative. However, a) if you have reliable-delivery requirements, you might end up reinventing a portion of TCP/IP's wheel, b) keep in mind that certain networks (corporate firewalls) tend to block UDP/IP while allowing TCP/IP traffic, and c) even UDP/IP won't be exactly real time.
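For completeness, a minimal sketch of what the UDP alternative looks like with asio (the host, port and payload size are placeholders, and it assumes a recent Boost.Asio with io_context and make_address): each send_to() is an independent datagram, so there is no stream to back up, but delivery and ordering are not guaranteed.

```cpp
#include <boost/asio.hpp>
#include <array>

int main()
{
    boost::asio::io_context io;
    boost::asio::ip::udp::socket socket(io, boost::asio::ip::udp::v4());

    // Placeholder destination; replace with the real receiver address/port.
    boost::asio::ip::udp::endpoint receiver(
        boost::asio::ip::make_address("127.0.0.1"), 9000);

    std::array<char, 512> payload{};                      // one datagram's worth of data
    socket.send_to(boost::asio::buffer(payload), receiver);
    return 0;
}
```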
