Delay/latency in synchronous boost asio with unix socket
Problem description
I wrote a client-server app which uses asynchronous Boost.Asio networking (boost::asio::async_write and boost::asio::async_read) on the server side and synchronous calls (boost::asio::write and boost::asio::read) on the client end. Because I use protocol buffers underneath, to send a buffer from the client I first send the payload size, then the payload body in a second call. Pseudocode for the client end:
void WriteProtobuf( std::string && body )
{
boost::system::error_code ec;
std::size_t dataSize = body.size();
// send the size
boost::asio::write( socket, boost::asio::buffer( reinterpret_cast<const char *>( &dataSize ), sizeof( dataSize ) ), ec );
// send the body
boost::asio::write( socket, boost::asio::buffer( body.data(), body.size() ), ec );
}
Pseudocode for the server end:
void ReadProtobuf()
{
    std::size_t requestSize;
    std::string body;
    // read the size
    boost::asio::async_read( socket, boost::asio::buffer( &requestSize, sizeof( requestSize ) ),
        [&]( boost::system::error_code ec, std::size_t /*bytesRead*/ ) {
            body.resize( requestSize );
            // read the body
            boost::asio::async_read( socket, boost::asio::buffer( body.data(), body.size() ),
                []( boost::system::error_code ec, std::size_t /*bytesRead*/ ) {
                    /* ... */
                });
        });
}
Now, it works just fine, but I observe a ~40ms latency in the second boost::asio::write call. I found an easy but not clean workaround: I added a "confirmation" byte sent from the server between the client's two write calls.
Pseudocode for the client end:
void WriteProtobuf( std::string && body )
{
    boost::system::error_code ec;
    std::size_t dataSize = body.size();
    // send the size
    boost::asio::write( socket, boost::asio::buffer( reinterpret_cast<const char *>( &dataSize ), sizeof( dataSize ) ), ec );
    char ackByte;
    // read the ack byte
    boost::asio::read( socket, boost::asio::buffer( &ackByte, sizeof( ackByte ) ), ec );
    // send the body
    boost::asio::write( socket, boost::asio::buffer( body.data(), body.size() ), ec );
}
Pseudocode for the server end:
void ReadProtobuf()
{
    std::size_t requestSize;
    std::string body;
    // read the size
    boost::asio::async_read( socket, boost::asio::buffer( &requestSize, sizeof( requestSize ) ),
        [&]( boost::system::error_code ec, std::size_t /*bytesRead*/ ) {
            body.resize( requestSize );
            char ackByte = 0;
            // write the ack byte
            boost::asio::async_write( socket, boost::asio::buffer( &ackByte, sizeof( ackByte ) ),
                [&]( boost::system::error_code ec, std::size_t /*bytesWritten*/ ) {
                    // read the body
                    boost::asio::async_read( socket, boost::asio::buffer( body.data(), body.size() ),
                        []( boost::system::error_code ec, std::size_t /*bytesRead*/ ) {
                            /* ... */
                        });
                });
        });
}
This removes the latency, but I would still like to get rid of the unnecessary communication and better understand why it happens this way.
Recommended answer
On the other hand, gluing the size onto the beginning of the data isn't an option, because then I would have to make a copy.
Scatter/gather to the rescue! So, this could help:
void WriteProtobuf(std::string const& body) {
    std::size_t dataSize = body.size();
    // gather the size prefix and the body into a single write call
    std::vector<asio::const_buffer> bufs {
        asio::buffer(&dataSize, sizeof(dataSize)),
        asio::buffer(body.data(), body.size())
    };
    boost::system::error_code ec;
    write(socket, bufs, ec);
}
Use Protobuf
However, since you are using Protobuf, consider not serializing to a string, but using the built-in support for size-prefixed stream serialization:
#include <google/protobuf/io/zero_copy_stream_impl_lite.h>

void WriteProtobuf(::google::protobuf::Message const& msg) {
    std::string buf;
    google::protobuf::io::StringOutputStream sos(&buf);
    msg.SerializeToZeroCopyStream(&sos);
    boost::system::error_code ec;
    write(socket, asio::buffer(buf), ec);
}
On the receiving end you can then use the streams to read until the message is complete. See e.g. https://developers.google.com/protocol-buffers/docs/reference/csharp/class/google/protobuf/coded-input-stream
If this doesn't actually help, then you could look into explicitly flushing on the socket file descriptor:
https://stackoverflow.com/a/855597/85371
For example:
::fsync(socket.native_handle());