boost 1.55 asio tcp cpp03 chat_server example memory leaks
Problem Description
I hope someone could give me a clue where to investigate...
I'm running the chat_server example from Boost:
http://www.boost.org/doc/libs/1_55_0/doc/html/boost_asio/example/cpp03/chat/chat_server.cpp
on Visual Studio 2010 and Windows 10, and I downloaded the Boost binaries from:
I used a script to simulate 30 TCP clients; each thread basically behaves as follows (a minimal sketch of one such client is shown after the list):
- connect to tcp server
- start a loop
- send a message to tcp server
- receive a message from tcp server
- sleep
- back to step 2
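A minimal sketch of such a client, using synchronous boost::asio calls, might look like this (the host, port, payload, and sleep interval are assumptions, the original script isn't shown, and it ignores the chat_message framing the chat protocol actually uses):

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <cstring>
#include <vector>

using boost::asio::ip::tcp;

// One simulated client: connect once, then loop send / receive / sleep.
void client_thread()
{
    boost::asio::io_service io_service;
    tcp::resolver resolver(io_service);
    tcp::socket socket(io_service);
    boost::asio::connect(socket,
        resolver.resolve(tcp::resolver::query("localhost", "6767"))); // step 1: connect

    const char request[] = "hello from a simulated client";
    std::vector<char> reply(128);

    for (;;)                                                              // step 2: loop
    {
        boost::asio::write(socket,
            boost::asio::buffer(request, std::strlen(request)));          // step 3: send
        socket.read_some(boost::asio::buffer(reply));                     // step 4: receive
        boost::this_thread::sleep(boost::posix_time::millisec(100));      // step 5: sleep
    }
}

int main()
{
    boost::thread_group clients;
    for (int i = 0; i != 30; ++i)          // 30 clients, as in the question
        clients.create_thread(&client_thread);
    clients.join_all();
}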
The strange fact appears when I use Windows Task Manager to monitor memory consumption: the numbers in the "private working set" and "shared working set" columns remain stable for almost 18 minutes, and after that the private working set starts to increase by almost 5 MB per minute.
So my questions are:
- Has anyone ever seen anything similar before?
- What could cause this?
Regards
Solution

The server retains chat history, but only the 100 most recent messages, in a "ringbuffer" (actually a deque<chat_message>).
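The relevant part of the example's chat_room looks roughly like this (paraphrased from the cpp03 sample, not quoted verbatim):

typedef boost::shared_ptr<chat_participant> chat_participant_ptr;
typedef std::deque<chat_message> chat_message_queue;

class chat_room
{
public:
    void join(chat_participant_ptr participant)
    {
        participants_.insert(participant);
        // A joining participant immediately receives the stored history.
        std::for_each(recent_msgs_.begin(), recent_msgs_.end(),
            boost::bind(&chat_participant::deliver, participant, _1));
    }

    void deliver(const chat_message& msg)
    {
        // Bounded history: keep only the most recent max_recent_msgs entries.
        recent_msgs_.push_back(msg);
        while (recent_msgs_.size() > max_recent_msgs)
            recent_msgs_.pop_front();

        // Fan the message out to every connected session.
        std::for_each(participants_.begin(), participants_.end(),
            boost::bind(&chat_participant::deliver, _1, boost::ref(msg)));
    }

private:
    std::set<chat_participant_ptr> participants_;
    enum { max_recent_msgs = 100 };
    chat_message_queue recent_msgs_;
};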
Indeed, testing with a large number of clients doing a lot of chatting:
(for c in {00..99}; do for a in {001..999}; do sleep .1; echo "Client $c message $a"; done | ./chat_client localhost 6767& done)
shows a memory increase.
The breakdown indicates it's due to allocations from deliver for write_msgs_, which is also a queue:

3.4 GiB: std::deque<chat_message, std::allocator<chat_message> >::_M_push_back_aux(chat_message const&) (new_allocator.h:104)
3.4 GiB: chat_session::deliver(chat_message const&) (stl_deque.h:1526)
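For context, chat_session::deliver in the example buffers each outgoing message in write_msgs_ and only starts a new async_write when one isn't already in flight; paraphrased roughly:

void chat_session::deliver(const chat_message& msg)
{
    // Only one async_write may be outstanding per socket, so outgoing
    // messages are parked in write_msgs_ until the previous write completes.
    bool write_in_progress = !write_msgs_.empty();
    write_msgs_.push_back(msg);   // the allocation seen in the breakdown above
    if (!write_in_progress)
    {
        boost::asio::async_write(socket_,
            boost::asio::buffer(write_msgs_.front().data(),
                write_msgs_.front().length()),
            boost::bind(&chat_session::handle_write, shared_from_this(),
                boost::asio::placeholders::error));
    }
}

void chat_session::handle_write(const boost::system::error_code& error)
{
    if (!error)
    {
        write_msgs_.pop_front();          // message sent, drop it from the queue
        if (!write_msgs_.empty())
        {
            // Messages queued up while the write was in flight: send the next one.
            boost::asio::async_write(socket_,
                boost::asio::buffer(write_msgs_.front().data(),
                    write_msgs_.front().length()),
                boost::bind(&chat_session::handle_write, shared_from_this(),
                    boost::asio::placeholders::error));
        }
    }
    else
    {
        room_.leave(shared_from_this());
    }
}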
It doesn't logically grow, though, so it would appear there's some unfortunate behaviour.
Let's investigate:
On a full test run (shown above), the max write queue depth for any session is 60.
Upon restarting all the clients (without restarting the server), the queue depth increases to 100 immediately for obvious reasons (all clients get the full history of 100 items delivered at once)¹.
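How these depths were measured isn't shown; purely as an illustration, one could track them with a small hypothetical counter added to chat_session:

// Hypothetical instrumentation, not part of the Boost example:
// remember the deepest the per-session write queue ever gets.
std::size_t max_depth_;   // add to chat_session, initialise to 0 in the constructor

void note_queue_depth()
{
    if (write_msgs_.size() > max_depth_)
    {
        max_depth_ = write_msgs_.size();
        std::cout << "session " << this << " write queue depth: "
                  << max_depth_ << std::endl;
    }
}

// Call note_queue_depth() right after write_msgs_.push_back(msg) in deliver().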
Add shrink_to_fit

Adding a call to shrink_to_fit after each pop_front call in chat_session doesn't make the behaviour any better (apart from the fact that C++03 doesn't have shrink_to_fit, of course).
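With a C++11 standard library the attempted change is a one-liner after each pop; under C++03 the closest equivalent is the copy-and-swap idiom (a sketch, not the answer's exact code):

// Inside chat_session::handle_write(), after a message has been written:
write_msgs_.pop_front();

// C++11: non-binding request to return unused capacity to the allocator.
write_msgs_.shrink_to_fit();

// C++03 stand-in: copy into a freshly sized deque and swap it in.
chat_message_queue(write_msgs_).swap(write_msgs_);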
Use a different container

Dropping in a boost::circular_buffer instead of the std::deque strangely reaches a queue depth of 100 easily, even on the first run, but it does change the memory profile dramatically.
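The container swap might look roughly like this (the capacity of 100 is an assumption matching the history size; the answer doesn't show its exact change):

#include <boost/circular_buffer.hpp>

// Instead of: typedef std::deque<chat_message> chat_message_queue;
typedef boost::circular_buffer<chat_message> chat_message_queue;

// A circular_buffer has a fixed capacity that must be set explicitly, e.g.
// when chat_room / chat_session initialise their queues: recent_msgs_(100),
// write_msgs_(100). push_back(), pop_front(), front(), empty() and size()
// keep the same spelling, so the rest of the example compiles unchanged,
// and the storage is allocated once up front. Note that push_back() on a
// *full* circular_buffer silently overwrites the oldest element rather
// than growing.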
Clearly, there's something suboptimal about using deque as... a double-ended queue o.O That's very surprising. I'll try with libc++ instead:
Using libc++ instead
Interestingly, with std::deque<> and shrink-to-fit, libc++ shows a different - still bad - curve. Note also that it reports ever-growing write_msgs_ queue depths. Somehow it behaves really differently... o.O
¹ Even though the clients immediately start cackling as well, the queue depth doesn't go beyond 100 - so throughput is still fine.