boost 1.55 asio tcp cpp03 chat_server example memory leaks


Problem description

I hope someone could give me a clue where to investigate...

I'm running the chat_server example from Boost:

http://www.boost.org/doc/libs/1_55_0/doc/html/boost_asio/example/cpp03/chat/chat_server.cpp

on Visual Studio 2010 and Windows 10, and I downloaded the Boost binaries from:

http://sourceforge.net/projects/boost/files/boost-binaries/1.55.0/boost_1_55_0-msvc-10.0-32.exe/download

I used a script to simulate 30 TCP clients; each thread's behaviour is basically the following (a minimal sketch of one such client follows the list):


  1. Connect to the TCP server
  2. Start a loop
  3. Send a message to the TCP server
  4. Receive a message from the TCP server
  5. Sleep
  6. Go back to step 2
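A minimal blocking sketch of one such client - an assumption on my part, since the actual script isn't shown in the question. The "   5" prefix is the 4-character length header the example's chat_message framing expects, and port 6767 matches the test command further below:

#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

int main()
{
  using boost::asio::ip::tcp;
  boost::asio::io_service io_service;

  tcp::socket socket(io_service);                       // 1. connect to the TCP server
  tcp::resolver resolver(io_service);
  boost::asio::connect(socket,
      resolver.resolve(tcp::resolver::query("localhost", "6767")));

  char reply[516];
  for (int i = 0; i < 100; ++i)                         // 2. start a loop
  {
    // 3. send a message ("   5" = 4-char length header, body "hello")
    boost::asio::write(socket, boost::asio::buffer("   5hello", 9));
    socket.read_some(boost::asio::buffer(reply));       // 4. receive a message
    boost::asio::deadline_timer t(io_service,
        boost::posix_time::milliseconds(100));
    t.wait();                                           // 5. sleep
  }                                                     // 6. back to step 2
}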

The strange fact is when I use Windows Task Manager to monitor the memory consumption: the numbers in the "private working set" and "shared working set" columns remain stable for almost 18 minutes, and after that the private working set starts to increase by almost 5 MB per minute.

So my questions are:


  1. Has anyone seen something similar before?
  2. What could be causing this?

Regards

Answer

The server retains chat history, but only the 100 most recent messages, in a "ring buffer" (actually a std::deque<chat_message>).
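For reference, this is the relevant part of the linked chat_server.cpp, lightly condensed: the room keeps the history in a std::deque<chat_message> trimmed to max_recent_msgs, and replays the whole deque to every client that joins.

typedef std::deque<chat_message> chat_message_queue;

class chat_room
{
public:
  void join(chat_participant_ptr participant)
  {
    participants_.insert(participant);
    // replay the retained history to the newly joined client
    std::for_each(recent_msgs_.begin(), recent_msgs_.end(),
        boost::bind(&chat_participant::deliver, participant, _1));
  }

  void deliver(const chat_message& msg)
  {
    recent_msgs_.push_back(msg);
    while (recent_msgs_.size() > max_recent_msgs)
      recent_msgs_.pop_front();               // keep only the 100 newest

    std::for_each(participants_.begin(), participants_.end(),
        boost::bind(&chat_participant::deliver, _1, boost::ref(msg)));
  }

private:
  std::set<chat_participant_ptr> participants_;
  enum { max_recent_msgs = 100 };
  chat_message_queue recent_msgs_;
};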

Indeed, testing with a large number of clients doing a lot of chatting:

(for c in {00..99}; do for a in {001..999}; do sleep .1; echo "Client $c message $a"; done | ./chat_client localhost 6767& done)

shows the memory increasing:

(image: memory usage graph from the test run)

The breakdown indicates it's due to allocations from deliver for write_msgs_, which is also a queue:

3.4 GiB: std::deque<chat_message, std::allocator<chat_message> >::_M_push_back_aux(chat_message const&) (new_allocator.h:104)
3.4 GiB: chat_session::deliver(chat_message const&) (stl_deque.h:1526)
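For context, chat_session::deliver in the example pushes each outgoing message onto write_msgs_ and starts an async_write if none is in flight; the completion handler pops the front again, so the deque's logical size stays bounded (condensed from the example):

void deliver(const chat_message& msg)
{
  bool write_in_progress = !write_msgs_.empty();
  write_msgs_.push_back(msg);                 // the allocation site above
  if (!write_in_progress)
  {
    boost::asio::async_write(socket_,
        boost::asio::buffer(write_msgs_.front().data(),
          write_msgs_.front().length()),
        boost::bind(&chat_session::handle_write, shared_from_this(),
          boost::asio::placeholders::error));
  }
}

void handle_write(const boost::system::error_code& error)
{
  if (!error)
  {
    write_msgs_.pop_front();                  // done with the front message
    if (!write_msgs_.empty())
    {
      // issue the next async_write for the new front element
      boost::asio::async_write(socket_,
          boost::asio::buffer(write_msgs_.front().data(),
            write_msgs_.front().length()),
          boost::bind(&chat_session::handle_write, shared_from_this(),
            boost::asio::placeholders::error));
    }
  }
  else
  {
    room_.leave(shared_from_this());
  }
}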

It doesn't logically grow, though, so it would appear there's some unfortunate allocation behaviour.

Let's investigate:

Over a full test run (shown above), the maximum write-queue depth for any session is 60.

Upon restarting all the clients (without reloading the server), the queue depth immediately increases to 100, for obvious reasons (all clients get the full history of 100 items delivered at once)¹.

Adding a shrink_to_fit call after each pop_front call in chat_session doesn't make the behaviour any better (apart from the fact that C++03 doesn't have shrink_to_fit, of course).
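(The usual C++03 stand-in for shrink_to_fit is the swap-with-a-temporary idiom; a self-contained sketch, shown with std::deque<int> rather than the example's types:)

#include <deque>

int main()
{
  std::deque<int> write_msgs(1000, 0);
  while (write_msgs.size() > 10)
    write_msgs.pop_front();
  // copy-construct a right-sized temporary and swap it in -
  // the C++03 equivalent of write_msgs.shrink_to_fit()
  std::deque<int>(write_msgs).swap(write_msgs);
}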

Dropping in a boost::circular_buffer instead of the std::deque strangely reaches a queue depth of 100 easily, even on the first run, but it does change the memory profile dramatically:

(image: memory usage graph with boost::circular_buffer)
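A self-contained sketch of that substitution (the capacity of 100 is my assumption, mirroring max_recent_msgs; note that a full boost::circular_buffer silently overwrites its oldest element, which std::deque never does):

#include <boost/circular_buffer.hpp>
#include <iostream>

struct chat_message { char data[516]; };     // stand-in for the example's type

int main()
{
  // all storage is allocated once, up front, so the working set cannot
  // creep upward no matter how many messages pass through the queue
  boost::circular_buffer<chat_message> write_msgs(100);

  chat_message m = chat_message();
  for (int i = 0; i < 100000; ++i)
  {
    write_msgs.push_back(m);                 // overwrites the oldest when full
    write_msgs.pop_front();                  // drain, as handle_write would
  }
  std::cout << "fixed capacity: " << write_msgs.capacity() << "\n";
}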

Clearly, there's something suboptimal about using a deque as... a double-ended queue o.O That's very surprising. I'll try with libc++ instead:

Interestingly, with std::deque<> and shrink-to-fit, libc++ shows a different - still bad - curve. Note also that it reports ever-growing write_msgs_ queue depths. Somehow it behaves really differently... o.O

(image: memory usage graph under libc++)

¹ Even though the clients immediately start chatting as well, the queue depth doesn't go beyond 100 - so throughput is still fine.

