Viewing data in a circular buffer in real-time


Problem Description

I have an incoming stream of messages, and want a window that allows the user to scroll through the messages.

This is my current thinking:


  • Incoming messages go into a single-producer single-consumer queue
  • A thread reads them out and places them into a circular buffer with a sequential id (a minimal sketch of this buffer follows the list)
  • This way I could have multiple incoming streams safely placed in the circular buffer, and it decouples the input
  • A mutex coordinates circular buffer access between the UI and the thread
  • Two notifications from the thread to the UI, one for the first id and one for the last id in the buffer, whenever either changes
  • This allows the UI to figure out what it can display, which parts of the circular buffer it needs to access, and which overwritten messages to drop. It only accesses the messages required to fill the window at its current size and scroll position.
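To make the list above concrete, here is a minimal sketch in C++ of the mutex-protected circular buffer keyed by a sequential id; all names (MsgRing, Msg, id_range, get) are illustrative assumptions, not part of the original design:

#include <cstddef>
#include <cstdint>
#include <mutex>
#include <optional>
#include <string>
#include <utility>
#include <vector>

struct Msg {
    uint64_t id;      // sequential id assigned by the reader thread
    std::string text; // message payload
};

class MsgRing {
public:
    explicit MsgRing(size_t capacity) : slots_(capacity) {}

    // Called by the single thread that drains the SPSC input queue(s).
    void push(std::string text) {
        std::lock_guard<std::mutex> lock(mtx_);
        uint64_t id = next_id_++;
        slots_[id % slots_.size()] = Msg{id, std::move(text)};
    }

    // The UI asks for the currently valid id range [first, last].
    // (For an empty buffer this returns {0, 0}; a real version would
    // signal emptiness explicitly.)
    std::pair<uint64_t, uint64_t> id_range() const {
        std::lock_guard<std::mutex> lock(mtx_);
        uint64_t last  = next_id_ ? next_id_ - 1 : 0;
        uint64_t first = next_id_ > slots_.size() ? next_id_ - slots_.size() : 0;
        return {first, last};
    }

    // The UI fetches only the messages needed to fill the visible window.
    std::optional<Msg> get(uint64_t id) const {
        std::lock_guard<std::mutex> lock(mtx_);
        if (id >= next_id_) return std::nullopt;   // not written yet
        const Msg& m = slots_[id % slots_.size()];
        if (m.id != id) return std::nullopt;       // already overwritten
        return m;
    }

private:
    mutable std::mutex mtx_;
    std::vector<Msg> slots_;
    uint64_t next_id_ = 0;
};

Because get() returns a copy of a single message, the UI only ever copies what it actually draws for the current window size and scroll position.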

I'm not happy about the notifications into the UI. They would be generated with high frequency. They could be queued or otherwise throttled; latency should not affect the first id, but delays in handling the last id could cause problems in corner cases, such as viewing the very end of a full buffer, unless the UI makes a copy of the messages it displays, which I would like to avoid.
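One possible way to throttle those notifications, sketched here under the assumption that the UI already runs a periodic refresh timer: the reader thread publishes only the latest first/last ids atomically, and the UI samples them at its own rate instead of handling one notification per message (IdRange, publish and poll are hypothetical names):

#include <atomic>
#include <cstdint>

struct IdRange {
    std::atomic<uint64_t> first{0};
    std::atomic<uint64_t> last{0};
};

// Reader thread, after each push into the ring: overwrite the published ids.
void publish(IdRange& r, uint64_t first, uint64_t last) {
    r.first.store(first, std::memory_order_release);
    r.last.store(last, std::memory_order_release);
}

// UI timer callback (e.g. every 50 ms): redraw only if something changed.
// A torn first/last pair just means a slightly stale range; the next tick
// corrects it, and each id is still validated when the message is fetched.
bool poll(IdRange& r, uint64_t& cached_first, uint64_t& cached_last) {
    uint64_t f = r.first.load(std::memory_order_acquire);
    uint64_t l = r.last.load(std::memory_order_acquire);
    bool changed = (f != cached_first) || (l != cached_last);
    cached_first = f;
    cached_last  = l;
    return changed;
}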

Does this sound like the right approach? Any tweaks that could make it a bit more palatable?

Recommended Answer

(See the Effo EDIT below; this first part is deprecated.) The ring buffer is not necessary if there is a queue between the thread and each UI.

When a message arrives, the thread pops it and pushes it onto the corresponding UI's queue.

Furthermore, each UI.Q could be operated on atomically as well, so no mutex is needed. Another benefit is that each message is copied only twice: once into the low-level queue and once to the display, since storing the message anywhere else is unnecessary (just assigning a pointer from the low-level queue to the UI.Q should be enough in C/C++).
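A minimal sketch of what such a UI.Q could look like in C++, assuming a bounded single-producer single-consumer queue that moves only message pointers (UiQueue and its members are illustrative names, not from the answer):

#include <atomic>
#include <cstddef>
#include <vector>

struct CMsg; // opaque message type owned elsewhere

class UiQueue {
public:
    explicit UiQueue(size_t capacity) : buf_(capacity) {}

    // Producer side (the messaging thread): returns false if the queue is full.
    bool push(CMsg* msg) {
        size_t head = head_.load(std::memory_order_relaxed);
        size_t next = (head + 1) % buf_.size();
        if (next == tail_.load(std::memory_order_acquire)) return false; // full
        buf_[head] = msg;
        head_.store(next, std::memory_order_release);
        return true;
    }

    // Consumer side (the UI thread): returns nullptr if the queue is empty.
    CMsg* pop() {
        size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire)) return nullptr; // empty
        CMsg* msg = buf_[tail];
        tail_.store((tail + 1) % buf_.size(), std::memory_order_release);
        return msg;
    }

private:
    std::vector<CMsg*> buf_;
    std::atomic<size_t> head_{0};
    std::atomic<size_t> tail_{0};
};

Because only one thread pushes and only one thread pops, the two atomic indices are enough; no mutex is required, which is the property the answer relies on.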

So far the only concern is that the length of a UI.Q might not be sufficient at run time when messaging traffic is heavy. For this, you can either use a dynamic-length queue or let the UI itself store overflowed messages in a POSIX memory-mapped file. POSIX mapping is efficient even though you are using a file and doing extra message copying, but in any case this is only exception handling; the queue can be set to a proper size so that you normally get excellent performance. The point is that when the UI needs to store overflowed messages in a mapped file, it should also do so in a highly concurrent fashion, so that it does not affect the low-level queue.
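For the dynamic-length alternative, a sketch could be as simple as a mutex-guarded deque that grows under heavy traffic, trading the lock-free property for the guarantee that nothing overflows (DynamicQueue is an illustrative name; the memory-mapped-file variant is not shown):

#include <deque>
#include <mutex>
#include <optional>
#include <utility>

template <typename T>
class DynamicQueue {
public:
    // Producer side: the deque simply grows, so pushes never fail.
    void push(T value) {
        std::lock_guard<std::mutex> lock(mtx_);
        items_.push_back(std::move(value));
    }

    // Consumer side: returns nothing when the queue is empty.
    std::optional<T> pop() {
        std::lock_guard<std::mutex> lock(mtx_);
        if (items_.empty()) return std::nullopt;
        T v = std::move(items_.front());
        items_.pop_front();
        return v;
    }

private:
    std::mutex mtx_;
    std::deque<T> items_;
};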

I prefer the dynamic-size queue proposal. We seem to have plenty of memory on modern PCs.

See the document EffoNetMsg.pdf at http://code.google.com/p/effonetmsg/downloads/list to learn more about lock-free queue facilities and highly concurrent programming models.


Effo EDIT@2009oct23: a Staged Model which supports random message access for the scrolling of message viewers.

                         +---------------+ 
                     +---> Ring Buffer-1 <---+
                     |   +---------------+   |
                  +--+                       +-----+
                  |  |   +---------------+   |     |
                  |  +---> Ring Buffer-2 <---+     |
                  |      +---------------+         |
                  |                                |
          +-------+-------+            +-----------+----------+
          |   Push Msg &  |            |   GetHeadTail()      |
          |  Send AckReq  |            |  & Send UpdateReq    |
          +---------------+            +----------------------+
          |App.MsgStage() |            |   App.DisPlayStage() |
          +-------+-------+            +-----------+----------+
                  | Pop()                          | Pop()         
 ^              +-V-+                            +-V-+ 
 | Events       | Q |    Msg Stage |             | Q |  Display Stage
 | Go Up        | 0 |   Logic-Half |             | 1 |   Logic-Half      
-+------------- |   | -------------+------------ |   | ---------------
 | Requests     |   |    I/O-Half  |             |   |    I/O-Half
 | Move Down    +-^-+              |             +-^-+   
 V                | Push()                         |     
   +--------------+-------------+                  |
   |   Push OnRecv Event,       |          +-------+-------+
   | 1 Event per message        |          |               | Push()
   |                            |   +------+------+ +------+------+
   |  Epoll I/O thread for      |   |Push OnTimer | |Push OnTimer |
   |multi-messaging connections |   |  Event/UI-1 | |  Event/UI-2 |
   +------^-------^--------^----+   +------+------+ +------+------+
          |       |        |               |               |                   
Incoming msg1    msg2     msg3        Msg Viewer-1    Msg Viewer-2

Key points:

1 Understand the different highly concurrent models, specifically the Staged Model shown in the figure above, so that you know why it runs fast.

2 There are two kinds of I/O. One is messaging, an epoll thread if you are using C/C++ on GNU/Linux 2.6.x; the other is displaying, such as drawing the screen or printing text. The two kinds of I/O are processed as two stages accordingly. Note that on Win/MSVC you would use a completion port instead of epoll.

3 Still two message copies, as mentioned before: a) Push-OnRecv generates the message ("CMsg *pMsg = CreateMsg(msg)" if C/C++); b) the UI reads and copies the message from its ring buffer accordingly, and only needs to copy the updated message parts, not the whole buffer. Note that the queues and ring buffers store only a message handle ("queue.push(pMsg)" or "RingBuff.push(pMsg)" if C/C++), and any aged-out message is deleted ("pMsg->Destroy()" if C/C++). In general, MsgStage() would rebuild the message header before pushing it into the ring buffer.
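A small self-contained sketch of this handle-based flow: CMsg, CreateMsg and Destroy() follow the fragments quoted above, while HandleRing, the id stamping and the member definitions are illustrative assumptions.

#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

struct CMsg {
    uint64_t id;
    std::string body;
    void Destroy() { delete this; }
};

// First copy: raw bytes -> message object, created once by the messaging stage.
CMsg* CreateMsg(const std::string& raw) { return new CMsg{0, raw}; }

class HandleRing {
public:
    explicit HandleRing(size_t capacity) : slots_(capacity, nullptr) {}

    // MsgStage() rebuilds the header (here: stamps the sequential id) and
    // pushes the handle; the aged-out handle, if any, is destroyed.
    void push(CMsg* pMsg) {
        pMsg->id = next_id_++;
        CMsg*& slot = slots_[pMsg->id % slots_.size()];
        if (slot) slot->Destroy(); // overwritten message is released
        slot = pMsg;               // only the pointer is stored
    }

private:
    std::vector<CMsg*> slots_;
    uint64_t next_id_ = 0;
};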

4 After an OnTimer event, the UI receives an update from the upper layer containing the new head/tail indicators of the ring buffer, so the UI can update the display accordingly. Ideally the UI keeps a local message cache, so it does not need to copy the whole ring buffer, only the update; see point 3 above. If you need random access into the ring buffer, you could let the UI generate an OnScroll event; in fact, if the UI has a local cache, OnScroll might not be necessary, but you can do it either way. Note that the UI decides whether or not to discard an aged-out message, say by generating an OnAgedOut event, so that the ring buffers can be operated correctly and safely.
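A sketch of the UI side of this point, assuming the UI keeps a local cache and only copies the ids it has not seen yet (UiView, next_expected and the fetch callback are hypothetical names):

#include <cstdint>
#include <deque>
#include <string>

struct UiView {
    std::deque<std::string> cache;  // local copy of the messages the UI knows about
    uint64_t next_expected = 0;     // first id the UI has not cached yet

    // `fetch` stands for whatever call reads one message out of the ring buffer.
    template <typename FetchFn>
    void OnTimer(uint64_t head, uint64_t tail, FetchFn fetch) {
        if (next_expected < head) {  // some messages aged out before we saw them
            cache.clear();           // (or mark a gap, depending on the UI)
            next_expected = head;
        }
        for (; next_expected <= tail; ++next_expected)
            cache.push_back(fetch(next_expected)); // copy only the new part
    }
};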

5 Exactly: OnTimer or OnRecv is the event name, and OnTimer(){} or OnRecv(){} would be executed in DisplayStage() or MsgStage(). Again, events go upwards and requests go downstream, which might be different from what you have thought of or seen before.

6 Q0 and the two ring buffers could be implemented as lock-free facilities to improve performance, since each has a single producer and a single consumer; no lock/mutex is needed. Q1 is somewhat different, but I believe you can make it single-producer single-consumer too by changing the design figure above slightly, e.g. add a Q2 so that every UI has its own queue, and let DisplayStage() simply poll Q1 and Q2 to process all events correctly. Note that Q0 and Q1 are event queues; the request queues are not shown in the figure above.

7 MsgStage() and DisplayStage() run sequentially within a single StagedModel.Stage(), say the main thread. Epoll I/O or messaging runs in another thread, the MsgIO thread, and every UI has its own I/O thread, say a display thread. So in the figure above there are four threads in total running concurrently. Effo has tested that just one MsgIO thread should be enough for multiple listeners plus thousands of messaging clients.

Again, see the document EffoNetMsg.pdf at http://code.google.com/p/effonetmsg/downloads/list or EffoAddons.pdf at http://code.google.com/p/effoaddon/downloads/list to learn more about highly concurrent programming models and network messaging; see EffoDesign_LockFree.pdf at http://code.google.com/p/effocore/downloads/list to learn more about lock-free facilities such as lock-free queues and lock-free ring buffers.
