Viewing data in a circular buffer in real-time


Problem Description





I have an incoming stream of messages, and want a window that allows the user to scroll through the messages.

This is my current thinking:

  • Incoming messages go into a single-producer, single-consumer queue
  • A thread reads them out and places them into a circular buffer with a sequential id
  • This way I could have multiple incoming streams safely placed in the circular buffer, and it decouples the input
  • A mutex to coordinate circular buffer access between the UI and the thread
  • Two notifications from the thread to the UI, one for the first id and one for the last id in the buffer, whenever either changes
  • This allows the UI to figure out what it can display, which parts of the circular buffer it needs to access, and when to delete overwritten messages. It only accesses the messages required to fill the window at its current size and scroll position. (A rough sketch of this design follows the list.)
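
A minimal C++ sketch of the design in these bullets, assuming a mutex-protected deque and invented names (Msg, MsgRing, snapshot, window); it is illustrative only, not code from the post:

    #include <cstdint>
    #include <deque>
    #include <mutex>
    #include <string>
    #include <vector>

    struct Msg { uint64_t id; std::string text; };

    // Ring of the most recent N messages, indexed by a monotonically
    // increasing sequential id.  The UI and the consumer thread share it
    // through the mutex, as described in the bullets above.
    class MsgRing {
    public:
        explicit MsgRing(size_t capacity) : cap_(capacity) {}

        // Called by the consumer thread for every message popped from the
        // SPSC input queue.
        void push(std::string text) {
            std::lock_guard<std::mutex> lk(m_);
            buf_.push_back(Msg{nextId_++, std::move(text)});
            if (buf_.size() > cap_) buf_.pop_front();   // overwrite oldest
        }

        // The "two notifications": the UI asks for the first and last ids
        // currently held, then fetches only the rows it needs to draw.
        void snapshot(uint64_t& firstId, uint64_t& lastId) const {
            std::lock_guard<std::mutex> lk(m_);
            firstId = buf_.empty() ? 0 : buf_.front().id;
            lastId  = buf_.empty() ? 0 : buf_.back().id;
        }

        // Copy out just the visible window [from, from + count).
        std::vector<Msg> window(uint64_t from, size_t count) const {
            std::lock_guard<std::mutex> lk(m_);
            std::vector<Msg> out;
            for (const Msg& m : buf_)
                if (m.id >= from && out.size() < count) out.push_back(m);
            return out;
        }

    private:
        mutable std::mutex m_;
        std::deque<Msg> buf_;
        size_t cap_;
        uint64_t nextId_ = 1;
    };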

I'm not happy about the notifications into the UI. They would be generated at high frequency. They could be queued or otherwise throttled; latency should not affect the first id, but delays in handling the last id could cause problems in corner cases, such as viewing the very end of a full buffer, unless the UI makes a copy of the messages it displays, which I would like to avoid.

Does this sound like the right approach? Any tweaks that could make it a bit more palatable?

Solution

(See the Effo EDIT below; this first part is deprecated.) A ring buffer is not necessary if there is a queue between the thread and each UI.

When a message arrives, the thread pops it and pushes it to the corresponding UI's queue.

Furthermore, each UI.Q could be operated on atomically as well, so no mutex is needed. Another benefit is that each message is copied only twice: once into the low-level queue and once to the display, because storing the message anywhere else is unnecessary (simply assigning a pointer from the low-level queue to the UI.Q should be enough in C/C++).

So far the only concern is that the length of a UI.Q might not be enough at run time when messaging traffic is heavy. Given that, you can either use a dynamic-length queue or let the UI itself store overflowed messages in a POSIX memory-mapped file. Using a POSIX mapping is highly efficient, even though you are writing to a file and need an extra message copy; in any case this is only exception handling. The queue can be set to a proper size so that normally you get excellent performance. The point is that when the UI needs to store overflowed messages in the mapped file, it should also do so in a highly concurrent way so that it does not affect the low-level queue.
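
A rough sketch of that overflow path, spilling a message into a POSIX memory-mapped file; the function name is invented for illustration and error handling is abbreviated:

    #include <cstring>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    // Spill an overflowed message to a memory-mapped file instead of
    // growing the in-memory UI.Q.  Exception path only; the normal path
    // stays in the queue.
    bool spill_to_mapped_file(const char* path, const char* msg, size_t len) {
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0) return false;

        off_t old_end = lseek(fd, 0, SEEK_END);        // append position
        if (ftruncate(fd, old_end + (off_t)len) != 0) { close(fd); return false; }

        // mmap offsets must be page-aligned, so map from the start of the
        // page containing the old end of the file.
        long page = sysconf(_SC_PAGESIZE);
        off_t map_off = (old_end / page) * page;
        size_t map_len = (size_t)(old_end - map_off) + len;

        void* p = mmap(nullptr, map_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, map_off);
        close(fd);
        if (p == MAP_FAILED) return false;

        std::memcpy((char*)p + (old_end - map_off), msg, len);
        munmap(p, map_len);
        return true;
    }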

I prefer the dynamic-size queue proposal. It seems we have plenty of memory on modern PCs.

See the document EffoNetMsg.pdf at http://code.google.com/p/effonetmsg/downloads/list to learn more about lock-free facilities, queue facilities and Highly-concurrent Programming Models.


Effo EDIT @ 2009oct23: showing a Staged Model which supports random message access for the scrolling of message viewers.

                         +---------------+ 
                     +---> Ring Buffer-1 <---+
                     |   +---------------+   |
                  +--+                       +-----+
                  |  |   +---------------+   |     |
                  |  +---> Ring Buffer-2 <---+     |
                  |      +---------------+         |
                  |                                |
          +-------+-------+            +-----------+----------+
          |   Push Msg &  |            |   GetHeadTail()      |
          |  Send AckReq  |            |  & Send UpdateReq    |
          +---------------+            +----------------------+
          |App.MsgStage() |            |   App.DisPlayStage() |
          +-------+-------+            +-----------+----------+
                  | Pop()                          | Pop()         
 ^              +-V-+                            +-V-+ 
 | Events       | Q |    Msg Stage |             | Q |  Display Stage
 | Go Up        | 0 |   Logic-Half |             | 1 |   Logic-Half      
-+------------- |   | -------------+------------ |   | ---------------
 | Requests     |   |    I/O-Half  |             |   |    I/O-Half
 | Move Down    +-^-+              |             +-^-+   
 V                | Push()                         |     
   +--------------+-------------+                  |
   |   Push OnRecv Event,       |          +-------+-------+
   | 1 Event per message        |          |               | Push()
   |                            |   +------+------+ +------+------+
   |  Epoll I/O thread for      |   |Push OnTimer | |Push OnTimer |
   |multi-messaging connections |   |  Event/UI-1 | |  Event/UI-2 |
   +------^-------^--------^----+   +------+------+ +------+------+
          |       |        |               |               |                   
Incoming msg1    msg2     msg3        Msg Viewer-1    Msg Viewer-2

The Points:

1 Understand the different Highly-concurrent Models; the one shown in the figure above is a Staged Model, so you will know why it runs fast.

2 There are two kinds of I/O: one is messaging, an Epoll thread if C/C++ on GNU Linux 2.6.x; the other is displaying, such as drawing the screen or printing text. The two kinds of I/O are processed as two stages accordingly. Note that on Win/MSVC you would use a Completion Port instead of Epoll.
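
On GNU Linux, the messaging I/O half could be a plain epoll loop along these lines; this is a generic sketch with an assumed event queue (Q0) and the connection handling elided:

    #include <sys/epoll.h>
    #include <unistd.h>

    // One epoll thread multiplexing many messaging connections; each
    // readable socket produces OnRecv events that are pushed into Q0.
    void msg_io_loop(int listen_fd /*, EventQueue& q0 */) {
        int ep = epoll_create1(0);
        epoll_event ev{};
        ev.events = EPOLLIN;
        ev.data.fd = listen_fd;
        epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

        epoll_event ready[64];
        for (;;) {
            int n = epoll_wait(ep, ready, 64, -1);
            for (int i = 0; i < n; ++i) {
                if (ready[i].data.fd == listen_fd) {
                    // accept() the new connection and EPOLL_CTL_ADD it ...
                } else {
                    // read() the message, then q0.push(OnRecvEvent{...});
                }
            }
        }
        // close(ep);  // unreachable in this sketch
    }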

3 Still two message copies, as mentioned before. a) Push-OnRecv generates the message ("CMsg *pMsg = CreateMsg(msg)" if C/C++); b) the UI reads and copies the message from its ring buffer accordingly, and only needs to copy the updated message parts, not the whole buffer. Note that the queues and ring buffers store only a message handle ("queue.push(pMsg)" or "RingBuff.push(pMsg)" if C/C++), and any aged-out message will be deleted ("pMsg->Destroy()" if C/C++). In general, MsgStage() would rebuild the message header before pushing it into the ring buffer.
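
A sketch of point 3's handle-only storage, reusing the CMsg/Destroy names quoted above; the HandleRing class itself is invented for illustration:

    #include <cstddef>
    #include <vector>

    struct CMsg {
        // payload ...
        void Destroy() { delete this; }   // matches "pMsg->Destroy()" above
    };

    // Fixed-size ring of message handles; overwriting an old slot destroys
    // the aged-out message, so pushing a handle never copies the payload.
    class HandleRing {
    public:
        explicit HandleRing(size_t n) : slots_(n, nullptr) {}

        void push(CMsg* pMsg) {
            CMsg*& slot = slots_[head_ % slots_.size()];
            if (slot) slot->Destroy();     // aged-out message goes away here
            slot = pMsg;                   // handle only, no payload copy
            ++head_;
        }

        ~HandleRing() { for (CMsg* p : slots_) if (p) p->Destroy(); }

    private:
        std::vector<CMsg*> slots_;
        size_t head_ = 0;
    };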

4 After an OnTimer event, the UI receives an update from the upper layer containing the new head/tail indicators of the ring buffer, so the UI can update the display accordingly. Ideally the UI keeps a local message buffer, so it does not need to copy the whole ring buffer, just apply the update; see point 3 above. If random access into the ring buffer is needed, you could let the UI generate an OnScroll event; actually, if the UI has a local buffer, OnScroll might not be necessary, but you can do it anyway. Note that the UI decides whether to discard an aged-out message, say by generating an OnAgedOut event, so that the ring buffers can be operated correctly and safely.
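
Point 4's local UI buffer, applying only the head/tail delta, might look roughly like this; ViewerCache and the commented-out ring reader are assumptions for illustration, not part of the Effo design:

    #include <cstdint>
    #include <deque>
    #include <string>
    #include <utility>

    // Local message cache kept by one viewer.  On each OnTimer update the
    // UI fetches only ids (lastSeen_, newTail], never the whole ring.
    class ViewerCache {
    public:
        void onUpdate(uint64_t newHead, uint64_t newTail
                      /*, const RingReader& ring */) {
            // Drop rows the ring has already overwritten.
            while (!rows_.empty() && rows_.front().first < newHead)
                rows_.pop_front();
            // Pull only the newly appended rows.
            for (uint64_t id = lastSeen_ + 1; id <= newTail; ++id) {
                std::string text /* = ring.copyMsg(id) */;
                rows_.emplace_back(id, std::move(text));
            }
            lastSeen_ = newTail;
        }

    private:
        std::deque<std::pair<uint64_t, std::string>> rows_;
        uint64_t lastSeen_ = 0;
    };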

5 To be exact, OnTimer and OnRecv are the event names, and OnTimer(){} or OnRecv(){} would be executed in DisplayStage() or MsgStage() respectively. Again, events go upwards and requests go downstream, which might be different from what you have thought of or seen before.

6 Q0 and the two ring buffers could be implemented as lock-free facilities to improve performance, since each has a single producer and a single consumer; no lock/mutex is needed. Q1 is somewhat different, but I believe you can make it single-producer/single-consumer too by changing the design figure above slightly, e.g. adding a Q2 so that every UI has a queue, and letting DisplayStage() simply poll Q1 and Q2 to process all events correctly. Note that Q0 and Q1 are event queues; the request queues are not shown in the figure above.
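
A minimal single-producer/single-consumer lock-free ring in the spirit of point 6, using C++11 atomics; this is a generic textbook sketch, not the Effo implementation:

    #include <atomic>
    #include <cstddef>

    // Single-producer / single-consumer ring of pointers: only the producer
    // writes head_ and only the consumer writes tail_, so no mutex is needed.
    template <typename T, size_t N>
    class SpscRing {
    public:
        bool push(T* p) {                       // producer thread only
            size_t h = head_.load(std::memory_order_relaxed);
            size_t next = (h + 1) % N;
            if (next == tail_.load(std::memory_order_acquire))
                return false;                   // full
            buf_[h] = p;
            head_.store(next, std::memory_order_release);
            return true;
        }

        T* pop() {                              // consumer thread only
            size_t t = tail_.load(std::memory_order_relaxed);
            if (t == head_.load(std::memory_order_acquire))
                return nullptr;                 // empty
            T* p = buf_[t];
            tail_.store((t + 1) % N, std::memory_order_release);
            return p;
        }

    private:
        T* buf_[N];
        std::atomic<size_t> head_{0};
        std::atomic<size_t> tail_{0};
    };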

7 MsgStage() and DisplayStage() run sequentially in a single StagedModel.Stage(), say the main thread. Epoll I/O or messaging is another thread, the MsgIO thread, and every UI has an I/O thread, say a display thread. So in the figure above there are 4 threads in total running concurrently. Effo has tested that just one MsgIO thread should be enough for multiple listeners plus thousands of messaging clients.
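
Point 7's single-threaded staging could be as simple as one loop draining both event queues in turn; the std::queue/std::function types here are stand-ins for the real (lock-free) queues:

    #include <functional>
    #include <queue>

    using Event = std::function<void()>;   // illustrative: an event is a callback

    // Main thread: both logic halves run back to back in one loop, so
    // MsgStage() and DisplayStage() never race with each other.
    void StagedModel_Stage(std::queue<Event>& q0 /* OnRecv events */,
                           std::queue<Event>& q1 /* OnTimer events */) {
        for (;;) {
            // MsgStage() logic-half: drain messaging events.
            while (!q0.empty()) { q0.front()(); q0.pop(); }
            // DisplayStage() logic-half: drain display/timer events.
            while (!q1.empty()) { q1.front()(); q1.pop(); }
            // In a real build these would be the lock-free queues from
            // point 6, plus a blocking wait instead of a busy spin.
        }
    }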

Again, see the document EffoNetMsg.pdf at http://code.google.com/p/effonetmsg/downloads/list or EffoAddons.pdf at http://code.google.com/p/effoaddon/downloads/list to learn more about Highly-concurrent Programming Models and network messaging; see EffoDesign_LockFree.pdf at http://code.google.com/p/effocore/downloads/list to learn more about lock-free facilities such as lock-free queues and lock-free ring buffers.
