What will be the critical section code for a shared queue accessed by two threads?


Problem Description


Suppose we have a shared queue (implemented using an array) that two threads can access, one for reading data from it and the other for writing data to it. Now I have a synchronization problem. I'm implementing this using the Win32 API (EnterCriticalSection etc.).

But my question is: what will be the critical section code for the enqueue and dequeue operations of the queue?

Is it needed just because two threads are using a shared resource? The reason I can't see any problem is this: front and rear are maintained separately, so when the ReaderThread reads, it reads from the front end, and when the WriterThread writes, it writes to the rear end.

What potential problems can occur?
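
For context, here is a minimal sketch of what a lock-based version of such a queue might look like with the Win32 critical-section API. The EnterCriticalSection/LeaveCriticalSection calls are the real Win32 functions; the class layout, member names, and fixed capacity are illustrative assumptions, not taken from the original question:

#include <windows.h>

// Hypothetical lock-based bounded queue; names and layout are illustrative only.
class locked_queue
{
public:
    locked_queue() : front(0), rear(0), count(0) { InitializeCriticalSection(&cs); }
    ~locked_queue() { DeleteCriticalSection(&cs); }

    bool enqueue(int value)
    {
        EnterCriticalSection(&cs);              // begin critical section
        bool ok = (count < capacity);
        if (ok) {
            buffer[rear] = value;
            rear = (rear + 1) % capacity;       // wrap around
            ++count;
        }
        LeaveCriticalSection(&cs);              // end critical section
        return ok;
    }

    bool dequeue(int& value)
    {
        EnterCriticalSection(&cs);
        bool ok = (count > 0);
        if (ok) {
            value = buffer[front];
            front = (front + 1) % capacity;
            --count;
        }
        LeaveCriticalSection(&cs);
        return ok;
    }

private:
    static const int capacity = 64;
    int buffer[capacity];
    int front, rear, count;
    CRITICAL_SECTION cs;
};

In this sketch the critical section is simply everything between EnterCriticalSection and LeaveCriticalSection: the element copy and the front/rear/count updates, so that neither thread ever observes a half-updated queue state.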

Solution

For a single producer/consumer circular queue implementation, locks are actually not required. Simply set a condition where the producer cannot write into the queue if the queue is full and the consumer cannot read from the queue if it is empty. Also the producer will always write to a tail pointer that is pointing to the first available empty slot in the queue, and the consumer will read from a head pointer that represents the first unread slot in the queue.

Your code could look like the following example (note: I'm assuming the queue is initialized with tail == head, and that both pointers are declared volatile so that an optimizing compiler does not reorder the sequence of operations within a given thread. On x86 no memory barriers are required, due to the architecture's strong memory consistency model, but this would change on other architectures with weaker memory consistency models, where memory barriers would be required):

queue_type::pointer queue_type::next_slot(queue_type::pointer ptr);

bool queue_type::enqueue(const my_type& obj)
{
    // Queue is full: advancing tail would collide with head.
    if (next_slot(tail) == head)
        return false;

    // Write the data first, then publish the slot by advancing tail.
    *tail = obj;
    tail = next_slot(tail);

    return true;
}

bool queue_type::dequeue(my_type& obj)
{
    // Queue is empty: nothing has been published yet.
    if (head == tail)
        return false;

    // Read the data first, then release the slot by advancing head.
    obj = *head;
    head = next_slot(head);

    return true;
}

The function next_slot simply increments the head or tail pointer so that it returns a pointer to the next slot in the array, and accounts for any array wrap-around functionality.
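
As an illustration, next_slot could be implemented along these lines (this assumes the queue keeps its elements in a fixed-size array named buffer with buffer_size slots; those names are assumptions, not part of the original answer):

queue_type::pointer queue_type::next_slot(queue_type::pointer ptr)
{
    // Advance to the next slot, wrapping back to the start of the
    // array once the pointer moves past the last element.
    ++ptr;
    if (ptr == buffer + buffer_size)
        ptr = buffer;
    return ptr;
}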

Finally, we guarantee synchronization in the single producer/consumer model because we do not increment the tail pointer until the data has been written into the slot it was pointing to, and we do not increment the head pointer until we have read the data from the slot it was pointing to. Therefore a call to dequeue will not return valid data until at least one call to enqueue has been made, and the tail pointer will never overwrite the head pointer because of the check in enqueue. Additionally, only one thread increments the tail pointer and only one thread increments the head pointer, so there is no shared write to the same pointer that would create synchronization problems requiring a lock or some type of atomic operation.
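
For illustration, here is a hypothetical usage sketch with one producer thread and one consumer thread. std::thread is used for brevity (the original question used Win32 threads), and make_item/process are placeholder helpers, not part of the answer above:

#include <thread>

queue_type q;   // shared queue, assumed initialized with head == tail

void producer()
{
    for (int i = 0; i < 1000; ++i) {
        my_type item = make_item(i);   // placeholder: produce some data
        while (!q.enqueue(item))
            ;                          // queue full: retry until a slot frees up
    }
}

void consumer()
{
    for (int i = 0; i < 1000; ++i) {
        my_type item;
        while (!q.dequeue(item))
            ;                          // queue empty: retry until data arrives
        process(item);                 // placeholder: consume the data
    }
}

int main()
{
    std::thread writer(producer), reader(consumer);
    writer.join();
    reader.join();
}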
