Workings of AtomicReferenceArray


Question

I am wondering if AtomicReferenceArray can be used as a replacement for ConcurrentLinkedQueue (if one could live with a bounded structure).

I currently have something like:

ConcurrentLinkedQueue<Object[]> queue = new ConcurrentLinkedQueue<Object[]>();

public void store(Price price, Instrument instrument, Object[] formats){
    Object[] elements = {price, instrument, formats};
    queue.offer(elements);
}

The store(..) is called by multiple threads.

I also have a consumer thread, which periodically wakes up and consumes the elements.

private class Consumer implements Runnable{

    @Override
    public void run(){
        List<Object[]> holder = drain(queue);
        for(Object[] elements : holder){
            for(Object e : elements){
                //process ...
            }
        }
    }

    private List<Object[]> drain(ConcurrentLinkedQueue<Object[]> queue){
        //...
    }
}

Can I swap out ConcurrentLinkedQueue in favor of AtomicReferenceArray and still maintain thread safety aspect?

Specifically, atomically storing the elements and establishing a "happens before" relationship so the consumer thread sees all the elements stored by different threads?
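
For reference, set and get on AtomicReferenceArray carry volatile write/read semantics, so a write to a slot happens-before a later read that observes it. A minimal sketch of that property (class name and values are illustrative, not from the question):

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

// Sketch: a volatile write to a slot (set) happens-before a later read
// of that slot (get), so a consumer that sees the reference also sees
// the fully constructed Object[] behind it.
public class SlotVisibility {
    static final AtomicReferenceArray<Object[]> slots =
            new AtomicReferenceArray<>(16);

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            Object[] elements = {"price", "instrument", "formats"};
            slots.set(0, elements); // volatile write publishes the array
        });
        producer.start();
        producer.join();

        Object[] seen = slots.get(0); // volatile read
        if (seen == null || seen.length != 3) {
            throw new AssertionError("consumer did not see the stored elements");
        }
    }
}
```

What set/get alone do not give you is slot assignment: two producers calling set on the same index overwrite each other, which is the coordination problem the answer below addresses.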

I tried reading the source code for AtomicReferenceArray but am still not absolutely sure.

Cheers

Answer

An AtomicReferenceArray can be used as a lock-free single consumer / multiple producer ring buffer. I was experimenting with an implementation a few months ago and have a working prototype. The advantages are a reduction in garbage creation, better cache locality, and better performance when not full, owing to the simpler design. The disadvantages are a lack of strict FIFO semantics and poor performance when the buffer is full, as a producer must wait for a drain to occur. This might be mitigated by falling back to a ConcurrentLinkedQueue to avoid stalls.
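
The ring-buffer idea can be sketched roughly as follows. This is a simplified illustration under my own assumptions, not the answerer's prototype: producers claim unique slots with a CAS on a tail counter, offer reports failure when full (the caller can then fall back to a ConcurrentLinkedQueue), and only the single consumer advances head.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Hypothetical sketch of a bounded MPSC ring buffer over AtomicReferenceArray.
public class MpscRingBuffer<E> {
    private final AtomicReferenceArray<E> buffer;
    private final AtomicLong tail = new AtomicLong(); // next slot to claim
    private final AtomicLong head = new AtomicLong(); // advanced only by the consumer

    public MpscRingBuffer(int capacity) {
        this.buffer = new AtomicReferenceArray<>(capacity);
    }

    // Multiple producers: claim a unique slot via CAS, then publish into it.
    public boolean offer(E e) {
        for (;;) {
            long t = tail.get();
            if (t - head.get() >= buffer.length()) {
                return false; // full: caller falls back (e.g. to a linked queue)
            }
            if (tail.compareAndSet(t, t + 1)) {
                buffer.set((int) (t % buffer.length()), e); // volatile publish
                return true;
            }
        }
    }

    // Single consumer only: drain every element published so far, in claim order.
    public int drain(List<E> into) {
        int n = 0;
        long h = head.get();
        long t = tail.get();
        while (h < t) {
            int i = (int) (h % buffer.length());
            E e = buffer.get(i);
            if (e == null) {
                break; // slot claimed but not yet published; stop here
            }
            buffer.lazySet(i, null); // cheap clear of the consumed slot
            into.add(e);
            h++;
            n++;
        }
        head.set(h); // volatile store so producers see the freed capacity
        return n;
    }
}
```

Note the trade-offs the answer mentions: a producer that has claimed a slot but not yet published stalls the drain at that slot, and full-buffer behavior degrades to repeated failed offers.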

The happens-before edge must be seen by producers so that they acquire a unique slot. However as only a single consumer is required, this can be delayed until the draining is complete. In my usage the drain is amortized across threads, so the consumer is chosen by the successful acquisition of a try-lock. The release of that lock provides the edge, allowing the array updates to use lazy sets within the critical section.
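
The try-lock drain described here might look something like the following sketch (names and structure are my assumptions, not the author's code): whichever thread wins tryLock becomes the consumer for that pass, slot clears inside the critical section use lazySet, and the unlock's release semantics publish those writes to the next thread that acquires the lock.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReferenceArray;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of an amortized drain guarded by a try-lock.
public class TryLockDrain {
    private final AtomicReferenceArray<Object[]> buffer;
    private final ReentrantLock drainLock = new ReentrantLock();

    public TryLockDrain(int capacity) {
        this.buffer = new AtomicReferenceArray<>(capacity);
    }

    // Producer side: publish into a slot with a volatile write.
    public void put(int slot, Object[] elements) {
        buffer.set(slot, elements);
    }

    // Any thread may call this; only the try-lock winner drains.
    // Losers get an empty list and simply move on.
    public List<Object[]> tryDrain() {
        List<Object[]> drained = new ArrayList<>();
        if (!drainLock.tryLock()) {
            return drained; // another thread is the consumer this time
        }
        try {
            for (int i = 0; i < buffer.length(); i++) {
                Object[] e = buffer.get(i);
                if (e != null) {
                    drained.add(e);
                    buffer.lazySet(i, null); // lazy clear inside critical section
                }
            }
        } finally {
            drainLock.unlock(); // release: next lock holder sees the cleared slots
        }
        return drained;
    }
}
```

The point of the lazySet is cost: the clears need not be individually visible immediately, because the unlock already orders them before the next drain.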

I would only use this approach in specialized scenarios where performance is highly tuned. In my usage it makes sense as an internal implementation detail for a cache. I wouldn't use it in general, though.
