Should a thread-safe class have a memory barrier at the end of its constructor?

Question:

When implementing a class intended to be thread-safe, should I include a memory barrier at the end of its constructor, in order to ensure that any internal structures have completed being initialized before they can be accessed? Or is it the responsibility of the consumer to insert the memory barrier before making the instance available to other threads?

Simplified question:

Is there a race hazard in the code below that could give erroneous behaviour due to the lack of a memory barrier between the initialization and the access of the thread-safe class? Or should the thread-safe class itself protect against this?

ConcurrentQueue<int> queue = null;

Parallel.Invoke(
    () => queue = new ConcurrentQueue<int>(),
    () => queue?.Enqueue(5));

Note that it is acceptable for the program to enqueue nothing, as would happen if the second delegate executes before the first. (The null-conditional operator ?. protects against a NullReferenceException here.) However, it should not be acceptable for the program to throw an IndexOutOfRangeException, NullReferenceException, enqueue 5 multiple times, get stuck in an infinite loop, or do any of the other weird things caused by race hazards on internal structures.
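As a consumer-side sketch of the alternative (written in Java, whose memory model raises the same safe-publication question discussed later in this post), publishing the reference through a volatile field creates the happens-before edge that guarantees the reading thread sees either null or a fully constructed queue. The class and field names here are illustrative, not from the original:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch: safe publication via a volatile field.
// The volatile write in the first thread and the volatile read in the
// second form a happens-before edge, so the reader can never observe a
// half-constructed queue -- only null or a fully initialized instance.
public class SafePublication {
    static volatile ConcurrentLinkedQueue<Integer> queue;

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> queue = new ConcurrentLinkedQueue<>());
        Thread reader = new Thread(() -> {
            ConcurrentLinkedQueue<Integer> q = queue; // single volatile read
            if (q != null)    // analogous to the C# null-conditional check
                q.add(5);
        });
        writer.start();
        reader.start();
        writer.join();
        reader.join();
        // Either nothing or a single 5 was enqueued; never a torn state.
        System.out.println(queue.size());
    }
}
```

Depending on thread scheduling, the program prints 0 or 1, matching the "acceptable to enqueue nothing" behaviour described above.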

Elaborated question:

Concretely, imagine that I were implementing a simple thread-safe wrapper for a queue. (I'm aware that .NET already provides ConcurrentQueue<T>; this is just an example.) I could write:

public class ThreadSafeQueue<T>
{
    private readonly Queue<T> _queue;

    public ThreadSafeQueue()
    {
        _queue = new Queue<T>();

        // Thread.MemoryBarrier(); // Is this line required?
    }

    public void Enqueue(T item)
    {
        lock (_queue)
        {
            _queue.Enqueue(item);
        }
    }

    public bool TryDequeue(out T item)
    {
        lock (_queue)
        {
            if (_queue.Count == 0)
            {
                item = default(T);
                return false;
            }

            item = _queue.Dequeue();
            return true;
        }
    }
}

This implementation is thread-safe, once initialized. However, if the initialization itself is raced by another consumer thread, then race hazards could arise, whereby the latter thread would access the instance before the internal Queue<T> has been initialized. As a contrived example:

ThreadSafeQueue<int> queue = null;

Parallel.For(0, 10000, i =>
{
    if (i == 0)
        queue = new ThreadSafeQueue<int>();
    else if (i % 2 == 0)
        queue?.Enqueue(i);
    else
    {
        int item = -1;
        if (queue?.TryDequeue(out item) == true)
            Console.WriteLine(item);
    }
});

It is acceptable for the code above to miss some numbers; however, without the memory barrier, it could also throw a NullReferenceException (or produce some other weird result) because the internal Queue<T> might not have been initialized by the time Enqueue or TryDequeue is called.

Is it the responsibility of the thread-safe class to include a memory barrier at the end of its constructor, or is it the consumer who should include a memory barrier between the class's instantiation and its visibility to other threads? What is the convention in the .NET Framework for classes marked as thread-safe?

Edit: This is an advanced threading topic, so I understand the confusion in some of the comments. An instance can appear as half-baked if accessed from other threads without proper synchronization. This topic is discussed extensively within the context of double-checked locking, which is broken under the ECMA CLI specification without the use of memory barriers (such as through volatile). Per Jon Skeet:

The Java memory model doesn't ensure that the constructor completes before the reference to the new object is assigned to instance. The Java memory model underwent a reworking for version 1.5, but double-checked locking is still broken after this without a volatile variable (as in C#).

Without any memory barriers, it's broken in the ECMA CLI specification too. It's possible that under the .NET 2.0 memory model (which is stronger than the ECMA spec) it's safe, but I'd rather not rely on those stronger semantics, especially if there's any doubt as to the safety.
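The double-checked locking pattern the quote refers to can be sketched as follows (a Java illustration, since the quote discusses the Java memory model; the class and field names are hypothetical):

```java
// Hypothetical sketch: double-checked locking made correct with volatile.
class LazySingleton {
    // Without volatile, a second thread could observe a non-null reference
    // to an instance whose constructor writes are not yet visible to it
    // (the "half-baked" object described above).
    private static volatile LazySingleton instance;

    final int value;

    private LazySingleton() {
        this.value = 42; // constructor write that must be visible to readers
    }

    static LazySingleton getInstance() {
        LazySingleton local = instance;          // first, lock-free check
        if (local == null) {
            synchronized (LazySingleton.class) {
                local = instance;                // second check under the lock
                if (local == null) {
                    instance = local = new LazySingleton();
                }
            }
        }
        return local;
    }
}
```

The volatile write to instance publishes the fully constructed object; removing volatile leaves exactly the hazard the quote describes.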

Solution:

Lazy<T> is a very good choice for thread-safe initialization, and I think it should be left to the consumer to provide it:

var queue = new Lazy<ThreadSafeQueue<int>>(() => new ThreadSafeQueue<int>());

Parallel.For(0, 10000, i =>
{
    if (i % 2 == 0)
        queue.Value.Enqueue(i);
    else
    {
        int item = -1;
        if (queue.Value.TryDequeue(out item))
            Console.WriteLine(item);
    }
});
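For comparison, Java reaches the same effect without an explicit Lazy<T> type via the initialization-on-demand holder idiom: the JVM's class-initialization guarantees publish the instance safely, exactly once. The names below are illustrative:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch: lazy, thread-safe initialization via the
// initialization-on-demand holder idiom. The JVM guarantees that class
// initialization runs at most once and that its result is safely
// published to every thread that triggers it, so no explicit barrier
// or lock is needed at the point of use.
class LazyQueue {
    private LazyQueue() {}

    private static class Holder {
        static final Queue<Integer> INSTANCE = new ConcurrentLinkedQueue<>();
    }

    static Queue<Integer> get() {
        return Holder.INSTANCE; // first call triggers Holder's initialization
    }
}
```

Every thread that calls LazyQueue.get() observes the same fully constructed queue, mirroring queue.Value in the C# answer.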
