Where to place fences/memory barriers to guarantee a fresh read/committed writes?


Question


Like many other people, I've always been confused by volatile reads/writes and fences. So now I'm trying to fully understand what these do.

So, a volatile read is supposed to (1) exhibit acquire-semantics and (2) guarantee that the value read is fresh, i.e., it is not a cached value. Let's focus on (2).

Now, I've read that, if you want to perform a volatile read, you should introduce an acquire fence (or a full fence) after the read, like this:

int local = shared;
Thread.MemoryBarrier();

How exactly does this prevent the read operation from using a previously cached value? According to the definition of a fence (no read/stores are allowed to be moved above/below the fence), I would insert the fence before the read, preventing the read from crossing the fence and being moved backwards in time (aka, being cached).

How does preventing the read from being moved forwards in time (or subsequent instructions from being moved backwards in time) guarantee a volatile (fresh) read? How does it help?


Similarly, I believe that a volatile write should introduce a fence after the write operation, preventing the processor from moving the write forward in time (aka, delaying the write). I believe this would make the processor flush the write to the main memory.
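In code, the pattern I expected a volatile write to follow looks like this (my own sketch of the store-then-fence idea, not the actual BCL code):

```csharp
using System;
using System.Threading;

class WriteThenFence
{
    static int shared;

    // What I expected: the store happens first, then a full fence,
    // so the processor cannot delay the write past the fence.
    static void ExpectedVolatileWrite(int value)
    {
        shared = value;          // the store
        Thread.MemoryBarrier();  // fence *after* the store
    }

    static void Main()
    {
        ExpectedVolatileWrite(42);
        Console.WriteLine(shared); // prints 42
    }
}
```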

But to my surprise, the C# implementation introduces the fence before the write!

[MethodImplAttribute(MethodImplOptions.NoInlining)] // disable optimizations
public static void VolatileWrite(ref int address, int value)
{
    MemoryBarrier(); // Call MemoryBarrier to ensure the proper semantic in a portable way.
    address = value;
}
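For comparison, the matching VolatileRead from the same reference source puts the barrier after the load. This is my reproduction of its shape (wrapped in a small class so it compiles on its own), so treat the exact details as approximate:

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Threading;

static class VolatileDemo
{
    static int shared = 5;

    [MethodImpl(MethodImplOptions.NoInlining)] // disable optimizations
    public static int VolatileRead(ref int address)
    {
        int ret = address;       // the load happens first
        Thread.MemoryBarrier();  // fence *after* the load
        return ret;
    }

    static void Main()
    {
        Console.WriteLine(VolatileRead(ref shared)); // prints 5
    }
}
```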

Update

According to this example, apparently taken from "C# 4 in a Nutshell", fence 2, placed after the write, is supposed to force the write to be flushed to main memory immediately, and fence 3, placed before the read, is supposed to guarantee a fresh read:

class Foo {
  int _answer;
  bool _complete;
  void A() {
    _answer = 123;
    Thread.MemoryBarrier(); // Barrier 1
    _complete = true;
    Thread.MemoryBarrier(); // Barrier 2
  }
  void B() {
    Thread.MemoryBarrier(); // Barrier 3
    if (_complete) {
      Thread.MemoryBarrier(); // Barrier 4
      Console.WriteLine(_answer);
    }
  }
}

The ideas in this book (and my own personal beliefs) seem to contradict the ideas behind C#'s VolatileRead and VolatileWrite implementations.

Solution

How exactly does this prevent the read operation from using a previously cached value?

It does no such thing. A volatile read does not guarantee that the latest value will be returned. In plain English, all it really means is that the next read will return a newer value, and nothing more.

How does preventing the read from being moved forwards in time (or subsequent instructions from being moved backwards in time) guarantee a volatile (fresh) read? How does it help?

Be careful with the terminology here. Volatile is not synonymous with fresh. As I already mentioned above, its real usefulness lies in how two or more volatile reads are chained together. The next read in a sequence of volatile reads will absolutely return a newer value than the previous read of the same address. Lock-free code should be written with this premise in mind. That is, the code should be structured to work on the principle of dealing with a newer value, not the latest value. This is why most lock-free code spins in a loop until it can verify that the operation completed successfully.
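That spin-until-verified pattern typically looks like the following. This is a generic sketch using Interlocked.CompareExchange, not code from the question:

```csharp
using System;
using System.Threading;

class LockFreeCounter
{
    static int count;

    // A newer-but-not-necessarily-latest read is fine here: we read,
    // compute, and then publish only if the slot still holds what we
    // read. If another thread got there first, CompareExchange fails
    // and we retry against a newer value.
    public static void Increment()
    {
        while (true)
        {
            int observed = count;
            // CompareExchange returns the value that was in 'count';
            // if it equals 'observed', our write took effect.
            if (Interlocked.CompareExchange(ref count, observed + 1, observed) == observed)
                return;
        }
    }

    static void Main()
    {
        for (int i = 0; i < 5; i++) Increment();
        Console.WriteLine(count); // prints 5
    }
}
```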

The ideas in this book (and my own personal beliefs) seem to contradict the ideas behind C#'s VolatileRead and VolatileWrite implementations.

Not really. Remember volatile != fresh. Yes, if you want a "fresh" read then you need to place an acquire-fence before the read. But that is not the same operation as a volatile read. What I am saying is that if the implementation of VolatileRead had the call to Thread.MemoryBarrier before the read instruction then it would not actually produce a volatile read. It would produce a fresh read, though.
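The distinction can be sketched as follows. The fence-before-the-load shape is my own illustration of a "fresh" read, not any BCL API; what VolatileRead actually does is the fence-after-the-load shape shown earlier:

```csharp
using System;
using System.Threading;

class FreshReadSketch
{
    static int shared;

    // A fence *before* the load keeps the load from moving backwards
    // in time (being satisfied early by a stale value) -- a "fresh"
    // read -- but it gives no acquire ordering for the instructions
    // that follow the load.
    static int ReadFresh()
    {
        Thread.MemoryBarrier(); // fence before the load
        return shared;
    }

    static void Main()
    {
        shared = 7;
        Console.WriteLine(ReadFresh()); // prints 7
    }
}
```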
