Behavior of memory barrier in Java

Problem description

After reading more blogs/articles etc., I am now really confused about the behavior of loads/stores before/after a memory barrier.

Following are two quotes from Doug Lea in one of his clarification articles about the JMM, both of which are very straightforward:

  1. Anything that was visible to thread A when it writes to volatile field f becomes visible to thread B when it reads f.
  2. Note that it is important for both threads to access the same volatile variable in order to properly set up the happens-before relationship. It is not the case that everything visible to thread A when it writes volatile field f becomes visible to thread B after it reads volatile field g.

But then when I looked into another blog about memory barriers, I came across these:

  1. A store barrier, "sfence" instruction on x86, forces all store instructions prior to the barrier to happen before the barrier and have the store buffers flushed to cache for the CPU on which it is issued.
  2. A load barrier, "lfence" instruction on x86, forces all load instructions after the barrier to happen after the barrier and then wait on the load buffer to drain for that CPU.

To me, Doug Lea's clarification is stricter than the other one: basically, it means that if the load barrier and store barrier are on different monitors, data consistency is not guaranteed. But the latter one implies that even if the barriers are on different monitors, data consistency is still guaranteed. I am not sure whether I understand these two correctly, and I am also not sure which of them is correct.

Consider the following code:

  public class MemoryBarrier {
    volatile int i = 1, j = 2;
    int x;

    public void write() {
      x = 14; //W01
      i = 3;  //W02
    }

    public void read1() {
      if (i == 3) {  //R11
        if (x == 14) //R12
          System.out.println("Foo");
        else
          System.out.println("Bar");
      }
    }

    public void read2() {
      if (j == 2) {  //R21
        if (x == 14) //R22
          System.out.println("Foo");
        else
          System.out.println("Bar");
      }
    }
  }

Let's say we have one writer thread TW1 that first calls MemoryBarrier's write() method, and then two reader threads TR1 and TR2 that call MemoryBarrier's read1() and read2() methods respectively. Consider this program running on a CPU that does not preserve ordering in such cases (x86 DOES preserve ordering here, so this assumption does not hold there). According to the memory model, there will be a StoreStore barrier (let's say SB1) between W01/W02, as well as two LoadLoad barriers, one between R11/R12 and one between R21/R22 (let's say RB1 and RB2).

  1. Since SB1 and RB1 are on the same monitor i, thread TR1, which calls read1, should always see 14 in x, so "Foo" is always printed.
  2. SB1 and RB2 are on different monitors. If Doug Lea is correct, thread TR2 is not guaranteed to see 14 in x, which means "Bar" may occasionally be printed. But if memory barriers work as Martin Thompson described in the blog, the store barrier will push all data to main memory and the load barrier will pull all data from main memory into the cache/buffers, and then TR2 would also be guaranteed to see 14 in x.

I am not sure which one is correct, or whether both are and what Martin Thompson described applies only to the x86 architecture: the JMM does not guarantee that the change to x is visible to TR2, but the x86 implementation does.
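
To make the scenario concrete, here is a minimal driver sketch (my own addition, not part of the original question): TW1 runs write() while TR1 and TR2 run read1() and read2() concurrently.

  public class MemoryBarrierDemo {
    public static void main(String[] args) throws InterruptedException {
      MemoryBarrier mb = new MemoryBarrier();

      // TW1 performs the writes; TR1/TR2 race against it with the reads.
      Thread tw1 = new Thread(mb::write, "TW1");
      Thread tr1 = new Thread(mb::read1, "TR1");
      Thread tr2 = new Thread(mb::read2, "TR2");

      tw1.start();
      tr1.start();
      tr2.start();

      tw1.join();
      tr1.join();
      tr2.join();
    }
  }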

Thanks~

Answer

Doug Lea is right. You can find the relevant part in section §17.4.4 of the Java Language Specification:

§17.4.4 Synchronization Order

[..] A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread (where "subsequent" is defined according to the synchronization order). [..]
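
As a minimal illustration of that rule (my own sketch, not part of the specification text): a volatile write makes everything written before it visible, but only to a thread that subsequently reads the same volatile variable.

  class Handoff {
    int data;               // plain, non-volatile field
    volatile boolean ready; // the volatile variable "v" from the rule above

    void writer() {         // runs on thread A
      data = 42;
      ready = true;         // volatile write of v
    }

    void reader() {         // runs on thread B
      if (ready) {          // volatile read of the same variable v
        // Guaranteed to print 42: the write to "ready" synchronizes-with
        // this read, and "data" was written before the volatile write.
        System.out.println(data);
      }
    }
  }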

The memory model of the concrete machine doesn't matter, because the semantics of the Java programming language are defined in terms of an abstract machine, independent of the concrete machine. It is the responsibility of the Java runtime environment to execute the code in such a way that it complies with the guarantees given by the Java Language Specification.

Regarding the actual question:

  • If there is no further synchronization, the method read2 can print "Bar", because read2 can be executed before write.
  • If there is additional synchronization with a CountDownLatch to make sure that read2 is executed after write, then the method read2 will never print "Bar", because the synchronization with the CountDownLatch removes the data race on x (see the sketch below).
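
A minimal sketch of that second case (my own illustration, reusing the MemoryBarrier class from the question): the countDown()/await() pair establishes the happens-before edge that makes x visible to the reader.

  import java.util.concurrent.CountDownLatch;

  public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
      MemoryBarrier mb = new MemoryBarrier();
      CountDownLatch written = new CountDownLatch(1);

      Thread writer = new Thread(() -> {
        mb.write();          // x = 14; i = 3;
        written.countDown(); // happens-before a successful await()
      });

      Thread reader = new Thread(() -> {
        try {
          written.await();   // after this returns, the writes above are visible
          mb.read2();        // always prints "Foo" now, never "Bar"
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      });

      writer.start();
      reader.start();
      writer.join();
      reader.join();
    }
  }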

Independent volatile variables:

Does it make sense that a write to a volatile variable does not synchronize-with a read of any other volatile variable?

Yes, it makes sense. If two threads need to interact with each other, they usually have to use the same volatile variable in order to exchange information. On the other hand, if a thread uses a volatile variable without a need to interact with other threads, we don't want to pay the cost of a memory barrier.

It is actually important in practice. Let's look at an example. The following class uses a volatile member variable:

class Int {
    public volatile int value;
    public Int(int value) { this.value = value; }
}

Imagine this class is used only locally within a method. The JIT compiler can easily detect that the object is only used within this method (escape analysis).

public int deepThought() {
    return new Int(42).value;
}

Given the above rule, the JIT compiler can remove all effects of the volatile reads and writes, because the volatile variable cannot be accessed from any other thread.

This optimization actually exists in the Java JIT compiler.
