Do any JVM's JIT compilers generate code that uses vectorized floating point instructions?


Question


Let's say the bottleneck of my Java program really is some tight loops to compute a bunch of vector dot products. Yes I've profiled, yes it's the bottleneck, yes it's significant, yes that's just how the algorithm is, yes I've run Proguard to optimize the byte code, etc.

The work is, essentially, dot products. As in, I have two float[50] and I need to compute the sum of pairwise products. I know processor instruction sets exist to perform these kinds of operations quickly and in bulk, like SSE or MMX.

Yes I can probably access these by writing some native code in JNI. The JNI call turns out to be pretty expensive.

I know you can't guarantee what a JIT will compile or not compile. Has anyone ever heard of a JIT generating code that uses these instructions? And if so, is there anything about the Java code that helps make it compilable this way?

Probably a "no"; worth asking.
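For concreteness, the hot loop described above is essentially this (a minimal self-contained sketch of the pairwise-product sum over two float[50]; the class and method names are illustrative):

```java
import java.util.Arrays;

public class DotProduct {
    // Sum of pairwise products of two equal-length float arrays.
    static float dot(float[] a, float[] b) {
        float sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        float[] a = new float[50], b = new float[50];
        Arrays.fill(a, 1.0f);
        Arrays.fill(b, 2.0f);
        System.out.println(dot(a, b)); // 50 pairs of 1.0f * 2.0f sum to 100.0
    }
}
```

A simple counted loop over arrays like this is the shape the question is asking about: whether the JIT can turn it into SSE/MMX-style vector instructions.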

Solution

So, basically, you want your code to run faster. JNI is the answer. I know you said it didn't work for you, but let me show you that you are wrong.

Here's Dot.java:

import java.nio.FloatBuffer;
import org.bytedeco.javacpp.*;
import org.bytedeco.javacpp.annotation.*;

@Platform(include="Dot.h", compiler="fastfpu")
public class Dot {
    static { Loader.load(); }

    static float[] a = new float[50], b = new float[50];
    static float dot() {
        float sum = 0;
        for (int i = 0; i < 50; i++) {
            sum += a[i]*b[i];
        }
        return sum;
    }
    static native @MemberGetter FloatPointer ac();
    static native @MemberGetter FloatPointer bc();
    static native float dotc();

    public static void main(String[] args) {
        FloatBuffer ab = ac().capacity(50).asBuffer();
        FloatBuffer bb = bc().capacity(50).asBuffer();

        for (int i = 0; i < 10000000; i++) {
            a[i%50] = b[i%50] = dot();
            float sum = dotc();
            ab.put(i%50, sum);
            bb.put(i%50, sum);
        }
        long t1 = System.nanoTime();
        for (int i = 0; i < 10000000; i++) {
            a[i%50] = b[i%50] = dot();
        }
        long t2 = System.nanoTime();
        for (int i = 0; i < 10000000; i++) {
            float sum = dotc();
            ab.put(i%50, sum);
            bb.put(i%50, sum);
        }
        long t3 = System.nanoTime();
        System.out.println("dot(): " + (t2 - t1)/10000000 + " ns");
        System.out.println("dotc(): "  + (t3 - t2)/10000000 + " ns");
    }
}

and Dot.h:

float ac[50], bc[50];

inline float dotc() {
    float sum = 0;
    for (int i = 0; i < 50; i++) {
        sum += ac[i]*bc[i];
    }
    return sum;
}

We can compile and run that with JavaCPP using command lines like these:

$ javac -cp javacpp.jar Dot.java
$ java -jar javacpp.jar Dot
$ java -cp javacpp.jar:. Dot

With an Intel Core i7-3632QM CPU @ 2.20GHz, Fedora 20, GCC 4.8.3, and OpenJDK 7 or 8, I get this kind of output:

dot(): 37 ns
dotc(): 23 ns

Or roughly 1.6 times faster. We need to use direct NIO buffers instead of arrays, but HotSpot can access direct NIO buffers as fast as arrays. On the other hand, manually unrolling the loop does not provide a measurable boost in performance, in this case.
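The point about direct NIO buffers can be illustrated with a pure-Java variant of dot() that reads from FloatBuffer instead of float[]. This is a minimal sketch: the buffers here are allocated with ByteBuffer.allocateDirect rather than obtained from JavaCPP's ac()/bc(), and the class name is illustrative.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class BufferDot {
    // Direct buffers backed by native memory, analogous to
    // the views over ac[] and bc[] returned by ac()/bc().
    static FloatBuffer a = ByteBuffer.allocateDirect(50 * Float.BYTES)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    static FloatBuffer b = ByteBuffer.allocateDirect(50 * Float.BYTES)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();

    // Same loop shape as dot(), indexing into buffers instead of arrays.
    static float dot() {
        float sum = 0;
        for (int i = 0; i < 50; i++) {
            sum += a.get(i) * b.get(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 50; i++) {
            a.put(i, 1.0f);
            b.put(i, 2.0f);
        }
        System.out.println(dot()); // 50 pairs of 1.0f * 2.0f sum to 100.0
    }
}
```

HotSpot intrinsifies the get/put accessors on direct buffers, which is why this style can run at array speed while still sharing memory with native code.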
