AVX scalar operations are much faster
Problem description
I tested the following simple function
void mul(double *a, double *b) {
    for (int i = 0; i < N; i++) a[i] *= b[i];
}
with very large arrays so that it is memory-bandwidth bound. The test code I use is below. When I compile with -O2 it takes 1.7 seconds. When I compile with -O2 -mavx it takes only 1.0 seconds. The non-VEX-encoded scalar operations are 70% slower! Why is this?
Here is the assembly for -O2 and -O2 -mavx.
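The assembly listings were links in the original post and did not survive here. As a rough, hypothetical sketch (not the exact compiler output), the difference is only in encoding: at -O2 GCC emits a scalar loop with legacy-SSE instructions, while -mavx emits the same scalar loop with VEX-encoded instructions:

```
# -O2 (legacy SSE encoding)         # -O2 -mavx (VEX encoding)
.L2:                                .L2:
    movsd  (%rdi,%rax), %xmm0           vmovsd (%rdi,%rax), %xmm0
    mulsd  (%rsi,%rax), %xmm0           vmulsd (%rsi,%rax), %xmm0, %xmm0
    movsd  %xmm0, (%rdi,%rax)           vmovsd %xmm0, (%rdi,%rax)
    addq   $8, %rax                     addq   $8, %rax
    cmpq   $8000000, %rax               cmpq   $8000000, %rax
    jne    .L2                          jne    .L2
```

The instructions do identical work, which is what makes the 70% timing gap surprising.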
System: i7-6700HQ@2.60GHz (Skylake) 32 GB mem, Ubuntu 16.10, GCC 6.3
Test code
//gcc -O2 -fopenmp test.c
//or
//gcc -O2 -mavx -fopenmp test.c
#include <string.h>
#include <stdio.h>
#include <x86intrin.h>
#include <omp.h>

#define N 1000000
#define R 1000

void mul(double *a, double *b) {
    for (int i = 0; i < N; i++) a[i] *= b[i];
}

int main() {
    double *a = (double*)_mm_malloc(sizeof *a * N, 32);
    double *b = (double*)_mm_malloc(sizeof *b * N, 32);
    //b must be initialized to get the correct bandwidth!!!
    memset(a, 1, sizeof *a * N);
    memset(b, 1, sizeof *b * N);

    double dtime;
    //use 3.0 to force floating-point division (integer division truncates)
    const double mem = 3.0*sizeof(double)*N*R/1024/1024/1024;
    const double maxbw = 34.1;
    dtime = -omp_get_wtime();
    for(int i=0; i<R; i++) mul(a,b);
    dtime += omp_get_wtime();
    printf("time %.2f s, %.1f GB/s, efficiency %.1f%%\n", dtime, mem/dtime, 100*mem/dtime/maxbw);
    _mm_free(a), _mm_free(b);
}
Answer
The problem is related to a dirty upper half of an AVX register after calling omp_get_wtime(). This is a problem particularly for Skylake processors.
The first time I read about this problem was here. Since then other people have observed this problem: here and here.
Using gdb I found that omp_get_wtime() calls clock_gettime. I rewrote my code to use clock_gettime() and I see the same problem.
void fix_avx() { __asm__ __volatile__ ( "vzeroupper" : : : ); }
void fix_sse() { }
void (*fix)();

double get_wtime() {
    struct timespec time;
    clock_gettime(CLOCK_MONOTONIC, &time);
#ifndef __AVX__
    fix();
#endif
    return time.tv_sec + 1E-9*time.tv_nsec;
}

void dispatch() {
    fix = fix_sse;
#if defined(__INTEL_COMPILER)
    if (_may_i_use_cpu_feature (_FEATURE_AVX)) fix = fix_avx;
#else
#if defined(__GNUC__) && !defined(__clang__)
    __builtin_cpu_init();
#endif
    if(__builtin_cpu_supports("avx")) fix = fix_avx;
#endif
}
Stepping through code with gdb I see that the first time clock_gettime is called it calls _dl_runtime_resolve_avx(). I believe the problem is in this function based on this comment. This function appears to only be called the first time clock_gettime is called.
With GCC the problem goes away using __asm__ __volatile__ ( "vzeroupper" : : : ); after the first call to clock_gettime, however with Clang (using clang -O2 -fno-vectorize since Clang vectorizes even at -O2) it only goes away using it after every call to clock_gettime.
Here is the code I used to test this (with GCC 6.3 and Clang 3.8)
#include <string.h>
#include <stdio.h>
#include <x86intrin.h>
#include <time.h>

void fix_avx() { __asm__ __volatile__ ( "vzeroupper" : : : ); }
void fix_sse() { }
void (*fix)();

double get_wtime() {
    struct timespec time;
    clock_gettime(CLOCK_MONOTONIC, &time);
#ifndef __AVX__
    fix();
#endif
    return time.tv_sec + 1E-9*time.tv_nsec;
}

void dispatch() {
    fix = fix_sse;
#if defined(__INTEL_COMPILER)
    if (_may_i_use_cpu_feature (_FEATURE_AVX)) fix = fix_avx;
#else
#if defined(__GNUC__) && !defined(__clang__)
    __builtin_cpu_init();
#endif
    if(__builtin_cpu_supports("avx")) fix = fix_avx;
#endif
}

#define N 1000000
#define R 1000

void mul(double *a, double *b) {
    for (int i = 0; i < N; i++) a[i] *= b[i];
}

int main() {
    dispatch();
    //use 3.0 to force floating-point division (integer division truncates)
    const double mem = 3.0*sizeof(double)*N*R/1024/1024/1024;
    const double maxbw = 34.1;
    double *a = (double*)_mm_malloc(sizeof *a * N, 32);
    double *b = (double*)_mm_malloc(sizeof *b * N, 32);
    //b must be initialized to get the correct bandwidth!!!
    memset(a, 1, sizeof *a * N);
    memset(b, 1, sizeof *b * N);

    double dtime;
    //dtime = get_wtime(); // call once to fix GCC
    //printf("%f\n", dtime);
    //fix = fix_sse;
    dtime = -get_wtime();
    for(int i=0; i<R; i++) mul(a,b);
    dtime += get_wtime();
    printf("time %.2f s, %.1f GB/s, efficiency %.1f%%\n", dtime, mem/dtime, 100*mem/dtime/maxbw);
    _mm_free(a), _mm_free(b);
}
If I disable lazy function call resolution with -z now (e.g. clang -O2 -fno-vectorize -z now foo.c) then Clang only needs __asm__ __volatile__ ( "vzeroupper" : : : ); after the first call to clock_gettime, just like GCC.
I expected that with -z now I would only need __asm__ __volatile__ ( "vzeroupper" : : : ); right after main() but I still need it after the first call to clock_gettime.