loop tiling/blocking for large dense matrix multiplication


Question


I was wondering if someone could show me how to use loop tiling/loop blocking for large dense matrix multiplication effectively. I am doing C = AB with 1000x1000 matrices. I have followed the example on Wikipedia for loop tiling but I get worse results using tiling than without.

http://en.wikipedia.org/wiki/Loop_tiling

http://software.intel.com/en-us/articles/how-to-use-loop-blocking-to-optimize-memory-use-on-32-bit-intel-architecture


I have provided some code below. The naive method is very slow due to cache misses. The transpose method creates the transpose of B in a buffer. This method gives the fastest result (matrix multiplication is O(n^3) and the transpose is O(n^2), so for n=1000 the transpose costs at least 1000x less than the multiplication itself). The wiki method without blocking is also fast and does not need a buffer. The blocking method is slower. Another problem with blocking is that it has to update each block several times. This is a challenge for threading/OpenMP because it can cause race conditions if one is not careful.


I should point out that using AVX with a modification of the transpose method I get results faster than Eigen. However, my results with SSE are a bit slower than Eigen so I think I could use caching better.
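
For concreteness, here is a minimal sketch of the kind of AVX kernel the transpose method lends itself to. This is not the author's actual code: dot_avx is a hypothetical helper, it assumes M is a multiple of 8, and it uses plain AVX (compile with -mavx).

#include <immintrin.h>

// Hypothetical helper: dot product of two length-M float arrays, 8 lanes at a time.
// With B transposed into B2 as in matrix_mult_transpose below, both inputs are contiguous.
static inline float dot_avx(const float *a, const float *b, const int M) {
    __m256 acc = _mm256_setzero_ps();
    for (int l = 0; l < M; l += 8) {
        __m256 va = _mm256_loadu_ps(a + l);
        __m256 vb = _mm256_loadu_ps(b + l);
        acc = _mm256_add_ps(acc, _mm256_mul_ps(va, vb)); // acc += va * vb
    }
    // horizontal sum of the 8 partial sums
    __m128 lo = _mm256_castps256_ps128(acc);
    __m128 hi = _mm256_extractf128_ps(acc, 1);
    __m128 s  = _mm_add_ps(lo, hi);
    s = _mm_hadd_ps(s, s);
    s = _mm_hadd_ps(s, s);
    return _mm_cvtss_f32(s);
}

With that helper, the inner two loops of matrix_mult_transpose reduce to C[K*i + j] = dot_avx(&A[M*i], &B2[M*j], M).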


I think I have an idea what I want to do. It comes from Cannon's algorithm from 1969.
http://en.wikipedia.org/wiki/Matrix_multiplication#Communication-avoiding_and_distributed_algorithms

I need to do block matrix multiplication and have each thread handle a sub-matrix of C rather than of A and B. For example, if I divide my matrices into four blocks, I would do:

+----------+     +----------+   +----------+
|          |     |          |   |          |
| C1    C2 |     | A1    A2 |   | B1    B2 |
|          |  =  |          | x |          |
| C3    C4 |     | A3    A4 |   | B3    B4 |
|          |     |          |   |          |
+----------+     +----------+   +----------+


thread 1: C1 = A1B1 + A2B3
thread 2: C2 = A1B2 + A2B4
thread 3: C3 = A3B1 + A4B3
thread 4: C4 = A3B2 + A4B4


That removes the race condition. I'll have to think about this.
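
A minimal sketch of that idea (hypothetical code, not from the question; it assumes the block size bs divides N, M and K evenly and uses the same row-major N x M by M x K layout as the routines below). Each parallel iteration owns exactly one tile of C, so no two threads ever write the same element.

void matrix_mult_c_blocks(const float *A, const float *B, float *C,
                          const int N, const int M, const int K, const int bs) {
    // One (i0, j0) tile of C per iteration of the collapsed loop nest.
    #pragma omp parallel for collapse(2)
    for (int i0 = 0; i0 < N; i0 += bs) {
        for (int j0 = 0; j0 < K; j0 += bs) {
            // zero this tile of C
            for (int i = i0; i < i0 + bs; i++)
                for (int j = j0; j < j0 + bs; j++)
                    C[K*i + j] = 0;
            // accumulate over the block row of A and block column of B:
            // C_tile += A(i-block, l-block) * B(l-block, j-block)
            for (int l0 = 0; l0 < M; l0 += bs)
                for (int i = i0; i < i0 + bs; i++)
                    for (int l = l0; l < l0 + bs; l++) {
                        const float a = A[M*i + l];
                        for (int j = j0; j < j0 + bs; j++)
                            C[K*i + j] += a * B[K*l + j];
                    }
        }
    }
}

For the 2x2 example above, thread 1 would get the (i0, j0) = (0, 0) tile (C1), thread 2 the (0, K/2) tile (C2), and so on.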

// Naive triple loop: C (N x K) = A (N x M) * B (M x K). B is traversed column-wise,
// so it misses cache badly.
void matrix_mult_naive(const float *A, const float *B, float *C, const int N, const int M, const int K) {
    #pragma omp parallel for
    for(int i=0; i<N; i++) {
        for(int j=0; j<K; j++) {
            float tmp = 0;
            for(int l=0; l<M; l++) {
                tmp += A[M*i+l]*B[K*l+j];
            }
            C[K*i + j] = tmp;
        }
    }
}

// Transpose B into a buffer first so both inner-loop operands are read contiguously.
// (aligned_malloc/aligned_free and transpose are helper routines not shown here.)
void matrix_mult_transpose(const float *A, const float *B, float *C, const int N, const int M, const int K) {
    float *B2 = (float*)aligned_malloc(M*K*sizeof(float), 32);
    transpose(B, B2, M, K, 1);
    #pragma omp parallel for
    for(int i=0; i<N; i++) {
        for(int j=0; j<K; j++) {
            float tmp = 0;
            for(int l=0; l<M; l++) {
                tmp += A[M*i+l]*B2[M*j+l];
            }
            C[K*i + j] = tmp;
        }
    }
    aligned_free(B2);
}

// Wikipedia loop order (i, l, j) without blocking: streams through rows of B, no extra buffer needed.
void matrix_mult_wiki(const float *A, const float *B, float *C, const int N, const int M, const int K) {
    for(int i=0; i<N; i++) {
        for(int j=0; j<K; j++) {
            C[K*i + j] = 0;
        }  
    }
    #pragma omp parallel for
    for(int i=0; i<N; i++) {
        for(int l=0; l<M; l++) {
            float a  = A[M*i+l];
            for(int j=0; j<K; j++) {
                C[K*i + j] += a*B[K*l+j];
            }
        }
    }
}

// Blocked version following the Wikipedia example (tiles over l and j only); this is
// the variant that came out slower than the transpose method.
void matrix_mult_wiki_block(const float *A, const float *B, float *C, const int N, const int M, const int K) {
    const int block_size = 8;  // I have tried several different block sizes
    for(int i=0; i<N; i++) {
        for(int j=0; j<K; j++) {
            C[K*i + j] = 0;
        }
    }
    for(int l2=0; l2<M; l2+=block_size) {
        for(int j2=0; j2<K; j2+=block_size) {
            #pragma omp parallel for
            for(int i=0; i<N; i++) {
                for(int l=l2; l<min(M, l2+block_size); l++) {
                    for(int j=j2; j<min(K, j2+block_size); j++) {
                        C[K*i + j] += A[M*i+l]*B[K*l+j];
                    }
                }
            }
        }
    }
}

Answer


The best results I've gotten are by adding one more for loop that blocks over your N, and by rearranging the loops. I also hoisted loop-invariant code, but the compiler's optimizer should hopefully do this automatically. The block size should be the cache line size divided by sizeof(float). This got it ~50% faster than the transposed approach.


If you have to pick just one of AVX or blocking, using AVX extensions (vfmadd###ps and haddps) is still substantially faster. Using both is best and straightforward to add given that you're already testing if the block size is a multiple of 64 / sizeof(float) == 16 floats == two 256-bit AVX registers.

  • Transposed: 1,816,522 ticks
  • Tiled: 892,431 (51% faster)
  • AVX + tiled: 230,512 (87% faster)

Tiled:

void matrix_mult_wiki_block(const float*A , const float* B, float* C,
                            const int N, const int M, const int K) {
    const int block_size = 64 / sizeof(float); // 64 = common cache line size
    for(int i=0; i<N; i++) {
        for(int j=0; j<K; j++) {
            C[K*i + j] = 0;
        }
    }
    for (int i0 = 0; i0 < N; i0 += block_size) {
        int imax = i0 + block_size > N ? N : i0 + block_size;

        for (int j0 = 0; j0 < M; j0 += block_size) {
            int jmax = j0 + block_size > M ? M : j0 + block_size;

            for (int k0 = 0; k0 < K; k0 += block_size) {
                int kmax = k0 + block_size > K ? K : k0 + block_size;

                for (int j1 = j0; j1 < jmax; ++j1) {
                    int sj = M * j1;

                    for (int i1 = i0; i1 < imax; ++i1) {
                        int mi = M * i1;
                        int ki = K * i1;
                        int kij = ki + j1;

                        for (int k1 = k0; k1 < kmax; ++k1) {
                            C[kij] += A[mi + k1] * B[sj + k1];
                        }
                    }
                }
            }
        }
    }
}
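
The AVX+tiled variant is not shown in the answer, but here is a rough sketch of how the innermost k1 loop above could be vectorized when the block is exactly 16 floats (two 256-bit registers), using the vfmadd/haddps instructions mentioned earlier. dot16_avx is a hypothetical helper, it assumes kmax - k0 == 16, and it needs <immintrin.h> and -mavx -mfma.

// Hypothetical 16-wide slice of the dot product in the innermost loop:
// returns the sum of a[k]*b[k] for k in [0, 16).
static inline float dot16_avx(const float *a, const float *b) {
    __m256 p0 = _mm256_mul_ps(_mm256_loadu_ps(a), _mm256_loadu_ps(b));                  // first 8 products
    __m256 p  = _mm256_fmadd_ps(_mm256_loadu_ps(a + 8), _mm256_loadu_ps(b + 8), p0);    // vfmadd: next 8, fused
    __m128 s  = _mm_add_ps(_mm256_castps256_ps128(p), _mm256_extractf128_ps(p, 1));
    s = _mm_hadd_ps(s, s);  // haddps, as mentioned in the answer
    s = _mm_hadd_ps(s, s);
    return _mm_cvtss_f32(s);
}

// Used in place of the k1 loop:
//     C[kij] += dot16_avx(&A[mi + k0], &B[sj + k0]);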


As for the Cannon reference, the SUMMA algorithm is a better one to follow.


In case anyone else is optimizing tall-skinny multiplications ({~1e9 x 50} x {50 x 50}, how I ended up here), the transposed approach is nearly identical in performance to the blocked approach up to n=18 (floats). n=18 is a pathological case (way worse than 17 or 19) and I don't quite see the cache access patterns that cause this. All larger n are improved with the blocked approach.
