Eigen & OpenMP: No parallelisation due to false sharing and thread overhead


Problem Description

System specifications:

  1. Intel Xeon E7-v3 processor (4 sockets, 16 cores/socket, 2 threads/core, i.e. 128 hardware threads in total)
  2. Eigen library and C++

The following is the serial implementation of the code snippet:

Eigen::VectorXd get_Row(const int j, const int nColStart, const int nCols) {

    Eigen::VectorXd row(nCols);
    for (int k=0; k<nCols; ++k) {
        row(k) = get_Matrix_Entry(j,k+nColStart);
    }
    return row;
}

// One matrix entry: exp(-(x-y)^2)
double get_Matrix_Entry(int x , int y){
    return exp(-(x-y)*(x-y));
}

I need to parallelise the get_Row part, as nCols can be as large as 10^6, so I tried techniques such as the following:

  1. Naive parallelisation:

Eigen::VectorXd get_Row(const int j, const int nColStart, const int nCols) {  
    Eigen::VectorXd row(nCols);

     #pragma omp parallel for schedule(static,8)    
     for (int k=0; k<nCols; ++k) {
          row(k) = get_Matrix_Entry(j,k+nColStart);
     }

     return row;
}

  2. Strip mining:

    Eigen::VectorXd get_Row(const int j, const int nColStart, const int nCols) { 
        int vec_len = 8;
        Eigen::VectorXd row(nCols);
        int cols = nCols;
        int rem = cols%vec_len;
        if(rem!=0)
            cols-=rem;
    
        #pragma omp parallel for    
        for(int ii=0;ii<cols; ii+=vec_len){
             for(int i=ii;i<ii+vec_len;i++){
                 row(i) = get_Matrix_Entry(j,i+nColStart);
             }
        }
    
        // handle the remaining nCols - cols entries serially
        for(int jj=cols; jj<nCols;jj++)
            row(jj) = get_Matrix_Entry(j,jj+nColStart);
    
        return row;
    }
    

  3. Avoiding false sharing (an approach found online):

    Eigen::VectorXd get_Row(const int j, const int nColStart, const int nCols) {
        int cache_line_size=8;
        Eigen::MatrixXd row_m(nCols,cache_line_size);
    
        #pragma omp parallel for schedule(static,1)
        for (int k=0; k<nCols; ++k) 
            row_m(k,0)  =   get_Matrix_Entry(j,k+nColStart);
    
        Eigen::VectorXd row(nCols); 
        row = row_m.block(0,0,nCols,1);
    
       return row;
    
    }
    

  Output:

    None of the above techniques reduced the time taken to execute get_Row for large nCols, which implies that the naive parallelisation performed about the same as the other techniques (although all were better than serial). Are there any suggestions or methods that could help improve the time?

    As mentioned by user Avi Ginsburg, here are some additional system details:

    • g++ (GCC) compiler, version 4.4.7
    • Eigen library version 3.3.2
    • Compiler flags used: "-c -fopenmp -Wall -march=native -O3 -funroll-all-loops -ffast-math -ffinite-math-only -I header", where header is the folder containing Eigen.
    • Output of gcc -march=native -Q --help=target (only some of the flags are shown):

    -mavx              [enabled]
    -mfancy-math-387   [enabled]
    -mfma              [disabled]
    -march=            core2

    For a full description of the flags, please see this.

Recommended Answer

    Try rewriting your function as a single expression and let Eigen do the vectorization itself, i.e.:

    Eigen::VectorXd get_Row(const int j, const int nColStart, const int nCols) {
    
        Eigen::VectorXd row(nCols);
    
        row = (-( Eigen::VectorXd::LinSpaced(nCols, nColStart, nColStart + nCols - 1).array()
                          - double(j)).square()).exp().matrix();
    
        return row;
    }
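
    Here Eigen::VectorXd::LinSpaced(nCols, nColStart, nColStart + nCols - 1) generates the column indices as a vector, so the subtraction, square() and exp() become one array expression that Eigen can vectorize, instead of a scalar loop calling get_Matrix_Entry.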
    

    Make sure to use -mavx and -mfma (or -march=native) when compiling. That gives me a x4 speedup on an i7 (I know you are talking about trying to use 64/128 threads, but this is with a single thread).
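
    For example, a compile line along these lines should enable those instruction sets (the source file name here is only illustrative): g++ -std=c++11 -fopenmp -mavx -mfma -O3 get_row.cpp.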

    You can enable OpenMP for some further speedup by dividing the computation into segments:

    Eigen::VectorXd get_Row_omp(const int j, const int nColStart, const int nCols) {
    
        Eigen::VectorXd row(nCols);
    
    #pragma omp parallel
        {
            int num_threads = omp_get_num_threads();
            int tid = omp_get_thread_num();
            int n_per_thread = nCols / num_threads;
            if ((n_per_thread * num_threads < nCols)) n_per_thread++;
            int start = tid * n_per_thread;
            int len = n_per_thread;
            if (tid + 1 == num_threads) len = nCols - start;
    
            if(start < nCols)
                row.segment(start, len) = (-(Eigen::VectorXd::LinSpaced(len,
                                   nColStart + start, nColStart + start + len - 1)
                                .array() - double(j)).square()).exp().matrix();
    
        }
        return row;
    
    }
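
    To make the chunking arithmetic above concrete: each thread receives n_per_thread = ceil(nCols / num_threads) columns and the last thread takes whatever remains. The standalone sketch below (no OpenMP needed; the values of nCols and the thread count are illustrative, not taken from the original post) just prints the range each thread would get:

    #include <iostream>

    int main() {
        const int nCols = 10;       // illustrative problem size
        const int num_threads = 4;  // illustrative thread count
        for (int tid = 0; tid < num_threads; ++tid) {
            int n_per_thread = nCols / num_threads;                  // 10/4 = 2
            if (n_per_thread * num_threads < nCols) n_per_thread++;  // 2*4 < 10, so round up to 3
            int start = tid * n_per_thread;                          // 0, 3, 6, 9
            int len = n_per_thread;
            if (tid + 1 == num_threads) len = nCols - start;         // last thread gets the remainder: 10 - 9 = 1
            if (start < nCols)                                       // skip empty trailing chunks
                std::cout << "tid " << tid << ": [" << start << ", " << start + len << ")\n";
        }
        return 0;
    }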
    

    For me (4 cores), I get an additional ~x3.3 speedup when computing 10^8 elements, but I expect this to be lower for 10^6 elements and/or 64/128 cores (normalized for the number of cores, of course).

    I hadn't placed any checks to make sure that the OMP threads didn't go out of bounds, and I had mixed up the second and third arguments to Eigen::VectorXd::LinSpaced in the serial version. That probably accounted for any errors you had. Additionally, I've pasted the code that I used for testing below. I compiled with g++ -std=c++11 -fopenmp -march=native -O3; adapt it to your needs.

    #include <Eigen/Core>
    #include <cmath>
    #include <iostream>
    #include <omp.h>
    
    
    // Reference scalar implementation of one matrix entry: exp(-(x-y)^2)
    double get_Matrix_Entry(int x, int y) {
            return exp(-(x - y)*(x - y));
    }
    
    Eigen::VectorXd get_RowOld(const int j, const int nColStart, const int nCols) {
    
            Eigen::VectorXd row(nCols);
            for (int k = 0; k<nCols; ++k) {
                    row(k) = get_Matrix_Entry(j, k + nColStart);
            }
            return row;
    }
    
    
    Eigen::VectorXd get_Row(const int j, const int nColStart, const int nCols) {
    
            Eigen::VectorXd row(nCols);
    
            row = (-( Eigen::VectorXd::LinSpaced(nCols, nColStart, nColStart + nCols - 1).array() - double(j)).square()).exp().matrix();
    
            return row;
    }
    
    Eigen::VectorXd get_Row_omp(const int j, const int nColStart, const int nCols) {
    
            Eigen::VectorXd row(nCols);
    
    #pragma omp parallel
            {
                    int num_threads = omp_get_num_threads();
                    int tid = omp_get_thread_num();
                    int n_per_thread = nCols / num_threads;
                    if ((n_per_thread * num_threads < nCols)) n_per_thread++;
                    int start = tid * n_per_thread;
                    int len = n_per_thread;
                    if (tid + 1 == num_threads) len = nCols - start;
    
    
                    // debug print: each thread reports its chunk (tid/num_threads, chunk size, start, len, end)
                    #pragma omp critical
                    {
                            std::cout << tid << "/" << num_threads << "\t" << n_per_thread << "\t"
                                      << start << "\t" << len << "\t" << start + len << "\n\n";
                    }
    
                    if(start < nCols)
                            row.segment(start, len) = (-(Eigen::VectorXd::LinSpaced(len, nColStart + start, nColStart + start + len - 1).array() - double(j)).square()).exp().matrix();
    
            }
            return row;
    }
    
    int main()
    {
            std::cout << EIGEN_WORLD_VERSION << '.' << EIGEN_MAJOR_VERSION << '.' << EIGEN_MINOR_VERSION << '\n';
            volatile int b = 3;     // column offset; volatile keeps the compiler from treating it as a constant
            int sz = 6553600;       // number of columns; the reassignments below switch between test cases
            sz = 16;
            b = 6553500;
            b = 3;
            {
                    auto beg = omp_get_wtime();
                    auto r = get_RowOld(5, b, sz);
                    auto end = omp_get_wtime();
                    auto diff = end - beg;
                    std::cout << r.rows() << "\t" << r.cols() << "\n";
    //              std::cout << r.transpose() << "\n";
                    std::cout << "Old: " << r.mean() << "\n" << diff << "\n\n";
    
                    beg = omp_get_wtime();
                    auto r2 = get_Row(5, b, sz);
                    end = omp_get_wtime();
                    diff = end - beg;
                    std::cout << r2.rows() << "\t" << r2.cols() << "\n";
    //              std::cout << r2.transpose() << "\n";
                    std::cout << "Eigen:         " << (r2-r).cwiseAbs().sum() << "\t" << (r-r2).cwiseAbs().mean() << "\n" << diff << "\n\n";
    
                    auto omp_beg = omp_get_wtime();
                    auto r3 = get_Row_omp(5, b, sz);
                    auto omp_end = omp_get_wtime();
                    auto omp_diff = omp_end - omp_beg;
                    std::cout << r3.rows() << "\t" << r3.cols() << "\n";
    //              std::cout << r3.transpose() << "\n";
                    std::cout << "OMP and Eigen: " << (r3-r).cwiseAbs().sum() << "\t" << (r - r3).cwiseAbs().mean() << "\n" << omp_diff << "\n";
            }
    
            return 0;
    
    }
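
    To try the OpenMP version with a specific thread count at run time, the standard OMP_NUM_THREADS environment variable can be used, e.g. OMP_NUM_THREADS=64 ./a.out (the executable name here is just g++'s default output, not something from the original post).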
    
