Why is one loop so much slower than two loops?


Problem description

Suppose a1, b1, c1, and d1 point to heap memory, and my numerical code has the following core loop.

const int n=100000;

for(int j=0;j<n;j++){
    a1[j] += b1[j];
    c1[j] += d1[j];
}

This loop is executed 10,000 times via another outer for loop. To speed it up, I changed the code to:

for(int j=0;j<n;j++){
    a1[j] += b1[j];
}
for(int j=0;j<n;j++){
    c1[j] += d1[j];
}

Compiled on MS Visual C++ 10.0 with full optimization and SSE2 enabled for 32-bit on an Intel Core 2 Duo (x64), the first example takes 5.5 seconds and the double-loop example takes only 1.9 seconds. My question is: (please refer to my rephrased question at the bottom)

PS: I am not sure if this helps:

Disassembly for the first loop basically looks like this (this block is repeated about five times in the full program):

movsd       xmm0,mmword ptr [edx+18h]
addsd       xmm0,mmword ptr [ecx+20h]
movsd       mmword ptr [ecx+20h],xmm0
movsd       xmm0,mmword ptr [esi+10h]
addsd       xmm0,mmword ptr [eax+30h]
movsd       mmword ptr [eax+30h],xmm0
movsd       xmm0,mmword ptr [edx+20h]
addsd       xmm0,mmword ptr [ecx+28h]
movsd       mmword ptr [ecx+28h],xmm0
movsd       xmm0,mmword ptr [esi+18h]
addsd       xmm0,mmword ptr [eax+38h]

Each loop of the double-loop example produces this code (the following block is repeated about three times):

addsd       xmm0,mmword ptr [eax+28h]
movsd       mmword ptr [eax+28h],xmm0
movsd       xmm0,mmword ptr [ecx+20h]
addsd       xmm0,mmword ptr [eax+30h]
movsd       mmword ptr [eax+30h],xmm0
movsd       xmm0,mmword ptr [ecx+28h]
addsd       xmm0,mmword ptr [eax+38h]
movsd       mmword ptr [eax+38h],xmm0
movsd       xmm0,mmword ptr [ecx+30h]
addsd       xmm0,mmword ptr [eax+40h]
movsd       mmword ptr [eax+40h],xmm0

EDIT: The question turned out to be of no relevance, as the behavior severely depends on the sizes of the arrays (n) and the CPU cache. So if there is further interest, I rephrase the question:

Could you provide some solid insight into the details that lead to the different cache behaviors as illustrated by the five regions in the following graph?

It might also be interesting to point out the differences between CPU/cache architectures by providing a similar graph for these CPUs.

PPS: The full code is at http://pastebin.com/ivzkuTzG. It uses TBB Tick_Count for higher-resolution timing, which can be disabled by not defining the TBB_TIMING macro.

(It shows FLOP/s for different values of n.)

Recommended answer

Upon further analysis of this, I believe this is (at least partially) caused by the data alignment of the four pointers. This will cause some level of cache bank/way conflicts.

If I've guessed correctly how you are allocating your arrays, they are likely to be aligned to the page line.

This means that all your accesses in each loop will fall on the same cache way. However, Intel processors have had 8-way L1 cache associativity for a while. But in reality, the performance isn't completely uniform: accessing 4 ways is still slower than, say, accessing 2 ways.
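
To make the way/set argument concrete, here is a minimal sketch of how an address maps to an L1 set, assuming the Core 2's 32 KB, 8-way, 64-byte-line L1d (32768 / (8 * 64) = 64 sets, so address bits 6-11 select the set). Addresses that differ only by multiples of 4 KB land in the same set and compete for the same 8 ways:

#include <cstdint>
#include <cstdio>

// Assumed geometry: 32 KB L1d, 8-way associative, 64-byte lines => 64 sets.
static int l1_set(uintptr_t addr){
    return (int)(addr >> 6) & 63;   // bits 6..11 select the set
}

int main(){
    // Hypothetical pointers at the same offset from a page boundary
    // (mimicking the separate-allocation case measured below).
    const uintptr_t addrs[4] = {0x00600020, 0x006D0020, 0x007A0020, 0x00870020};
    for (int i = 0; i < 4; i++)
        printf("%08lX -> set %d\n", (unsigned long)addrs[i], l1_set(addrs[i]));
    return 0;
}

All four map to the same set, so at any index j the two read streams and two read-modify-write streams contend for the 8 ways of a single set.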

EDIT: It does in fact look like you are allocating all the arrays separately. Usually when such large allocations are requested, the allocator will request fresh pages from the OS. Therefore, there is a high chance that large allocations will appear at the same offset from a page boundary.
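
A quick way to test this guess in your own build is to print each pointer's offset within its (assumed 4 KB) page; identical offsets are what set up the conflicts:

#include <cstdint>
#include <cstdio>
#include <cstdlib>

int main(){
    const int n = 100000;
    double *p[4];
    for (int i = 0; i < 4; i++)      // four separate large allocations
        p[i] = (double*)malloc(n * sizeof(double));

    for (int i = 0; i < 4; i++)      // low 12 bits = offset into a 4 KB page
        printf("p[%d] = %p, page offset = 0x%03lX\n",
               i, (void*)p[i], (unsigned long)((uintptr_t)p[i] & 0xFFF));

    for (int i = 0; i < 4; i++) free(p[i]);
    return 0;
}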

Here's the test code:

#include <cstdlib>
#include <cstring>
#include <ctime>
#include <iostream>
using namespace std;

int main(){
    const int n = 100000;

#ifdef ALLOCATE_SEPERATE
    double *a1 = (double*)malloc(n * sizeof(double));
    double *b1 = (double*)malloc(n * sizeof(double));
    double *c1 = (double*)malloc(n * sizeof(double));
    double *d1 = (double*)malloc(n * sizeof(double));
#else
    double *a1 = (double*)malloc(n * sizeof(double) * 4);
    double *b1 = a1 + n;
    double *c1 = b1 + n;
    double *d1 = c1 + n;
#endif

    //  Zero the data to prevent any chance of denormals.
    memset(a1,0,n * sizeof(double));
    memset(b1,0,n * sizeof(double));
    memset(c1,0,n * sizeof(double));
    memset(d1,0,n * sizeof(double));

    //  Print the addresses
    cout << a1 << endl;
    cout << b1 << endl;
    cout << c1 << endl;
    cout << d1 << endl;

    clock_t start = clock();

    int c = 0;
    while (c++ < 10000){

#if ONE_LOOP
        for(int j=0;j<n;j++){
            a1[j] += b1[j];
            c1[j] += d1[j];
        }
#else
        for(int j=0;j<n;j++){
            a1[j] += b1[j];
        }
        for(int j=0;j<n;j++){
            c1[j] += d1[j];
        }
#endif

    }

    clock_t end = clock();
    cout << "seconds = " << (double)(end - start) / CLOCKS_PER_SEC << endl;

    system("pause");
    return 0;
}


Test results:

2 x Intel Xeon X5482 Harpertown @ 3.2 GHz:

#define ALLOCATE_SEPERATE
#define ONE_LOOP
00600020
006D0020
007A0020
00870020
seconds = 6.206

#define ALLOCATE_SEPERATE
//#define ONE_LOOP
005E0020
006B0020
00780020
00850020
seconds = 2.116

//#define ALLOCATE_SEPERATE
#define ONE_LOOP
00570020
00633520
006F6A20
007B9F20
seconds = 1.894

//#define ALLOCATE_SEPERATE
//#define ONE_LOOP
008C0020
00983520
00A46A20
00B09F20
seconds = 1.993

Observations:


  • 6.206 seconds with one loop and 2.116 seconds with two loops. This reproduces the OP's results exactly.

  • In the first two tests, the arrays are allocated separately. You'll notice that they all have the same alignment relative to the page.

  • In the second two tests, the arrays are packed together to break that alignment. Here you'll notice both loops are faster. Furthermore, the second (double) loop is now the slower one, as you would normally expect.
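
If the arrays do have to be allocated separately, one workaround (my own sketch, not part of the original test) is to over-allocate and skew each base pointer by a different multiple of the 64-byte line size, so the four streams fall into different cache sets:

#include <cstdlib>

// Allocate n doubles and return a base pointer skewed by `skew` bytes.
// `skew` must be a multiple of 8 to keep doubles aligned; the caller
// frees the raw pointer, not the skewed one.
static double* alloc_skewed(size_t n, size_t skew, void **raw){
    *raw = malloc(n * sizeof(double) + skew);
    return (double*)((char*)*raw + skew);
}

int main(){
    const size_t n = 100000;
    void *raw[4];
    double *a1 = alloc_skewed(n, 0 * 64, &raw[0]);
    double *b1 = alloc_skewed(n, 1 * 64, &raw[1]);
    double *c1 = alloc_skewed(n, 2 * 64, &raw[2]);
    double *d1 = alloc_skewed(n, 3 * 64, &raw[3]);

    a1[0] = b1[0] = c1[0] = d1[0] = 0.0;  // placeholder for the real loops

    for (int i = 0; i < 4; i++) free(raw[i]);
    return 0;
}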

As @Stephen Canon points out in the comments, it is very likely that this alignment causes false aliasing in the load/store units or the cache. I Googled around for this and found that Intel actually has a hardware counter for partial address aliasing stalls:

http://software.intel.com/sites/products/documentation/doclib/stdxe/2013/~amplifierxe/pmw_dp/events/partial_address_alias.html

Region 1:

This one is easy. The dataset is so small that the performance is dominated by overhead such as looping and branching.

Region 2:

(Struck out in the original: Here, as the data size increases, the amount of relative overhead goes down and the performance "saturates". Two loops are slower here because they have twice as much loop and branching overhead.)

I'm not sure exactly what's going on here... Alignment could still play an effect, as Agner Fog mentions cache bank conflicts. (His discussion is about Sandy Bridge, but the idea should still be applicable to Core 2.)

Region 3:

At this point, the data no longer fits in the L1 cache, so performance is capped by the L1 <-> L2 cache bandwidth.
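
The region boundaries line up with simple working-set arithmetic. The loops stream four arrays of n doubles, so the working set is 4 * n * 8 bytes; a sketch, assuming a 32 KB L1d and 6 MB of L2 per core pair (both assumptions about this particular Core 2 part):

#include <cstdio>

int main(){
    const long L1 = 32L * 1024;          // assumed L1d size
    const long L2 = 6L * 1024 * 1024;    // assumed L2 size (per core pair)
    const long bytes_per_n = 4 * 8;      // four arrays of doubles
    printf("working set exceeds L1 around n = %ld\n", L1 / bytes_per_n); // ~1024
    printf("working set exceeds L2 around n = %ld\n", L2 / bytes_per_n); // ~196608
    return 0;
}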

Region 4:

This is where we observe the performance drop of the single loop. And as mentioned, this is due to the alignment which (most likely) causes false aliasing stalls in the processor load/store units.

However, in order for false aliasing to occur, there must be a large enough stride between the datasets. This is why you don't see it in region 3.
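
The usual description of this event is that the load/store unit compares only the low 12 bits of addresses, so a load whose address matches an earlier store in those bits may stall even though the accesses don't actually overlap ("4K aliasing"). A sketch of that predicate under this assumption:

#include <cstdint>
#include <cstdio>

// Assumption (per Intel's partial-address-aliasing description): a load
// and an in-flight store are conservatively treated as conflicting when
// the low 12 bits of their addresses match.
static bool may_false_alias(uintptr_t load, uintptr_t store){
    return load != store && ((load ^ store) & 0xFFF) == 0;
}

int main(){
    // Addresses from the ALLOCATE_SEPERATE run: all share offset 0x020.
    printf("%d\n", may_false_alias(0x00600020, 0x006D0020)); // 1: may stall
    // Addresses from the packed run: page offsets differ, so no false alias.
    printf("%d\n", may_false_alias(0x00570020, 0x00633520)); // 0
    return 0;
}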

Region 5:

At this point, nothing fits in the cache, so you're bound by memory bandwidth.
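
For this region a measured time can be turned into an effective-bandwidth figure. A back-of-envelope sketch (my own, not from the answer): per element the combined loop reads a1[j], b1[j], c1[j], d1[j] and writes a1[j], c1[j], i.e. 6 x 8 bytes, ignoring read-for-ownership traffic on the stores:

#include <cstdio>

// Effective bandwidth of the combined loop: 6 accesses of 8 bytes per
// element per outer iteration.
static double gb_per_s(double n, double iters, double seconds){
    return 6.0 * 8.0 * n * iters / seconds / 1e9;
}

int main(){
    // Hypothetical region-5 run: n = 10,000,000 elements, 100 iterations, 8 s.
    printf("~%.1f GB/s\n", gb_per_s(1e7, 100, 8.0));
    return 0;
}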



