How to speed up reshape in higher rank tensor contraction by BLAS in Fortran?

Problem description

Related question: Fortran: Which method is faster to change the rank of arrays? (Reshape vs. Pointer)

If I have the tensor contraction A[a,b] * B[b,c,d] = C[a,c,d] and I use BLAS, I think I need DGEMM (assuming real values). Then I can do the following (sketched in code after the list):

  1. first reshape tensor B[b,c,d] as D[b,e] where e = c*d,
  2. DGEMM, A[a,b] * D[b,e] = E[a,e]
  3. reshape E[a,e] into C[a,c,d]
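
For concreteness, a minimal sketch of those three steps (the small example extents and the program name are chosen here only for illustration; dgemm is assumed to come from an external BLAS library, linked with e.g. -lblas). The answer below benchmarks exactly this "reshaping method" against a direct call:

Program contraction_sketch

  Use, Intrinsic :: iso_fortran_env, Only : wp => real64

  Implicit None

  Integer, Parameter :: na = 4, nb = 5, nc = 6, nd = 7, ne = nc * nd

  Real( wp ) :: a( na, nb )
  Real( wp ) :: b( nb, nc, nd )
  Real( wp ) :: c( na, nc, nd )
  Real( wp ) :: d( nb, ne )
  Real( wp ) :: e( na, ne )

  Call Random_number( a )
  Call Random_number( b )

  ! Step 1: reshape B[b,c,d] into D[b,e] with e = c*d
  d = Reshape( b, Shape( d ) )
  ! Step 2: E[a,e] = A[a,b] * D[b,e] via BLAS dgemm
  Call dgemm( 'N', 'N', na, ne, nb, 1.0_wp, a, na, d, nb, 0.0_wp, e, na )
  ! Step 3: reshape E[a,e] back into C[a,c,d]
  c = Reshape( e, Shape( c ) )

  Write( *, * ) 'c(1,1,1) = ', c( 1, 1, 1 )

End Program contraction_sketch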

The problem is that reshape is not that fast. I saw the discussion in Fortran: Which method is faster to change the rank of arrays? (Reshape vs. Pointer); in that link the author ran into error messages with every approach except reshape itself.

Thus, I am asking if there is a convenient solution.

Recommended answer

[I have prefaced the size of dimensions with the letter n to avoid confusion in the below between the tensor and the size of the tensor]

As discussed in the comments, there is no need to reshape. Dgemm has no concept of tensors; it only knows about arrays, and all it cares about is that those arrays are laid out in the correct order in memory. Since Fortran is column major, a 3-dimensional array representing the 3-dimensional tensor B in the question is laid out in memory exactly the same way as a 2-dimensional array representing the 2-dimensional tensor D. As far as the matrix multiply is concerned, all you need to do is make the dot products that form the result the right length. This leads to the conclusion that if you tell dgemm that B has a leading dimension of nb and a second dimension of nc*nd, you will get the right result. This leads us to

ian@eris:~/work/stack$ gfortran --version
GNU Fortran (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

ian@eris:~/work/stack$ cat reshape.f90
Program reshape_for_blas

  Use, Intrinsic :: iso_fortran_env, Only :  wp => real64, li => int64

  Implicit None

  Real( wp ), Dimension( :, :    ), Allocatable :: a
  Real( wp ), Dimension( :, :, : ), Allocatable :: b
  Real( wp ), Dimension( :, :, : ), Allocatable :: c1, c2
  Real( wp ), Dimension( :, :    ), Allocatable :: d
  Real( wp ), Dimension( :, :    ), Allocatable :: e

  Integer :: na, nb, nc, nd, ne
  
  Integer( li ) :: start, finish, rate

  Write( *, * ) 'na, nb, nc, nd ?'
  Read( *, * ) na, nb, nc, nd
  ne = nc * nd
  Allocate( a ( 1:na, 1:nb ) ) 
  Allocate( b ( 1:nb, 1:nc, 1:nd ) ) 
  Allocate( c1( 1:na, 1:nc, 1:nd ) ) 
  Allocate( c2( 1:na, 1:nc, 1:nd ) ) 
  Allocate( d ( 1:nb, 1:ne ) ) 
  Allocate( e ( 1:na, 1:ne ) ) 

  ! Set up some data
  Call Random_number( a )
  Call Random_number( b )

  ! With reshapes
  Call System_clock( start, rate )
  d = Reshape( b, Shape( d ) )
  Call dgemm( 'N', 'N', na, ne, nb, 1.0_wp, a, Size( a, Dim = 1 ), &
                                            d, Size( d, Dim = 1 ), &
                                    0.0_wp, e, Size( e, Dim = 1 ) )
  c1 = Reshape( e, Shape( c1 ) )
  Call System_clock( finish, rate )
  Write( *, * ) 'Time for reshaping method ', Real( finish - start, wp ) / rate
  
  ! Direct
  Call System_clock( start, rate )
  Call dgemm( 'N', 'N', na, ne, nb, 1.0_wp, a , Size( a , Dim = 1 ), &
                                            b , Size( b , Dim = 1 ), &
                                            0.0_wp, c2, Size( c2, Dim = 1 ) )
  Call System_clock( finish, rate )
  Write( *, * ) 'Time for straight  method ', Real( finish - start, wp ) / rate

  Write( *, * ) 'Difference between result matrices ', Maxval( Abs( c1 - c2 ) )

End Program reshape_for_blas
ian@eris:~/work/stack$ cat in
40 50 60 70
ian@eris:~/work/stack$ gfortran -std=f2008 -Wall -Wextra -fcheck=all reshape.f90  -lblas
ian@eris:~/work/stack$ ./a.out < in
 na, nb, nc, nd ?
 Time for reshaping method    1.0515256000000001E-002
 Time for straight  method    5.8608790000000003E-003
 Difference between result matrices    0.0000000000000000     
ian@eris:~/work/stack$ gfortran -std=f2008 -Wall -Wextra  reshape.f90  -lblas
ian@eris:~/work/stack$ ./a.out < in
 na, nb, nc, nd ?
 Time for reshaping method    1.3585931000000001E-002
 Time for straight  method    1.6730429999999999E-003
 Difference between result matrices    0.0000000000000000     

That said, it is worth noting that the overhead of the reshape is O(N^2), while the time for the matrix multiply is O(N^3). Thus, for large matrices the percentage overhead due to the reshape will tend to zero. Now, code performance is not the only consideration; code readability and maintainability are also very important. So if you find the reshape method much more readable and the matrices you use are sufficiently large that the overhead does not matter, you may well use the reshapes, as in that case code readability might be more important than performance. Your call.
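
To put a rough number on that scaling (counting one copy per element for each reshape and two floating-point operations per multiply-add in dgemm; this is an estimate, not a measurement): the two reshapes move about nb*nc*nd + na*nc*nd elements, while dgemm performs about 2*na*nb*nc*nd operations, so the relative overhead is roughly

  (nb*nc*nd + na*nc*nd) / (2*na*nb*nc*nd) = 1/(2*na) + 1/(2*nb)

which shrinks as the extents grow, in line with the O(N^2) versus O(N^3) argument above.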
