How much faster is implicit expansion compared with bsxfun?
As commented by Steve Eddins, implicit expansion (introduced in Matlab R2016b) is faster than bsxfun
for small array sizes, and has similar speed for large arrays:
In R2016b, implicit expansion works as fast or faster than bsxfun in most cases. The best performance gains for implicit expansion are with small matrix and array sizes. For large matrix sizes, implicit expansion tends to be roughly the same speed as bsxfun.
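As a quick illustration of what is being compared, both constructs compute exactly the same result; bsxfun expands singleton dimensions explicitly, while the arithmetic operator does it implicitly from R2016b on, so the only difference is speed:

```matlab
x = randn(3,3);
y = randn(3,1);            % singleton second dimension, expanded along dim 2
r1 = bsxfun(@plus, x, y);  % explicit expansion (works in any Matlab version)
r2 = x + y;                % implicit expansion (R2016b and later)
isequal(r1, r2)            % true: identical results
```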
Also, the dimension along which expansion takes place may have an influence:
When there is an expansion in the first dimension, the operators might not be quite as fast as bsxfun.
(Thanks to @Poelie and @rayryeng for letting me know about this!)
Two questions naturally arise:
- How much faster is implicit expansion compared with bsxfun?
- For what array sizes or shapes is the difference significant?
To measure the difference in speed, some tests have been done. The tests consider two different operations:
- addition
- power
and four different shapes of the arrays to be operated on:
- N×N array with N×1 array
- N×N×N×N array with N×1×N array
- N×N array with 1×N array
- N×N×N×N array with 1×N×N array
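In every combination the output has the size of the larger array; a quick size check for the four shape pairings (with a small N just for illustration) confirms this:

```matlab
N = 4;
size(randn(N,N) + randn(N,1))        % 4 4       (expansion along dim 2)
size(randn(N,N) + randn(1,N))        % 4 4       (expansion along dim 1)
size(randn(N,N,N,N) + randn(N,1,N))  % 4 4 4 4   (expansion along dims 2 and 4)
size(randn(N,N,N,N) + randn(1,N,N))  % 4 4 4 4   (expansion along dims 1 and 4)
```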
For each of the eight combinations of operation and array shapes, the same operation is done with implicit expansion and with bsxfun. Several values of N are used, to cover the range from small to large arrays. timeit is used for reliable timing.
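timeit calls the function handle repeatedly, handles warm-up, and returns a typical (median-based) execution time, which makes it much more robust than a single tic/toc measurement. The basic pattern used throughout the benchmark is:

```matlab
x = randn(1000,1000);
y = randn(1000,1);
t = timeit(@() x + y);  % typical execution time in seconds, median over runs
```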
The benchmarking code is given at the end of this answer. It has been run on Matlab R2016b, Windows 10, with 12 GB RAM.
Results
The following graphs show the results. The horizontal axis is the number of elements of the output array, which is a better measure of size than N is.
Tests have also been done with logical operations (instead of arithmetical). The results are not displayed here for brevity, but show a similar trend.
Conclusions
According to the graphs:
- The results confirm that implicit expansion is faster for small arrays, and has a speed similar to bsxfun for large arrays.
- Expanding along the first or along other dimensions doesn't seem to have a large influence, at least in the considered cases.
- For small arrays the difference can be of ten times or more. Note, however, that timeit is not accurate for small sizes because the code is too fast (in fact, it issues a warning for such small sizes).
- The two speeds become equal when the number of elements of the output reaches about 1e5. This value may be system-dependent.
Since the speed improvement is only significant when the arrays are small, which is a situation in which either approach is very fast anyway, using implicit expansion or bsxfun
seems to be mainly a matter of taste, readability, or backward compatibility.
Benchmarking code
clear
% NxN, Nx1, addition / power
N1 = 2.^(4:1:12);
t1_bsxfun_add = NaN(size(N1));
t1_implicit_add = NaN(size(N1));
t1_bsxfun_pow = NaN(size(N1));
t1_implicit_pow = NaN(size(N1));
for k = 1:numel(N1)
    N = N1(k);
    x = randn(N,N);
    y = randn(N,1);
    % y = randn(1,N); % use this line or the preceding one
    t1_bsxfun_add(k) = timeit(@() bsxfun(@plus, x, y));
    t1_implicit_add(k) = timeit(@() x+y);
    t1_bsxfun_pow(k) = timeit(@() bsxfun(@power, x, y));
    t1_implicit_pow(k) = timeit(@() x.^y);
end
% NxNxNxN, Nx1xN, addition / power
N2 = round(sqrt(N1));
t2_bsxfun_add = NaN(size(N2));
t2_implicit_add = NaN(size(N2));
t2_bsxfun_pow = NaN(size(N2));
t2_implicit_pow = NaN(size(N2));
for k = 1:numel(N2)
    N = N2(k);
    x = randn(N,N,N,N);
    y = randn(N,1,N);
    % y = randn(1,N,N); % use this line or the preceding one
    t2_bsxfun_add(k) = timeit(@() bsxfun(@plus, x, y));
    t2_implicit_add(k) = timeit(@() x+y);
    t2_bsxfun_pow(k) = timeit(@() bsxfun(@power, x, y));
    t2_implicit_pow(k) = timeit(@() x.^y);
end
% Plots
figure
colors = get(gca,'ColorOrder');
subplot(121)
title('N\times{}N, N\times{}1')
% title('N\times{}N, 1\times{}N') % this or the preceding
set(gca,'XScale', 'log', 'YScale', 'log')
hold on
grid on
loglog(N1.^2, t1_bsxfun_add, 's-', 'color', colors(1,:))
loglog(N1.^2, t1_implicit_add, 's-', 'color', colors(2,:))
loglog(N1.^2, t1_bsxfun_pow, '^-', 'color', colors(1,:))
loglog(N1.^2, t1_implicit_pow, '^-', 'color', colors(2,:))
legend('Addition, bsxfun', 'Addition, implicit', 'Power, bsxfun', 'Power, implicit')
subplot(122)
title('N\times{}N\times{}N{}\times{}N, N\times{}1\times{}N')
% title('N\times{}N\times{}N{}\times{}N, 1\times{}N\times{}N') % this or the preceding
set(gca,'XScale', 'log', 'YScale', 'log')
hold on
grid on
loglog(N2.^4, t2_bsxfun_add, 's-', 'color', colors(1,:))
loglog(N2.^4, t2_implicit_add, 's-', 'color', colors(2,:))
loglog(N2.^4, t2_bsxfun_pow, '^-', 'color', colors(1,:))
loglog(N2.^4, t2_implicit_pow, '^-', 'color', colors(2,:))
legend('Addition, bsxfun', 'Addition, implicit', 'Power, bsxfun', 'Power, implicit')