Matlab - bsxfun no longer faster than repmat?


Problem description

I'm trying to find the fastest way of standardizing a matrix in Matlab (zero mean, unit variance columns). It all comes down to which is the quickest way of applying the same operation to all rows of a matrix. Every post I've read comes to the same conclusion: use bsxfun instead of repmat. This article, written by MathWorks, is an example: http://blogs.mathworks.com/loren/2008/08/04/comparing-repmat-and-bsxfun-performance/

However, when I try this on my own computer, repmat is always quicker. Here are my results using the same code as in the article:

m = 1e5;
n = 100;
A = rand(m,n);

frepmat = @() A - repmat(mean(A),size(A,1),1);
timeit(frepmat)

fbsxfun = @() bsxfun(@minus,A,mean(A));
timeit(fbsxfun)

Results:

ans =

    0.0349


ans =

    0.0391

In fact, I can never get bsxfun to perform better than repmat in this situation no matter how small or large the input matrix is.

Can someone explain this?

Solution

Most of the advice you're reading, including the blog post from Loren, likely refers to old versions of MATLAB, for which bsxfun was quite a bit faster than repmat. In R2013b (see the "Performance" section of the release notes), repmat was reimplemented to give large performance improvements when applied to numeric, char and logical arguments. In recent versions, it can be about the same speed as bsxfun.
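If you need code that behaves sensibly on both old and new releases, one option is to branch on the release. A minimal sketch, assuming R2013b corresponds to version 8.2 for verLessThan:

% Minimal sketch: pick a centering implementation based on the release.
% Assumes R2013b is version 8.2, the release in which repmat was reimplemented.
if verLessThan('matlab','8.2')
    % older releases: bsxfun avoids materialising the repeated matrix
    centre = @(A) bsxfun(@minus, A, mean(A));
else
    % R2013b and later: repmat is about as fast
    centre = @(A) A - repmat(mean(A), size(A,1), 1);
end
B = centre(rand(1e4,50));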

For what it's worth, on my machine with R2014a I get

m = 1e5;
n = 100;
A = rand(m,n);

frepmat = @() A - repmat(mean(A),size(A,1),1);
timeit(frepmat)

fbsxfun = @() bsxfun(@minus,A,mean(A));
timeit(fbsxfun)

ans =
      0.03756
ans =
     0.034831

So it looks like bsxfun is still a tiny bit faster, but not by much - and on your machine it seems to be the other way round. Of course, these results are likely to vary again if you change the size of A or the operation you're applying.
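If you want to check that on your own machine, a minimal sketch that sweeps a few (arbitrary) sizes and prints both timings:

% Minimal sketch: compare the two approaches over a few arbitrary sizes.
for m = [1e3 1e4 1e5]
    for n = [10 100]
        A = rand(m,n);
        t_rep = timeit(@() A - repmat(mean(A),size(A,1),1));
        t_bsx = timeit(@() bsxfun(@minus,A,mean(A)));
        fprintf('m = %6d, n = %4d: repmat %.4f s, bsxfun %.4f s\n', ...
            m, n, t_rep, t_bsx);
    end
end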

There may still be other reasons to prefer one solution over the other, such as elegance (I prefer bsxfun, if possible).


Edit: commenters have asked for a specific reason to prefer bsxfun, the suggestion being that it might use less memory than repmat because it avoids a temporary copy that repmat needs.

I don't think this is actually the case. For example, open Task Manager (or the equivalent on Linux/Mac), watch the memory levels, and type:

>> m = 1e5; n = 8e3; A = rand(m,n);
>> B = A - repmat(mean(A),size(A,1),1);
>> clear B
>> C = bsxfun(@minus,A,mean(A));
>> clear C

(Adjust m and n until the jumps are visible in the graph, but not so large that you run out of memory.)

I see exactly the same behaviour from both repmat and bsxfun, which is that memory rises smoothly to the new level (basically double the size of A) with no temporary additional peak.
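If you'd rather have numbers than a graph, on Windows you can query MATLAB's own memory use with the memory function. A minimal sketch; note that memory is Windows-only, and it only reports the level after each expression finishes, not any transient peak during it, so it complements rather than replaces watching the graph:

% Minimal sketch (Windows-only): report the memory level after each step.
m = 1e5; n = 8e3; A = rand(m,n);
s0 = memory;                                  % baseline
B = A - repmat(mean(A),size(A,1),1);
s1 = memory;                                  % after the repmat version
clear B
C = bsxfun(@minus,A,mean(A));
s2 = memory;                                  % after the bsxfun version
clear C
fprintf('repmat: +%.0f MB, bsxfun: +%.0f MB\n', ...
    (double(s1.MemUsedMATLAB) - double(s0.MemUsedMATLAB))/2^20, ...
    (double(s2.MemUsedMATLAB) - double(s0.MemUsedMATLAB))/2^20);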

This is also the case even if the operation is done in-place. Again, watch the memory and type:

>> m = 1e5; n = 8e3; A = rand(m,n);
>> A = A - repmat(mean(A),size(A,1),1);
>> clear all
>> m = 1e5; n = 8e3; A = rand(m,n);
>> A = bsxfun(@minus,A,mean(A));

Again, I see exactly the same behaviour from both repmat and bsxfun, which is that memory rises to a peak (basically double the size of A), and then falls back to the previous level.

So I'm afraid I can't see much technical difference in terms of either speed or memory between repmat and bsxfun. My preference for bsxfun is really just a personal preference as it feels a bit more elegant.
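For completeness, here is the full standardization from the original question (zero-mean, unit-variance columns) in both styles; a minimal sketch, and if you have the Statistics Toolbox, zscore(A) does the same thing:

% Minimal sketch: zero-mean, unit-variance columns, both styles.
A = rand(1e5,100);
mu = mean(A);
sd = std(A);
Z_bsx = bsxfun(@rdivide, bsxfun(@minus, A, mu), sd);
Z_rep = (A - repmat(mu,size(A,1),1)) ./ repmat(sd,size(A,1),1);
% sanity check: column means ~0 and column standard deviations ~1
max(abs(mean(Z_bsx)))
max(abs(std(Z_bsx) - 1))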
