Why is using tanh definition of logistic sigmoid faster than scipy's expit?


Question

I'm using a logistic sigmoid for an application. I compared the times using the scipy.special function, expit, versus using the hyperbolic tangent definition of the sigmoidal.

I found that the hyperbolic tangent was 3 times as fast. What is going on here? I also tested times on a sorted array to see if the result was any different.

Here is an example that was run in IPython:

In [1]: from scipy.special import expit

In [2]: myexpit = lambda x: 0.5*tanh(0.5*x) + 0.5

In [3]: x = randn(100000)

In [4]: allclose(expit(x), myexpit(x))
Out[4]: True

In [5]: timeit expit(x)
100 loops, best of 3: 15.2 ms per loop

In [6]: timeit myexpit(x)
100 loops, best of 3: 4.94 ms per loop

In [7]: y = sort(x)

In [8]: timeit expit(y)
100 loops, best of 3: 15.3 ms per loop

In [9]: timeit myexpit(y)
100 loops, best of 3: 4.37 ms per loop
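
The identity behind myexpit is a standard algebraic rewrite of the logistic sigmoid (shown here for reference; it is not specific to this question):

```latex
\sigma(x) = \frac{1}{1+e^{-x}}
          = \frac{e^{x/2}}{e^{x/2}+e^{-x/2}}
          = \frac{1}{2}\left(1 + \frac{e^{x/2}-e^{-x/2}}{e^{x/2}+e^{-x/2}}\right)
          = \frac{1}{2}\tanh\!\left(\frac{x}{2}\right) + \frac{1}{2}
```

Both forms cost essentially one transcendental-function evaluation per element, so any timing gap comes down to the relative speed of the underlying exp and tanh routines.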



Machine info:

  • Ubuntu 16.04
  • RAM: 7.4 GB
  • Intel Core i7-3517U CPU @ 1.90GHz×4

Numpy/Scipy info:

In [1]: np.__version__
Out[1]: '1.12.0'

In [2]: np.__config__.show()
lapack_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
blas_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
openblas_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
blis_info:
  NOT AVAILABLE
openblas_lapack_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
lapack_mkl_info:
  NOT AVAILABLE
blas_mkl_info:
  NOT AVAILABLE

In [3]: import scipy

In [4]: scipy.__version__
Out[4]: '0.18.1'

Answer

I'll point future readers toward this question.

To summarize the results from the helpful comments:

"Why is using tanh definition of logistic sigmoid faster than scipy's expit?"

Answer: It's not; there's some funny business going on with the tanh and exp C functions on my specific machine.

It turns out that on my machine, the C function for tanh is faster than exp. The answer to why this is the case obviously belongs to a different question. When I run the C++ code listed below, I see

tanh: 5.22203
exp: 14.9393


which matches the ~3x increase in the tanh function when called from Python. The strange thing is that when I run the identical code on a separate machine that has the same OS, I get similar timing results for tanh and exp.
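
The same comparison can be made from Python without any C++, by timing NumPy's exp and tanh ufuncs directly. This is a minimal sketch; the absolute numbers will vary with the machine and the libm build, which is the whole point of the answer:

```python
import timeit
import numpy as np

x = np.random.randn(100000)

# Sanity check: the two formulations compute the same function,
# 1/(1 + exp(-x)) == 0.5*tanh(0.5*x) + 0.5
via_exp = 1.0 / (1.0 + np.exp(-x))
via_tanh = 0.5 * np.tanh(0.5 * x) + 0.5
assert np.allclose(via_exp, via_tanh)

# Time the underlying ufuncs. If the libm tanh routine is faster than
# exp on this machine, the tanh-based sigmoid will win as well.
t_exp = timeit.timeit(lambda: np.exp(x), number=100)
t_tanh = timeit.timeit(lambda: np.tanh(x), number=100)
print("np.exp: ", t_exp)
print("np.tanh:", t_tanh)
```

On a machine where the two routines are comparable, the expit and tanh-based sigmoids should time out about the same.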

#include <iostream>
#include <cmath>
#include <ctime>

using namespace std;

int main() {
    double a = -5;
    double b =  5;
    int N =  10001;
    double x[10001];
    double y[10001];
    double h = (b-a) / (N-1);   // grid spacing over [a, b]

    clock_t begin, end;

    // Evenly spaced sample points in [-5, 5]
    for(int i=0; i < N; i++)
        x[i] = a + i*h;

    // Evaluate tanh N times per point so the run is long enough to time.
    // Note: compile without aggressive optimization (e.g. -O0); otherwise
    // the compiler may elide the loops, since y is never read.
    begin = clock();

    for(int i=0; i < N; i++)
        for(int j=0; j < N; j++)
            y[i] = tanh(x[i]);

    end = clock();

    cout << "tanh: " << double(end - begin) / CLOCKS_PER_SEC << "\n";

    // Same measurement for exp
    begin = clock();

    for(int i=0; i < N; i++)
        for(int j=0; j < N; j++)
            y[i] = exp(x[i]);

    end = clock();

    cout << "exp: " << double(end - begin) / CLOCKS_PER_SEC << "\n";

    return 0;
}
