Doing many iterations of scipy's `curve_fit` in one go

Question

Consider the following MWE:

import numpy as np
from scipy.optimize import curve_fit
X = np.arange(1, 10, 1)               # 9 sample points
Y = abs(X + np.random.randn(15, 9))   # 15 noisy datasets, one per row

def linear(x, a, b):
    return (x/b)**a

coeffs=[]
for ix in range(Y.shape[0]):
    print(ix)
    c0, pcov = curve_fit(linear, X, Y[ix])
    coeffs.append(c0)


XX=np.tile(X, Y.shape[0])
c0, pcov = curve_fit(linear, XX, Y.flatten())

I have a problem where I have to do basically that, but instead of 15 iterations it's thousands and it's pretty slow.

Is there any way to do all of those iterations at once with curve_fit? I know the result from the function is supposed to be a 1D-array, so just passing the args like this

c0, pcov = curve_fit(linear, X, Y)

is not going to work. Also I think the answer has to be in flattening Y, so I can get a flattened result, but I just can't get anything to work.

EDIT

I know that if I do something like

XX=np.tile(X, Y.shape[0])
c0, pcov = curve_fit(linear, XX, Y.flatten())

then I get a "mean" value of the coefficients (curve_fit fits one single (a, b) pair to all of the stacked points), but that's not what I want.
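
For completeness, all of the fits can also be expressed as a single optimization by stacking the per-row parameters and handing scipy.optimize.least_squares one combined residual vector. This is only a sketch, and it assumes least_squares is an acceptable substitute for curve_fit; the stacked parameterization and the names residuals, a_fit and b_fit are illustrative:

from scipy.optimize import least_squares
from scipy.sparse import lil_matrix

n = Y.shape[0]                      # number of datasets

def residuals(p):
    a = p[:n, None]                 # first n entries: the exponents
    b = p[n:, None]                 # last n entries: the divisors
    return ((X[None, :] / b) ** a - Y).ravel()

# Each residual block depends only on its own (a_i, b_i); telling the
# solver so keeps the finite-difference Jacobian cost linear in n.
S = lil_matrix((n * X.size, 2 * n), dtype=int)
for i in range(n):
    rows = slice(i * X.size, (i + 1) * X.size)
    S[rows, i] = 1
    S[rows, n + i] = 1

sol = least_squares(residuals, np.ones(2 * n), jac_sparsity=S)
a_fit, b_fit = sol.x[:n], sol.x[n:]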

EDIT 2

For the record, I solved it using Jacques Kvam's set-up, but implemented with NumPy (because of a limitation):

lX = np.log(X)
lY = np.log(Y)
A = np.vstack([lX, np.ones(len(lX))]).T          # design matrix for the log-space fit
m, c = np.linalg.lstsq(A, lY.T, rcond=None)[0]   # one call fits every row of Y

Then m is a: taking logs of y = (x/b)**a gives log y = a*log x - a*log b, so the slope is a and the intercept is c = -a*log b. To get b:

b=np.exp(-c/m)
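
Putting the pieces of this edit together, here is a minimal self-contained sketch of that set-up (variable names follow the MWE above; the log transform needs strictly positive Y, which the abs() in the MWE guarantees):

import numpy as np

X = np.arange(1, 10, 1)
Y = np.abs(X + np.random.randn(15, 9))   # 15 noisy datasets, one per row

# The log transform turns y = (x/b)**a into the linear model
# log y = a*log x - a*log b.
lX = np.log(X)
lY = np.log(Y)

A = np.vstack([lX, np.ones(len(lX))]).T          # shape (9, 2)
m, c = np.linalg.lstsq(A, lY.T, rcond=None)[0]   # fits all 15 rows at once

a = m                    # slopes are the exponents
b = np.exp(-c / m)       # intercepts encode the divisors
print(a.shape, b.shape)  # (15,) (15,)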

Answer

Least squares won't give the same result because the noise is transformed by log in this case: for small noise, log(y + e) ≈ log y + e/y, so the log-space fit effectively re-weights the points. If the noise is zero, both methods give the same result.

import numpy as np
from numpy import random as rng
from scipy.optimize import curve_fit
rng.seed(0)
X=np.arange(1,7)
Y = np.zeros((4, 6))
for i in range(4):
    b = a = i + 1
    Y[i] = (X/b)**a + 0.01 * rng.randn(6)

def linear(x, a, b):
    return (x/b)**a

coeffs=[]
for ix in range(Y.shape[0]):
    print(ix)
    c0, pcov = curve_fit(linear, X, Y[ix])
    coeffs.append(c0)

coeffs

[array([ 0.99309127,  0.98742861]),
 array([ 2.00197613,  2.00082722]),
 array([ 2.99130237,  2.99390585]),
 array([ 3.99644048,  3.9992937 ])]

I'll use scikit-learn's implementation of linear regression since I believe that scales well.

from sklearn.linear_model import LinearRegression
lr = LinearRegression()

Take logs of X and Y:

lX = np.log(X)[None, :]
lY = np.log(Y)

Now fit and check that the coefficients are the same as before.

lr.fit(lX.T, lY.T)
lr.coef_

This gives similar exponents:

array([ 0.98613517,  1.98643974,  2.96602892,  4.01718514])

Now check the divisors, recovered from the intercepts via b = exp(-intercept/a):

np.exp(-lr.intercept_ / lr.coef_.ravel())

This gives similar coefficients, though you can see the two methods diverging somewhat in their answers:

array([ 0.99199406,  1.98234916,  2.90677142,  3.73416501])
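
If the exact nonlinear least-squares answers are needed, one possible compromise (a sketch of my own, not part of the original answer) is to feed these fast log-space estimates to curve_fit as starting guesses via p0; each per-row fit then converges in just a few iterations:

from scipy.optimize import curve_fit

# Use the log-space estimates as starting points for the nonlinear fits.
a0 = lr.coef_.ravel()
b0 = np.exp(-lr.intercept_ / lr.coef_.ravel())
refined = [curve_fit(linear, X, Y[i], p0=(a0[i], b0[i]))[0]
           for i in range(Y.shape[0])]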
