Python finite difference functions?


Question


I've been looking around in Numpy/Scipy for modules containing finite difference functions. However, the closest thing I've found is numpy.gradient(), which is good for 1st-order finite differences of 2nd order accuracy, but not so much if you're wanting higher-order derivatives or more accurate methods. I haven't even found very many specific modules for this sort of thing; most people seem to be doing a "roll-your-own" thing as they need them. So my question is if anyone knows of any modules (either part of Numpy/Scipy or a third-party module) that are specifically dedicated to higher-order (both in accuracy and derivative) finite difference methods. I've got my own code that I'm working on, but it's currently kind of slow, and I'm not going to attempt to optimize it if there's something already available.


Note that I am talking about finite differences, not derivatives. I've seen both scipy.misc.derivative() and Numdifftools, which take the derivative of an analytical function, which I don't have.
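For context, "higher-order" here means stencils like the 4th-order-accurate central difference for the first derivative. A minimal sketch of the kind of formula a dedicated finite-difference module would generate (the helper name is mine, not from any existing module):

```python
import numpy as np

def first_derivative_4th(f, dx):
    """4th-order-accurate central difference for the first derivative
    (interior points only; no boundary treatment)."""
    # (-f[i+2] + 8*f[i+1] - 8*f[i-1] + f[i-2]) / (12*dx), vectorized:
    return (-f[4:] + 8*f[3:-1] - 8*f[1:-3] + f[:-4]) / (12*dx)

x = np.linspace(0, 2*np.pi, 100)
dx = x[1] - x[0]
df = first_derivative_4th(np.sin(x), dx)

# Truncation error scales as dx**4, so this is far more accurate
# than the 2nd-order stencil used by numpy.gradient:
err = np.max(np.abs(df - np.cos(x[2:-2])))
```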

Answer


One way to do this quickly is by convolution with the derivative of a gaussian kernel. The simplest case is convolving your array with [1, -1], which gives exactly the simple finite difference formula. Beyond that, (f * g)' = f' * g = f * g', where * is convolution, so you end up with your derivative convolved with a plain gaussian. This will of course smooth your data a bit, which can be minimized by choosing the smallest reasonable kernel.

import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt

#Data:
x = np.linspace(0,2*np.pi,100)
f = np.sin(x) + .02*(np.random.rand(100)-.5)

#Normalization:
dx = x[1] - x[0] # use np.diff(x) if x is not uniform
dxdx = dx**2

#First derivatives:
df = np.diff(f) / dx
cf = np.convolve(f, [1,-1]) / dx
gf = ndimage.gaussian_filter1d(f, sigma=1, order=1, mode='wrap') / dx

#Second derivatives:
ddf = np.diff(f, 2) / dxdx
ccf = np.convolve(f, [1, -2, 1]) / dxdx
ggf = ndimage.gaussian_filter1d(f, sigma=1, order=2, mode='wrap') / dxdx

# Plotting:
plt.figure()
plt.plot(x, f, 'k', lw=2, label='original')
plt.plot(x[:-1], df, 'r.', label='np.diff, 1')
plt.plot(x, cf[:-1], 'r--', label='np.convolve, [1,-1]')
plt.plot(x, gf, 'r', label='gaussian, 1')
plt.plot(x[:-2], ddf, 'g.', label='np.diff, 2')
plt.plot(x, ccf[:-2], 'g--', label='np.convolve, [1,-2,1]')
plt.plot(x, ggf, 'g', label='gaussian, 2')
plt.legend(loc='best')
plt.show()
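As a rough check of the smoothing claim, this sketch compares the worst-case error of np.diff against the gaussian derivative on the same noisy sine. The fixed seed and the endpoint-free periodic grid (so that mode='wrap' is exact) are my additions, not part of the original example:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)  # assumption: fixed seed for reproducibility
n = 100
dx = 2*np.pi / n
x = np.arange(n) * dx           # periodic grid: mode='wrap' is exact here
f = np.sin(x) + .02*(rng.random(n) - .5)

# Raw difference amplifies the noise (error ~ noise/dx):
df = np.diff(f) / dx
# Gaussian derivative smooths the noise away:
gf = ndimage.gaussian_filter1d(f, sigma=1, order=1, mode='wrap') / dx

# np.diff approximates the derivative at the midpoints x + dx/2:
err_diff = np.max(np.abs(df - np.cos(x[:-1] + dx/2)))
err_gauss = np.max(np.abs(gf - np.cos(x)))
```

With this seed the smoothed estimate wins by a wide margin, at the cost of a small attenuation bias that shrinks with the kernel width.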


Since you mentioned np.gradient, I assumed you have at least 2d arrays, so the following applies to that: this is built into the scipy.ndimage package if you want to do it for ndarrays. Be cautious, though: with a scalar order, gaussian_filter applies that derivative order along every axis, so order=1 gives you the smoothed mixed derivative rather than the full gradient.

Here's an example:

import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt

x = np.linspace(0,2*np.pi,100)
sine = np.sin(x)

im = sine * sine[...,None]
d1 = ndimage.gaussian_filter(im, sigma=5, order=1, mode='wrap')
d2 = ndimage.gaussian_filter(im, sigma=5, order=2, mode='wrap')

plt.figure()

plt.subplot(131)
plt.imshow(im)
plt.title('original')

plt.subplot(132)
plt.imshow(d1)
plt.title('first derivative')

plt.subplot(133)
plt.imshow(d2)
plt.title('second derivative')
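To check the scalar-order behavior mentioned above, this sketch compares gaussian_filter with order=1 against applying gaussian_filter1d along each axis in turn; they should match, confirming that the 2d filter differentiates along every axis rather than computing a gradient:

```python
import numpy as np
from scipy import ndimage

x = np.linspace(0, 2*np.pi, 100)
sine = np.sin(x)
im = sine * sine[..., None]

# Scalar order=1 applies a first derivative along *every* axis:
d1 = ndimage.gaussian_filter(im, sigma=5, order=1, mode='wrap')

# Equivalent: filter each axis sequentially with the same order.
d1_seq = ndimage.gaussian_filter1d(im, sigma=5, order=1, mode='wrap', axis=0)
d1_seq = ndimage.gaussian_filter1d(d1_seq, sigma=5, order=1, mode='wrap', axis=1)

same = np.allclose(d1, d1_seq)
```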


Using gaussian_filter1d lets you take a directional derivative along a given axis:

imx = im * x
d2_0 = ndimage.gaussian_filter1d(imx, axis=0, sigma=5, order=2, mode='wrap')
d2_1 = ndimage.gaussian_filter1d(imx, axis=1, sigma=5, order=2, mode='wrap')

plt.figure()
plt.subplot(131)
plt.imshow(imx)
plt.title('original')
plt.subplot(132)
plt.imshow(d2_0)
plt.title('derivative along axis 0')
plt.subplot(133)
plt.imshow(d2_1)
plt.title('along axis 1')


The first set of results above is a little confusing to me (peaks in the original show up as peaks in the second derivative, when the curvature should point down). Without looking further into how the 2d version works, I can only really recommend the 1d version. If you want the magnitude, simply do something like:

d2_mag = np.sqrt(d2_0**2 + d2_1**2)
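The same per-axis pattern with order=1 gives a smoothed gradient magnitude, which is closer to what you would build from np.gradient output; a minimal sketch:

```python
import numpy as np
from scipy import ndimage

x = np.linspace(0, 2*np.pi, 100)
sine = np.sin(x)
im = sine * sine[..., None]

# First derivative along each axis separately, then the Euclidean norm:
d1_0 = ndimage.gaussian_filter1d(im, axis=0, sigma=5, order=1, mode='wrap')
d1_1 = ndimage.gaussian_filter1d(im, axis=1, sigma=5, order=1, mode='wrap')
d1_mag = np.sqrt(d1_0**2 + d1_1**2)
```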
