Solve ODEs with discontinuous input/forcing data


Question

I'm trying to solve a system of coupled, first-order ODEs in Python. I'm new to this, but the Zombie Apocalypse example from SciPy.org has been a great help so far.

An important difference in my case is that the input data used to "drive" my system of ODEs changes abruptly at various time points and I'm not sure how best to deal with this. The code below is the simplest example I can think of to illustrate my problem. I appreciate this example has a straightforward analytical solution, but my actual system of ODEs is more complicated, which is why I'm trying to understand the basics of numerical methods.

Simplified example

Consider a bucket with a hole in the bottom (this kind of "linear reservoir" is the basic building block of many hydrological models). The input flow rate to the bucket is R and the output from the hole is Q. Q is assumed to be proportional to the volume of water in the bucket, V. The constant of proportionality is usually written as 1/T, where T is the "residence time" of the store. This gives a simple ODE of the form dQ/dt = (R - Q)/T.
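Writing the algebra out (this just restates what the code below implements, where the right-hand side appears as dQ_dt = (R_t - Q_t)/T):

dV/dt = R - Q                        (mass balance for the bucket)
Q = V/T                              (outflow proportional to storage)
=>  dQ/dt = (1/T) dV/dt = (R - Q)/T

For constant R this has the exact solution Q(t) = R + (Q(0) - R)*exp(-t/T), which is handy for sanity-checking the numerical results below.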

In reality, R is an observed time series of daily rainfall totals. Within each day, the rainfall rate is assumed to be constant, but between days the rate changes abruptly (i.e. R is a discontinuous function of time). I'm trying to understand the implications of this for solving my ODEs.

Strategy 1

The most obvious strategy (to me at least) is to apply SciPy's odeint function separately within each rainfall time step. This means I can treat R as a constant. Something like this:

import numpy as np, pandas as pd, matplotlib.pyplot as plt, seaborn as sn
from scipy.integrate import odeint

np.random.seed(seed=17)

def f(y, t, R_t):
    """ Function to integrate.
    """
    # Unpack parameters
    Q_t = y[0]

    # ODE to solve
    dQ_dt = (R_t - Q_t)/T

    return dQ_dt

# #############################################################################
# User input   
T = 10      # Time constant (days)
Q0 = 0.     # Initial condition for outflow rate (mm/day)
days = 300  # Number of days to simulate
# #############################################################################

# Create a fake daily time series for R
# Generate random values from a uniform distribution
df = pd.DataFrame({'R':np.random.uniform(low=0, high=5, size=days+20)},
                  index=range(days+20))

# Smooth with a moving window to make more sensible                  
df['R'] = df['R'].rolling(window=20).mean()

# Chop off the NoData at the start due to moving window
df = df[20:].reset_index(drop=True)

# List to store results
Q_vals = []

# Vector of initial conditions
y0 = [Q0, ]

# Loop over each day in the R dataset
for step in range(days):
    # We want to find the value of Q at the end of this time step
    t = [0, 1]

    # Get R for this step
    R_t = float(df.loc[step, 'R'])

    # Solve the ODEs
    soln = odeint(f, y0, t, args=(R_t,))

    # Extract flow at end of step from soln
    Q = float(soln[1, 0])

    # Append result
    Q_vals.append(Q)

    # Update initial condition for next step
    y0 = [Q, ]

# Add results to df
df['Q'] = Q_vals
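Since R is constant within each day, the ODE over a single step also has a closed-form solution, so the odeint call inside the loop could in principle be replaced by an exact exponential update. A minimal sketch of that idea, reusing df, T, Q0 and days from above and writing to a hypothetical Q_exact column:

Q = Q0
Q_exact = []
decay = np.exp(-1.0/T)              # exp(-dt/T) with dt = 1 day
for step in range(days):
    R_t = float(df.loc[step, 'R'])  # rainfall for this day (constant within the day)
    Q = R_t + (Q - R_t)*decay       # exact solution of dQ/dt = (R_t - Q)/T after one day
    Q_exact.append(Q)

df['Q_exact'] = Q_exact             # agrees with df['Q'] from the loop above to solver tolerance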

Strategy 2

The second approach involves simply feeding everything to odeint and letting it deal with the discontinuities. Using the same parameters and R values as above:

def f(y, t):
    """ Function used integrate.
    """
    # Unpack incremental values for S and D
    Q_t = y[0]

    # Get the value for R at this t
    idx = df.index.get_loc(t, method='ffill') 
    R_t = float(df.ix[idx])       

    # ODE to solve
    dQ_dt = (R_t - Q_t)/T

    return dQ_dt

# Vector of initial parameter values
y0 = [Q0, ]

# Time grid
t = np.arange(0, days, 1)

# solve the ODEs
soln = odeint(f, y0, t)

# Add result to df
df['Q'] = soln[:, 0]

Both of these approaches give identical answers.

However, the second strategy, although more compact in terms of code, is much slower than the first. I guess this has something to do with the discontinuities in R causing problems for odeint?

My questions

  1. Is strategy 1 the best approach here, or is there a better way?
  2. Is strategy 2 a bad idea and why is it so slow?

Thank you!

Solution

1.) Yes

2.) Yes

Reason for both: Runge-Kutta solvers expect ODE functions whose order of differentiability is at least as high as the order of the solver, so that the Taylor expansion which gives the expected error term exists. This means that even the order-1 Euler method expects a differentiable ODE function. Jumps are therefore not allowed; kinks can be tolerated by an order-1 method, but not by higher-order solvers.

This is especially true for implementations with automatic step-size adaptation. Whenever the solver approaches a point where the required order of differentiability is not satisfied, it sees a stiff system and drives the step size toward 0, which slows the integration down.
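A quick way to see this happening with odeint is to request its internal statistics via full_output=True and look at the step sizes and the cumulative number of right-hand-side evaluations. A minimal sketch, assuming the f, y0 and t defined in Strategy 2 above:

soln, info = odeint(f, y0, t, full_output=True)

# 'hu' holds the step sizes actually used, 'nfe' the cumulative number of
# function evaluations at each output time; the extra work near the
# discontinuities shows up in these numbers.
print("smallest internal step size:", info['hu'].min())
print("total RHS evaluations:", info['nfe'][-1])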

You can combine strategies 1 and 2 by using a solver with a fixed step size that is a fraction of 1 day. The sampling points at the day boundaries then serve as (implicit) restart points with the new constant.
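A minimal sketch of that combined approach, assuming a hand-rolled fixed-step classical RK4 with 10 sub-steps per day so that every day boundary falls exactly on a grid point (the rhs helper and the Q_rk4 column are illustrative names, not part of the original code):

def rhs(Q, R_t, T):
    return (R_t - Q)/T

def rk4_fixed(rhs, Q0, R_series, T, substeps=10):
    """Fixed-step classical RK4; the step is 1/substeps of a day, so the
    grid always hits the day boundaries where R jumps."""
    h = 1.0/substeps
    Q = Q0
    out = []
    for R_t in R_series:                    # R is constant within each day
        for _ in range(substeps):
            k1 = rhs(Q, R_t, T)
            k2 = rhs(Q + 0.5*h*k1, R_t, T)
            k3 = rhs(Q + 0.5*h*k2, R_t, T)
            k4 = rhs(Q + h*k3, R_t, T)
            Q += (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        out.append(Q)                       # value at the end of each day
    return out

df['Q_rk4'] = rk4_fixed(rhs, Q0, df['R'].values, T)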
