Odd SciPy ODE Integration error


Problem Description

I'm implementing a very simple Susceptible-Infected-Recovered model with a steady population for an idle side project - normally a pretty trivial task. But I'm running into solver errors using either PysCeS or SciPy, both of which use lsoda as their underlying solver. This only happens for particular values of a parameter, and I'm stumped as to why. The code I'm using is as follows:

import numpy as np
from pylab import *
import scipy.integrate as spi

#Parameter Values
S0 = 99.
I0 = 1.
R0 = 0.
PopIn= (S0, I0, R0)
beta= 0.50     
gamma=1/10.  
mu = 1/25550.
t_end = 15000.
t_start = 1.
t_step = 1.
t_interval = np.arange(t_start, t_end, t_step)

#Solving the differential equation. Solves over t for initial conditions PopIn
def eq_system(PopIn,t):
    '''Defining SIR System of Equations'''
    #Creating an array of equations
    Eqs= np.zeros((3))
    Eqs[0]= -beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - mu*PopIn[0] + mu*(PopIn[0]+PopIn[1]+PopIn[2])
    Eqs[1]= (beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - gamma*PopIn[1] - mu*PopIn[1])
    Eqs[2]= gamma*PopIn[1] - mu*PopIn[2]
    return Eqs

SIR = spi.odeint(eq_system, PopIn, t_interval)

This produces the following error:

 lsoda--  at current t (=r1), mxstep (=i1) steps   
       taken on this call before reaching tout     
      In above message,  I1 =       500
      In above message,  R1 =  0.7818108252072E+04
Excess work done on this call (perhaps wrong Dfun type).
Run with full_output = 1 to get quantitative information.

Normally when I encounter a problem like this, there's something terminally wrong with the equation system I set up, but I can't see anything wrong with it here. Weirdly, it also works if you change mu to something like 1/15550. In case there was something wrong with the system itself, I also implemented the model in R as follows:

require(deSolve)

sir.model <- function (t, x, params) {
  S <- x[1]
  I <- x[2]
  R <- x[3]
  with (
    as.list(params),
{
    dS <- -beta*S*I/(S+I+R) - mu*S + mu*(S+I+R)
    dI <- beta*S*I/(S+I+R) - gamma*I - mu*I
    dR <- gamma*I - mu*R
  res <- c(dS,dI,dR)
  list(res)
}
  )
}

times <- seq(0,15000,by=1)
params <- c(
 beta <- 0.50,
 gamma <- 1/10,
 mu <- 1/25550
)

xstart <- c(S = 99, I = 1, R= 0)

out <- as.data.frame(lsoda(xstart,times,sir.model,params))

This also uses lsoda, but seems to be going off without a hitch. Can anyone see what's going wrong in the Python code?

Solution

I think that for the parameters you've chosen you're running into problems with stiffness - due to numerical instability the solver's step size is getting pushed into becoming very small in regions where the slope of the solution curve is actually quite shallow. The Fortran solver lsoda, which is wrapped by scipy.integrate.odeint, tries to switch adaptively between methods suited to 'stiff' and 'non-stiff' systems, but in this case it seems to be failing to switch to stiff methods.

Very crudely you can just massively increase the maximum allowed steps and the solver will get there in the end:

SIR = spi.odeint(eq_system, PopIn, t_interval,mxstep=5000000)
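
If you take that route, the full_output=1 flag that the error message itself suggests is a handy way to confirm what lsoda is doing internally. A minimal diagnostic sketch, reusing eq_system, PopIn and t_interval from the question (nothing new beyond the keyword arguments): odeint then also returns a dictionary of per-step statistics, including which method lsoda actually used on each step.

# Diagnostic sketch: with full_output=1, odeint returns (solution, info dict).
SIR, info = spi.odeint(eq_system, PopIn, t_interval,
                       mxstep=5000000, full_output=1)

print(info['message'])      # 'Integration successful.' once it actually finishes
print(info['hu'][:10])      # internal step sizes used around the first output points
print(info['mused'][:10])   # method per step: 1 = Adams (non-stiff), 2 = BDF (stiff)
print(info['nst'][-1])      # cumulative number of internal steps taken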

A better option is to use the object-oriented ODE solver scipy.integrate.ode, which allows you to explicitly choose whether to use stiff or non-stiff methods:

import numpy as np
from pylab import *
import scipy.integrate as spi

def run():
    #Parameter Values
    S0 = 99.
    I0 = 1.
    R0 = 0.
    PopIn= (S0, I0, R0)
    beta= 0.50     
    gamma=1/10.  
    mu = 1/25550.
    t_end = 15000.
    t_start = 1.
    t_step = 1.
    t_interval = np.arange(t_start, t_end, t_step)

    #Solving the differential equation. Solves over t for initial conditions PopIn
    def eq_system(t,PopIn):
        '''Defining SIR System of Equations'''
        #Creating an array of equations
        Eqs= np.zeros((3))
        Eqs[0]= -beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - mu*PopIn[0] + mu*(PopIn[0]+PopIn[1]+PopIn[2])
        Eqs[1]= (beta * (PopIn[0]*PopIn[1]/(PopIn[0]+PopIn[1]+PopIn[2])) - gamma*PopIn[1] - mu*PopIn[1])
        Eqs[2]= gamma*PopIn[1] - mu*PopIn[2]
        return Eqs

    ode =  spi.ode(eq_system)

    # BDF method suited to stiff systems of ODEs
    ode.set_integrator('vode',nsteps=500,method='bdf')
    ode.set_initial_value(PopIn,t_start)

    ts = []
    ys = []

    while ode.successful() and ode.t < t_end:
        ode.integrate(ode.t + t_step)
        ts.append(ode.t)
        ys.append(ode.y)

    t = np.vstack(ts)
    s,i,r = np.vstack(ys).T

    fig,ax = subplots(1,1)
    # (ax.hold(True) removed: overplotting is the default, and Axes.hold was dropped in Matplotlib 3.0)
    ax.plot(t,s,label='Susceptible')
    ax.plot(t,i,label='Infected')
    ax.plot(t,r,label='Recovered')
    ax.set_xlim(t_start,t_end)
    ax.set_ylim(0,100)
    ax.set_xlabel('Time')
    ax.set_ylabel('Percent')
    ax.legend(loc=0,fancybox=True)

    return t,s,i,r,fig,ax
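
For completeness, run() returns the solution arrays along with the figure handles, so a typical driver looks like the sketch below (assuming an interactive pylab session; otherwise the explicit show() call displays the figure):

t, s, i, r, fig, ax = run()
show()   # from the pylab star-import; displays the figure when not in interactive mode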

Output: (plot of the Susceptible, Infected and Recovered curves over time, as produced by run())
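
As a side note not in the original answer: on SciPy 1.0 and later the recommended interface is scipy.integrate.solve_ivp, which lets you pick a stiff method such as 'BDF' or 'LSODA' directly. A self-contained sketch of the same SIR system, with the right-hand side written in the f(t, y) argument order that solve_ivp expects (the same order used in the answer above):

import numpy as np
import scipy.integrate as spi

# Parameters and initial conditions as in the question
beta, gamma, mu = 0.50, 1/10., 1/25550.
PopIn = (99., 1., 0.)
t_start, t_end = 1., 15000.

def sir_rhs(t, y):
    # SIR right-hand side in the (t, y) order that solve_ivp expects
    S, I, R = y
    N = S + I + R
    dS = -beta*S*I/N - mu*S + mu*N
    dI = beta*S*I/N - gamma*I - mu*I
    dR = gamma*I - mu*R
    return [dS, dI, dR]

sol = spi.solve_ivp(sir_rhs, (t_start, t_end), PopIn,
                    method='BDF',                        # implicit method suited to stiff systems
                    t_eval=np.arange(t_start, t_end, 1.))
S, I, R = sol.y                                          # shape (3, len(sol.t))
print(sol.success, sol.message)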
