Python using Kalman Filter to improve simulation but getting worse results


Question

I have a question about the behaviour I am seeing when applying a Kalman filter (KF) to the following forecasting problem. I have included a simple code sample.

Objective: I would like to know if a KF is suitable for improving the day-ahead forecast/simulation result (at t+24 hours) using the measurement obtained now (at time t). The goal is to get the forecast as close to the measurement as possible.

Assumption: We assume the measurement is perfect (i.e. if we can get the forecast to match the measurement exactly, we are happy).

We have a single measured variable (z, the actual wind speed) and a single simulated variable (x, the forecast wind speed).

The simulated wind speed x is produced by NWP (numerical weather prediction) software from a variety of meteorological data (a black box to me). A simulation file is generated daily, containing data at half-hour intervals.

I try to correct the t+24h forecast with a scalar Kalman filter, using the measurement obtained now and the forecast data for now (which was generated t-24 hours earlier). As a reference I used: http://www.swarthmore.edu/NatSci/echeeve1/Ref/Kalman/ScalarKalman.html
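For context, here is a minimal sketch of the scalar predict/update step as described in the reference above (the function name scalar_kf_update and its signature are illustrative, not taken from the reference). The loop in the code below uses the same equations, except that the residual is taken from an earlier time step (t-delay):

def scalar_kf_update(x_prior, p_prev, z, a=1.0, h=1.0, Q=16.0, R=9.0):
    # time update: propagate the error covariance through the scalar model
    p_prior = a**2 * p_prev + Q
    # measurement update: compute the Kalman gain
    k = h * p_prior / (h**2 * p_prior + R)
    # correct the prior estimate with the gain-weighted residual
    x_post = x_prior + k * (z - h * x_prior)
    # shrink the error covariance accordingly
    p_post = p_prior * (1 - h * k)
    return x_post, p_post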

Code:

#! /usr/bin/python

import numpy as np
import pylab

import os


def main():

    # x = 336 data points of simulated wind speed for 7 days * 24 hour * 2 (every half an hour)
    # Imagine at time t, we will get an x_t value for t+48, i.e. 24 hours later.
    x = load_x()

    # this is a list that will contain 336 data points of our corrected data
    x_sample_predict_list = []

    # z = 336 data points for 7 days * 24 hour * 2 of actual measured wind speed (every half an hour)
    z = load_z()

    # Here is the setup of the scalar kalman filter
    # reference: http://www.swarthmore.edu/NatSci/echeeve1/Ref/Kalman/ScalarKalman.html
    # state transition matrix (we simply have a scalar)
    # what you multiply the previous state by to get the newest state
    # i.e. x_t+1 = a * x_t; since we get x_t+1 directly from the simulation,
    # we set a = 1
    a = 1.0

    # observation matrix
    # what you multiply the state by to convert it to the same form as the incoming measurement
    # both state and measurement are wind speed, so set h = 1
    h = 1.0

    Q = 16.0    # expected process variance of predicted Wind Speed
    R = 9.0 # expected measurement variance of Wind Speed

    p_j = Q # process covariance is equal to the initial process covariance estimate

    # Kalman gain is equal to k = hp-_j / (hp-_j + R).  With perfect measurement
    # R = 0, k reduces to k=1/h which is 1
    k = 1.0

    # one week data
    # original R2 = 0.183
    # with delay = 6, R2 = 0.295
    # with delay = 12, R2 = 0.147   
    # with delay = 48, R2 = 0.075
    delay = 6 

    # Kalman loop
    for t, x_sample in enumerate(x):

        if t <= delay:
            # for the first `delay` samples of the forecast
            # we don't yet have forecast data and a measurement
            # from `delay` steps earlier to do the correction
            x_sample_predict = x_sample
        else: # t > delay
            # for a priori estimate we take x_sample as is
            # x_sample = x^-_j = a x^-_j_1 + b u_j
            # Inside the NWP (numerical weather prediction),
            # x_sample should be based on x_sample_j-1 (assumption)

            x_sample_predict_prior = a * x_sample

            # we use the measurement from t-delay (ie. could be a day ago)
            # and forecast data from t-delay, to produce a leading residual that can be used to
            # correct the forecast.
            residual = z[t-delay] - h * x_sample_predict_list[t-delay]


            p_j_prior = a**2 * p_j + Q

            k = h * p_j_prior / (h**2 * p_j_prior + R)

            # we update our prediction based on the residual
            x_sample_predict = x_sample_predict_prior + k * residual

            p_j = p_j_prior * (1 - h * k)

            #print k
            #print p_j_prior
            #print p_j
            #raw_input()

        x_sample_predict_list.append(x_sample_predict)

    # initial goodness of fit
    R2_val_initial = calculate_regression(x,z)
    R2_string_initial = "R2 initial: {0:10.3f}, ".format(R2_val_initial)    
    print R2_string_initial     # R2_val_initial = 0.183

    # final goodness of fit
    R2_val_final = calculate_regression(x_sample_predict_list,z)
    R2_string_final = "R2 final: {0:10.3f}, ".format(R2_val_final)  
    print R2_string_final       # R2_val_final = 0.117, which is worse


    timesteps = xrange(len(x))      
    pylab.plot(timesteps,x,'r-', timesteps,z,'b:', timesteps,x_sample_predict_list,'g--')
    pylab.xlabel('Time')
    pylab.ylabel('Wind Speed')
    pylab.title('Simulated Wind Speed vs Actual Wind Speed')
    pylab.legend(('predicted','measured','kalman'))
    pylab.show()


def calculate_regression(x, y):         
    R2 = 0  
    A = np.array( [x, np.ones(len(x))] )
    model, resid = np.linalg.lstsq(A.T, y)[:2]  
    R2_val = 1 - resid[0] / (y.size * y.var())          
    return R2_val

def load_x():
    return np.array([2, 3, 3, 5, 4, 4, 4, 5, 5, 6, 5, 7, 7, 7, 8, 8, 8, 9, 9, 10, 10, 10, 11, 11,
     11, 10, 8, 8, 8, 8, 6, 3, 4, 5, 5, 5, 6, 5, 5, 5, 6, 5, 5, 6, 6, 7, 6, 8, 9, 10,
     12, 11, 10, 10, 10, 11, 11, 10, 8, 8, 9, 8, 9, 9, 9, 9, 8, 9, 8, 11, 11, 11, 12,
     12, 13, 13, 13, 13, 13, 13, 13, 14, 13, 13, 12, 13, 13, 12, 12, 13, 13, 12, 12, 
     11, 12, 12, 19, 18, 17, 15, 13, 14, 14, 14, 13, 12, 12, 12, 12, 11, 10, 10, 10, 
     10, 9, 9, 8, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 6, 6, 6, 7, 7, 8, 8, 8, 6, 5, 5, 
     5, 5, 5, 5, 6, 4, 4, 4, 6, 7, 8, 7, 7, 9, 10, 10, 9, 9, 8, 7, 5, 5, 5, 5, 5, 5, 
     5, 5, 6, 5, 5, 5, 4, 4, 6, 6, 7, 7, 7, 7, 6, 6, 5, 5, 4, 2, 2, 2, 1, 1, 1, 2, 3,
     13, 13, 12, 11, 10, 9, 10, 10, 8, 9, 8, 7, 5, 3, 2, 2, 2, 3, 3, 4, 4, 5, 6, 6,
     7, 7, 7, 6, 6, 6, 7, 6, 6, 5, 4, 4, 3, 3, 3, 2, 2, 1, 5, 5, 3, 2, 1, 2, 6, 7, 
     7, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 9, 9, 9, 9, 9, 8, 8, 8, 8, 7, 7, 
     7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 7, 11, 11, 11, 11, 10, 10, 9, 10, 10, 10, 2, 2,
     2, 3, 1, 1, 3, 4, 5, 8, 9, 9, 9, 9, 8, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 7,
     7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 7, 5, 5, 5, 5, 5, 6, 5])

def load_z():
    return np.array([3, 2, 1, 1, 1, 1, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 2, 1, 1, 2, 2, 2,
     2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 3, 4, 4, 4, 4, 5, 4, 4, 5, 5, 5, 6, 6,
     6, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 7, 8, 8, 8, 8, 8, 8, 9, 10, 9, 9, 10, 10, 9,
     9, 10, 9, 9, 10, 9, 8, 9, 9, 7, 7, 6, 7, 6, 6, 7, 7, 8, 8, 8, 8, 8, 8, 7, 6, 7,
     8, 8, 7, 8, 9, 9, 9, 9, 10, 9, 9, 9, 8, 8, 10, 9, 10, 10, 9, 9, 9, 10, 9, 8, 7, 
     7, 7, 7, 8, 7, 6, 5, 4, 3, 5, 3, 5, 4, 4, 4, 2, 4, 3, 2, 1, 1, 2, 1, 2, 1, 4, 4,
     4, 4, 4, 3, 3, 3, 1, 1, 1, 1, 2, 3, 3, 2, 3, 3, 3, 2, 2, 5, 4, 2, 5, 4, 1, 1, 1, 
     1, 1, 1, 1, 2, 2, 1, 1, 3, 3, 3, 3, 3, 4, 3, 4, 3, 4, 4, 4, 4, 3, 3, 4, 4, 4, 4,
     4, 4, 5, 5, 5, 4, 3, 3, 3, 3, 3, 3, 3, 3, 1, 2, 2, 3, 3, 1, 2, 1, 1, 2, 4, 3, 1,
     1, 2, 0, 0, 0, 2, 1, 0, 0, 2, 3, 2, 4, 4, 3, 3, 4, 5, 5, 5, 4, 5, 4, 4, 4, 5, 5, 
     4, 3, 3, 4, 4, 4, 3, 3, 3, 4, 4, 4, 5, 5, 5, 4, 5, 5, 5, 5, 6, 5, 5, 8, 9, 8, 9,
     9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 10, 9, 10, 9, 8, 8, 9, 8, 9, 9, 10, 9, 9, 9,
     7, 7, 9, 8, 7, 6, 6, 5, 5, 5, 5, 3, 3, 3, 4, 6, 5, 5, 6, 5])

if __name__ == '__main__': main()  # this avoids executing main on import your_module

Observations:

1) If yesterday's forecast over-predicted (a positive bias), then today I correct it by subtracting that bias. In practice, if today's forecast happens to under-predict, subtracting the positive bias makes it even worse. And indeed I observe wider swings in the corrected data and a poorer overall fit. What is wrong with my example?

2) Most Kalman filter resources state that the Kalman filter minimizes the a posteriori covariance p_j = E{(x_j - x̂_j)^2}, and there is a proof that choosing K minimizes p_j. But can someone explain how minimizing the a posteriori covariance actually minimizes the effect of the process white noise w? In a real-time case, suppose the actual and measured wind speed are both 5 m/s and the forecast wind speed is 6 m/s. The noise is w = 1 m/s and the residual is 5 - 6 = -1 m/s. Correcting the forecast by that 1 m/s gives back 5 m/s. Is that how the effect of the process noise is minimized?
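To make the arithmetic in this observation concrete, here is a tiny single-update sketch using the 5 m/s and 6 m/s figures above together with the Q, R and initial covariance from the code (an illustration only, not taken from the references). With a finite R the gain stays below 1, so only part of the -1 m/s residual gets applied:

a, h, Q, R = 1.0, 1.0, 16.0, 9.0
p_prev = Q          # initial covariance estimate, as in the code above
x_prior = 6.0       # forecast wind speed (m/s)
z_meas = 5.0        # measured wind speed (m/s)

p_prior = a**2 * p_prev + Q             # 32.0
k = h * p_prior / (h**2 * p_prior + R)  # 32/41, about 0.78 rather than 1
x_post = x_prior + k * (z_meas - h * x_prior)
print x_post                            # about 5.22 m/s, not exactly 5 m/s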

3) Here is a paper that mentions using a KF to smooth weather forecasts: http://hal.archives-ouvertes.fr/docs/00/50/59/93/PDF/Louka_etal_jweia2008.pdf. The interesting part is eq. (7) on pg. 9: "as soon as the new observation y_t is known, the estimate of x at time t becomes x_t = x_{t|t-1} + K_t (y_t - H_t x_{t|t-1})". If I interpret this with respect to actual time, then as soon as the new observation is known now, the estimate now becomes x_t .... I understand how the KF can bring your data close to the measurements in real time. But if you correct the forecast for t = now using measurement data from t = now, is it still a forecast?
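For terminology, a restatement in standard notation (my own, not copied from the paper): the measurement update yields the filtered estimate at time t, while what remains a forecast is the prediction of a later state built from it:

\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t \left( y_t - H_t \, \hat{x}_{t|t-1} \right), \qquad
\hat{x}_{t+k|t} = A^{k} \, \hat{x}_{t|t}, \quad k > 0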

Thanks!

Update 1:

4) I added a delay to the code to investigate how much later than the current bias (computed from the current measurement) the corrected forecast can be, if we want the R2 between the Kalman-processed data and the measured time series to improve over the R2 between the unprocessed data and the measurements. In this example, if the measurement is used to improve the forecast 6 time steps ahead (3 hours from now), it is still useful (R2 goes from 0.183 to 0.295). But if the measurement is used to improve the forecast one day ahead, it destroys the correlation (R2 drops to 0.075).
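One way to reproduce the delay sweep above (a sketch of my own: run_kalman is a hypothetical helper assumed to wrap the Kalman loop from main() and return the corrected series for a given delay):

x = load_x()
z = load_z()
for delay in (6, 12, 48):
    corrected = run_kalman(x, z, delay)   # hypothetical wrapper, not defined in the code above
    print delay, calculate_regression(corrected, z)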

Answer

I updated my test scalar implementation without the perfect-measurement assumption for R, which was what had reduced the Kalman gain to a constant value of 1. Now I see an improvement in the time series, with a reduced RMSE error.

#! /usr/bin/python

import numpy as np
import pylab

import os

# RMSE improved
def main():

    # x = 336 data points of simulated wind speed for 7 days * 24 hour * 2 (every half an hour)
    # Imagine at time t, we will get an x_t value for t+48, i.e. 24 hours later.
    x = load_x()

    # this is a list that will contain 336 data points of our corrected data
    x_sample_predict_list = []

    # z = 336 data points for 7 days * 24 hour * 2 of actual measured wind speed (every half an hour)
    z = load_z()

    # Here is the setup of the scalar kalman filter
    # reference: http://www.swarthmore.edu/NatSci/echeeve1/Ref/Kalman/ScalarKalman.html
    # state transition matrix (we simply have a scalar)
    # what you multiply the previous state by to get the newest state
    # i.e. x_t+1 = a * x_t; since we get x_t+1 directly from the simulation,
    # we set a = 1
    a = 1.0

    # observation matrix
    # what you multiply the state by to convert it to the same form as the incoming measurement
    # both state and measurement are wind speed, so set h = 1
    h = 1.0

    Q = 1.0     # expected process noise of predicted Wind Speed    
    R = 1.0     # expected measurement noise of Wind Speed

    p_j = Q # process covariance is equal to the initial process covariance estimate

    # Kalman gain is equal to k = hp-_j / (hp-_j + R).  With perfect measurement
    # R = 0, k reduces to k=1/h which is 1
    k = 1.0

    # one week data
    # original R2 = 0.183
    # with delay = 6, R2 = 0.295
    # with delay = 12, R2 = 0.147   
    # with delay = 48, R2 = 0.075
    delay = 6 

    # Kalman loop
    for t, x_sample in enumerate(x):

        if t <= delay:
            # for the first `delay` samples of the forecast
            # we don't yet have forecast data and a measurement
            # from `delay` steps earlier to do the correction
            x_sample_predict = x_sample
        else: # t > delay
            # for a priori estimate we take x_sample as is
            # x_sample = x^-_j = a x^-_j_1 + b u_j
            # Inside the NWP (numerical weather prediction),
            # x_sample should be based on x_sample_j-1 (assumption)

            x_sample_predict_prior = a * x_sample

            # we use the measurement from t-delay (ie. could be a day ago)
            # and forecast data from t-delay, to produce a leading residual that can be used to
            # correct the forecast.
            residual = z[t-delay] - h * x_sample_predict_list[t-delay]

            p_j_prior = a**2 * p_j + Q

            k = h * p_j_prior / (h**2 * p_j_prior + R)

            # we update our prediction based on the residual
            x_sample_predict = x_sample_predict_prior + k * residual

            p_j = p_j_prior * (1 - h * k)

            #print k
            #print p_j_prior
            #print p_j
            #raw_input()

        x_sample_predict_list.append(x_sample_predict)

    # initial goodness of fit
    R2_val_initial = calculate_regression(x,z)
    R2_string_initial = "R2 original: {0:10.3f}, ".format(R2_val_initial)   
    print R2_string_initial     # R2_val_original = 0.183

    original_RMSE = (((x-z)**2).mean())**0.5
    print "original_RMSE"
    print original_RMSE 
    print "
"

    # final goodness of fit
    R2_val_final = calculate_regression(x_sample_predict_list,z)
    R2_string_final = "R2 final: {0:10.3f}, ".format(R2_val_final)  
    print R2_string_final       # R2_val_final = 0.267, which is better

    final_RMSE = (((np.array(x_sample_predict_list) - z)**2).mean())**0.5
    print "final_RMSE"
    print final_RMSE    
    print "
"


    timesteps = xrange(len(x))      
    pylab.plot(timesteps,x,'r-', timesteps,z,'b:', timesteps,x_sample_predict_list,'g--')
    pylab.xlabel('Time')
    pylab.ylabel('Wind Speed')
    pylab.title('Simulated Wind Speed vs Actual Wind Speed')
    pylab.legend(('predicted','measured','kalman'))
    pylab.show()


def calculate_regression(x, y):         
    R2 = 0  
    A = np.array( [x, np.ones(len(x))] )
    model, resid = np.linalg.lstsq(A.T, y)[:2]  
    R2_val = 1 - resid[0] / (y.size * y.var())          
    return R2_val

def load_x():
    return np.array([2, 3, 3, 5, 4, 4, 4, 5, 5, 6, 5, 7, 7, 7, 8, 8, 8, 9, 9, 10, 10, 10, 11, 11,
     11, 10, 8, 8, 8, 8, 6, 3, 4, 5, 5, 5, 6, 5, 5, 5, 6, 5, 5, 6, 6, 7, 6, 8, 9, 10,
     12, 11, 10, 10, 10, 11, 11, 10, 8, 8, 9, 8, 9, 9, 9, 9, 8, 9, 8, 11, 11, 11, 12,
     12, 13, 13, 13, 13, 13, 13, 13, 14, 13, 13, 12, 13, 13, 12, 12, 13, 13, 12, 12, 
     11, 12, 12, 19, 18, 17, 15, 13, 14, 14, 14, 13, 12, 12, 12, 12, 11, 10, 10, 10, 
     10, 9, 9, 8, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 6, 6, 6, 7, 7, 8, 8, 8, 6, 5, 5, 
     5, 5, 5, 5, 6, 4, 4, 4, 6, 7, 8, 7, 7, 9, 10, 10, 9, 9, 8, 7, 5, 5, 5, 5, 5, 5, 
     5, 5, 6, 5, 5, 5, 4, 4, 6, 6, 7, 7, 7, 7, 6, 6, 5, 5, 4, 2, 2, 2, 1, 1, 1, 2, 3,
     13, 13, 12, 11, 10, 9, 10, 10, 8, 9, 8, 7, 5, 3, 2, 2, 2, 3, 3, 4, 4, 5, 6, 6,
     7, 7, 7, 6, 6, 6, 7, 6, 6, 5, 4, 4, 3, 3, 3, 2, 2, 1, 5, 5, 3, 2, 1, 2, 6, 7, 
     7, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 9, 9, 9, 9, 9, 8, 8, 8, 8, 7, 7, 
     7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 7, 11, 11, 11, 11, 10, 10, 9, 10, 10, 10, 2, 2,
     2, 3, 1, 1, 3, 4, 5, 8, 9, 9, 9, 9, 8, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 7,
     7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 7, 5, 5, 5, 5, 5, 6, 5])

def load_z():
    return np.array([3, 2, 1, 1, 1, 1, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 2, 1, 1, 2, 2, 2,
     2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 3, 4, 4, 4, 4, 5, 4, 4, 5, 5, 5, 6, 6,
     6, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 7, 8, 8, 8, 8, 8, 8, 9, 10, 9, 9, 10, 10, 9,
     9, 10, 9, 9, 10, 9, 8, 9, 9, 7, 7, 6, 7, 6, 6, 7, 7, 8, 8, 8, 8, 8, 8, 7, 6, 7,
     8, 8, 7, 8, 9, 9, 9, 9, 10, 9, 9, 9, 8, 8, 10, 9, 10, 10, 9, 9, 9, 10, 9, 8, 7, 
     7, 7, 7, 8, 7, 6, 5, 4, 3, 5, 3, 5, 4, 4, 4, 2, 4, 3, 2, 1, 1, 2, 1, 2, 1, 4, 4,
     4, 4, 4, 3, 3, 3, 1, 1, 1, 1, 2, 3, 3, 2, 3, 3, 3, 2, 2, 5, 4, 2, 5, 4, 1, 1, 1, 
     1, 1, 1, 1, 2, 2, 1, 1, 3, 3, 3, 3, 3, 4, 3, 4, 3, 4, 4, 4, 4, 3, 3, 4, 4, 4, 4,
     4, 4, 5, 5, 5, 4, 3, 3, 3, 3, 3, 3, 3, 3, 1, 2, 2, 3, 3, 1, 2, 1, 1, 2, 4, 3, 1,
     1, 2, 0, 0, 0, 2, 1, 0, 0, 2, 3, 2, 4, 4, 3, 3, 4, 5, 5, 5, 4, 5, 4, 4, 4, 5, 5, 
     4, 3, 3, 4, 4, 4, 3, 3, 3, 4, 4, 4, 5, 5, 5, 4, 5, 5, 5, 5, 6, 5, 5, 8, 9, 8, 9,
     9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 10, 9, 10, 9, 8, 8, 9, 8, 9, 9, 10, 9, 9, 9,
     7, 7, 9, 8, 7, 6, 6, 5, 5, 5, 5, 3, 3, 3, 4, 6, 5, 5, 6, 5])

if __name__ == '__main__': main()  # this avoids executing main on import your_module

