Groupby and resample timeseries so date ranges are consistent

Problem Description

I have a dataframe which is basically several timeseries stacked on top of one another. Each time series has a unique label (group) and they have different date ranges.

import pandas as pd

date = pd.to_datetime(pd.Series(['2010-01-01', '2010-01-02', '2010-01-03',
                                 '2010-01-06', '2010-01-01', '2010-01-03']))
group = [1, 1, 1, 1, 2, 2]
value = [1, 2, 3, 4, 5, 6]
df = pd.DataFrame({'date': date, 'group': group, 'value': value})
df
        date   group   value
0 2010-01-01       1       1
1 2010-01-02       1       2
2 2010-01-03       1       3
3 2010-01-06       1       4
4 2010-01-01       2       5
5 2010-01-03       2       6

I would like to resample the data so that there is an entry for every single combination of date and group (padding values to NaN if there was no observation that day or it's outside the date range). Example output would be:

      date   group   value                 
2010-01-01       1       1
2010-01-02       1       2
2010-01-03       1       3
2010-01-04       1       NaN
2010-01-05       1       NaN
2010-01-06       1       4
2010-01-01       2       5
2010-01-02       2       NaN
2010-01-03       2       6
2010-01-04       2       NaN
2010-01-05       2       NaN
2010-01-06       2       NaN

I have a solution which works, but I suspect there are better approaches. My solution is to first pivot the data, then unstack, groupby and resample. Basically all that's really needed is to do a groupby and resample, but specifying the max and min of the resampling range with the max and min values of the whole date column, and I can't see any way to do that.

df = (df.pivot(index='date', columns='group', values='value')
        .unstack()
        .reset_index()
        .set_index('date')
        .groupby('group').resample('D').asfreq()
        .drop('group', axis=1)
        .reset_index()
        .rename(columns={0:'value'}))[['date', 'group', 'value']]
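
For reference, here is a sketch of the groupby-and-reindex idea described above: build one daily date_range spanning the whole date column and reindex each group's series against it. This is only a sketch using the column names from this question; full_range and out are illustrative names, not part of the original post.

import pandas as pd

# one shared daily range covering the min/max of the whole date column
full_range = pd.date_range(df['date'].min(), df['date'].max(), freq='D')

out = (df.set_index('date')
         .groupby('group')['value']
         .apply(lambda s: s.reindex(full_range))   # pad missing days with NaN
         .rename('value')
         .rename_axis(['group', 'date'])
         .reset_index()
         [['date', 'group', 'value']])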

Recommended Answer

Credit to zipa for getting the dates correct. I've edited my post to correct my mistake.

Set the index then use pandas.MultiIndex.from_product to produce the Cartesian product of values. I also use fill_value=0 to fill in those missing values.

d = df.set_index(['date', 'group'])

# Cartesian product of every day in the range with every group label
midx = pd.MultiIndex.from_product(
    [pd.date_range(df.date.min(), df.date.max()), df.group.unique()],
    names=d.index.names
)

# reindex against the full grid; fill_value=0 fills the missing entries
d.reindex(midx, fill_value=0).reset_index()

         date  group  value
0  2010-01-01      1      1
1  2010-01-01      2      5
2  2010-01-02      1      2
3  2010-01-02      2      0
4  2010-01-03      1      3
5  2010-01-03      2      6
6  2010-01-04      1      0
7  2010-01-04      2      0
8  2010-01-05      1      0
9  2010-01-05      2      0
10 2010-01-06      1      4
11 2010-01-06      2      0

Leaving out fill_value=0 keeps the missing entries as NaN instead:

d = df.set_index(['date', 'group'])
midx = pd.MultiIndex.from_product(
    [pd.date_range(df.date.min(), df.date.max()), df.group.unique()],
    names=d.index.names
)
d.reindex(midx).reset_index()

         date  group  value
0  2010-01-01      1    1.0
1  2010-01-01      2    5.0
2  2010-01-02      1    2.0
3  2010-01-02      2    NaN
4  2010-01-03      1    3.0
5  2010-01-03      2    6.0
6  2010-01-04      1    NaN
7  2010-01-04      2    NaN
8  2010-01-05      1    NaN
9  2010-01-05      2    NaN
10 2010-01-06      1    4.0
11 2010-01-06      2    NaN
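
A side note not from the original answer: with fill_value=0 the value column stays integer, while the NaN version above is forced to float (1.0, 2.0, ...). If you want missing markers and integer values at the same time, pandas' nullable integer dtype is one option; a minimal sketch, assuming a reasonably recent pandas and using res as an illustrative name:

# convert the float column to pandas' nullable Int64 dtype so missing
# entries show as <NA> while the observed values stay integers
res = d.reindex(midx).reset_index()
res['value'] = res['value'].astype('Int64')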

Another dance we could do is a cleaned-up version of the OP's attempt. Again, I use fill_value=0 to fill in the missing values. We could leave that out to produce NaN instead.

df.set_index(['date', 'group']) \
  .unstack(fill_value=0) \
  .asfreq('D', fill_value=0) \
  .stack().reset_index()

         date  group  value
0  2010-01-01      1      1
1  2010-01-01      2      5
2  2010-01-02      1      2
3  2010-01-02      2      0
4  2010-01-03      1      3
5  2010-01-03      2      6
6  2010-01-04      1      0
7  2010-01-04      2      0
8  2010-01-05      1      0
9  2010-01-05      2      0
10 2010-01-06      1      4
11 2010-01-06      2      0

And without fill_value, keeping NaN:

df.set_index(['date', 'group']) \
  .unstack() \
  .asfreq('D') \
  .stack(dropna=False).reset_index()

         date  group  value
0  2010-01-01      1    1.0
1  2010-01-01      2    5.0
2  2010-01-02      1    2.0
3  2010-01-02      2    NaN
4  2010-01-03      1    3.0
5  2010-01-03      2    6.0
6  2010-01-04      1    NaN
7  2010-01-04      2    NaN
8  2010-01-05      1    NaN
9  2010-01-05      2    NaN
10 2010-01-06      1    4.0
11 2010-01-06      2    NaN
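
If you also want the rows ordered by group first and then by date, as in the expected output in the question, a final sort takes care of that (a small usage sketch; res is just an illustrative name):

res = df.set_index(['date', 'group']) \
        .unstack() \
        .asfreq('D') \
        .stack(dropna=False).reset_index()

# order by group, then date, to match the layout in the question
res = res.sort_values(['group', 'date']).reset_index(drop=True)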
