Complete time-series with sparklyr


Problem Description

I'm trying to find missing minutes in my time-series dataset. I wrote R code that works locally on a small sample:

library(dplyr); library(tidyr)

test <- dfv %>% mutate(timestamp = as.POSIXct(DaySecFrom.UTC.)) %>% 
  complete(timestamp = seq.POSIXt(min(timestamp), max(timestamp), by = 'min'), ElemUID)

But you can't use complete() from tidyr on a spark_tbl.

Error in UseMethod("complete_") : 
  no applicable method for 'complete_' applied to an object of class "c('tbl_spark', 'tbl_sql', 'tbl_lazy', 'tbl')"

Here is some test data:

ElemUID    ElemName       Kind  Number  DaySecFrom(UTC)          DaySecTo(UTC)
399126817  A648/13FKO-66  DEZ           2017-07-01 23:58:00.000  2017-07-01 23:59:00.000
483492732  A661/18FRS-97  DEZ   120.00  2017-07-01 23:58:00.000  2017-07-01 23:59:00.000
399126819  A648/12FKO-2   DEZ    60.00  2017-07-01 23:58:00.000  2017-07-01 23:59:00.000
399126818  A648/12FKO-1   DEZ   180.00  2017-07-01 23:58:00.000  2017-07-01 23:59:00.000
399126816  A648/13FKO-65  DEZ           2017-07-01 23:58:00.000  2017-07-01 23:59:00.000
398331142  A661/31OFN-1   DEZ   120.00  2017-07-01 23:58:00.000  2017-07-01 23:59:00.000
398331143  A661/31OFN-2   DEZ           2017-07-01 23:58:00.000  2017-07-01 23:59:00.000
483492739  A5/28FKN-65    DEZ           2017-07-01 23:58:00.000  2017-07-01 23:59:00.000
483492735  A661/23FRS-97  DEZ    60.00  2017-07-01 23:58:00.000  2017-07-01 23:59:00.000
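(A side note on naming: the local code above refers to DaySecFrom.UTC. because R's default check.names = TRUE sanitizes a header like DaySecFrom(UTC) on import:)

make.names("DaySecFrom(UTC)")
#> [1] "DaySecFrom.UTC."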

Is there any other way or workaround to solve this task on a Spark cluster in R? I would really appreciate your help!

Recommended Answer

Find the min and max values as epoch time:

library(sparklyr)
library(dplyr)

# sc is an open spark_connect() connection; the sample has two known gaps
df <- copy_to(sc, tibble(id=1:4, timestamp=c(
    "2017-07-01 23:49:00.000", "2017-07-01 23:50:00.000",
    # 6 minutes gap
    "2017-07-01 23:56:00.000",
    # 1 minute gap
    "2017-07-01 23:58:00.000")
), "df", overwrite=TRUE)

min_max <- df %>% 
  summarise(min(unix_timestamp(timestamp)), max(unix_timestamp(timestamp))) %>% 
  collect() %>% 
  unlist()
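After collect() and unlist(), min_max is a plain named numeric vector of epoch seconds. With the sample above it looks roughly like this (the exact values assume a UTC session timezone, since unix_timestamp() parses in the JVM's local time zone):

min_max
#> min(unix_timestamp(timestamp)) max(unix_timestamp(timestamp)) 
#>                     1498952940                     1498953480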

Generate a reference range from min(epoch_time) to max(epoch_time) + interval (Spark SQL's RANGE(start, end, step) is end-exclusive, so the interval is added to keep the last timestamp):

library(glue) 

# One reference row per minute, as epoch seconds
query <- glue("SELECT id AS timestamp FROM RANGE({min_max[1]}, {min_max[2] + 60}, 60)") %>%
  as.character()

# Register the result for dplyr and format the epoch seconds
# back into the original string representation
ref <- spark_session(sc) %>% invoke("sql", query) %>% 
  sdf_register() %>%
  mutate(timestamp = from_unixtime(timestamp, "yyyy-MM-dd HH:mm:ss.SSS"))
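Under the same UTC assumption, the rendered query string would look like:

query
#> [1] "SELECT id AS timestamp FROM RANGE(1498952940, 1498953540, 60)"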

Outer join both:

ref %>% left_join(df, by="timestamp")

# Source:   lazy query [?? x 2]
# Database: spark_connection
   timestamp                  id
   <chr>                   <int>
 1 2017-07-01 23:49:00.000     1
 2 2017-07-01 23:50:00.000     2
 3 2017-07-01 23:51:00.000    NA
 4 2017-07-01 23:52:00.000    NA
 5 2017-07-01 23:53:00.000    NA
 6 2017-07-01 23:54:00.000    NA
 7 2017-07-01 23:55:00.000    NA
 8 2017-07-01 23:56:00.000     3
 9 2017-07-01 23:57:00.000    NA
10 2017-07-01 23:58:00.000     4
# ... with more rows
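Since the original goal is to find the missing minutes themselves, a small follow-up sketch (not part of the original answer) keeps only the reference rows with no match, using the sample's id column as the marker:

missing <- ref %>%
  left_join(df, by = "timestamp") %>%
  filter(is.na(id)) %>%    # NULL id means no row in df for that minute
  select(timestamp)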

Note:

If you experience issues related to SPARK-20145, you can replace the SQL query with:

# Call SparkSession.range directly instead of going through a SQL string
spark_session(sc) %>%
  invoke("range", as.integer(min_max[1]), as.integer(min_max[2]), 60L) %>% 
  sdf_register()
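A hypothetical completion of that workaround, renaming and formatting the raw id column the same way as the SQL version (SparkSession.range is also end-exclusive, hence the + 60):

ref <- spark_session(sc) %>%
  invoke("range", as.integer(min_max[1]), as.integer(min_max[2] + 60), 60L) %>%
  sdf_register() %>%
  mutate(timestamp = from_unixtime(id, "yyyy-MM-dd HH:mm:ss.SSS")) %>%
  select(timestamp)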
