Add extra hours to timestamp columns in Pyspark data frame
Problem description
I have a data frame in Pyspark. In this data frame I have a column of timestamp data type. Now I want to add an extra 2 hours to each row of the timestamp column without creating any new columns.
For example, here is the sample data:
df
id testing_time test_name
1 2017-03-12 03:19:58 Raising
2 2017-03-12 03:21:30 sleeping
3 2017-03-12 03:29:40 walking
4 2017-03-12 03:31:23 talking
5 2017-03-12 04:19:47 eating
6 2017-03-12 04:33:51 working
I want to have something like below.
df1
id testing_time test_name
1 2017-03-12 05:19:58 Raising
2 2017-03-12 05:21:30 sleeping
3 2017-03-12 05:29:40 walking
4 2017-03-12 05:31:23 talking
5 2017-03-12 06:19:47 eating
6 2017-03-12 06:33:51 working
How can I do that?
Recommended answer
One approach that doesn't require explicit casting and uses Spark interval literals (with arguable readability advantages):
import pyspark.sql.functions as F

df = df.withColumn('testing_time', df.testing_time + F.expr('INTERVAL 2 HOURS'))
df.show()
+---+-------------------+---------+
| id| testing_time|test_name|
+---+-------------------+---------+
| 1|2017-03-12 05:19:58| Raising|
| 2|2017-03-12 05:21:30| sleeping|
| 3|2017-03-12 05:29:40| walking|
| 4|2017-03-12 05:31:23| talking|
| 5|2017-03-12 06:19:47| eating|
| 6|2017-03-12 06:33:51| working|
+---+-------------------+---------+
Or, as a complete example:
import pyspark.sql.functions as F
from datetime import datetime
data = [
(1, datetime(2017, 3, 12, 3, 19, 58), 'Raising'),
(2, datetime(2017, 3, 12, 3, 21, 30), 'sleeping'),
(3, datetime(2017, 3, 12, 3, 29, 40), 'walking'),
(4, datetime(2017, 3, 12, 3, 31, 23), 'talking'),
(5, datetime(2017, 3, 12, 4, 19, 47), 'eating'),
(6, datetime(2017, 3, 12, 4, 33, 51), 'working'),
]
# On Spark 2.x+ use the SparkSession; on Spark 1.x, sqlContext.createDataFrame works the same way
df = spark.createDataFrame(data, ['id', 'testing_time', 'test_name'])
df = df.withColumn('testing_time', df.testing_time + F.expr('INTERVAL 2 HOURS'))
df.show()
+---+-------------------+---------+
| id| testing_time|test_name|
+---+-------------------+---------+
| 1|2017-03-12 05:19:58| Raising|
| 2|2017-03-12 05:21:30| sleeping|
| 3|2017-03-12 05:29:40| walking|
| 4|2017-03-12 05:31:23| talking|
| 5|2017-03-12 06:19:47| eating|
| 6|2017-03-12 06:33:51| working|
+---+-------------------+---------+
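The `INTERVAL 2 HOURS` literal performs the same arithmetic as plain `timedelta` addition on Python `datetime` objects; a minimal plain-Python sketch of the same two-hour shift (no Spark required), using a couple of the sample rows above:

```python
from datetime import datetime, timedelta

# Sample rows mirroring the data frame above: (id, testing_time, test_name)
rows = [
    (1, datetime(2017, 3, 12, 3, 19, 58), 'Raising'),
    (5, datetime(2017, 3, 12, 4, 19, 47), 'eating'),
]

# Shift every timestamp by two hours, just like + F.expr('INTERVAL 2 HOURS')
shifted = [(i, t + timedelta(hours=2), name) for i, t, name in rows]

for i, t, name in shifted:
    print(i, t, name)
# 1 2017-03-12 05:19:58 Raising
# 5 2017-03-12 06:19:47 eating
```

This also makes it easy to sanity-check the expected output of the Spark job on a few rows before running it on the full data frame.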