Can unix_timestamp() return unix time in milliseconds in Apache Spark?
Problem description
I'm trying to get the unix time in milliseconds (13 digits) from a timestamp field, but currently it is returned in seconds (10 digits).
scala> var df = Seq("2017-01-18 11:00:00.000", "2017-01-18 11:00:00.123", "2017-01-18 11:00:00.882", "2017-01-18 11:00:02.432").toDF()
df: org.apache.spark.sql.DataFrame = [value: string]
scala> df = df.selectExpr("value timeString", "cast(value as timestamp) time")
df: org.apache.spark.sql.DataFrame = [timeString: string, time: timestamp]
scala> import org.apache.spark.sql.functions._
import org.apache.spark.sql.functions._
scala> df = df.withColumn("unix_time", unix_timestamp(df("time")))
df: org.apache.spark.sql.DataFrame = [timeString: string, time: timestamp ... 1 more field]
scala> df.take(4)
res63: Array[org.apache.spark.sql.Row] = Array(
[2017-01-18 11:00:00.000,2017-01-18 11:00:00.0,1484758800],
[2017-01-18 11:00:00.123,2017-01-18 11:00:00.123,1484758800],
[2017-01-18 11:00:00.882,2017-01-18 11:00:00.882,1484758800],
[2017-01-18 11:00:02.432,2017-01-18 11:00:02.432,1484758802])
Even though 2017-01-18 11:00:00.123 and 2017-01-18 11:00:00.000 are different, I get the same unix time back: 1484758800.
What am I missing?
Recommended answer
Implementing the approach from Dao Thi's answer:
import pyspark.sql.functions as F

# sample data: string timestamps with millisecond precision and a zone name
df = spark.createDataFrame([('22-Jul-2018 04:21:18.792 UTC',), ('23-Jul-2018 04:21:25.888 UTC',)], ['TIME'])
df.show(2, False)
df.printSchema()
Output:
+----------------------------+
|TIME |
+----------------------------+
|22-Jul-2018 04:21:18.792 UTC|
|23-Jul-2018 04:21:25.888 UTC|
+----------------------------+
root
|-- TIME: string (nullable = true)
Convert the string time format (including milliseconds) to a unix_timestamp (double). Since unix_timestamp() itself drops the milliseconds, extract them from the string with substring (start position = -7, substring length = 3), cast that substring to float, and add it to the unix_timestamp value separately.
df1 = df.withColumn(
    'unix_timestamp',
    F.unix_timestamp(df.TIME, 'dd-MMM-yyyy HH:mm:ss.SSS z')
    + F.substring(df.TIME, -7, 3).cast('float') / 1000
)
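As a quick sanity check (reusing the df from above; the alias name is just for illustration): the negative start position makes substring count from the end of the string, so positions -7 to -5 of '22-Jul-2018 04:21:18.792 UTC' are exactly the three millisecond digits.

df.select(F.substring(df.TIME, -7, 3).alias('millis_str')).show()
# -> 792 and 888 for the two sample rows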
Convert the unix_timestamp (double) to the timestamp datatype in Spark.
df2 = df1.withColumn("TimestampType",F.to_timestamp(df1["unix_timestamp"]))
df2.show(n=2,truncate=False)
This gives you the following output:
+----------------------------+----------------+-----------------------+
|TIME |unix_timestamp |TimestampType |
+----------------------------+----------------+-----------------------+
|22-Jul-2018 04:21:18.792 UTC|1.532233278792E9|2018-07-22 04:21:18.792|
|23-Jul-2018 04:21:25.888 UTC|1.532319685888E9|2018-07-23 04:21:25.888|
+----------------------------+----------------+-----------------------+
Check the schema:
df2.printSchema()
root
|-- TIME: string (nullable = true)
|-- unix_timestamp: double (nullable = true)
|-- TimestampType: timestamp (nullable = true)
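As an aside, on Spark 3.1 or newer (an assumption; the answer above does not depend on it), the built-in unix_millis function returns the 13-digit epoch value directly from a timestamp column, with no substring arithmetic. A minimal sketch building on df2:

# assumes Spark >= 3.1, where F.unix_millis is available:
# it converts a timestamp column to milliseconds since the epoch as a long
df3 = df2.withColumn('epoch_millis', F.unix_millis(df2['TimestampType']))
df3.select('TIME', 'epoch_millis').show(truncate=False)
# e.g. 1532233278792 for the first row (13 digits)

On older versions, rounding the double column times 1000 and casting to long gives the same value while guarding against float truncation: F.round(df2['unix_timestamp'] * 1000).cast('long').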