Add a new column in DataFrame based on an existing column
Question
I have a csv file with a datetime column: "2011-05-02T04:52:09+00:00".

I am using Scala; the file is loaded into a Spark DataFrame, and I can use Joda-Time to parse the date:
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
val df = sqlContext.load("com.databricks.spark.csv", Map("path" -> "data.csv", "header" -> "true"))
// "MM" is month and "HH" is hour of day; lowercase "mm" would mean minutes
val d = org.joda.time.format.DateTimeFormat.forPattern("yyyy-MM-dd'T'HH:mm:ssZ")
I would like to create new columns based on the datetime field for time-series analysis.

In a DataFrame, how do I create a column based on the value of another column?

I notice DataFrame has the function df.withColumn("dt", column); is there a way to create a column based on the value of an existing column?
Thanks
Answer
import org.apache.spark.sql.types.DateType
import org.apache.spark.sql.functions._
import org.joda.time.DateTime
import org.joda.time.format.DateTimeFormat
import java.sql.Date

val d = DateTimeFormat.forPattern("yyyy-MM-dd'T'HH:mm:ssZ")
// The function must return java.sql.Date to match Spark's DateType
val dtFunc: (String => Date) = (arg1: String) => new Date(DateTime.parse(arg1, d).getMillis)
val x = df.withColumn("dt", callUDF(dtFunc, DateType, col("dt_string")))
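One detail worth checking is the format pattern itself: in both Joda-Time and java.time, "MM" means month while lowercase "mm" means minutes, and "HH" is the hour of day, so a pattern like "yyyy-mm-dd" would silently parse minutes where the month should be. A minimal, Spark-free sketch of parsing the sample timestamp (using java.time here instead of Joda-Time so the snippet is self-contained):

```scala
import java.time.OffsetDateTime
import java.time.format.DateTimeFormatter

// Sample value from the csv: ISO-8601 with a zone offset
val raw = "2011-05-02T04:52:09+00:00"

// "MM" = month, "HH" = hour of day, "mm" = minutes, "XXX" = offset written as +00:00
val fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssXXX")
val parsed = OffsetDateTime.parse(raw, fmt)

println(parsed.toLocalDate)  // 2011-05-02
```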
callUDF and col are included in functions, as the imports show.
The dt_string inside col("dt_string") is the original column name of your df, i.e. the column you want to transform from.
Alternatively, you could replace the last statement with:
val dtFunc2 = udf(dtFunc)
val x = df.withColumn("dt", dtFunc2(col("dt_string")))
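As a possible alternative (a sketch, assuming a Spark version where unix_timestamp is available as a built-in, which it has been since Spark 1.5), the same conversion can be done without Joda-Time or a UDF at all, using Spark's own functions, which generally lets the optimizer do more work:

```scala
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.DateType

// "XXX" matches offsets written as +00:00; dt_string is the source column from the example above
val x = df.withColumn(
  "dt",
  unix_timestamp(col("dt_string"), "yyyy-MM-dd'T'HH:mm:ssXXX")
    .cast("timestamp")
    .cast(DateType)
)
```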