Apply a logic for a particular column in a dataframe in Spark


Question

I have a DataFrame that was imported from MySQL:

dataframe_mysql.show()
+----+---------+-------------------------------------------------------+
|  id|accountid|                                                xmldata|
+----+---------+-------------------------------------------------------+
|1001|    12346|<AccountSetup xmlns:xsi="test"><Customers test="test...|
|1002|    12346|<AccountSetup xmlns:xsi="test"><Customers test="test...|
|1003|    12346|<AccountSetup xmlns:xsi="test"><Customers test="test...|
|1004|    12347|<AccountSetup xmlns:xsi="test"><Customers test="test...|
+----+---------+-------------------------------------------------------+

The xmldata column contains XML tags, and I need to parse them into structured data in a separate DataFrame.

Previously I had the XML alone in a text file, and loaded it into a Spark DataFrame using "com.databricks.spark.xml":

 spark-shell --packages com.databricks:spark-xml_2.10:0.4.1,com.databricks:spark-csv_2.10:1.5.0

 val sqlContext = new org.apache.spark.sql.SQLContext(sc)

 val df = sqlContext.read.format("com.databricks.spark.xml")
 .option("rowTag","Account").load("mypath/Account.xml")

The final structured output:

df.show()

 +----------+--------------------+--------------------+--------------+--------------------+-------+....
 |   AcctNbr|         AddlParties|           Addresses|ApplicationInd|       Beneficiaries|ClassCd|....
 +----------+--------------------+--------------------+--------------+--------------------+-------+....
 |AAAAAAAAAA|[[Securities Amer...|[WrappedArray([D,...|             T|[WrappedArray([11...|     35|....
 +----------+--------------------+--------------------+--------------+--------------------+-------+....

Please advise how to achieve this when the XML content is inside a DataFrame.

Answer

Since you are trying to pull the XML data column out into a separate DataFrame, you can still use the spark-xml package; you just need to use its reader directly.

import com.databricks.spark.xml.XmlReader

// Sample data mirroring the rows imported from MySQL
case class Data(id: Int, accountid: Int, xmldata: String)

import spark.implicits._  // needed for .toDF and the typed .map below

val df = Seq(
    Data(1001, 12345, "<AccountSetup xmlns:xsi=\"test\"><Customers test=\"a\">d</Customers></AccountSetup>"),
    Data(1002, 12345, "<AccountSetup xmlns:xsi=\"test\"><Customers test=\"b\">e</Customers></AccountSetup>"),
    Data(1003, 12345, "<AccountSetup xmlns:xsi=\"test\"><Customers test=\"c\">f</Customers></AccountSetup>")
).toDF

// Configure the reader; withRowTag sets the option on the reader in place
val reader = new XmlReader()
reader.withRowTag("AccountSetup")

// Extract the XML strings as an RDD[String] and hand them to the reader
val rdd = df.select("xmldata").map(r => r.getString(0)).rdd
val xmlDF = reader.xmlRdd(spark.sqlContext, rdd)

However, a UDF with custom XML parsing, as philantrovert suggests, would probably be cleaner in the long run. Reference link for the reader class here.
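As a minimal sketch of that UDF approach, the per-row parsing can be done with Scala's standard `scala.xml` library; the helper name `parseCustomers` and the returned `(attribute, text)` pair are illustrative assumptions, not from the original post:

```scala
import scala.xml.XML

// Hypothetical helper: pull the "test" attribute and the text body out of
// the <Customers> element inside one xmldata string.
def parseCustomers(xmldata: String): (String, String) = {
  val root = XML.loadString(xmldata)
  val customers = (root \ "Customers").head
  ((customers \ "@test").text, customers.text)
}

// To apply it column-wise in Spark (assumes a SparkSession named `spark`):
// import org.apache.spark.sql.functions.udf
// import spark.implicits._
// val parseCustomersUdf = udf(parseCustomers _)
// val parsed = df.withColumn("customer", parseCustomersUdf($"xmldata"))
```

Wrapping the plain function with `udf` keeps the XML parsing testable outside Spark, while `withColumn` applies it row by row to the xmldata column.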
