Convert spark Dataframe with schema to dataframe of json String


Problem Description

I have a Dataframe like this:

+--+--------+--------+----+-------------+------------------------------+
|id|name    |lastname|age |timestamp    |creditcards                   |
+--+--------+--------+----+-------------+------------------------------+
|1 |michel  |blanc   |35  |1496756626921|[[hr6,3569823], [ee3,1547869]]|
|2 |peter   |barns   |25  |1496756626551|[[ye8,4569872], [qe5,3485762]]|
+--+--------+--------+----+-------------+------------------------------+

where the schema of my df is like below:

root
 |-- id: string (nullable = true)
 |-- name: string (nullable = true)
 |-- lastname: string (nullable = true)
 |-- age: string (nullable = true)
 |-- timestamp: string (nullable = true)
 |-- creditcards: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- id: string (nullable = true)
 |    |    |-- number: string (nullable = true)

I would like to convert each row to a JSON string, given my schema. So this dataframe would have a single string column containing the JSON. The first row should look like this:

{
    "id":"1",
    "name":"michel",
    "lastname":"blanc",
    "age":"35",
    "timestamp":"1496756626921",
    "creditcards":[
        {
            "id":"hr6",
            "number":"3569823"
        },
        {
            "id":"ee3",
            "number":"1547869"  
        }
    ]
}

and the second row of the dataframe should look like this:

{
    "id":"2",
    "name":"peter",
    "lastname":"barns",
    "age":"25",
    "timestamp":"1496756626551",
    "creditcards":[
        {
            "id":"ye8",
            "number":"4569872"
        },
        {
            "id":"qe5",
            "number":"3485762"  
        }
    ]
}

My goal is not to write the dataframe to a JSON file. My goal is to convert df1 into a second dataframe df2, in order to push each JSON line of df2 to a Kafka topic. I have this code to create the dataframe:

    val line1 = """{"id":"1","name":"michel","lastname":"blanc","age":"35","timestamp":"1496756626921","creditcards":[{"id":"hr6","number":"3569823"},{"id":"ee3","number":"1547869"}]}"""
    val line2 = """{"id":"2","name":"peter","lastname":"barns","age":"25","timestamp":"1496756626551","creditcards":[{"id":"ye8","number":"4569872"}, {"id":"qe5","number":"3485762"}]}"""

    val rdd = sc.parallelize(Seq(line1, line2))
    val df = sqlContext.read.json(rdd)
    df.show(false)
    df.printSchema()

Do you have any idea how to do this?

Answer

If all you need is a single-column DataFrame/Dataset with each column value representing each row of the original DataFrame in JSON, you can simply apply toJSON to your DataFrame, as in the following:

df.show
// +---+------------------------------+---+--------+------+-------------+
// |age|creditcards                   |id |lastname|name  |timestamp    |
// +---+------------------------------+---+--------+------+-------------+
// |35 |[[hr6,3569823], [ee3,1547869]]|1  |blanc   |michel|1496756626921|
// |25 |[[ye8,4569872], [qe5,3485762]]|2  |barns   |peter |1496756626551|
// +---+------------------------------+---+--------+------+-------------+

val dsJson = df.toJSON
// dsJson: org.apache.spark.sql.Dataset[String] = [value: string]

dsJson.show
// +--------------------------------------------------------------------------+
// |value                                                                     |
// +--------------------------------------------------------------------------+
// |{"age":"35","creditcards":[{"id":"hr6","number":"3569823"},{"id":"ee3",...|
// |{"age":"25","creditcards":[{"id":"ye8","number":"4569872"},{"id":"qe5",...|
// +--------------------------------------------------------------------------+
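Since the stated goal is pushing each JSON line to a Kafka topic, the single-column Dataset produced by toJSON can be handed directly to Spark's Kafka sink, which reads from a column named value. A minimal sketch, assuming the spark-sql-kafka-0-10 package is on the classpath; the broker address and topic name below are placeholders, not from the original question:

```scala
// Sketch: write each JSON string from dsJson to a Kafka topic.
// The Kafka sink expects a string column named "value", which is
// exactly what Dataset[String].toDF("value") provides.
dsJson.toDF("value")
  .write
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // assumed broker
  .option("topic", "my-topic")                         // assumed topic
  .save()
```

Each row of dsJson then becomes one Kafka record whose value is the JSON string.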

[UPDATE]

To add name as an additional column, you can extract it from the JSON column using from_json:

val result = dsJson.withColumn("name", from_json($"value", df.schema)("name"))

result.show
// +--------------------+------+
// |               value|  name|
// +--------------------+------+
// |{"age":"35","cred...|michel|
// |{"age":"25","cred...| peter|
// +--------------------+------+
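Instead of round-tripping through from_json, the JSON column can also be built directly with to_json over a struct of all columns, keeping name alongside it in one pass. A sketch over the same df (result2 is a name introduced here for illustration):

```scala
import org.apache.spark.sql.functions.{col, struct, to_json}

// Serialize every column of df into a single JSON string column,
// while selecting the original name column for downstream use.
val result2 = df.select(
  to_json(struct(df.columns.map(col): _*)).as("value"),
  col("name")
)
```

This avoids parsing the JSON back just to recover a field that was already available in df.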

