Spark Dataframe validating column names for parquet writes


Problem Description

I'm processing events using Dataframes converted from a stream of JSON events, which is eventually written out in Parquet format.

However, some of the JSON events contain spaces in their keys. I want to log such events and filter/drop them from the data frame before converting it to Parquet, because ;{}()\n\t= and space are treated as special characters in the Parquet schema (CatalystSchemaConverter), as listed in [1] below, and thus are not allowed in column names.

How can I perform such validation on the column names in a Dataframe and drop such events altogether, without erroring out the Spark Streaming job?

[1] Spark's CatalystSchemaConverter

def checkFieldName(name: String): Unit = {
  // ,;{}()\n\t= and space are special characters in Parquet schema
  checkConversionRequirement(
    !name.matches(".*[ ,;{}()\n\t=].*"),
    s"""Attribute name "$name" contains invalid character(s) among " ,;{}()\\n\\t=".
       |Please use alias to rename it.
     """.stripMargin.split("\n").mkString(" ").trim
  )
}
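
For reference, the same check can be reproduced in PySpark to log which column names of a batch would trip it before attempting the Parquet write. This is only a minimal sketch: the regex mirrors the character class quoted above, while the example DataFrame, the function name, and the choice to merely print the offending names are placeholders, not part of the original question.

import re
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Character class rejected by CatalystSchemaConverter: space plus ,;{}()\n\t=
INVALID = re.compile(r"[ ,;{}()\n\t=]")

def invalid_parquet_columns(df):
    """Return the column names that the Parquet writer would reject."""
    return [c for c in df.columns if INVALID.search(c)]

# Placeholder DataFrame standing in for one batch of JSON events;
# the key "event id" contains a space and would fail checkFieldName.
df = spark.createDataFrame([(1, "click")], ["event id", "type"])

bad = invalid_parquet_columns(df)
if bad:
    print("Columns with Parquet-invalid names:", bad)  # or send to a proper logger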

Recommended Answer

For everyone experiencing this in pyspark: this even happened to me after renaming the columns. One way I could get it to work after some iterations is this:

file = "/opt/myfile.parquet"

# Read the file whose column names contain spaces
df = spark.read.parquet(file)

# Strip the spaces from every column name (this only changes the schema)
for c in df.columns:
    df = df.withColumnRenamed(c, c.replace(" ", ""))

# Re-read the same file, applying the sanitized schema to it
df = spark.read.schema(df.schema).parquet(file)
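
To keep the streaming job from the question running, the same rename can be applied to each batch before it is written, so that column names with spaces never reach the Parquet writer. Below is a minimal sketch under that assumption; the input DataFrame, the output path, and the choice to replace invalid characters with underscores are placeholders rather than part of the original answer.

import re
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

INVALID = re.compile(r"[ ,;{}()\n\t=]")

def sanitize_for_parquet(df):
    """Rename any column whose name Parquet would reject,
    replacing the offending characters with underscores."""
    for c in df.columns:
        if INVALID.search(c):
            df = df.withColumnRenamed(c, INVALID.sub("_", c))
    return df

# Placeholder batch; in the question this would come from the JSON event stream.
batch = spark.createDataFrame([(1, "click", "web")], ["id", "event type", "source"])
sanitize_for_parquet(batch).write.mode("append").parquet("/tmp/events_clean")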

