Spark Dataframe validating column names for parquet writes


Problem description


I'm processing events using DataFrames converted from a stream of JSON events, and the result is eventually written out in Parquet format.

However, some of the JSON events contain spaces in their keys. I want to log such events and filter/drop them from the DataFrame before converting it to Parquet, because the characters ",;{}()\n\t=" and space are treated as special characters in the Parquet schema (CatalystSchemaConverter), as listed in [1] below, and are therefore not allowed in column names.
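
For illustration, a minimal repro of the failure (the column name and data here are hypothetical): in Spark versions that enforce this check, writing a DataFrame whose column name contains a space fails with the AnalysisException produced by the check in [1].

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical DataFrame whose second column name contains a space
df = spark.createDataFrame([(1, "a")], ["id", "event type"])

# In Spark versions enforcing CatalystSchemaConverter's check, this raises:
#   AnalysisException: Attribute name "event type" contains invalid
#   character(s) among " ,;{}()\n\t=". Please use alias to rename it.
df.write.parquet("/tmp/out.parquet")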

How can I validate the column names in the DataFrame and drop such events altogether, without failing the Spark Streaming job? One possible check is sketched after the reference below.

[1] Spark's CatalystSchemaConverter

def checkFieldName(name: String): Unit = {
  // ,;{}()\n\t= and space are special characters in Parquet schema
  checkConversionRequirement(
    !name.matches(".*[ ,;{}()\n\t=].*"),
    s"""Attribute name "$name" contains invalid character(s) among " ,;{}()\\n\\t=".
       |Please use alias to rename it.
     """.stripMargin.split("\n").mkString(" ").trim)
}
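
A minimal sketch of such an up-front check in pyspark, assuming a DataFrame df is already in scope; the regex mirrors the character class in [1], and dropping the offending columns is just one way to discard those event fields:

import re

# Same character class that Spark's CatalystSchemaConverter rejects, per [1]
INVALID_CHARS = re.compile(r"[ ,;{}()\n\t=]")

def invalid_columns(df):
    """Return the column names that Parquet would reject."""
    return [c for c in df.columns if INVALID_CHARS.search(c)]

bad = invalid_columns(df)
if bad:
    # Log and drop the offending columns instead of failing the job
    print("Dropping columns with invalid Parquet names: %s" % bad)
    df = df.drop(*bad)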

Solution

For everyone experiencing this in pyspark: this even happened to me after renaming the columns. One way I could get it to work, after some iterations, is this:

file = "/opt/myfile.parquet"
df = spark.read.parquet(file)
for c in df.columns:
    df = df.withColumnRenamed(c, c.replace(" ", ""))

df = spark.read.schema(df.schema).parquet(file)
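
Note that the snippet above only removes spaces. A sketch of a more general cleanup, replacing every character from the special set in [1] with an underscore (the replacement character is an arbitrary choice, not something the original answer prescribes):

import re

def sanitize(name):
    """Replace every Parquet-special character (per [1]) with an underscore."""
    return re.sub(r"[ ,;{}()\n\t=]", "_", name)

for c in df.columns:
    df = df.withColumnRenamed(c, sanitize(c))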
