Spark DataFrame Schema Nullable Fields


Question


I wrote the following code in both Scala and Python; however, the DataFrame that is returned doesn't appear to apply the non-nullable fields from the schema I am applying. italianVotes.csv is a CSV file with '~' as the separator and four fields. I'm using Spark 2.1.0.

2657~135~2~2013-11-22 00:00:00.0
2658~142~2~2013-11-22 00:00:00.0
2659~142~1~2013-11-22 00:00:00.0
2660~140~2~2013-11-22 00:00:00.0
2661~140~1~2013-11-22 00:00:00.0
2662~1354~2~2013-11-22 00:00:00.0
2663~1356~2~2013-11-22 00:00:00.0
2664~1353~2~2013-11-22 00:00:00.0
2665~1351~2~2013-11-22 00:00:00.0
2667~1357~2~2013-11-22 00:00:00.0

Scala

import org.apache.spark.sql.types._
val schema = StructType(
  StructField("id", IntegerType, false) ::
  StructField("postId", IntegerType, false) ::
  StructField("voteType", IntegerType, true) ::
  StructField("time", TimestampType, true) :: Nil)

val fileName = "italianVotes.csv"

val italianDF = spark.read.schema(schema).option("sep", "~").csv(fileName)

italianDF.printSchema()

// output
root
 |-- id: integer (nullable = true)
 |-- postId: integer (nullable = true)
 |-- voteType: integer (nullable = true)
 |-- time: timestamp (nullable = true)

Python

from pyspark.sql.types import *

schema = StructType([
    StructField("id", IntegerType(), False),
    StructField("postId", IntegerType(), False),
    StructField("voteType", IntegerType(), True),
    StructField("time", TimestampType(), True),
])

file_name = "italianVotes.csv"

italian_df = spark.read.csv(file_name, schema=schema, sep="~")

# print schema
italian_df.printSchema()

root
 |-- id: integer (nullable = true)
 |-- postId: integer (nullable = true)
 |-- voteType: integer (nullable = true)
 |-- time: timestamp (nullable = true)


My main question is: why are the first two fields nullable when I have set them to non-nullable in my schema?

Answer


In general, Spark Datasets either inherit the nullable property from their parents or infer it based on the external data types.


You can argue whether this is a good approach, but ultimately it is sensible. If the semantics of a data source don't support nullability constraints, then applying a schema cannot enforce them either. At the end of the day, it is always better to assume that things can be null than to fail at runtime when the opposite assumption turns out to be incorrect.

