Create Spark DataFrame. Can not infer schema for type: &lt;type 'float'&gt;
Question
Could someone help me solve this problem I have with Spark DataFrame?

When I do myFloatRDD.toDF() I get an error:
TypeError: Can not infer schema for type: &lt;type 'float'&gt;

I don't understand why...
Example:
myFloatRdd = sc.parallelize([1.0,2.0,3.0])
df = myFloatRdd.toDF()
Thanks
Answer
SparkSession.createDataFrame, which is used under the hood, requires an RDD / list of Row, tuple, list or dict*, or a pandas.DataFrame, unless a schema with a DataType is provided. Try to convert the float to a tuple like this:
myFloatRdd.map(lambda x: (x, )).toDF()
Or even better:
from pyspark.sql import Row
row = Row("val") # Or some other column name
myFloatRdd.map(row).toDF()
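The wrapping step itself is ordinary Python: each scalar becomes a one-field record before Spark infers a schema from it. A minimal sketch of that per-element transformation, with no Spark required (the sample values are illustrative):

```python
# Element-wise, myFloatRdd.map(lambda x: (x,)) turns each bare float
# into a one-field tuple, which Spark can then treat as a row.
floats = [1.0, 2.0, 3.0]
rows = [(x,) for x in floats]
print(rows)  # [(1.0,), (2.0,), (3.0,)]
```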
To create a DataFrame from a list of scalars you'll have to use SparkSession.createDataFrame directly and provide a schema***:
from pyspark.sql.types import FloatType
df = spark.createDataFrame([1.0, 2.0, 3.0], FloatType())
df.show()
## +-----+
## |value|
## +-----+
## | 1.0|
## | 2.0|
## | 3.0|
## +-----+
but for a simple range it would be better to use SparkSession.range:
from pyspark.sql.functions import col
spark.range(1, 4).select(col("id").cast("double"))
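Conceptually, spark.range(1, 4) produces long ids 1, 2 and 3 (the end is exclusive), and cast("double") converts each id to a double. The same per-element logic in plain Python, as a sketch rather than actual Spark execution:

```python
# spark.range(1, 4) yields ids 1, 2, 3 (end is exclusive);
# cast("double") corresponds to converting each long to a float.
ids = list(range(1, 4))
doubles = [float(i) for i in ids]
print(doubles)  # [1.0, 2.0, 3.0]
```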
---
* No longer supported.
** Spark SQL also provides limited support for schema inference on Python objects exposing __dict__.
*** Supported only in Spark 2.0 or later.