How to import multiple csv files in a single load?
Question
Consider I have a defined schema for loading 10 CSV files in a folder. Is there a way to automatically load them using Spark SQL? I know this can be done with an individual dataframe for each file (given below), but can it be automated with a single command? Rather than pointing to a file, can I point to a folder?
df = sqlContext.read \
    .format("com.databricks.spark.csv") \
    .option("header", "true") \
    .load("../Downloads/2008.csv")
Answer
Use a wildcard, e.g. replace 2008 with *:
df = sqlContext.read \
    .format("com.databricks.spark.csv") \
    .option("header", "true") \
    .load("../Downloads/*.csv")  # <-- note the star (*)
Spark 2.0
# these lines are equivalent in Spark 2.0
spark.read.format("csv").option("header", "true").load("../Downloads/*.csv")
spark.read.option("header", "true").csv("../Downloads/*.csv")
Notes:

- Use format("csv") or the csv method instead of format("com.databricks.spark.csv"); the com.databricks.spark.csv format has been integrated into Spark 2.0.
- Use spark instead of sqlContext.
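The wildcard in the load path is expanded by glob matching, so *.csv picks up every CSV file in the folder while skipping other files. As a rough, Spark-free illustration of which files such a pattern matches, here is a minimal sketch using Python's standard glob module (the scratch directory and file names are invented for the example):

```python
import glob
import os
import tempfile

# Create a scratch folder with a few CSV files plus one non-CSV file.
tmpdir = tempfile.mkdtemp()
for name in ["2008.csv", "2009.csv", "2010.csv", "readme.txt"]:
    with open(os.path.join(tmpdir, name), "w") as f:
        f.write("col1,col2\n1,2\n")

# The same kind of pattern you would hand to load()/csv():
# only the .csv files match, readme.txt is excluded.
matched = sorted(glob.glob(os.path.join(tmpdir, "*.csv")))
print([os.path.basename(p) for p in matched])
# -> ['2008.csv', '2009.csv', '2010.csv']
```

If you need explicit control over the file list rather than a pattern, PySpark's DataFrameReader.csv also accepts a list of paths, so the output of a call like glob.glob above can be passed to it directly.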