How to read .csv file using spark-shell
Question
I am using Spark standalone with Hadoop prebuilt. I was wondering which library I should import in order to read a .csv file.
I found one library on GitHub: https://github.com/tototoshi/scala-csv. But when I typed import com.github.tototoshi.csv._ as illustrated in the readme, it doesn't work. Should I do something else before importing it, maybe something like building it with sbt first? I tried building it with sbt and that doesn't work either (what I did was follow the steps in the last part of the readme: clone the code to my local computer, install sbt, and run ./sbt, but it doesn't work).
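For context, spark-shell can only import libraries that are on its classpath, so the jar (or its Maven coordinates) has to be handed to the shell at launch; building the library with sbt alone is not enough. A minimal sketch, assuming the scala-csv artifact published on Maven Central (the version and the Scala suffix below are assumptions and must match your Spark build):

spark-shell --packages com.github.tototoshi:scala-csv_2.10:1.2.2

// then, inside the shell, the import resolves:
import com.github.tototoshi.csv._
val reader = CSVReader.open(new java.io.File("foo.csv")) // foo.csv is a placeholder
val rows: List[List[String]] = reader.all()              // reads every row into memory
reader.close()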
Answer
Just enable the spark-csv package, e.g.
spark-shell --packages com.databricks:spark-csv_2.10:1.4.0
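The Scala suffix (2.10 here) must match the Scala version of your Spark build; distributions built against Scala 2.11 need the _2.11 artifact instead. On a machine without internet access, the same effect can be achieved by downloading the jars first and passing them explicitly (a sketch; the paths and the commons-csv version are assumptions):

spark-shell --jars /path/to/spark-csv_2.10-1.4.0.jar,/path/to/commons-csv-1.1.jar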
This will enable the csv format, e.g.
val df = sqlContext.read.format("csv").load("foo.csv")
and if you have a header:
val df = sqlContext.read.format("csv").option("header", "true").load("foo.csv")
See the GitHub repo for all options: https://github.com/databricks/spark-csv