How to read parquet data from S3 into a Spark DataFrame in Python?
Question
I am new to Spark and I am not able to find this... I have a lot of parquet files uploaded into s3
at location:
s3://a-dps/d-l/sco/alpha/20160930/parquet/
The total size of this folder is 20+ GB. How do I chunk and read this into a DataFrame?
How do I load all these files into a DataFrame?
Allocated memory to the Spark cluster is 6 GB.
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark import SparkConf
from pyspark.sql import SparkSession
import pandas
# SparkConf().set("spark.jars.packages","org.apache.hadoop:hadoop-aws:3.0.0-alpha3")
sc = SparkContext.getOrCreate()
sc._jsc.hadoopConfiguration().set("fs.s3.awsAccessKeyId", 'A')
sc._jsc.hadoopConfiguration().set("fs.s3.awsSecretAccessKey", 's')
sqlContext = SQLContext(sc)
df2 = sqlContext.read.parquet("s3://sm/data/scor/alpha/2016/parquet/*")
Error:
Py4JJavaError: An error occurred while calling o33.parquet.
: java.io.IOException: No FileSystem for scheme: s3
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:372)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$14.apply(DataSource.scala:370)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.immutable.List.flatMap(List.scala:344)
Answer
The file scheme (s3) that you are using is not correct. You'll need to use the s3n scheme, or s3a (for bigger S3 objects):
// use sqlContext instead for spark <2
val df = spark.read
.load("s3n://bucket-name/object-path")
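The same fix applies in PySpark: hand read.parquet an s3a:// (or s3n://) path instead of s3://. As a minimal sketch, a small helper (the name to_s3a is ours, not a Spark API) can normalize the scheme before reading:

```python
def to_s3a(uri: str) -> str:
    """Rewrite a plain s3:// URI to s3a://; other URIs pass through.

    Hadoop builds that lack a FileSystem for the bare "s3" scheme
    (the cause of the "No FileSystem for scheme: s3" error above)
    usually do ship one for s3a.
    """
    prefix = "s3://"
    if uri.startswith(prefix):
        return "s3a://" + uri[len(prefix):]
    return uri
```

You could then call, for example, sqlContext.read.parquet(to_s3a("s3://a-dps/d-l/sco/alpha/20160930/parquet/")) — assuming the hadoop-aws package and its AWS SDK dependency are on the classpath, and that the credentials are set via the fs.s3a.access.key / fs.s3a.secret.key Hadoop properties rather than the fs.s3.* keys used in the question.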
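On the chunking part of the question: Spark reads parquet partition by partition, so the full 20 GB never has to sit in the 6 GB of cluster memory at once. If you still want to process the files in explicit chunks, a hedged sketch is to batch the object keys yourself (the key list here is illustrative; in practice it would come from an S3 listing, e.g. via boto3):

```python
def batches(keys, size):
    """Yield consecutive slices of `keys`, each at most `size` items long."""
    for i in range(0, len(keys), size):
        yield keys[i:i + size]
```

Each batch could then be read with spark.read.parquet(*batch) and processed or written out before moving on to the next one.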