Loading data from GCS using Spark Local


Problem description

I am trying to read data from GCS buckets on my local machine, for testing purposes. I would like to sample some of the data in the cloud. I have downloaded the GCS Hadoop Connector JAR and set up the SparkConf as follows:

from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession

conf = SparkConf() \
    .setMaster("local[8]") \
    .setAppName("Test") \
    .set("spark.jars", "path/gcs-connector-hadoop2-latest.jar") \
    .set("spark.hadoop.google.cloud.auth.service.account.enable", "true") \
    .set("spark.hadoop.google.cloud.auth.service.account.json.keyfile", "path/to/keyfile")

sc = SparkContext(conf=conf)

spark = SparkSession.builder \
    .config(conf=sc.getConf()) \
    .getOrCreate()

spark.read.json("gs://gcs-bucket")

I have also tried to set the conf like so:

sc._jsc.hadoopConfiguration().set("fs.AbstractFileSystem.gs.impl",  "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS")
sc._jsc.hadoopConfiguration().set("fs.gs.auth.service.account.json.keyfile", "path/to/keyfile")
sc._jsc.hadoopConfiguration().set("fs.gs.auth.service.account.enable", "true")

I am using PySpark installed via pip and running the code with the unit test module in IntelliJ. The read fails with:

py4j.protocol.Py4JJavaError: An error occurred while calling o128.json.
: java.io.IOException: No FileSystem for scheme: gs

What should I do?

Thanks!

Recommended answer

To solve this issue, you need to add configuration for the fs.gs.impl property, in addition to the properties you have already configured:

sc._jsc.hadoopConfiguration().set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
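
For reference, here is a minimal sketch of the original setup with the fix folded in, supplying the same Hadoop properties through SparkConf via the spark.hadoop. prefix instead of sc._jsc.hadoopConfiguration(); the JAR and keyfile paths are the placeholders from the question and must be replaced with real ones:

from pyspark import SparkConf
from pyspark.sql import SparkSession

# Placeholders from the question: point these at the real connector JAR and key file.
conf = SparkConf() \
    .setMaster("local[8]") \
    .setAppName("Test") \
    .set("spark.jars", "path/gcs-connector-hadoop2-latest.jar") \
    .set("spark.hadoop.fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem") \
    .set("spark.hadoop.fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS") \
    .set("spark.hadoop.google.cloud.auth.service.account.enable", "true") \
    .set("spark.hadoop.google.cloud.auth.service.account.json.keyfile", "path/to/keyfile")

spark = SparkSession.builder.config(conf=conf).getOrCreate()

# With fs.gs.impl registered, the gs:// scheme resolves to the GCS connector.
df = spark.read.json("gs://gcs-bucket")

Passing the properties with the spark.hadoop. prefix at SparkConf time has the same effect as setting them on the Hadoop configuration afterwards, and avoids reaching into the private _jsc handle.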

