Spark + Amazon S3 "s3a://" URLs


Problem description


AFAIK, the newest, best S3 implementation for Hadoop + Spark is invoked by using the "s3a://" URL protocol. This works great on pre-configured Amazon EMR.
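
For reference, a minimal PySpark sketch of what such a read looks like (the bucket and path are hypothetical; on EMR the s3a:// scheme resolves out of the box):

from pyspark.sql import SparkSession

# Create a session; on a properly configured cluster the s3a:// scheme
# already maps to org.apache.hadoop.fs.s3a.S3AFileSystem.
spark = SparkSession.builder.appName("s3a-demo").getOrCreate()

# Read a text file through the s3a:// scheme (hypothetical path).
df = spark.read.text("s3a://my-bucket/path/to/input.txt")
df.show(5)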


However, when running on a local dev system using the pre-built spark-2.0.0-bin-hadoop2.7.tgz, I get

Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
    ... 99 more
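
This ClassNotFoundException means Hadoop cannot load the class that the "s3a://" scheme maps to: org.apache.hadoop.fs.s3a.S3AFileSystem lives in the separate hadoop-aws artifact, which the pre-built Spark tarball does not bundle. The mapping itself looks like the following sketch (assuming a live SparkSession named spark; this key is normally already the default, so the real fix is putting hadoop-aws on the classpath, not setting the key):

# The s3a:// scheme is resolved to a FileSystem class via this Hadoop
# configuration key; the lookup fails when hadoop-aws is not on the classpath.
spark.sparkContext._jsc.hadoopConfiguration().set(
    "fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")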


Next I tried to launch my Spark job specifying the hadoop-aws add-on:

$SPARK_HOME/bin/spark-submit --master local \
    --packages org.apache.hadoop:hadoop-aws:2.7.3 \
    my_spark_program.py

I get:

    ::::::::::::::::::::::::::::::::::::::::::::::
    ::              FAILED DOWNLOADS            ::
    :: ^ see resolution messages for details  ^ ::
    ::::::::::::::::::::::::::::::::::::::::::::::
    :: com.google.code.findbugs#jsr305;3.0.0!jsr305.jar
    :: org.apache.avro#avro;1.7.4!avro.jar
    :: org.xerial.snappy#snappy-java;1.0.4.1!snappy-java.jar(bundle)
    ::::::::::::::::::::::::::::::::::::::::::::::


I made a dummy build.sbt project in a temp directory with those three dependencies, to see if a basic sbt build could successfully download them (a reconstruction of that dummy build.sbt follows the error output), and I got:

[error] (*:update) sbt.ResolveException: unresolved dependency: org.apache.avro#avro;1.7.4: several problems occurred while resolving dependency: org.apache.avro#avro;1.7.4 {compile=[default(compile)]}:
[error]     org.apache.avro#avro;1.7.4!avro.pom(pom.original) origin location must be absolute: file:/Users/username/.m2/repository/org/apache/avro/avro/1.7.4/avro-1.7.4.pom
[error]     org.apache.avro#avro;1.7.4!avro.pom(pom.original) origin location must be absolute: file:/Users/username/.m2/repository/org/apache/avro/avro/1.7.4/avro-1.7.4.pom
[error] 
[error] unresolved dependency: com.google.code.findbugs#jsr305;3.0.0: several problems occurred while resolving dependency: com.google.code.findbugs#jsr305;3.0.0 {compile=[default(compile)]}:
[error]     com.google.code.findbugs#jsr305;3.0.0!jsr305.pom(pom.original) origin location must be absolute: file:/Users/username/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.pom
[error]     com.google.code.findbugs#jsr305;3.0.0!jsr305.pom(pom.original) origin location must be absolute: file:/Users/username/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.pom
[error] 
[error] unresolved dependency: org.xerial.snappy#snappy-java;1.0.4.1: several problems occurred while resolving dependency: org.xerial.snappy#snappy-java;1.0.4.1 {compile=[default(compile)]}:
[error]     org.xerial.snappy#snappy-java;1.0.4.1!snappy-java.pom(pom.original) origin location must be absolute: file:/Users/username/.m2/repository/org/xerial/snappy/snappy-java/1.0.4.1/snappy-java-1.0.4.1.pom
[error]     org.xerial.snappy#snappy-java;1.0.4.1!snappy-java.pom(pom.original) origin location must be absolute: file:/Users/username/.m2/repository/org/xerial/snappy/snappy-java/1.0.4.1/snappy-java-1.0.4.1.pom
[error] Total time: 2 s, completed Sep 2, 2016 6:47:17 PM
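
The dummy project was nothing more than the three failing dependencies; a hypothetical reconstruction of that build.sbt (project name and Scala version are assumptions, not the original file):

// Hypothetical minimal build.sbt; versions match the failed downloads above.
name := "dep-test"
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "com.google.code.findbugs" % "jsr305" % "3.0.0",
  "org.apache.avro" % "avro" % "1.7.4",
  "org.xerial.snappy" % "snappy-java" % "1.0.4.1"
)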


Any ideas on how I can get this working?

Recommended answer


It looks like you need additional jars in your submit flag. The Maven repository has a number of AWS packages for Java which you can use to fix your current error: https://mvnrepository.com/search?q=aws


I kept running into headaches with S3A filesystem errors, but the aws-java-sdk:1.7.4 jar works with Spark 2.0.


Further discussion of the matter can be found here, albeit there is indeed an actual package for this in the Maven repository:

https://sparkour.urizone.net/recipes/using-s3/

Try this:

spark-submit --packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.3 my_spark_program.py
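
With both packages resolved, s3a:// URLs should work. Credentials can come from the usual AWS environment variables or an instance profile; they can also be set explicitly on the Hadoop configuration, as in this sketch (key values and path are placeholders; _jsc is PySpark's internal handle to the JVM-side configuration):

# Set S3A credentials explicitly (placeholder values, not real keys).
hconf = spark.sparkContext._jsc.hadoopConfiguration()
hconf.set("fs.s3a.access.key", "YOUR_ACCESS_KEY")
hconf.set("fs.s3a.secret.key", "YOUR_SECRET_KEY")

# Then s3a:// paths resolve normally (hypothetical path).
df = spark.read.text("s3a://my-bucket/some/file.txt")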
