Why Zeppelin notebook is not able to connect to S3


Question


I have installed Zeppelin on my AWS EC2 machine to connect to my Spark cluster.

Spark version:
standalone: spark-1.2.1-bin-hadoop1.tgz


I am able to connect to the Spark cluster, but I get the following error when trying to access a file in S3 in my use case.

Code:

    sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "YOUR_KEY_ID")
    sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey","YOUR_SEC_KEY")
    val file = "s3n://<bucket>/<key>"
    val data = sc.textFile(file)
    data.count


file: String = s3n://<bucket>/<key>
data: org.apache.spark.rdd.RDD[String] = s3n://<bucket>/<key> MappedRDD[1] at textFile at <console>:21
java.lang.NoSuchMethodError: org.jets3t.service.impl.rest.httpclient.RestS3Service.<init>(Lorg/jets3t/service/security/AWSCredentials;)V
    at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.initialize(Jets3tNativeFileSystemStore.java:55)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
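For context (my reading, not stated in the question): a `NoSuchMethodError` like this typically means the jets3t jar found on the classpath does not match the one Hadoop's s3n code was compiled against, since the `RestS3Service` constructor signature changed between jets3t releases. A quick way to see which jets3t jars are being picked up (the install paths and environment variables below are assumptions, adjust to your machine):

```shell
# Hedged sketch: list jets3t jars under the assumed Spark/Zeppelin install
# directories; duplicate or mismatched versions here point to the conflict.
find "${SPARK_HOME:-/opt/spark}" "${ZEPPELIN_HOME:-/opt/zeppelin}" \
  -name 'jets3t*.jar' 2>/dev/null || true
```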


I have built Zeppelin with the following command:

mvn clean package -Pspark-1.2.1 -Dhadoop.version=1.0.4 -DskipTests


When I try to build with the hadoop profile "-Phadoop-1.0.4", it gives a warning that the profile doesn't exist.


I have also tried -Phadoop-1, mentioned on the Spark website for Hadoop "1.x to 2.1.x", but got the same error.


Please let me know what I am missing here.

Answer


The following installation worked for me (I also spent many days on this problem):



Spark 1.3.1, prebuilt for Hadoop 2.3, set up on an EC2 cluster.

git的克隆 https://github.com/apache/incubator-zeppelin.git (日期:2015年7月25日)

git clone https://github.com/apache/incubator-zeppelin.git (date: 25.07.2015)


Installed Zeppelin via the following command (per the instructions on https://github.com/apache/incubator-zeppelin):

mvn clean package -Pspark-1.3 -Dhadoop.version=2.3.0 -Phadoop-2.3 -DskipTests


Changed the port to 8082 via "conf/zeppelin-site.xml" (Spark uses port 8080).
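The port change in conf/zeppelin-site.xml can be sketched as follows (an assumption on my part: the file is created by copying conf/zeppelin-site.xml.template, and zeppelin.server.port is the relevant property):

```xml
<!-- Sketch of the relevant property in conf/zeppelin-site.xml -->
<property>
  <name>zeppelin.server.port</name>
  <value>8082</value>
</property>
```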


After these installation steps, my notebook worked with S3 files:

    sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "xxx")
    sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey","xxx")
    val file = "s3n://<<bucket>>/<<file>>"
    val data = sc.textFile(file)
    data.first


I think the S3 problem is not completely resolved in Zeppelin version 0.5.0, so cloning the current git repo did it for me.


Important: the job only worked for me with the Zeppelin Spark interpreter setting master=local[*] (instead of spark://master:7777).
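Besides the Interpreter settings UI, one way this master setting is commonly expressed is via conf/zeppelin-env.sh (the file path and the use of the MASTER variable are assumptions based on a default Zeppelin checkout):

```shell
# Hedged sketch: set the Spark master for Zeppelin's Spark interpreter
# in conf/zeppelin-env.sh. Quoted so the shell does not glob the brackets.
export MASTER="local[*]"
```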
