spark-shell error: No FileSystem for scheme: wasb


Problem description

We have an HDInsight cluster running in Azure, but it does not allow spinning up an edge/gateway node at cluster creation time. So I was creating this edge/gateway node by installing:

# Add the HDP 2.4.2.0 and HDP-UTILS package repositories
echo 'deb http://private-repo-1.hortonworks.com/HDP/ubuntu14/2.x/updates/2.4.2.0 HDP main' >> /etc/apt/sources.list.d/HDP.list
echo 'deb http://private-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/ubuntu14 HDP-UTILS main' >> /etc/apt/sources.list.d/HDP.list
echo 'deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/azurecore/ trusty main' >> /etc/apt/sources.list.d/azure-public-trusty.list

# Import the repository signing keys
gpg --keyserver pgp.mit.edu --recv-keys B9733A7A07513CAD
gpg -a --export 07513CAD | apt-key add -
gpg --keyserver pgp.mit.edu --recv-keys B02C46DF417A0893
gpg -a --export 417A0893 | apt-key add -

# Install the JDK and the Hadoop/Spark packages
apt-get -y install openjdk-7-jdk
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
apt-get -y install hadoop hadoop-hdfs hadoop-yarn hadoop-mapreduce hadoop-client openssl libhdfs0 liblzo2-2 liblzo2-dev hadoop-lzo phoenix hive hive-hcatalog tez mysql-connector-java* oozie oozie-client sqoop flume flume-agent spark spark-python spark-worker spark-yarn-shuffle

Then I copied /usr/lib/python2.7/dist-packages/hdinsight_common/ /usr/share/java/ /usr/lib/hdinsight-datalake/ /etc/spark/conf/ /etc/hadoop/conf/

But when I run spark-shell, I get the following error:

java.io.IOException: No FileSystem for scheme: wasb

Here is the full stack trace: https://gist.github.com/anonymous/ebb6c9d71865c9c8e125aadbbdd6a5bc

I am not sure which package/jar is missing here.

Does anyone have any clue what I am doing wrong?

Thanks

Recommended answer

Another way of setting up Azure Storage (wasb and wasbs files) in spark-shell is:

  1. Copy the azure-storage and hadoop-azure jars into the ./jars directory of the Spark installation.
  2. Run spark-shell with the parameter --jars [a comma-separated list of paths to those jars]. Example:


$ bin/spark-shell --master "local[*]" --jars jars/hadoop-azure-2.7.0.jar,jars/azure-storage-2.0.0.jar
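
The jar versions shown are only an example; in general, the hadoop-azure jar should match the Hadoop version that your Spark distribution was built against.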

  3. Add the following lines to the Spark context:

    
    sc.hadoopConfiguration.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
    sc.hadoopConfiguration.set("fs.azure.account.key.my_account.blob.core.windows.net", "my_key")
    

  4. Run a simple query:

    
    sc.textFile("wasb://my_container@my_account_host/myfile.txt").count()
    

  5. Enjoy :)
  6. With these settings you can easily set up a Spark application, passing the parameters to the 'hadoopConfiguration' of the current Spark context, as in the sketch below.
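
As an illustration, here is a minimal sketch of such a standalone application. The WasbExample object name is made up for this example, the account, key, container and host are the same placeholders as in the spark-shell steps above, and the hadoop-azure and azure-storage jars are assumed to be on the application's classpath:

    import org.apache.spark.{SparkConf, SparkContext}

    object WasbExample {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("WasbExample").setMaster("local[*]")
        val sc = new SparkContext(conf)

        // Same Azure Storage settings as in the spark-shell steps above
        sc.hadoopConfiguration.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
        sc.hadoopConfiguration.set("fs.azure.account.key.my_account.blob.core.windows.net", "my_key")

        // Read a file from Azure Blob Storage and count its lines
        val count = sc.textFile("wasb://my_container@my_account_host/myfile.txt").count()
        println(s"Line count: $count")

        sc.stop()
      }
    }

When submitting such an application with spark-submit, the same --jars option as in step 2 can be used to put the hadoop-azure and azure-storage jars on its classpath.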
