AWS EMR Spark "No Module named pyspark"


Problem description

I created a Spark cluster, SSH'd into the master, and launched the shell:

MASTER=yarn-client ./spark/bin/pyspark

When I run the following:

x = sc.textFile("s3://location/files.*")
xt = x.map(lambda x: handlejson(x))
table= sqlctx.inferSchema(xt)

I get the following error:

Error from python worker:
  /usr/bin/python: No module named pyspark
PYTHONPATH was:
  /mnt1/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/filecache/11/spark-assembly-1.1.0-hadoop2.4.0.jar
java.io.EOFException
        java.io.DataInputStream.readInt(DataInputStream.java:392)
        org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:151)
        org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:78)
        org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:54)
        org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:97)
        org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:66)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:745)

I also checked the PYTHONPATH:

>>> os.environ['PYTHONPATH']
'/home/hadoop/spark/python/lib/py4j-0.8.2.1-src.zip:/home/hadoop/spark/python/:/home/hadoop/spark/lib/spark-assembly-1.1.0-hadoop2.4.0.jar'
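Each colon-separated entry on that path has to be either a real directory or a zip/jar archive Python can open, or the import fails. As a hedged diagnostic (a sketch, not part of the original question; the paths are copied from the output above and will simply report as missing on another machine), each entry can be classified like this:

```python
import os
import zipfile

def classify(entry):
    """Report how Python's import machinery will see a PYTHONPATH entry."""
    if os.path.isdir(entry):
        return "directory"
    if os.path.isfile(entry) and zipfile.is_zipfile(entry):
        return "readable zip/jar"
    return "missing or unreadable"

# Entries taken from the PYTHONPATH shown in the question
pythonpath = ("/home/hadoop/spark/python/lib/py4j-0.8.2.1-src.zip:"
              "/home/hadoop/spark/python/:"
              "/home/hadoop/spark/lib/spark-assembly-1.1.0-hadoop2.4.0.jar")
for entry in pythonpath.split(":"):
    print(entry, "->", classify(entry))
```

Note that the error above comes from the *worker's* Python, so the interesting PYTHONPATH is the one the YARN container sees, which (per the error message) contained only the assembly jar.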

and looked inside the jar for pyspark, and it's there:

jar -tf /home/hadoop/spark/lib/spark-assembly-1.1.0-hadoop2.4.0.jar | grep pyspark
pyspark/
pyspark/shuffle.py
pyspark/resultiterable.py
pyspark/files.py
pyspark/accumulators.py
pyspark/sql.py
pyspark/java_gateway.py
pyspark/join.py
pyspark/serializers.py
pyspark/shell.py
pyspark/rddsampler.py
pyspark/rdd.py
....
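The listing shows pyspark is physically present in the assembly jar, but that alone is not enough: the worker's Python must also be able to read the archive through its zipimport machinery (a jar is just a zip file, and SPARK-1520 tracks assembly jars that zipimport cannot open). A minimal sketch of that import path, using a throwaway fake jar rather than the real assembly:

```python
import os
import sys
import tempfile
import zipfile

# Build a tiny "jar" containing a pyspark/ package, the same way the
# assembly jar carries it (a jar is just a zip archive).
tmp = tempfile.mkdtemp()
jar = os.path.join(tmp, "fake-assembly.jar")
with zipfile.ZipFile(jar, "w") as z:
    z.writestr("pyspark/__init__.py", "version = 'fake'\n")

# Putting the jar on sys.path is equivalent to putting it on PYTHONPATH;
# the import only succeeds if zipimport can actually read the archive.
sys.path.insert(0, jar)
import pyspark
print(pyspark.version)
```

If zipimport cannot parse the archive, the import raises exactly the kind of "No module named pyspark" error seen in the worker, even though the files are present in the jar.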

Has anyone run into this before? Thanks!

Answer

You'll want to reference these Spark issues:

  • https://issues.apache.org/jira/browse/SPARK-3008
  • https://issues.apache.org/jira/browse/SPARK-1520

The solution (assuming you would rather not rebuild your jar):

unzip -d foo spark/lib/spark-assembly-1.1.0-hadoop2.4.0.jar
cd foo
# if you don't have openjdk 1.6:
# yum install -y java-1.6.0-openjdk-devel.x86_64
/usr/lib/jvm/openjdk-1.6.0/bin/jar cvmf META-INF/MANIFEST.MF ../spark/lib/spark-assembly-1.1.0-hadoop2.4.0.jar .
# don't neglect the dot at the end of that command

