org.apache.spark.SparkException: No port number in pyspark.daemon's stdout

Problem description

I am executing a spark-submit job on a Hadoop-YARN cluster:

spark-submit /opt/spark/examples/src/main/python/pi.py 1000

but I am facing the error message below. It seems the worker is not starting.

2018-12-20 07:25:14 INFO  SparkContext:54 - Created broadcast 0 from broadcast at DAGScheduler.scala:1161
2018-12-20 07:25:14 INFO  DAGScheduler:54 - Submitting 1000 missing tasks from ResultStage 0 (PythonRDD[1] at reduce at /opt/spark/examples/src/main/python/pi.py:44) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14))
2018-12-20 07:25:14 INFO  YarnScheduler:54 - Adding task set 0.0 with 1000 tasks
2018-12-20 07:25:14 INFO  TaskSetManager:54 - Starting task 0.0 in stage 0.0 (TID 0, hadoop-slave2, executor 1, partition 0, PROCESS_LOCAL, 7863 bytes)
2018-12-20 07:25:14 INFO  TaskSetManager:54 - Starting task 1.0 in stage 0.0 (TID 1, hadoop-slave1, executor 2, partition 1, PROCESS_LOCAL, 7863 bytes)
2018-12-20 07:25:15 INFO  BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on hadoop-slave2:37217 (size: 4.2 KB, free: 93.3 MB)
2018-12-20 07:25:15 INFO  BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on hadoop-slave1:35311 (size: 4.2 KB, free: 93.3 MB)
2018-12-20 07:25:15 INFO  TaskSetManager:54 - Starting task 2.0 in stage 0.0 (TID 2, hadoop-slave2, executor 1, partition 2, PROCESS_LOCAL, 7863 bytes)
2018-12-20 07:25:15 INFO  TaskSetManager:54 - Starting task 3.0 in stage 0.0 (TID 3, hadoop-slave1, executor 2, partition 3, PROCESS_LOCAL, 7863 bytes)
2018-12-20 07:25:16 WARN  TaskSetManager:66 - Lost task 0.0 in stage 0.0 (TID 0, hadoop-slave2, executor 1): org.apache.spark.SparkException: 
Error from python worker:
Traceback (most recent call last):
File "/usr/lib64/python2.6/runpy.py", line 104, in _run_module_as_main
  loader, code, fname = _get_module_details(mod_name)
File "/usr/lib64/python2.6/runpy.py", line 79, in _get_module_details
  loader = get_loader(mod_name)
File "/usr/lib64/python2.6/pkgutil.py", line 456, in get_loader
  return find_loader(fullname)
File "/usr/lib64/python2.6/pkgutil.py", line 466, in find_loader
  for importer in iter_importers(fullname):
File "/usr/lib64/python2.6/pkgutil.py", line 422, in iter_importers
  __import__(pkg)
 File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/__init__.py", line 51, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/context.py", line 31, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/accumulators.py", line 97, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/serializers.py", line 71, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/cloudpickle.py", line 246, in <module>
File "/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip/pyspark/cloudpickle.py", line 270, in CloudPickler
NameError: name 'memoryview' is not defined
PYTHONPATH was:
/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/filecache/21/__spark_libs__3793296165132209773.zip/spark-core_2.11-2.4.0.jar:/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/pyspark.zip:/tmp/hadoop-hdfs/nm-local-dir/usercache/hdfs/appcache/application_1545288386209_0005/container_1545288386209_0005_01_000002/py4j-0.10.7-src.zip
org.apache.spark.SparkException: No port number in pyspark.daemon's stdout
at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:204)
at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:122)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:95)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:108)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:65)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)

Answer

I believe this issue happens when there's a Python version mismatch.
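The stack trace supports this: memoryview only exists in Python 2.7 and later, so the NameError raised while importing pyspark/cloudpickle.py means the executors are starting the system Python 2.6 visible in the paths above. As a rough check (a sketch only; the ssh commands use the executor hosts hadoop-slave1 and hadoop-slave2 taken from the log, adjust for your cluster), you can compare the interpreter each side would use:

# Driver side: which interpreter would PySpark pick up?
python --version
echo "$PYSPARK_PYTHON"    # if empty, Spark typically falls back to plain "python"

# Executor side: check the default interpreter on the worker nodes
ssh hadoop-slave1 'python --version'
ssh hadoop-slave2 'python --version'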

Adding the following to my ~/.bash_profile worked for me:

alias spark-submit='PYSPARK_PYTHON=$(which python) spark-submit'

It should force Spark to use the same version of Python that you have loaded in your environment.
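If you would rather not rely on a shell alias, the same effect can be achieved through Spark's own configuration. A minimal sketch, assuming /usr/bin/python2.7 is a placeholder for a Python 2.7+ interpreter installed at the same path on every node:

# Option 1: set it per submit with configuration properties
spark-submit \
  --conf spark.pyspark.python=/usr/bin/python2.7 \
  --conf spark.pyspark.driver.python=/usr/bin/python2.7 \
  /opt/spark/examples/src/main/python/pi.py 1000

# Option 2: set it cluster-wide in conf/spark-env.sh
export PYSPARK_PYTHON=/usr/bin/python2.7
export PYSPARK_DRIVER_PYTHON=/usr/bin/python2.7

Whichever route you take, the interpreter must actually exist at that path on every executor node, otherwise pyspark.daemon still cannot start and the same "No port number" error appears.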
