PySpark: Not able to create SparkSession (Java Gateway Error)

Problem Description

I have installed PySpark on Windows and had no problems until yesterday. I am using Windows 10, PySpark version 2.3.3 (pre-built version), and java version "1.8.0_201". Yesterday, when I tried creating a Spark session, I ran into the error below.

Exception                                 Traceback (most recent call last)
<ipython-input-2-a9ef4ac1a07d> in <module>
----> 1 spark = SparkSession.builder.appName("Hello").master("local").getOrCreate()

C:\spark-2.3.3-bin-hadoop2.7\python\pyspark\sql\session.py in getOrCreate(self)
    171                     for key, value in self._options.items():
    172                         sparkConf.set(key, value)
--> 173                     sc = SparkContext.getOrCreate(sparkConf)
    174                     # This SparkContext may be an existing one.
    175                     for key, value in self._options.items():

C:\spark-2.3.3-bin-hadoop2.7\python\pyspark\context.py in getOrCreate(cls, conf)
    361         with SparkContext._lock:
    362             if SparkContext._active_spark_context is None:
--> 363                 SparkContext(conf=conf or SparkConf())
    364             return SparkContext._active_spark_context
    365 

C:\spark-2.3.3-bin-hadoop2.7\python\pyspark\context.py in __init__(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, gateway, jsc, profiler_cls)
    127                     " note this option will be removed in Spark 3.0")
    128 
--> 129         SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
    130         try:
    131             self._do_init(master, appName, sparkHome, pyFiles, environment, batchSize, serializer,

C:\spark-2.3.3-bin-hadoop2.7\python\pyspark\context.py in _ensure_initialized(cls, instance, gateway, conf)
    310         with SparkContext._lock:
    311             if not SparkContext._gateway:
--> 312                 SparkContext._gateway = gateway or launch_gateway(conf)
    313                 SparkContext._jvm = SparkContext._gateway.jvm
    314 

C:\spark-2.3.3-bin-hadoop2.7\python\pyspark\java_gateway.py in launch_gateway(conf)
     44     :return: a JVM gateway
     45     """
---> 46     return _launch_gateway(conf)
     47 
     48 

C:\spark-2.3.3-bin-hadoop2.7\python\pyspark\java_gateway.py in _launch_gateway(conf, insecure)
    106 
    107             if not os.path.isfile(conn_info_file):
--> 108                 raise Exception("Java gateway process exited before sending its port number")
    109 
    110             with open(conn_info_file, "rb") as info:

Exception: Java gateway process exited before sending its port number

I checked the PySpark issues on GitHub as well as the related Stack Overflow answers, but the issue is not resolved.

I tried the following methods:

1.) Tried uninstalling, reinstalling, and changing the Java installation directory. Currently, my Java installation directory is C:/Java/ (see: Pyspark: Exception: Java gateway process exited before sending the driver its port number).

2.) Tried setting PYSPARK_SUBMIT_ARGS, but it did not help (a minimal sketch of this approach is shown below).
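
Roughly what I tried; the master setting here is just a placeholder, and as far as I know the value must end with "pyspark-shell" for the gateway to launch:

import os

# Must be set before pyspark is imported; the value has to end with
# "pyspark-shell", otherwise the Java gateway is not started.
# "local[*]" is only an example master.
os.environ["PYSPARK_SUBMIT_ARGS"] = "--master local[*] pyspark-shell"

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("Hello").getOrCreate()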

Please suggest possible resolutions.

Solution

I think you need to uninstall both Java and PySpark, and then reinstall them:

pip install pyspark

Then go to System > Advanced system settings > Environment Variables, set JAVA_HOME in the user variables, and update Path under both the user and system variables.
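
If it still fails after fixing the environment variables, a quick way to verify is to set them from Python itself before creating the session. The JDK path below is only an example and must match your actual installation:

import os

# Example JDK location - replace with the directory you actually installed to.
os.environ["JAVA_HOME"] = r"C:\Java\jdk1.8.0_201"
os.environ["PATH"] = os.environ["JAVA_HOME"] + r"\bin;" + os.environ["PATH"]

from pyspark.sql import SparkSession

# If the gateway now starts, this no longer raises
# "Java gateway process exited before sending its port number".
spark = SparkSession.builder.appName("Hello").master("local").getOrCreate()
print(spark.version)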
