How to fix pyspark NLTK Error with OSError: [WinError 123]?

Question

I got an unexpected error when transforming an RDD to a DataFrame:

import nltk
from nltk import pos_tag

# Run NLTK's part-of-speech tagger over each row of the "removed" column,
# then convert the resulting RDD back into a DataFrame.
my_rdd_of_lists = df_removed.select("removed").rdd.map(lambda x: nltk.pos_tag(x))
my_df = spark.createDataFrame(my_rdd_of_lists)

This error always appears when I call an NLTK function on the RDD. When I wrote the same line with any numpy method instead, it did not fail.

Error code:

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 14.0 failed 1 times, most recent failure: Lost task 0.0 in stage 14.0 (TID 323, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):

And

OSError: [WinError 123] Nazwa pliku, nazwa katalogu lub składnia etykiety woluminu jest niepoprawna: 'C:\\C:\\Users\\Olga\\Desktop\\Spark\\spark-2.4.5-bin-hadoop2.7\\jars\\spark-core_2.11-2.4.5.jar'
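
The Polish message is the standard WinError 123 text ("The filename, directory name, or volume label syntax is incorrect"). The doubled drive prefix C:\C:\ shows that an already-absolute jar path got a second root prepended to it. A minimal sketch of how such a string can arise (illustrative only, not taken from Spark's source):

jar = r"C:\Users\Olga\Desktop\Spark\spark-2.4.5-bin-hadoop2.7\jars\spark-core_2.11-2.4.5.jar"
# Naively prefixing an absolute Windows path with another drive root
# reproduces the malformed value from the traceback above.
broken = "C:\\" + jar
print(broken)  # C:\C:\Users\Olga\... -> opening this path raises WinError 123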

So here is the part I don't know how to resolve. I thought it was a problem with the environment variables, but everything there seems to be ok:

SPARK_HOME: C:\Users\Olga\Desktop\Spark\spark-2.4.5-bin-hadoop2.7
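
A quick way to double-check the relevant variables from inside Python (a minimal sketch; these are the standard variable names PySpark reads):

import os

# Print the Spark-related environment variables; None means the variable is unset.
for var in ("SPARK_HOME", "HADOOP_HOME", "PYSPARK_PYTHON", "PYTHONPATH"):
    print(var, "=", os.environ.get(var))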

I've also printed my sys.path:

import sys
for i in sys.path:
    print(i) 

And got:

C:\Users\Olga\Desktop\Spark\spark-2.4.5-bin-hadoop2.7\python
C:\Users\Olga\AppData\Local\Temp\spark-22c0eb38-fcc0-4f1f-b8dd-af83e15d342c\userFiles-3195dcc7-0fc6-469f-9afc-7752510f2471
C:\Users\Olga\Desktop\Spark\spark-2.4.5-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip
C:\Users\Olga
C:\Users\Olga\Anaconda3\python37.zip
C:\Users\Olga\Anaconda3\DLLs
C:\Users\Olga\Anaconda3\lib
C:\Users\Olga\Anaconda3

C:\Users\Olga\Anaconda3\lib\site-packages
C:\Users\Olga\Anaconda3\lib\site-packages\win32
C:\Users\Olga\Anaconda3\lib\site-packages\win32\lib
C:\Users\Olga\Anaconda3\lib\site-packages\Pythonwin
C:\Users\Olga\Anaconda3\lib\site-packages\IPython\extensions
C:\Users\Olga\.ipython

Here everything also looks ok to me. Please help, I don't know what to do. The earlier parts of the code ran without any error. Should I install nltk in some other way to run it with Spark?

Solution

It seems that it was a problem with the packages.

I uninstalled nltk, pandas and numpy with pip, and then did the same with conda.
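
The commands would have been along these lines (a sketch reconstructed from the description; the exact invocations were not given):

pip uninstall nltk pandas numpy
conda remove nltk pandas numpy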

After that I listed my packages and found one weird entry that seemed to be a bug, called "-umpy" (apparently a mangled leftover of numpy; an interrupted pip operation typically leaves such a renamed package directory behind in site-packages).

I could not even uninstall it, neither from the command prompt nor with Anaconda Navigator. So I just found it in the files on my computer and deleted it by hand. Then I installed nltk once again.
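
With the Anaconda layout from the sys.path listing above, the manual cleanup could look roughly like this (a sketch; the leftover directory name pattern "~umpy*" is an assumption, so check site-packages for any entry whose name starts with "~" or "-"), followed by a normal conda install nltk:

import shutil
from pathlib import Path

# Assumed site-packages location, taken from the sys.path output above.
site = Path(r"C:\Users\Olga\Anaconda3\lib\site-packages")

# Remove any leftover directory from the interrupted numpy operation
# (hypothetical name pattern "~umpy*").
for leftover in site.glob("~umpy*"):
    print("removing", leftover)
    shutil.rmtree(leftover)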

After that everything started working correctly and the error did not appear again.
