Spark Unable to load native-hadoop library for your platform
Problem description
I'm a dummy on Ubuntu 16.04, desperately attempting to make Spark work.
I've tried to fix my problem using the answers found here on Stack Overflow, but I couldn't resolve anything.
Launching Spark with the command ./spark-shell from the bin folder, I get this message:
WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
The Java version I'm using is:
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
Spark is the latest version: 2.0.1 with Hadoop 2.7. I've also retried with an older package of Spark, 1.6.2 with Hadoop 2.4, but I get the same result. I also tried to install Spark on Windows, but it seems harder than doing it on Ubuntu.
I also tried to run some commands on Spark from my laptop: I can define an object, I can create an RDD and store it in cache, and I can use functions like .map(), but when I try to run the function .reduceByKey() I receive several strings of error messages.
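For example, a session like the following (the data and variable names are made up purely for illustration) reproduces what I'm describing: everything runs until .reduceByKey().

./spark-shell <<'EOF'
val words  = sc.parallelize(Seq("a", "b", "a"))  // create an RDD
val cached = words.cache()                       // store it in cache
val pairs  = cached.map(w => (w, 1))             // .map() works fine
val counts = pairs.reduceByKey(_ + _)            // the errors show up here
counts.collect().foreach(println)
EOF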
Maybe it's the Hadoop library that is compiled for 32-bit, while I'm on 64-bit?
Thanks.
Steps to fix (a concrete sketch follows the list):
- download the Hadoop binaries
- unpack them to a directory of your choice
- set HADOOP_HOME to point to that directory
- add $HADOOP_HOME/lib/native to LD_LIBRARY_PATH
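For example, a minimal sketch assuming Hadoop 2.7.3 downloaded from the Apache archive and unpacked under /opt (adjust the version and paths to whatever you actually use):

# download and unpack the Hadoop binaries
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
sudo tar -xzf hadoop-2.7.3.tar.gz -C /opt
# point HADOOP_HOME at the unpacked directory and expose its native libs
export HADOOP_HOME=/opt/hadoop-2.7.3
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native:$LD_LIBRARY_PATH
# start spark-shell again; the NativeCodeLoader warning should no longer appear
./spark-shell

To make this persistent, you may want to add the two export lines to ~/.bashrc or to Spark's conf/spark-env.sh.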