reading a file in hdfs from pyspark
Problem Description
I'm trying to read a file in my HDFS. Here's what my Hadoop file structure looks like:
hduser@GVM:/usr/local/spark/bin$ hadoop fs -ls -R /
drwxr-xr-x - hduser supergroup 0 2016-03-06 17:28 /inputFiles
drwxr-xr-x - hduser supergroup 0 2016-03-06 17:31 /inputFiles/CountOfMonteCristo
-rw-r--r-- 1 hduser supergroup 2685300 2016-03-06 17:31 /inputFiles/CountOfMonteCristo/BookText.txt
Here's my PySpark code:
from pyspark import SparkContext, SparkConf
conf = SparkConf().setAppName("myFirstApp").setMaster("local")
sc = SparkContext(conf=conf)
textFile = sc.textFile("hdfs://inputFiles/CountOfMonteCristo/BookText.txt")
textFile.first()
The error I get is:
Py4JJavaError: An error occurred while calling o64.partitions.
: java.lang.IllegalArgumentException: java.net.UnknownHostException: inputFiles
Is this because I'm setting up my SparkContext incorrectly? I'm running this in an Ubuntu 14.04 virtual machine through VirtualBox.
I'm not sure what I'm doing wrong here.
Recommended Answer
You can access HDFS files via the full path if no configuration is provided (namenodehost is your localhost if HDFS is running in your local environment). In the failing URI, hdfs://inputFiles/..., Spark parses inputFiles as the namenode hostname, which is exactly why it throws java.net.UnknownHostException: inputFiles. Use the full form instead:
hdfs://namenodehost/inputFiles/CountOfMonteCristo/BookText.txt
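For example, here is a minimal sketch of the corrected read, assuming the namenode is local and listening on the common default port 9000; check the fs.defaultFS value in your core-site.xml for the actual host and port:

from pyspark import SparkContext, SparkConf
conf = SparkConf().setAppName("myFirstApp").setMaster("local")
sc = SparkContext(conf=conf)
# Full URI form: scheme://namenode-host:port/absolute/path.
# "localhost:9000" is an assumption -- substitute the host and port
# from fs.defaultFS in your core-site.xml.
textFile = sc.textFile("hdfs://localhost:9000/inputFiles/CountOfMonteCristo/BookText.txt")
print(textFile.first())

Alternatively, if Spark picks up your Hadoop configuration (for example via HADOOP_CONF_DIR) so that fs.defaultFS points at your namenode, a plain absolute path such as /inputFiles/CountOfMonteCristo/BookText.txt will resolve against HDFS without the hdfs:// prefix.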