How does Apache Spark know about HDFS data nodes?
Question
Imagine I do some Spark operations on a file hosted in HDFS. Something like this:
val file = sc.textFile("hdfs://...")
val items = file.map(_.split('\t'))
...
Because in the Hadoop world the code should go where the data is, right?
So my question is: how do Spark workers know about HDFS DataNodes? How does Spark know on which DataNodes to execute the code?
Answer
Spark reuses Hadoop classes: when you call textFile, it creates a TextInputFormat (https://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/lib/input/TextInputFormat.html), which has a getSplits method (a split is roughly a partition or block), and each InputSplit (https://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/InputSplit.html) has getLocations and getLocationInfo methods.
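To see this plumbing in action, here is a minimal sketch (not part of the original answer) that calls those same Hadoop classes directly; the NameNode address and file path are hypothetical placeholders you would replace with values from your own cluster:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.mapreduce.lib.input.{FileInputFormat, TextInputFormat}
import scala.collection.JavaConverters._

object SplitLocations {
  def main(args: Array[String]): Unit = {
    // Hypothetical HDFS URI; point this at a real file on your cluster.
    val input = new Path("hdfs://namenode:8020/data/items.tsv")

    val job = Job.getInstance(new Configuration())
    FileInputFormat.addInputPath(job, input)

    // getSplits asks the NameNode for block metadata; one split corresponds
    // roughly to one HDFS block (and later to one Spark partition).
    val splits = new TextInputFormat().getSplits(job).asScala

    for (split <- splits) {
      // getLocations returns the hostnames of the DataNodes holding replicas
      // of this split; the scheduler uses them as locality preferences.
      println(s"$split -> ${split.getLocations.mkString(", ")}")
    }
  }
}

On the Spark side, the same information surfaces through the RDD API: rdd.partitions.map(rdd.preferredLocations) lists the hosts Spark will prefer when scheduling each partition's task, which is why tasks end up running on (or near) the DataNodes that hold the corresponding blocks.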