Distributed Web crawling using Apache Spark - Is it Possible?
Question
An interesting question was asked of me when I attended an interview about web mining: is it possible to crawl websites using Apache Spark?
I guessed that it was possible, because Spark supports distributed processing. After the interview I searched for this, but couldn't find any interesting answer. Is it possible with Spark?
Answer
How about this approach:
Your application would get a set of website URLs as input for your crawler. If you were implementing just a normal (non-Spark) app, you might do it as follows:
- Split all the web pages to be crawled into a list of separate sites, each small enough to fit well in a single thread. For example: to crawl www.example.com/news from 20150301 to 20150401, the split results could be: [www.example.com/news/20150301, www.example.com/news/20150302, ..., www.example.com/news/20150401]
- Assign each base URL (e.g. www.example.com/news/20150401) to a single thread; the threads are where the actual data fetching happens.
- Save the result of each thread into the file system.
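The split step above can be sketched in plain Scala. This is a minimal sketch, assuming `java.time` for date arithmetic; `splitByDay` is a name chosen here for illustration, not part of the original answer:

```scala
import java.time.LocalDate
import java.time.format.DateTimeFormatter

object UrlSplit {
  // Split a date-ranged news section into one URL per day,
  // so each URL is small enough for a single thread/partition.
  def splitByDay(base: String, from: LocalDate, to: LocalDate): Seq[String] = {
    val fmt = DateTimeFormatter.ofPattern("yyyyMMdd")
    Iterator.iterate(from)(_.plusDays(1))
      .takeWhile(!_.isAfter(to))          // inclusive of the end date
      .map(d => s"$base/${d.format(fmt)}")
      .toSeq
  }
}

val urls = UrlSplit.splitByDay("www.example.com/news",
  LocalDate.of(2015, 3, 1), LocalDate.of(2015, 4, 1))
// urls: www.example.com/news/20150301 ... www.example.com/news/20150401
```

Each element of `urls` then becomes an independent unit of work for one thread (or, later, one Spark partition).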
When the application becomes a Spark one, the same procedure applies, but encapsulated in Spark notions: we can write a custom CrawlRDD to do the same work:
- Split sites: def getPartitions: Array[Partition] is a good place to do the split task.
- Threads to crawl each split: def compute(part: Partition, context: TaskContext): Iterator[X] is spread across all the executors of your application and runs in parallel.
- Save the RDD into HDFS.
The final program would look like:
import scala.collection.mutable.ArrayBuffer

import org.apache.spark.{Partition, SparkConf, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

class CrawlPartition(rddId: Int, idx: Int, val baseURL: String) extends Partition {
  override def index: Int = idx
}

// X stands for whatever type you choose to represent a crawled page
class CrawlRDD(baseURL: String, sc: SparkContext) extends RDD[X](sc, Nil) {

  override protected def getPartitions: Array[Partition] = {
    val partitions = new ArrayBuffer[CrawlPartition]
    // split baseURL into subsets and populate the partitions
    partitions.toArray
  }

  override def compute(part: Partition, context: TaskContext): Iterator[X] = {
    val p = part.asInstanceOf[CrawlPartition]
    val baseUrl = p.baseURL

    new Iterator[X] {
      var nextURL: String = _

      override def hasNext: Boolean = {
        // logic to find the next url: if there is one, fill in nextURL and return true,
        // otherwise return false
      }

      override def next(): X = {
        // logic to crawl the web page at nextURL and return its content as an X
      }
    }
  }
}

object Crawl {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("Crawler")
    val sc = new SparkContext(sparkConf)
    val crdd = new CrawlRDD("baseURL", sc)
    crdd.saveAsTextFile("hdfs://path_here")
    sc.stop()
  }
}
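For comparison, the same split-then-crawl pattern can be sketched without a custom RDD, using the built-in parallelize. This is a sketch under stated assumptions: `fetch` is a hypothetical stand-in for a real HTTP GET, and the URL list would come from the split step described earlier:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SimpleCrawl {
  // Hypothetical fetch: a real crawler would issue an HTTP GET here.
  def fetch(url: String): String = s"<html>content of $url</html>"

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SimpleCrawler"))
    val urls = Seq(
      "www.example.com/news/20150301",
      "www.example.com/news/20150302")
    // One URL per slice, so each split is crawled by its own task,
    // mirroring the one-thread-per-base-URL idea above.
    val pages = sc.parallelize(urls, numSlices = urls.size)
      .map(url => (url, fetch(url))) // runs on the executors, in parallel
    pages.saveAsTextFile("hdfs://path_here")
    sc.stop()
  }
}
```

The trade-off versus the custom CrawlRDD is that parallelize fixes the partitioning up front, whereas getPartitions lets you encapsulate the splitting logic inside the RDD itself.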