Why isn't Hadoop implemented using MPI?


Question


Correct me if I'm wrong, but my understanding is that Hadoop does not use MPI for communication between different nodes.

What are the technical reasons for this?

I could hazard a few guesses, but I do not know enough of how MPI is implemented "under the hood" to know whether or not I'm right.

Come to think of it, I'm not entirely familiar with Hadoop's internals either. I understand the framework at a conceptual level (map/combine/shuffle/reduce and how that works at a high level), but I don't know the nitty-gritty implementation details. I've always assumed Hadoop was transmitting serialized data structures (perhaps GPBs) over a TCP connection, e.g. during the shuffle phase. Let me know if that's not true.
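The shuffle the question describes boils down to two mechanics: routing each map-output key to a reducer, and framing serialized records for transmission over a byte stream. Here is a minimal Python sketch of both, assuming a hash partitioner (Hadoop's default `HashPartitioner` works the same way, in Java) and using `pickle` as a stand-in for whatever wire format a real implementation would use:

```python
import hashlib
import pickle
import struct

def partition(key, num_reducers):
    """Route a key to a reducer: hash of the key modulo reducer count."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_reducers

def frame(record):
    """Length-prefixed serialization, as one might stream records over TCP.
    pickle stands in for the real wire format (e.g. Hadoop Writables)."""
    payload = pickle.dumps(record)
    return struct.pack(">I", len(payload)) + payload

def unframe(buf):
    """Read one length-prefixed record back out of a byte stream;
    return the record and the unconsumed remainder of the buffer."""
    (length,) = struct.unpack(">I", buf[:4])
    return pickle.loads(buf[4:4 + length]), buf[4 + length:]

# Map output headed for the shuffle: (key, value) pairs bucketed by partition.
pairs = [("apple", 1), ("banana", 1), ("apple", 1)]
buckets = {}
for key, value in pairs:
    buckets.setdefault(partition(key, 4), []).append(frame((key, value)))
```

Note that all pairs with the same key land in the same bucket, which is what lets a single reducer see every value for its keys.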

Solution

One of the big features of Hadoop/map-reduce is the fault tolerance. Fault tolerance is not supported in most (any?) current MPI implementations. It is being thought about for future versions of OpenMPI.
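The practical difference is in who survives a dead node. In a classic MPI run, one failed rank typically aborts the whole job; a Hadoop-style master instead reschedules just the failed task on another node. A minimal sketch of that retry loop (the task names and failure simulation are purely illustrative):

```python
def run_task(task_id, attempt, fail_first=True):
    """A stand-in map task that dies on its first attempt,
    simulating a lost node."""
    if fail_first and attempt == 0:
        raise RuntimeError(f"node running task {task_id} died")
    return task_id * task_id  # the task's (arbitrary) result

def run_job(task_ids, max_attempts=3):
    """Hadoop-style master loop: re-execute each failed task on a fresh
    attempt instead of aborting the whole job as a classic MPI run would."""
    results = {}
    for tid in task_ids:
        for attempt in range(max_attempts):
            try:
                results[tid] = run_task(tid, attempt)
                break
            except RuntimeError:
                continue  # reschedule the task elsewhere
        else:
            raise RuntimeError(f"task {tid} failed {max_attempts} times")
    return results
```

This works because map tasks are deterministic functions of their input split, so a re-execution produces the same result; MPI's stateful, tightly coupled ranks make that kind of transparent retry much harder.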

Sandia Labs has a version of map-reduce which uses MPI, but it lacks fault tolerance.
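The programming model itself (as both Hadoop and an MPI-based library like Sandia's expose it) is independent of the transport. A minimal in-process Python sketch of the three phases, using word count as the canonical example:

```python
from collections import defaultdict

def map_phase(documents):
    # map: emit a (word, 1) pair for every word in every document
    return [(word, 1) for doc in documents for word in doc.split()]

def shuffle_phase(pairs):
    # shuffle: group all values by key, so each key ends up in one place
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: sum the counts for each word
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle_phase(map_phase(["a b a", "b c"])))
# counts == {"a": 2, "b": 2, "c": 1}
```

In Hadoop the shuffle step crosses the network between nodes; in an MPI implementation it would be expressed with message passing between ranks, but the phases are the same.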

