Which parallel programming APIs do you use?


Question

I'm trying to get a grip on how people actually write parallel code these days, given the immense importance of multicore and multiprocessor hardware. To me, it looks like the dominant paradigm is pthreads (POSIX threads), which is native on Linux and available on Windows. HPC people tend to use OpenMP or MPI, but there don't seem to be many of them here on Stack Overflow. Or do you rely on Java threading, the Windows threading APIs, and so on, rather than the portable standards? What, in your opinion, is the recommended way to do parallel programming?
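
For reference, a minimal sketch of the pthreads style referred to above (the worker function and the thread count of 4 are illustrative assumptions, not taken from any particular project):

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 4   /* illustrative; would normally match the core count */

    /* Each worker just reports its index; real code would do a slice of the work. */
    static void *worker(void *arg) {
        long id = (long)arg;
        printf("hello from thread %ld\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t threads[NUM_THREADS];
        for (long i = 0; i < NUM_THREADS; i++)
            pthread_create(&threads[i], NULL, worker, (void *)i);
        for (long i = 0; i < NUM_THREADS; i++)
            pthread_join(threads[i], NULL);   /* wait for all workers to finish */
        return 0;
    }

On Linux this would be built with something like cc -pthread example.c.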

Or are you using more exotic things like Erlang, CUDA, RapidMind, CodePlay, Oz, or even dear old Occam?

Clarification: I am looking for solutions that are quite portable and applicable to platforms such as Linux and various Unixes, on various host architectures. Windows is a rare case that is nice to support. So C# and .NET are really too narrow here; the CLR is a cool piece of technology, but could they PLEASE release it for Linux hosts so that it would be as prevalent as, say, the JVM, Python, Erlang, or any other portable language.

C++ or JVM-based: probably C++, since JVMs tend to hide performance.

MPI: I would agree that even the HPC people see it as a hard-to-use tool -- but for running on 128,000 processors, it is the only scalable solution for problems where map/reduce does not apply. Message passing has great elegance, though, as it is the only programming style that seems to scale really well to local-memory/AMP, shared-memory/SMP, and distributed run-time environments.
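
As a rough illustration of that message-passing style (a minimal sketch assuming an MPI implementation such as Open MPI or MPICH; the payload value is arbitrary):

    #include <mpi.h>
    #include <stdio.h>

    /* Rank 0 sends an integer to rank 1; nothing is shared, only messages. */
    int main(int argc, char **argv) {
        int rank, value = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;                                   /* arbitrary payload */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Launched with at least two processes (e.g. mpirun -np 2 ./a.out), the same code works whether the ranks share one machine or sit on opposite ends of a cluster.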

An interesting new contender is MCAPI, but I don't think anyone has had time to gain any practical experience with it yet.

So overall, the situation seems to be that there are a lot of interesting Microsoft projects I did not know about, and that the Windows API or pthreads are the most common implementations in practice.

Recommended Answer

MPI isn't as hard as most people make it seem. Nowadays I think a multi-paradigm approach is best suited to parallel and distributed applications: use MPI for node-to-node communication and synchronization, and either OpenMP or pthreads for finer-grained parallelization. Think MPI for each machine and OpenMP or pthreads for each core. For the near future, this would seem to scale a little better than spawning a new MPI process for each core.
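
A minimal sketch of that hybrid layout (one MPI process per machine, OpenMP threads across that machine's cores; the array size and the sum computation are made up purely for illustration):

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000   /* illustrative problem size */

    static double data[N];

    int main(int argc, char **argv) {
        int rank, nprocs;
        double local_sum = 0.0, global_sum = 0.0;

        MPI_Init(&argc, &argv);                    /* one MPI process per machine */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Each MPI process owns one slice of the array... */
        int chunk = N / nprocs;
        int begin = rank * chunk;
        int end   = (rank == nprocs - 1) ? N : begin + chunk;

        for (int i = begin; i < end; i++)
            data[i] = (double)i;

        /* ...and OpenMP spreads that slice across the machine's cores. */
        #pragma omp parallel for reduction(+:local_sum)
        for (int i = begin; i < end; i++)
            local_sum += data[i];

        /* MPI then combines the per-node results across the cluster. */
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %f\n", global_sum);

        MPI_Finalize();
        return 0;
    }

Built with an MPI compiler wrapper and OpenMP enabled (e.g. mpicc -fopenmp), and launched with one process per node rather than one per core.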

Perhaps for today's dual- or quad-core machines, spawning a process for each core won't add that much overhead, but as we approach more and more cores per machine, where cache and on-die memory aren't scaling as well, it would be more appropriate to use a shared-memory model.

