How are you taking advantage of Multicore?


Question



    As someone in the world of HPC who came from the world of enterprise web development, I'm always curious to see how developers back in the "real world" are taking advantage of parallel computing. This is much more relevant now that all chips are going multicore, and it'll be even more relevant when there are thousands of cores on a chip instead of just a few.

    My questions are:

    1. How does this affect your software roadmap?
    2. I'm particularly interested in real stories about how multicore is affecting different software domains, so specify what kind of development you do in your answer (e.g. server-side, client-side apps, scientific computing, etc.).
    3. What are you doing with your existing code to take advantage of multicore machines, and what challenges have you faced? Are you using OpenMP, Erlang, Haskell, CUDA, TBB, UPC or something else?
    4. What do you plan to do as concurrency levels continue to increase, and how will you deal with hundreds or thousands of cores?
    5. If your domain doesn't easily benefit from parallel computation, then explaining why is interesting, too.

    Finally, I've framed this as a multicore question, but feel free to talk about other types of parallel computing. If you're porting part of your app to use MapReduce, or if MPI on large clusters is the paradigm for you, then definitely mention that, too.

    Update: If you do answer #5, mention whether you think things will change if there get to be more cores (100, 1000, etc) than you can feed with available memory bandwidth (seeing as how bandwidth is getting smaller and smaller per core). Can you still use the remaining cores for your application?

    Solution

    My research work includes work on compilers and on spam filtering. I also do a lot of 'personal productivity' Unix stuff. Plus I write and use software to administer classes that I teach, which includes grading, testing student code, tracking grades, and myriad other trivia.

    1. Multicore affects me not at all except as a research problem for compilers to support other applications. But those problems lie primarily in the run-time system, not the compiler.
    2. At great trouble and expense, Dave Wortman showed around 1990 that you could parallelize a compiler to keep four processors busy. Nobody I know has ever repeated the experiment. Most compilers are fast enough to run single-threaded. And it's much easier to run your sequential compiler on several different source files in parallel than it is to make your compiler itself parallel. For spam filtering, learning is an inherently sequential process. And even an older machine can learn hundreds of messages a second, so even a large corpus can be learned in under a minute. Again, training is fast enough.
    3. The only significant way I have of exploiting parallel machines is using parallel make. It is a great boon, and big builds are easy to parallelize. Make does almost all the work automatically. The only other thing I can remember is using parallelism to time long-running student code by farming it out to a bunch of lab machines, which I could do in good conscience because I was only clobbering a single core per machine, so using only 1/4 of CPU resources. Oh, and I wrote a Lua script that will use all 4 cores when ripping MP3 files with lame. That script was a lot of work to get right.
    4. I will ignore tens, hundreds, and thousands of cores. The first time I was told "parallel machines are coming; you must get ready" was 1984. It was true then and is true today that parallel programming is a domain for highly skilled specialists. The only thing that has changed is that today manufacturers are forcing us to pay for parallel hardware whether we want it or not. But just because the hardware is paid for doesn't mean it's free to use. The programming models are awful, and making the thread/mutex model work, let alone perform well, is an expensive job even if the hardware is free. I expect most programmers to ignore parallelism and quietly get on about their business. When a skilled specialist comes along with a parallel make or a great computer game, I will quietly applaud and make use of their efforts. If I want performance for my own apps I will concentrate on reducing memory allocations and ignore parallelism.
    5. Parallelism is really hard. Most domains are hard to parallelize. A widely reusable exception like parallel make is cause for much rejoicing.
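The approach in points 2 and 3 — leave the tool sequential and run many independent jobs side by side — can be sketched roughly as follows. This is an illustrative sketch, not the answerer's actual setup: Python and the `compile_one` stand-in are my assumptions, since the answer names no language.

```python
# Sketch of "run the sequential compiler on several source files in
# parallel" rather than parallelizing the compiler itself.
import os
from concurrent.futures import ThreadPoolExecutor

def compile_one(source: str) -> str:
    # Hypothetical stand-in for invoking a real compiler, e.g.
    #   subprocess.run(["cc", "-c", source], check=True)
    # Threads suffice in that case: the CPU work happens in the
    # external compiler process, not in the Python interpreter.
    return source.rsplit(".", 1)[0] + ".o"

def compile_all(sources, jobs=None):
    """Compile each source file independently, up to `jobs` at a time."""
    jobs = jobs or os.cpu_count() or 1
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        return list(pool.map(compile_one, sources))
```

This is the same shape as `make -j N` or the Lua/lame ripping script mentioned above: N independent sequential jobs with no shared state between them.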
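To make point 4 concrete, here is a minimal sketch (in Python, my choice for illustration; the answer names no language) of the thread/mutex model's burden: even a trivially shared counter is incorrect without an explicit lock.

```python
# Minimal thread/mutex sketch: incrementing a shared counter.
# Without the lock, `counter += 1` is a read-modify-write race and
# the final total can come up short.
import threading

counter = 0
lock = threading.Lock()

def bump(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:  # omit this and correctness is gone
            counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter now equals 400_000 only because every increment was locked
```

Getting this right for a toy counter is easy; getting it right, and fast, across a whole application is the "expensive job" the answer describes.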

    Summary (which I heard from a keynote speaker who works for a leading CPU manufacturer): the industry backed into multicore because they couldn't keep making machines run faster and hotter and they didn't know what to do with the extra transistors. Now they're desperate to find a way to make multicore profitable because if they don't have profits, they can't build the next generation of fab lines. The gravy train is over, and we might actually have to start paying attention to software costs.

    Many people who are serious about parallelism are ignoring these toy 4-core or even 32-core machines in favor of GPUs with 128 processors or more. My guess is that the real action is going to be there.
