All OpenMP Tasks running on the same thread

Problem Description

I have written a recursive parallel function using tasks in OpenMP. While it gives me the correct answer and runs fine, I think there is an issue with the parallelism. The run-time, compared to a serial solution, does not scale the way other parallel problems I have solved without tasks do. When printing the thread number for each task, they are all running on thread 0. I am compiling and running on Visual Studio Express 2013.

// Note: p (a memoization table, where -1 marks "not yet computed") and
// m (the modulus) are globals defined elsewhere in the asker's program.
int parallelOMP(int n)
{

    int a, b, sum = 0;
    int alpha = 0, beta = 0;

    for (int k = 1; k < n; k++)
    {

        a = n - (k*(3 * k - 1) / 2);
        b = n - (k*(3 * k + 1) / 2);


        if (a < 0 && b < 0)
            break;


        if (a < 0)
            alpha = 0;

        else if (p[a] != -1)
            alpha = p[a];

        if (b < 0)
            beta = 0;

        else if (p[b] != -1)
            beta = p[b];


        if (a > 0 && b > 0 && p[a] == -1 && p[b] == -1)
        {
            #pragma omp parallel
            {
                #pragma omp single
                {
                    #pragma omp task shared(p), untied
                    {
                        cout << omp_get_thread_num();
                        p[a] = parallelOMP(a);
                    }
                    #pragma omp task shared(p), untied
                    {
                        cout << omp_get_thread_num();
                        p[b] = parallelOMP(b);
                    }
                    #pragma omp taskwait
                }
            }

            alpha = p[a];
            beta = p[b];
        }

        else if (a > 0 && p[a] == -1)
        {
            #pragma omp parallel
            {
                #pragma omp single
                {
                    #pragma omp task shared(p), untied
                    {
                        cout << omp_get_thread_num();
                        p[a] = parallelOMP(a);
                    }

                    #pragma omp taskwait
                }
            }

            alpha = p[a];
        }

        else if (b > 0 && p[b] == -1)
        {
            #pragma omp parallel
            {
                #pragma omp single
                {
                    #pragma omp task shared(p), untied
                    {
                        cout << omp_get_thread_num();
                        p[b] = parallelOMP(b);
                    }

                    #pragma omp taskwait
                }
            }

            beta = p[b];
        }


        if (k % 2 == 0)
            sum += -1 * (alpha + beta);
        else
            sum += alpha + beta;


    }

    if (sum > 0)
        return sum%m;
    else
        return (m + (sum % m)) % m;
}

Recommended Answer

The actual problem:

You are using Visual Studio 2013.

Visual Studio has never supported OMP versions beyond 2.0 (see here).

OMP Tasks are a feature of OMP 3.0 (see spec).

Ergo, using VS at all means no OMP tasks for you.
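
If you want to check which OpenMP level your compiler actually implements, a quick sanity test (not part of the original answer) is to print the _OPENMP version macro; it expands to the year and month of the supported specification, so 200203 means OpenMP 2.0 and 200805 means OpenMP 3.0:

#include <cstdio>

int main()
{
#ifdef _OPENMP
    // _OPENMP expands to yyyymm of the supported OpenMP specification:
    // 200203 = 2.0, 200805 = 3.0, 201107 = 3.1, 201307 = 4.0
    std::printf("_OPENMP = %d\n", _OPENMP);
#else
    std::printf("OpenMP support is not enabled (compile with /openmp or -fopenmp)\n");
#endif
    return 0;
}

On Visual Studio 2013 with /openmp this reports 200203, i.e. OpenMP 2.0, which has no task construct.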

If OMP tasks are an essential requirement, use a different compiler. If OMP itself is not an essential requirement, you should consider an alternative parallel task handling library. Visual Studio includes the MS Concurrency Runtime, and the Parallel Patterns Library built on top of it. I have recently moved from OMP to PPL because I use VS for work; it isn't quite a drop-in replacement, but it is quite capable.
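
As a purely illustrative sketch (not from the original answer), the fork/join pattern that task plus taskwait gives you in OpenMP 3.0 can be written with the PPL's concurrency::parallel_invoke, which runs its lambdas potentially in parallel and returns once all of them have finished. This self-contained example uses a toy fib as a stand-in for the question's parallelOMP:

#include <ppl.h>
#include <iostream>

// Recursive fork/join with the Parallel Patterns Library.
// fib is only a stand-in to show the pattern, not the asker's function.
int fib(int n)
{
    if (n < 2)
        return n;

    int x = 0, y = 0;
    // Both lambdas may run in parallel; parallel_invoke returns only
    // when both have completed, playing the role of task + taskwait.
    concurrency::parallel_invoke(
        [&] { x = fib(n - 1); },
        [&] { y = fib(n - 2); });
    return x + y;
}

int main()
{
    std::cout << fib(20) << "\n";
}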

My second attempt at solving this, again preserved for historical reasons:

So, the problem is almost certainly that you're defining your omp tasks outside of an omp parallel region.

Here's a contrived example:

#include <iostream>
#include <omp.h>
#include <unistd.h>   // for sleep()

void work()
{
    #pragma omp parallel
    {
        #pragma omp single nowait
        for (int i = 0; i < 5; i++)
        {
            #pragma omp task untied
            {
                std::cout << 
                    "starting task " << i << 
                    " on thread " << omp_get_thread_num() << "\n";

                sleep(1);
            }
        }
    }
}

If you omit the parallel declaration, the job runs serially:

starting task 0 on thread 0
starting task 1 on thread 0
starting task 2 on thread 0
starting task 3 on thread 0
starting task 4 on thread 0

But if you leave it in:

starting task starting task 3 on thread 1
starting task 0 on thread 3
2 on thread 0
starting task 1 on thread 2
starting task 4 on thread 2

Success, complete with authentic misuse of shared output resources.

(for reference, if you omit the single declaration, each thread will run the loop, resulting in 20 tasks being run on my 4 cpu VM).

The original answer follows in full, but it no longer applies!

In every case, your omp task is a single, simple thing. It probably runs and completes immediately:

#pragma omp task shared(p), untied
cout << omp_get_thread_num();

#pragma omp task shared(p), untied
cout << omp_get_thread_num();

#pragma omp task shared(p), untied
cout << omp_get_thread_num();

#pragma omp task shared(p), untied
cout << omp_get_thread_num();

Because you never start one long-running task before firing off the next task, everything will probably run on the first allocated thread.

Perhaps you meant to do something like this?

if (a > 0 && b > 0 && p[a] == -1 && p[b] == -1)
{
    #pragma omp task shared(p), untied
    {
        cout << omp_get_thread_num();
        p[a] = parallelOMP(a);
    }

    #pragma omp task shared(p), untied
    {
        cout << omp_get_thread_num();
        p[b] = parallelOMP(b);
    }

    #pragma omp taskwait

    alpha = p[a];
    beta = p[b];
}
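
For those tasks to be picked up by more than one thread, the recursion also has to be started from inside a single enclosing parallel region, rather than opening a new region at every level. Below is a minimal sketch of such a call-site wrapper, assuming the parallelOMP, p and m from the question and a compiler that actually implements OpenMP 3.0 tasks (which, as noted above, excludes Visual Studio); runParallelOMP is a hypothetical name:

// Hypothetical top-level wrapper: create the thread team once, let a
// single thread start the recursion, and let the whole team execute
// the tasks that the recursion spawns.
int runParallelOMP(int n)
{
    int result = 0;

    #pragma omp parallel shared(result)
    {
        #pragma omp single nowait
        result = parallelOMP(n);
    }

    return result;
}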
