Random Number to each Process in MPI


Problem description

I'm using MPICH2 to implement an "odd-even" sort. The implementation works, but when I try to give each process a random value, the same number is assigned to every process.

Here is the code; each process randomizes its own value:

#include <iostream>
#include <cstdlib>   // srand(), rand()
#include <ctime>     // time()
#include <mpi.h>

using std::cout;
using std::endl;

int main(int argc, char *argv[])
{
    int  nameLen, numProcs, myID;
    char processorName[MPI_MAX_PROCESSOR_NAME];
    int  myValue;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myID);
    MPI_Comm_size(MPI_COMM_WORLD, &numProcs);
    MPI_Get_processor_name(processorName, &nameLen);

    srand((unsigned)time(NULL));
    myValue = rand() % 30 + 1;

    cout << "myID: " << myID << " value: " << myValue << endl;
    MPI_Finalize();

    return 0;
}

Why does each process get the same value?

Thanks for your answers :)

I changed the line

 srand((unsigned)time(NULL));

to

 srand((unsigned)time(NULL) + myID*numProcs + nameLen);

and it now gives a different value to each process :)

Answer

This task is not trivial.

You are getting the same numbers because you initialize srand() with time(0). What time(0) does is return the current second (since the epoch). So if all the processes have synchronized clocks, they will all initialize with the same seed as long as they call srand() within the same second, which is quite probable. I have observed this even on large machines.

Solution 1. Initialize the random seed with task-local values.

What I did was to include in the seed computation some memory-usage figures from cat /proc/meminfo combined with bytes from /dev/random, which are more local to the physical machine than the clock. Note that this can still fail for N tasks on one machine, so if I recall correctly I also mixed in the task id. Anything that is local to the task will suffice, and combining several sources is a good idea. After all, this computation should be very short compared to the real work, and it is better to stay on the safe side.
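A minimal sketch of the mixing idea (the function name, the use of getpid(), and the hash constants are my illustrative choices, not from the answer): fold several task-local values into one seed so that two processes started in the same second still diverge.

```cpp
#include <ctime>
#include <cstdlib>
#include <unistd.h>  // getpid(), assuming a POSIX system

// Mix task-local values (clock, process id, MPI rank) into one seed.
// The multiplier is an arbitrary odd constant chosen only to spread the
// bits; any decent integer hash would do.
unsigned seed_for_rank(unsigned now, unsigned pid, unsigned rank)
{
    unsigned seed = now;
    seed = seed * 2654435761u + pid;   // multiplicative hash step
    seed = seed * 2654435761u + rank;
    return seed;
}

// Intended use inside each MPI process:
//   srand(seed_for_rank((unsigned)time(NULL), (unsigned)getpid(), myID));
//   myValue = rand() % 30 + 1;
```

Even with identical clocks and a shared machine, the rank term alone guarantees distinct seeds across the processes of one job.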

Solution 2. Compute the seeds as a pre-processing step.

You could also generate the random seeds on task 0 using your method and propagate them to all processes with a send-to-all. This might run into scaling trouble at very large scale (say 10^5 processes). Alternatively, you could use any other parameter-loading mechanism and just prepare the seeds as a pre-processing step, though that also involves some non-trivial work.
