Unique random number generation in an integer array


Problem description


    Possible Duplicate:
    Unique random numbers in O(1)?

    How do I fill an integer array with unique values (no duplicates) in C?

    int vektor[10];
    int i;

    for (i = 0; i < 10; i++) {
        vektor[i] = rand() % 100 + 1;
    }

    // No uniqueness here
    

    Solution

    There are several ways to solve your problem, each has its own advantages and disadvantages.

    First I'd like to note that you already got quite a few responses that do the following: they generate a random number, then check somehow whether it was already used in the array, and if it was already used, they just generate another number until they find an unused one. This is a naive and, truth be told, seriously flawed approach. The problem is with the cyclic trial-and-error nature of the number generation ("if already used, try again"). If the numeric range (say, [1..N]) is close to the length of the desired array (say, M), then towards the end the algorithm might spend a huge amount of time trying to find the next number. If the random number generator is even a little bit broken (say, never generates some number, or does so very rarely), then with N == M the algorithm is guaranteed to loop forever (or for a very long time). Generally this trial-and-error approach is a useless one, or a flawed one at best.
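    For concreteness, here is a minimal sketch of the trial-and-error approach being criticized (the function name is mine, not taken from any of the answers). It does produce unique values when M is much smaller than N, but each new number may take unboundedly many retries as the array fills up, which is exactly the flaw described above.

```c
#include <stdlib.h>

/* Naive trial-and-error fill: keep drawing random numbers from [1..n]
   until an unused one turns up. Correct output, but the retry loop has
   no upper bound on its running time as count approaches n. */
void fill_unique_naive(int *out, int m, int n)
{
    int count = 0;
    while (count < m) {
        int candidate = rand() % n + 1;
        int used = 0;
        for (int i = 0; i < count; ++i) {
            if (out[i] == candidate) {
                used = 1; /* already in the array: retry */
                break;
            }
        }
        if (!used)
            out[count++] = candidate;
    }
}
```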

    Another approach already presented here is generating a random permutation in an array of size N. The idea of random permutation is a promising one, but doing it on an array of size N (when M << N) will certainly generate more heat than light, speaking figuratively.

    Good solutions to this problem can be found, for example, in Bentley's "Programming Pearls" (and some of them are taken from Knuth).


    • The Knuth algorithm. This is a very simple algorithm with a complexity of O(N) (i.e. the size of the numeric range), meaning that it is most usable when M is close to N. However, this algorithm doesn't require any extra memory besides your vektor array itself, as opposed to the permutation-based variants already offered here: it takes O(M) memory, not O(N). That makes it a viable algorithm even for M << N cases.

    The algorithm works as follows: iterate through all numbers from 1 to N and select the current number with probability rm / rn, where rm is how many numbers we still need to find, and rn is how many numbers we still have to iterate through. Here's a possible implementation for your case:

    #define M 10
    #define N 100
    
    int in, im;
    
    im = 0;
    
    for (in = 0; in < N && im < M; ++in) {
      int rn = N - in;
      int rm = M - im;
      if (rand() % rn < rm) {
        /* Take it */
        vektor[im++] = in + 1; /* +1 since your range begins from 1 */
      }
    }
    
    assert(im == M);
    

    After this cycle we get an array vektor filled with randomly chosen numbers, in ascending order. The "ascending order" bit is what we don't need here. So, in order to "fix" that, we just make a random permutation of the elements of vektor and we are done. Note that this is an O(M) permutation requiring no extra memory. (I leave out the implementation of the permutation algorithm; plenty of links were given here already.)
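    As a sketch of that omitted step, a standard Fisher-Yates shuffle gives the O(M) in-place permutation with no extra memory (it uses rand() % k, with the same slight modulo bias the snippets above already accept):

```c
#include <stdlib.h>

/* Fisher-Yates shuffle: in-place uniform random permutation of the
   first m elements of a, O(m) time, no extra memory. */
void shuffle(int *a, int m)
{
    for (int i = m - 1; i > 0; --i) {
        int j = rand() % (i + 1); /* pick j uniformly from [0..i] */
        int tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
}
```

    Calling shuffle(vektor, M) after the selection loop above removes the ascending order.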

    If you look carefully at the permutation-based algorithms proposed here that operate on an array of length N, you'll see that most of them are pretty much this very same Knuth algorithm, but re-formulated for M == N. In that case the above selection cycle will choose each and every number in the [1..N] range with probability 1, effectively turning into initialization of an N-array with the numbers 1 to N. Taking this into account, I think it becomes rather obvious that running this algorithm for M == N and then truncating the result (possibly discarding most of it) makes much less sense than just running this algorithm in its original form for the original value of M and getting the result right away, without any truncation.


    • The Floyd algorithm (see Bentley's "Programming Pearls" mentioned above). This approach has a complexity of about O(M) (depending on the search structure used), so it is better suited when M << N. This approach keeps track of already generated random numbers, so it requires extra memory. However, the beauty of it is that it does not make any of those abominable trial-and-error iterations trying to find an unused random number. This algorithm is guaranteed to generate one unique random number after each call to the random number generator.

    Here's a possible implementation of it for your case. (There are different ways to keep track of already used numbers. I'll just use an array of flags, assuming that N is not prohibitively large.)

    #define M 10
    #define N 100    
    
    unsigned char is_used[N] = { 0 }; /* flags */
    int in, im;
    
    im = 0;
    
    for (in = N - M; in < N && im < M; ++in) {
      int r = rand() % (in + 1); /* generate a random number 'r' */
    
      if (is_used[r])
        /* we already have 'r' */
        r = in; /* use 'in' instead of the generated number */
    
      assert(!is_used[r]);
      vektor[im++] = r + 1; /* +1 since your range begins from 1 */
      is_used[r] = 1;
    }
    
    assert(im == M);
    

    Why the above works is not immediately obvious, but it does: one can show by induction that after the iteration with a given value of in, the numbers chosen so far form a uniformly distributed subset of size im of the range [1..in+1]. Exactly M numbers from the [1..N] range will be picked, with each M-subset equally likely.

    Note that for large N you can use a search-based structure to store the "already used" numbers, thus getting a nice O(M log M) algorithm with an O(M) memory requirement.
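    A minimal sketch of that variant (the names are mine): Floyd's selection as above, but tracking used numbers in a sorted O(M) array searched by binary search instead of the O(N) flag array. Note that the memmove insertion here is O(M) per step, so a balanced tree or hash set would be needed for a true O(M log M) bound.

```c
#include <stdlib.h>
#include <string.h>

/* Index where x is, or would be inserted, in sorted a[0..count). */
static int lower_bound(const int *a, int count, int x)
{
    int lo = 0, hi = count;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < x) lo = mid + 1;
        else hi = mid;
    }
    return lo;
}

/* Floyd's sampling of m unique numbers from [1..n] using O(m) memory:
   the "already used" set is a sorted array probed by binary search. */
void floyd_sample(int *out, int m, int n)
{
    int *used = malloc(m * sizeof *used); /* sorted used numbers */
    int count = 0;

    for (int in = n - m; in < n; ++in) {
        int r = rand() % (in + 1) + 1;        /* candidate in [1..in+1] */
        int pos = lower_bound(used, count, r);
        if (pos < count && used[pos] == r) {  /* r already taken... */
            r = in + 1;                       /* ...take in+1, never used yet */
            pos = count;                      /* sorts after all used numbers */
        }
        /* insert r into the sorted used array */
        memmove(used + pos + 1, used + pos,
                (size_t)(count - pos) * sizeof *used);
        used[pos] = r;
        out[count] = r;
        ++count;
    }
    free(used);
}
```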

    (There's one thing about this algorithm though: while the resultant array will not be sorted, a certain "influence" of the original 1..N ordering will still be present in the result. For example, it is obvious that the number N, if selected, can only be the very last member of the resultant array. If this "contamination" of the result by the unintended ordering is not acceptable, the resultant vektor array can be random-shuffled, just like in the Knuth algorithm.)


    Note the very critical point observed in the design of these two algorithms: they never loop, trying to find a new unused random number. Any algorithm that makes trial-and-error iterations with random numbers is flawed from a practical point of view. Also, the memory consumption of these algorithms is tied to M, not to N.

    To the OP I would recommend Floyd's algorithm, since in his application M seems to be considerably less than N, and it doesn't (or may not) require the extra pass for permutation. However, for such small values of N the difference might be negligible.

