/dev/random Extremely Slow?


Problem Description

Some background info: I was looking to run a script on a Red Hat server to read some data from /dev/random and use the Perl unpack() command to convert it to a hex string for usage later on (benchmarking database operations). I ran a few "head -1" on /dev/random and it seemed to be working out fine, but after calling it a few times, it would just kinda hang. After a few minutes, it would finally output a small block of text, then finish.
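
For reference, here is a minimal sketch of that kind of read-and-unpack step, assuming /dev/urandom as the source and a 16-byte sample (the device, byte count, and variable names are illustrative, not the original script):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Read 16 raw bytes from the kernel's random device and
    # convert them to a hex string with unpack('H*', ...).
    open my $fh, '<:raw', '/dev/urandom' or die "open: $!";
    read($fh, my $bytes, 16) == 16 or die "short read: $!";
    close $fh;

    my $hex = unpack 'H*', $bytes;   # e.g. "9f3a0c1d..."
    print "$hex\n";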

I switched to /dev/urandom (I really didn't want to; it's slower and I don't need that quality of randomness) and it worked fine for the first two or three calls, then it too began to hang. I was wondering if it was the "head" command that was bombing it, so I tried doing some simple I/O using Perl, and it too was hanging. As a last-ditch effort, I used the "dd" command to dump some data out of it directly to a file instead of to the terminal. All I asked of it was 1 MB of data, but it took 3 minutes to get ~400 bytes before I killed it.

I checked the process list; CPU and memory were basically untouched. What exactly could cause /dev/random to crap out like this, and what can I do to prevent/fix it in the future?

Edit: Thanks for the help, guys! It seems that I had random and urandom mixed up. I've got the script up and running now. Looks like I learned something new today. :)

Solution

On most Linux systems, /dev/random is fed by actual entropy gathered from the environment. If your system isn't delivering a large amount of data from /dev/random, it likely means that you're not generating enough environmental randomness to feed it.
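
One way to see this directly (a quick check that isn't part of the original answer; the path is the standard procfs interface): the kernel reports its entropy estimate in /proc/sys/kernel/random/entropy_avail:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Print the kernel's entropy estimate (in bits). When this
    # value sits near zero, reads from /dev/random will block.
    open my $fh, '<', '/proc/sys/kernel/random/entropy_avail'
        or die "open: $!";
    chomp(my $bits = <$fh>);
    close $fh;
    print "entropy_avail: $bits bits\n";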

I'm not sure why you think /dev/urandom is "slower" or higher quality. It reuses an internal entropy pool to generate pseudorandomness - making it slightly lower quality - but it doesn't block. Generally, applications that don't require high-level or long-term cryptography can use /dev/urandom reliably.

Try waiting a little while and then reading from /dev/urandom again. It's possible that you've exhausted the internal entropy pool by reading so much from /dev/random, stalling both generators - giving your system time to create more entropy should replenish them.
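
As a rough sketch of that wait-and-retry idea (the entropy_bits helper and the 256-bit threshold are assumptions for illustration, not from the answer):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical helper: report the kernel's current entropy
    # estimate in bits.
    sub entropy_bits {
        open my $fh, '<', '/proc/sys/kernel/random/entropy_avail'
            or die "open: $!";
        chomp(my $bits = <$fh>);
        close $fh;
        return $bits;
    }

    # Wait until the pool recovers before attempting a blocking
    # read from /dev/random; the 256-bit threshold is a guess.
    sleep 1 while entropy_bits() < 256;

    open my $fh, '<:raw', '/dev/random' or die "open: $!";
    read($fh, my $bytes, 16) == 16 or die "short read: $!";
    close $fh;
    print unpack('H*', $bytes), "\n";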

See Wikipedia for more info about /dev/random and /dev/urandom.
