/dev/random Extremely Slow?


Problem Description

Some background info: I was looking to run a script on a Red Hat server to read some data from /dev/random and use the Perl unpack() command to convert it to a hex string for later use (benchmarking database operations). I ran a few "head -1" calls on /dev/random and it seemed to be working fine, but after calling it a few times, it would just kind of hang. After a few minutes, it would finally output a small block of text and then finish.
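A minimal Perl sketch of the kind of read described above, assuming a 16-byte sample size (the post doesn't say how many bytes were actually needed); it reads from /dev/urandom so the call never blocks:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Read 16 raw bytes (an assumed sample size) and convert them to a hex
    # string with unpack(). /dev/urandom is used so the read never blocks;
    # /dev/random can stall for minutes once the kernel's entropy runs low.
    open my $fh, '<:raw', '/dev/urandom' or die "Cannot open /dev/urandom: $!";
    my $n = read $fh, my $bytes, 16;
    die "Short read from /dev/urandom" unless defined $n && $n == 16;
    close $fh;

    print unpack('H*', $bytes), "\n";   # e.g. "9f3a1c..." (32 hex characters)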

I switched to /dev/urandom (I really didn't want to; it's slower and I don't need that quality of randomness) and it worked fine for the first two or three calls, then it too began to hang. I was wondering if it was the "head" command that was bombing it, so I tried doing some simple I/O using Perl, and it too was hanging. As a last-ditch effort, I used the "dd" command to dump some data from it directly to a file instead of to the terminal. All I asked of it was 1 MB of data, but it took 3 minutes to get ~400 bytes before I killed it.

I checked the process list; CPU and memory were basically untouched. What exactly could cause /dev/random to crap out like this, and what can I do to prevent/fix it in the future?

Thanks for the help, guys! It seems that I had random and urandom mixed up. I've got the script up and running now. Looks like I learned something new today. :)

Recommended Answer

On most Linux systems, /dev/random is powered by actual entropy gathered from the environment. If your system isn't delivering a large amount of data from /dev/random, it likely means that you're not generating enough environmental randomness to feed it.
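One way to confirm this (not part of the original answer, just a common check on Linux) is to look at the kernel's current entropy estimate under /proc; a value near zero explains why reads from /dev/random stall. A short Perl snippet that prints it:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # /proc/sys/kernel/random/entropy_avail holds the kernel's current
    # entropy estimate in bits; values near zero mean /dev/random will block.
    open my $fh, '<', '/proc/sys/kernel/random/entropy_avail'
        or die "Cannot read entropy_avail: $!";
    chomp(my $bits = <$fh>);
    close $fh;
    print "Available entropy: $bits bits\n";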

I'm not sure why you think /dev/urandom is "slower" or of higher quality. It reuses an internal entropy pool to generate pseudorandomness, making it slightly lower quality, but it doesn't block. Generally, applications that don't require high-level or long-term cryptography can use /dev/urandom reliably.
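As a rough illustration (my own sketch, not from the original answer), pulling the same 1 MB that the dd attempt above asked for out of /dev/urandom should complete almost immediately, because it never waits for fresh entropy:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Read 1 MB from /dev/urandom and report how long it took; on a normal
    # system this finishes in well under a second, unlike /dev/random.
    open my $fh, '<:raw', '/dev/urandom' or die "Cannot open /dev/urandom: $!";
    my $want  = 1024 * 1024;               # 1 MB, matching the dd attempt above
    my $start = time;
    my $got   = read $fh, my $buf, $want;
    close $fh;
    printf "Read %d bytes in about %d second(s)\n", $got // 0, time - $start;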

Try waiting a little while and then reading from /dev/urandom again. It's possible that you've exhausted the internal entropy pool by reading so much from /dev/random, breaking both generators; allowing your system to create more entropy should replenish them.

See Wikipedia for more information about /dev/random and /dev/urandom.

