QFile seek performance


Problem description

It appears that QFile, when working with a regular file (not a special Linux I/O device file), is random access, meaning that a seek operation has constant-time complexity, O(1).

However, I haven't been able to confirm this. In general, when jumping to a specific position in a file (for writing or reading), do std::fstream and QFile provide constant-time complexity?
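For context, the kind of seek in question looks like the following minimal sketch; the file name "data.bin" and the offset are placeholders, not part of the original question.

#include <QFile>
#include <fstream>

int main()
{
    // QFile: jump to an absolute byte offset before reading.
    QFile qf("data.bin");                        // placeholder file name
    if (qf.open(QIODevice::ReadOnly)) {
        qf.seek(1024 * 1024);                    // seek to the 1 MiB mark
        char buf[16];
        qf.read(buf, sizeof(buf));
    }

    // std::fstream: the equivalent absolute seek with seekg().
    std::ifstream fs("data.bin", std::ios::binary);
    if (fs) {
        fs.seekg(1024 * 1024, std::ios::beg);
        char buf[16];
        fs.read(buf, sizeof(buf));
    }
    return 0;
}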

Solution

The short answer is "yes, for practical purposes". The long answer is... it's complicated.

Seeking on a file stream ultimately calls lseek() on the underlying file descriptor, whose performance depends on the kernel.
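To illustrate the layer underneath the stream classes, here is a minimal POSIX sketch that performs the same jump directly with lseek() on a file descriptor (again with a placeholder file name):

#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    // Open a regular file and move the file offset with lseek(),
    // the system call the stream-level seek goes through on Linux.
    int fd = open("data.bin", O_RDONLY);            // placeholder file name
    if (fd < 0) {
        std::perror("open");
        return 1;
    }

    off_t pos = lseek(fd, 1024 * 1024, SEEK_SET);   // absolute seek
    if (pos == (off_t)-1)
        std::perror("lseek");

    close(fd);
    return 0;
}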

The running time depends on which file system you are using and how large the files are. As files get larger, a random seek has to chase more levels of "indirect" indexing blocks. But even for files up to 2^64 bytes, the number of levels is just a handful.
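To put a rough number on that (assuming, purely for illustration, 4 KiB blocks each holding 512 eight-byte block pointers): every extra level of indirection multiplies the addressable range by 512, so five levels already reach 4 KiB x 512^5 = 2^57 bytes, and a sixth covers the full 2^64-byte range. The exact figures depend on the file system's block size and pointer width, but the depth grows only logarithmically.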

So in theory, seeking is probably O(log n); in practice, it is essentially constant on a modern file system.
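If you want to check this empirically, a rough micro-benchmark along these lines can be used. "large.bin" is a placeholder for any large existing file; note that the page cache will dominate the numbers, so treat the result as indicative only.

#include <QFile>
#include <QElapsedTimer>
#include <QRandomGenerator>
#include <QDebug>

int main()
{
    QFile f("large.bin");                       // placeholder: any large file
    if (!f.open(QIODevice::ReadOnly | QIODevice::Unbuffered))
        return 1;

    const quint64 size = quint64(f.size());
    if (size == 0)
        return 1;

    QElapsedTimer timer;
    timer.start();

    char byte;
    for (int i = 0; i < 100000; ++i) {
        // Pick a random offset, then seek and read one byte so the
        // position change is actually exercised.
        const qint64 pos = qint64(QRandomGenerator::global()->generate64() % size);
        f.seek(pos);
        f.read(&byte, 1);
    }

    qDebug() << "100000 random seek+read pairs took" << timer.elapsed() << "ms";
    return 0;
}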


