SQLite disk I/O error with large DB file

Question

In my C# program, I am reading in about 350GB worth of zipped CSV files and storing the data in a SQLite v3 database. I'm using System.Data.SQLite from NuGet.

My database is about 147GB at the moment, and I am getting an error when trying to run the next INSERT query:

(778) os_win.c:41557: (665) winWrite2(D:\System.db) - The requested operation could not be completed due to a file system limitation.
Error: near line 1: disk I/O error.

The drive is 1.81TiB and has 1.37TiB free. The volume is NTFS. The DB is 146650432KiB. ChkDsk reports everything is OK, and the drive is otherwise working perfectly.

This happens for any INSERT from my program as well as from the DB Browser application.

(The database would probably shrink substantially if I ran a VACUUM operation, since I have run a lot of INSERT statements)

I estimate there are approximately 3.5 billion rows in 12 tables. From what I've read, there should be no issues with a SQLite database of this size.

Any ideas why this might be happening and how to work around it? Ideally, it would be great if I didn't have to start the whole import process from scratch as it's taken a few days to get to this point, which is only about 20% complete.

Answer

Windows error code 665 (ERROR_FILE_SYSTEM_LIMITATION) means exactly what the message says: the file system has reached an internal limit.

The limit is not on the file size itself, but on the number of internal structures NTFS needs to track the extents of a heavily fragmented file. (The exact same problem happens with Microsoft's own SQL Server.)
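
Not part of the original answer, but since the import runs for days it may be worth catching this condition in code so the job stops cleanly instead of retrying failing INSERTs. A minimal sketch, assuming System.Data.SQLite; the connection string and table name are placeholders:

using System;
using System.Data.SQLite;

class ImportStep
{
    static void Main()
    {
        using (var conn = new SQLiteConnection(@"Data Source=D:\System.db"))
        {
            conn.Open();
            try
            {
                // Placeholder INSERT standing in for the real import statement.
                using (var cmd = new SQLiteCommand("INSERT INTO data VALUES (1)", conn))
                {
                    cmd.ExecuteNonQuery();
                }
            }
            catch (SQLiteException ex) when (ex.ResultCode == SQLiteErrorCode.IoErr)
            {
                // SQLITE_IOERR: the OS-level write failed (here, NTFS error 665).
                Console.Error.WriteLine("Disk I/O error; defragment or copy the DB file: " + ex.Message);
            }
        }
    }
}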

Microsoft has a hotfix that allows you to reformat the drive so that NTFS reserves more space for these structures (large file record segments, enabled with format's /L switch). But an easier way to fix this problem now would be to run a defragmenter, or to just make a new copy of the file (if you have enough free space).
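
A rough illustration of the copy approach in C# (paths are examples, not from the question; every connection to the database must be closed first):

using System;
using System.IO;

class CopyToDefragment
{
    static void Main()
    {
        const string db     = @"D:\System.db";
        const string fresh  = @"D:\System.new.db";
        const string backup = @"D:\System.old.db";

        // A fresh copy lets NTFS allocate the data in far fewer fragments.
        File.Copy(db, fresh);
        File.Move(db, backup);    // keep the fragmented original until verified
        File.Move(fresh, db);     // swap the contiguous copy into place
        Console.WriteLine("Swap complete; delete " + backup + " after verifying the DB.");
    }
}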

If possible, try to avoid fragmenting the file, i.e., don't write other data to the same file system while filling the database. Alternatively, to avoid smaller fragments, extend the size of the database file by a large amount by temporarily inserting a large row:

CREATE TABLE t(x);
INSERT INTO t VALUES(zeroblob(500*1024*1024));  -- 500 MB
DROP TABLE t;
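
And a minimal sketch of driving the same pre-extension trick from C# with System.Data.SQLite (the path and the 500 MB step are placeholders):

using System.Data.SQLite;

class Preallocate
{
    static void Main()
    {
        using (var conn = new SQLiteConnection(@"Data Source=D:\System.db"))
        {
            conn.Open();
            foreach (var sql in new[]
            {
                "CREATE TABLE t(x)",
                "INSERT INTO t VALUES (zeroblob(500*1024*1024))",  // 500 MB of zeros
                "DROP TABLE t"
            })
            {
                using (var cmd = new SQLiteCommand(sql, conn))
                {
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }
}

Dropping the table puts the zero-filled pages on SQLite's freelist rather than shrinking the file, so the file keeps its extended size and later INSERTs reuse that already-allocated space (as long as you don't run VACUUM).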
