Compressing a Series of JSON Objects While Maintaining Serial Reading?


Problem Description

I have a bunch of JSON objects that I need to compress because they're eating too much disk space: approximately 20 GB worth for a few million of them.

Ideally, what I'd like to do is compress each object individually and then, when I need to read them, iteratively load and decompress each one. I tried doing this by creating a text file where each line is a JSON object compressed via zlib, but this fails with a

decompress error due to a truncated stream

which I believe is due to the compressed strings containing newlines.
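To make that failure mode concrete, here is a minimal sketch with made-up sample data (purely an illustration, not part of the original question): zlib output is raw binary, so the newline byte 0x0A can appear anywhere inside a compressed record, and splitting such a file on newlines hands the decompressor a truncated stream.

import json
import zlib

# Hypothetical sample objects, each compressed on its own with zlib
records = [
    zlib.compress(json.dumps({"id": i, "data": list(range(i, i + 200))}).encode('utf-8'))
    for i in range(1000)
]

# Count how many compressed records happen to contain the newline byte;
# any such record, written as a "line" and read back by splitting on b'\n',
# comes back truncated, and zlib.decompress() then reports exactly the kind
# of truncated-stream error described above.
contaminated = sum(b'\n' in r for r in records)
print(f"{contaminated} of {len(records)} compressed records contain a newline byte")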

Does anyone know of a good method to do this?

Recommended Answer

Just use a gzip.GzipFile() object and treat it like a regular file; write JSON objects line by line, and read them back line by line.

The object takes care of compression transparently, and will buffer reads, decompressing chunks as needed.

import gzip
import json

# writing: one JSON object per line; GzipFile compresses transparently
with gzip.GzipFile(jsonfilename, 'w') as outfile:
    for obj in objects:
        # GzipFile is a binary file object, so encode the JSON text to bytes
        outfile.write((json.dumps(obj) + '\n').encode('utf-8'))

# reading: iterate line by line, parsing one JSON object per line
with gzip.GzipFile(jsonfilename, 'r') as infile:
    for line in infile:
        obj = json.loads(line)
        # process obj

This has the added advantage that the compression algorithm can exploit repetition across objects to improve the compression ratio.
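As a rough illustration of that point (the numbers depend entirely on the data; the sample objects below are made up for this sketch), compressing everything as one gzip stream typically comes out much smaller than summing the sizes of per-object zlib compression:

import gzip
import json
import zlib

# Made-up sample data: many structurally similar objects (assumption for illustration)
objects = [{"id": i, "status": "ok", "payload": "x" * 50} for i in range(10000)]

# Compress each object in isolation: the compressor never sees cross-object repetition
per_object = sum(len(zlib.compress(json.dumps(o).encode('utf-8'))) for o in objects)

# Compress everything as one stream: keys and values repeated across objects shrink well
single_stream = len(gzip.compress('\n'.join(json.dumps(o) for o in objects).encode('utf-8')))

print(per_object, single_stream)  # the single stream should be markedly smaller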
