Downloading and unzipping a .zip file without writing to disk


Question

I have managed to get my first python script to work which downloads a list of .ZIP files from a URL and then proceeds to extract the ZIP files and writes them to disk.

I am now at a loss to achieve the next step.

My primary goal is to download and extract the zip file and pass the contents (CSV data) via a TCP stream. I would prefer not to actually write any of the zip or extracted files to disk if I could get away with it.

Here is my current script which works but unfortunately has to write the files to disk.

import urllib, urllister
import zipfile
import urllib2
import os
import time
import pickle

# check for extraction directories existence
if not os.path.isdir('downloaded'):
    os.makedirs('downloaded')

if not os.path.isdir('extracted'):
    os.makedirs('extracted')

# open logfile for downloaded data and save to local variable
if os.path.isfile('downloaded.pickle'):
    downloadedLog = pickle.load(open('downloaded.pickle', 'rb'))
else:
    downloadedLog = {'key':'value'}

# remove entries older than 5 days (to maintain speed)

# path of zip files
zipFileURL = "http://www.thewebserver.com/that/contains/a/directory/of/zip/files"

# retrieve list of URLs from the webservers
usock = urllib.urlopen(zipFileURL)
parser = urllister.URLLister()
parser.feed(usock.read())
usock.close()
parser.close()

# only parse urls
for url in parser.urls: 
    if "PUBLIC_P5MIN" in url:

        # download the file
        downloadURL = zipFileURL + url
        outputFilename = "downloaded/" + url

        # check if file already exists on disk
        if url in downloadedLog or os.path.isfile(outputFilename):
            print "Skipping " + downloadURL
            continue

        print "Downloading ",downloadURL
        response = urllib2.urlopen(downloadURL)
        zippedData = response.read()

        # save data to disk
        print "Saving to ",outputFilename
        output = open(outputFilename,'wb')
        output.write(zippedData)
        output.close()

        # extract the data
        zfobj = zipfile.ZipFile(outputFilename)
        for name in zfobj.namelist():
            uncompressed = zfobj.read(name)

            # save uncompressed data to disk
            outputFilename = "extracted/" + name
            print "Saving extracted file to ",outputFilename
            output = open(outputFilename,'wb')
            output.write(uncompressed)
            output.close()

            # send data via tcp stream

        # file successfully downloaded and extracted: record it in the local log and persist the log to disk
        downloadedLog[url] = time.time()
        pickle.dump(downloadedLog, open('downloaded.pickle', "wb"))
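
To fill in the empty "# send data via tcp stream" step, a minimal sketch using Python's standard socket module might look like the following; send_via_tcp, SERVER_HOST, and SERVER_PORT are hypothetical names introduced here for illustration, not part of the original script:

import socket

# hypothetical TCP endpoint for the CSV consumer
SERVER_HOST = 'localhost'
SERVER_PORT = 9000

def send_via_tcp(data, host=SERVER_HOST, port=SERVER_PORT):
    # open a connection, push all of the data, then close the socket
    sock = socket.create_connection((host, port))
    try:
        sock.sendall(data)
    finally:
        sock.close()

Calling send_via_tcp(uncompressed) inside the extraction loop would push each member's CSV bytes to a listening server; framing multiple files on one connection and error handling are left out of this sketch.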

Recommended Answer

My suggestion would be to use a StringIO object. They emulate files, but reside in memory. So you could do something like this:

# get_zip_data() returns a zip archive containing 'foo.txt', whose contents are 'hey, foo'

import zipfile
from StringIO import StringIO

zipdata = StringIO()
zipdata.write(get_zip_data())
myzipfile = zipfile.ZipFile(zipdata)
foofile = myzipfile.open('foo.txt')
print foofile.read()

# output: "hey, foo"

Or more simply (apologies to Vishal):

myzipfile = zipfile.ZipFile(StringIO(get_zip_data()))
for name in myzipfile.namelist():
    [ ... ]

In Python 3 use BytesIO instead of StringIO:

import zipfile
from io import BytesIO

filebytes = BytesIO(get_zip_data())
myzipfile = zipfile.ZipFile(filebytes)
for name in myzipfile.namelist():
    [ ... ]
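
Combining these pieces for the original goal (no disk writes at all), a Python 3 sketch could look like the following; ZIP_URL and TCP_ENDPOINT are hypothetical placeholders and error handling is omitted:

import socket
import urllib.request
import zipfile
from io import BytesIO

ZIP_URL = 'http://www.example.com/some/archive.zip'   # hypothetical URL
TCP_ENDPOINT = ('localhost', 9000)                     # hypothetical host/port

# download the zip archive straight into memory
zipped_bytes = urllib.request.urlopen(ZIP_URL).read()

# open the in-memory archive and stream each member over TCP
with zipfile.ZipFile(BytesIO(zipped_bytes)) as archive:
    with socket.create_connection(TCP_ENDPOINT) as sock:
        for name in archive.namelist():
            sock.sendall(archive.read(name))   # CSV bytes, never written to disk

Each archive member is read straight out of the in-memory ZipFile and pushed over the socket, so neither the archive nor the extracted CSV data ever touches the disk.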
