download and split large file into 100 MB chunks in blob storage


Question

I have a 2GB file in blob storage and am building a console application that will download this file into a desktop. Requirement is to split into 100MB chunks and append a number into the filename. I do not need to re-combine those files again. What I need is only the chunks of files.

I am currently using the "Download Blob part from Azure" approach.

But I cannot figure out how to stop downloading when the file size is already 100MB and create a new one.

Any help would be greatly appreciated.

UPDATE: Here is my code

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
var blobClient = account.CreateCloudBlobClient();
var container = blobClient.GetContainerReference(containerName);
var file = uri;
var blob = container.GetBlockBlobReference(file);
//First fetch the size of the blob. We use this to create an empty file with size = blob's size
blob.FetchAttributes();
var blobSize = blob.Properties.Length;
long blockSize = (1 * 1024 * 1024); //1 MB chunk
blockSize = Math.Min(blobSize, blockSize);
//Create an empty file of blob size
using (FileStream fs = new FileStream(file, FileMode.Create)) //Create empty file.
{
    fs.SetLength(blobSize); //Set its size
}
var blobRequestOptions = new BlobRequestOptions
{
    RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(5), 3),
    MaximumExecutionTime = TimeSpan.FromMinutes(60),
    ServerTimeout = TimeSpan.FromMinutes(60)
};
long startPosition = 0;
long currentPointer = 0;
long bytesRemaining = blobSize;
do
{
    var bytesToFetch = Math.Min(blockSize, bytesRemaining);
    using (MemoryStream ms = new MemoryStream())
    {
        //Download range (by default 1 MB)
        blob.DownloadRangeToStream(ms, currentPointer, bytesToFetch, null, blobRequestOptions);
        var contents = ms.ToArray();
        using (var fs = new FileStream(file, FileMode.Open)) //Open that file
        {
            fs.Position = currentPointer; //Seek to the current write position
            fs.Write(contents, 0, contents.Length); //Write this range's bytes
        }
        startPosition += blockSize;
        currentPointer += contents.Length; //Update pointer
        bytesRemaining -= contents.Length; //Update bytes remaining

        Console.WriteLine(fileName + dateTimeStamp + ".csv " + (startPosition / 1024 / 1024) + "/" + (blob.Properties.Length / 1024 / 1024) + " MB downloaded...");
    }
}
while (bytesRemaining > 0);
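The rollover itself (stop at 100 MB and start a new file) can be done by walking the blob in 100 MB ranges and giving each range its own numbered output file, instead of writing everything into one pre-allocated file. A minimal sketch; the `RangeSplitter` class below is a hypothetical helper (not part of the Azure SDK), and the usage comment assumes the same `blob` and `blobRequestOptions` objects as in the code above:

```csharp
using System;
using System.Collections.Generic;

class RangeSplitter
{
    // Split a total size into (offset, length) pairs of at most chunkSize bytes.
    // The final pair is shorter when totalSize is not a multiple of chunkSize.
    public static List<Tuple<long, long>> ComputeRanges(long totalSize, long chunkSize)
    {
        var ranges = new List<Tuple<long, long>>();
        for (long offset = 0; offset < totalSize; offset += chunkSize)
        {
            ranges.Add(Tuple.Create(offset, Math.Min(chunkSize, totalSize - offset)));
        }
        return ranges;
    }
}

// Sketch of usage against the blob from the code above:
//   int index = 1;
//   foreach (var range in RangeSplitter.ComputeRanges(blob.Properties.Length, 100 * 1024 * 1024))
//   {
//       // Each 100 MB range lands in its own numbered file: data_1.csv, data_2.csv, ...
//       using (var fs = new FileStream($"data_{index++}.csv", FileMode.Create))
//           blob.DownloadRangeToStream(fs, range.Item1, range.Item2, null, blobRequestOptions);
//   }
```

Note this splits on raw byte boundaries, so a CSV record can straddle two output files; the record-aware approach in the answer below avoids that.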

Answer

Per my understanding, you could break your blob file into your expected pieces (100MB), then leverage CloudBlockBlob.DownloadRangeToStream to download each of your chunks of files. Here is my code snippet, you could refer to it:

ParallelDownloadBlob

private static void ParallelDownloadBlob(Stream outPutStream, CloudBlockBlob blob, long startRange, long endRange)
{
    blob.FetchAttributes();
    int bufferLength = 1 * 1024 * 1024;//1 MB chunk for download
    long blobRemainingLength = endRange - startRange;
    Queue<KeyValuePair<long, long>> queues = new Queue<KeyValuePair<long, long>>();
    long offset = startRange;
    while (blobRemainingLength > 0)
    {
        long chunkLength = (long)Math.Min(bufferLength, blobRemainingLength);
        queues.Enqueue(new KeyValuePair<long, long>(offset, chunkLength));
        offset += chunkLength;
        blobRemainingLength -= chunkLength;
    }
    Parallel.ForEach(queues,
        new ParallelOptions()
        {
            MaxDegreeOfParallelism = 5
        }, (queue) =>
        {
            using (var ms = new MemoryStream())
            {
                blob.DownloadRangeToStream(ms, queue.Key, queue.Value);
                lock (outPutStream)
                {
                    outPutStream.Position = queue.Key - startRange;
                    var bytes = ms.ToArray();
                    outPutStream.Write(bytes, 0, bytes.Length);
                }
            }
        });
}

Main program

var container = storageAccount.CreateCloudBlobClient().GetContainerReference(defaultContainerName);
var blob = container.GetBlockBlobReference("code.txt");
blob.FetchAttributes();
long blobTotalLength = blob.Properties.Length;
long chunkLength = 10 * 1024; //divide blob file into each file with 10KB in size
for (long i = 0; i < blobTotalLength; i += chunkLength) //'<' avoids creating a final empty file when i == blobTotalLength
{

    long startRange = i;
    long endRange = (i + chunkLength) > blobTotalLength ? blobTotalLength : (i + chunkLength);

    using (var fs = new FileStream(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, $"resources\\code_[{startRange}]_[{endRange}].txt"), FileMode.Create))
    {
        Console.WriteLine($"\nParallelDownloadBlob from range [{startRange}] to [{endRange}] start...");
        Stopwatch sp = new Stopwatch();
        sp.Start();

        ParallelDownloadBlob(fs, blob, startRange, endRange);
        sp.Stop();
        Console.WriteLine($"download done, time cost:{sp.ElapsedMilliseconds / 1000.0}s");
    }
}

Result

UPDATE:

Based on your requirement, I recommend that you could download your blob into a single file, then leverage LumenWorks.Framework.IO to read your large file records line by line, then check the byte size you have read and save into a new csv file with the size up to 100MB. Here is a code snippet, you could refer to it:

using (CsvReader csv = new CsvReader(new StreamReader("data.csv"), true))
{
    int fieldCount = csv.FieldCount;
    string[] headers = csv.GetFieldHeaders();
    while (csv.ReadNextRecord())
    {
        for (int i = 0; i < fieldCount; i++)
            Console.Write(string.Format("{0} = {1};",
                          headers[i],
                          csv[i] == null ? "MISSING" : csv[i]));
        //TODO: 
        //1.Read the current record, check the total bytes you have read;
        //2.Create a new csv file if the current total bytes up to 100MB, then save the current record to the current CSV file.
    }
}
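The two TODO steps above (track the bytes read so far, and roll over to a new CSV file once the running total reaches 100 MB) can be isolated into a small helper. The `ChunkTracker` class below is hypothetical, independent of LumenWorks or any CSV library, and only sketches the byte-accounting decision:

```csharp
using System;

// Hypothetical helper: accumulates record byte counts and reports when the
// next record should open a new output file (chunk).
class ChunkTracker
{
    private readonly long _chunkSize;
    private long _bytesInCurrentChunk;

    public ChunkTracker(long chunkSize)
    {
        _chunkSize = chunkSize;
    }

    // Returns true when adding recordByteCount would overflow the current
    // chunk, meaning the caller should close the current CSV file and start
    // a new one; the record then counts toward the new chunk.
    public bool ShouldStartNewChunk(long recordByteCount)
    {
        if (_bytesInCurrentChunk > 0 && _bytesInCurrentChunk + recordByteCount > _chunkSize)
        {
            _bytesInCurrentChunk = recordByteCount; // this record opens the new chunk
            return true;
        }
        _bytesInCurrentChunk += recordByteCount;
        return false;
    }
}
```

Inside the `while (csv.ReadNextRecord())` loop you would estimate each record's size (for example with `Encoding.UTF8.GetByteCount` over its fields, plus delimiters and newline) and call `ShouldStartNewChunk` with a 100 MB chunk size to decide when to open the next numbered CSV file.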

Additionally, you could refer to A Fast CSV Reader and CsvHelper for more details.

UPDATE 2:

Code sample for breaking a large CSV file into smaller CSV files of a fixed byte size; I used CsvHelper 2.16.3 for the following code snippet, you could refer to it:

string[] headers = new string[0];
using (var sr = new StreamReader(@"C:\Users\v-brucch\Desktop\BlobHourMetrics.csv")) //83.9KB
{
    using (CsvHelper.CsvReader csvReader = new CsvHelper.CsvReader(sr,
        new CsvHelper.Configuration.CsvConfiguration()
        {
            Delimiter = ",",
            Encoding = Encoding.UTF8
        }))
    {
        //check header
        if (csvReader.ReadHeader())
        {
            headers = csvReader.FieldHeaders;
        }

        TextWriter writer = null;
        CsvWriter csvWriter = null;
        long readBytesCount = 0;
        long chunkSize = 30 * 1024; //divide CSV file into each CSV file with byte size up to 30KB

        while (csvReader.Read())
        {
            var curRecord = csvReader.CurrentRecord;
            var curRecordByteCount = curRecord.Sum(r => Encoding.UTF8.GetByteCount(r)) + headers.Count() + 1;
            readBytesCount += curRecordByteCount;

            //check bytes you have read
            if (writer == null || readBytesCount > chunkSize)
            {
                readBytesCount = curRecordByteCount + headers.Sum(h => Encoding.UTF8.GetByteCount(h)) + headers.Count() + 1;
                if (writer != null)
                {
                    writer.Flush();
                    writer.Close();
                }
                string fileName = $"BlobHourMetrics_{Guid.NewGuid()}.csv";
                writer = new StreamWriter(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, fileName), true);
                csvWriter = new CsvWriter(writer);
                csvWriter.Configuration.Encoding = Encoding.UTF8;
                //output header field
                foreach (var header in headers)
                {
                    csvWriter.WriteField(header);
                }
                csvWriter.NextRecord();
            }
            //output record field
            foreach (var field in curRecord)
            {
                csvWriter.WriteField(field);
            }
            csvWriter.NextRecord();
        }
        if (writer != null)
        {
            writer.Flush();
            writer.Close();
        }
    }
}

Result
