Huge JavaScript HTML5 blob (from large ArrayBuffers) to build a giant file on the client side


Problem Description

I'm writing a web-browser app (client side) that downloads a huge number of chunks from many locations and joins them to build a blob. That blob is then saved to the local filesystem as an ordinary file. I'm doing this by means of ArrayBuffer objects and a blob.

var blob = new Blob([ArrayBuffer1, ArrayBuffer2, ArrayBuffer3, ...], {type: mimetype})

This works fine for small and medium-sized files (up to roughly 700 MB), but the browser crashes with larger files. I understand that RAM has its limits. The point is that I need to build the blob in order to generate a file, yet I want to let users download files much larger than that (imagine, for instance, files of about 8 GB).

How can I build the blob while avoiding the size limit? LocalStorage is even more limited than RAM, so I don't know what to use or how to do it.

Solution

It looks like you are just concatenating arrays of data together. Why not append the array buffers one at a time into a giant file instead? You'd iterate over the ArrayBuffers, seeking to the end of the FileWriter before each append. And to read back only portions of your giant blob, you can take a slice of it, which avoids crashing the browser.

Appending Function

// Append `data` to the end of an existing file in the sandboxed filesystem.
function appendToFile(fPath, data, callback) {
    fs.root.getFile(fPath, {
        create: false
    }, function(fileEntry) {
        fileEntry.createWriter(function(writer) {
            writer.onwriteend = function(e) {
                callback();
            };
            // Seek to the end so the write appends instead of overwriting.
            writer.seek(writer.length);
            var blob = new Blob([data]);
            writer.write(blob);
        }, errorHandler);
    }, errorHandler);
}
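A hypothetical driver for the function above (this helper is my sketch, not part of the original answer): the FileWriter handles one write at a time, so downloaded chunks have to be appended strictly in sequence, each append starting only after the previous one's callback fires.

```javascript
// Append `chunks` one at a time through a callback-style append function.
// `appendFn(chunk, done)` is assumed to invoke `done()` once the chunk has
// been written (e.g. a wrapper around appendToFile for a fixed file path).
function appendChunksSequentially(chunks, appendFn, onComplete) {
    var i = 0;
    function next() {
        if (i >= chunks.length) {
            onComplete();
            return;
        }
        appendFn(chunks[i++], next);
    }
    next();
}
```

With the real API this might be called as `appendChunksSequentially(buffers, function(chunk, done) { appendToFile('giant.bin', chunk, done); }, onAllWritten)`, where `'giant.bin'` and `onAllWritten` are illustrative names.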

Again, to avoid reading the entire blob back into memory, read only portions/chunks of your giant blob when generating the file you mention.

Partial Read Function

// Read bytes [start, stop) of a file back as an ArrayBuffer.
function getPartialBlobFromFile(fPath, start, stop, callback) {
    fs.root.getFile(fPath, {
        create: false
    }, function(fileEntry) {
        fileEntry.file(function(file) {
            var reader = new FileReader();
            reader.onloadend = function(evt) {
                if (evt.target.readyState == FileReader.DONE) {
                    callback(evt.target.result);
                }
            };
            stop = Math.min(stop, file.size);
            // slice() avoids pulling the whole giant file into memory.
            reader.readAsArrayBuffer(file.slice(start, stop));
        }, errorHandler);
    }, errorHandler);
}
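As a sketch of how the partial reads could drive the final file generation (the loop below is my assumption, not from the original answer), the giant file can be walked in fixed-size slices:

```javascript
// Walk a file of `total` bytes in `chunkSize` slices. `readFn(start, stop, cb)`
// is assumed to deliver the bytes in [start, stop) to `cb` (e.g. a wrapper
// around getPartialBlobFromFile); `onChunk(data)` consumes each slice and
// `onComplete()` fires when the whole file has been read.
function readInChunks(total, chunkSize, readFn, onChunk, onComplete) {
    var start = 0;
    function next() {
        if (start >= total) {
            onComplete();
            return;
        }
        var stop = Math.min(start + chunkSize, total);
        readFn(start, stop, function(data) {
            onChunk(data);
            start = stop;
            next();
        });
    }
    next();
}
```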

You may have to keep indexes, perhaps in a header section of your giant BLOB - I would need to know more before I could give more precise feedback.


Update: avoiding quota limits, Temporary vs. Persistent (in response to your comments below)
It appears that you are running into storage-quota issues because you are using temporary storage. The following is a snippet borrowed from Google's documentation, found here:

Temporary storage is shared among all web apps running in the browser. The shared pool can be up to half of the available disk space. Storage already used by apps is included in the calculation of the shared pool; that is to say, the calculation is based on (available storage space + storage being used by apps) * 0.5.

Each app can have up to 20% of the shared pool. As an example, if the total available disk space is 50 GB, the shared pool is 25 GB, and the app can have up to 5 GB. This is calculated from 20% (up to 5 GB) of half (up to 25 GB) of the available disk space (50 GB).
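The arithmetic in the quote can be expressed directly (a toy calculation to make the rule concrete, not an API):

```javascript
// Temporary-storage shared pool per the quoted rule:
// pool = (available + used by apps) * 0.5, and each app gets up to 20% of it.
function sharedPoolGB(availableGB, usedGB) {
    return (availableGB + usedGB) * 0.5;
}

function appQuotaGB(availableGB, usedGB) {
    return sharedPoolGB(availableGB, usedGB) * 0.2;
}
```

With 50 GB of free disk space and nothing yet used by apps, `sharedPoolGB(50, 0)` is 25 and `appQuotaGB(50, 0)` is 5, matching the worked example in the quote.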

To avoid this limit you'll have to switch to persistent storage, which allows a quota up to the available free space on the disk. To do this, use the following to initialize the filesystem instead of the temporary-storage request.

navigator.webkitPersistentStorage.requestQuota(1024 * 1024 * 5,
    function(gB) {
        window.requestFileSystem(PERSISTENT, gB, onInitFs, errorHandler);
    }, function(e) {
        console.log('Error', e);
    });
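For completeness, the snippets above rely on a global `fs` handle plus `onInitFs` and `errorHandler` callbacks that are never shown; a minimal sketch of them might look like this (only the names come from the original code, the bodies are my assumption):

```javascript
var fs; // set once the sandboxed filesystem is ready

function onInitFs(fileSystem) {
    // appendToFile and getPartialBlobFromFile both go through fs.root
    fs = fileSystem;
}

function errorHandler(e) {
    // FileError-style objects carry a `name`; fall back to the raw value
    console.log('FileSystem error:', e && e.name ? e.name : e);
}
```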
