Chrome memory issue - File API + AngularJS


Question


I have a web app that needs to upload large files to Azure BLOB storage. My solution uses HTML5 File API to slice into chunks which are then put as blob blocks, the IDs of the blocks are stored in an array and then the blocks are committed as a blob.

The solution works fine in IE. On 64 bit Chrome I have successfully uploaded 4Gb files but see very heavy memory usage (2Gb+). On 32 bit Chrome the specific chrome process will get to around 500-550Mb and then crash.

I can't see any obvious memory leaks or things I can change to help garbage collection. I store the block IDs in an array so obviously there will be some memory creep, but this shouldn't be massive. It's almost as if the File API is holding the whole file it slices in memory.

It's written as an Angular service called from a controller; I think only the service code is pertinent:

(function() {
    'use strict';

    angular
    .module('app.core')
    .factory('blobUploadService',
    [
        '$http', 'stringUtilities',
        blobUploadService
    ]);

function blobUploadService($http, stringUtilities) {

    var defaultBlockSize = 1024 * 1024; // Default to 1024KB
    var stopWatch = {};
    var state = {};

    var initializeState = function(config) {
        var blockSize = defaultBlockSize;
        if (config.blockSize) blockSize = config.blockSize;

        var maxBlockSize = blockSize;
        var numberOfBlocks = 1;

        var file = config.file;

        var fileSize = file.size;
        if (fileSize < blockSize) {
            maxBlockSize = fileSize;
        }

        if (fileSize % maxBlockSize === 0) {
            numberOfBlocks = fileSize / maxBlockSize;
        } else {
            numberOfBlocks = parseInt(fileSize / maxBlockSize, 10) + 1;
        }

        return {
            maxBlockSize: maxBlockSize,
            numberOfBlocks: numberOfBlocks,
            totalBytesRemaining: fileSize,
            currentFilePointer: 0,
            blockIds: new Array(),
            blockIdPrefix: 'block-',
            bytesUploaded: 0,
            submitUri: null,
            file: file,
            baseUrl: config.baseUrl,
            sasToken: config.sasToken,
            fileUrl: config.baseUrl + config.sasToken,
            progress: config.progress,
            complete: config.complete,
            error: config.error,
            cancelled: false
        };
    };

    /* config: {
      baseUrl: // baseUrl for blob file uri (i.e. http://<accountName>.blob.core.windows.net/<container>/<blobname>),
      sasToken: // Shared access signature querystring key/value prefixed with ?,
      file: // File object using the HTML5 File API,
      progress: // progress callback function,
      complete: // complete callback function,
      error: // error callback function,
      blockSize: // Use this to override the defaultBlockSize
    } */
    var upload = function(config) {
        state = initializeState(config);

        var reader = new FileReader();
        reader.onloadend = function(evt) {
            if (evt.target.readyState === FileReader.DONE && !state.cancelled) { // DONE === 2
                var uri = state.fileUrl + '&comp=block&blockid=' + state.blockIds[state.blockIds.length - 1];
                var requestData = new Uint8Array(evt.target.result);

                $http.put(uri,
                        requestData,
                        {
                            headers: {
                                'x-ms-blob-type': 'BlockBlob',
                                'Content-Type': state.file.type
                            },
                            transformRequest: []
                        })
                    .success(function(data, status, headers, config) {
                        state.bytesUploaded += requestData.length;

                        var percentComplete = ((parseFloat(state.bytesUploaded) / parseFloat(state.file.size)) * 100
                        ).toFixed(2);
                        if (state.progress) state.progress(percentComplete, data, status, headers, config);

                        uploadFileInBlocks(reader, state);
                    })
                    .error(function(data, status, headers, config) {
                        if (state.error) state.error(data, status, headers, config);
                    });
            }
        };

        uploadFileInBlocks(reader, state);

        return {
            cancel: function() {
                state.cancelled = true;
            }
        };
    };

    function cancel() {
        stopWatch = {};
        state.cancelled = true;
        return true;
    }

    function startStopWatch(handle) {
        if (stopWatch[handle] === undefined) {
            stopWatch[handle] = {};
            stopWatch[handle].start = Date.now();
        }
    }

    function stopStopWatch(handle) {
        stopWatch[handle].stop = Date.now();
        var duration = stopWatch[handle].stop - stopWatch[handle].start;
        delete stopWatch[handle];
        return duration;
    }

    var commitBlockList = function(state) {
        var uri = state.fileUrl + '&comp=blocklist';

        var requestBody = '<?xml version="1.0" encoding="utf-8"?><BlockList>';
        for (var i = 0; i < state.blockIds.length; i++) {
            requestBody += '<Latest>' + state.blockIds[i] + '</Latest>';
        }
        requestBody += '</BlockList>';

        $http.put(uri,
                requestBody,
                {
                    headers: {
                        'x-ms-blob-content-type': state.file.type
                    }
                })
            .success(function(data, status, headers, config) {
                if (state.complete) state.complete(data, status, headers, config);
            })
            .error(function(data, status, headers, config) {
                if (state.error) state.error(data, status, headers, config);
                // called asynchronously if an error occurs
                // or server returns response with an error status.
            });
    };

    var uploadFileInBlocks = function(reader, state) {
        if (!state.cancelled) {
            if (state.totalBytesRemaining > 0) {

                var fileContent = state.file.slice(state.currentFilePointer,
                    state.currentFilePointer + state.maxBlockSize);
                var blockId = state.blockIdPrefix + stringUtilities.pad(state.blockIds.length, 6);

                state.blockIds.push(btoa(blockId));
                reader.readAsArrayBuffer(fileContent);

                state.currentFilePointer += state.maxBlockSize;
                state.totalBytesRemaining -= state.maxBlockSize;
                if (state.totalBytesRemaining < state.maxBlockSize) {
                    state.maxBlockSize = state.totalBytesRemaining;
                }
            } else {
                commitBlockList(state);
            }
        }
    };

    return {
        upload: upload,
        cancel: cancel,
        startStopWatch: startStopWatch,
        stopStopWatch: stopStopWatch
    };
};
})();

Are there any ways I can move the scope of objects to help with Chrome GC? I have seen other people mention similar issues, but understood that Chromium had resolved some of them.

I should say my solution is heavily based on Gaurav Mantri's blog post here:

http://gauravmantri.com/2013/02/16/uploading-large-files-in-windows-azure-blob-storage-using-shared-access-signature-html-and-javascript/#comment-47480

Solution

I can't see any obvious memory leaks or things I can change to help garbage collection. I store the block IDs in an array so obviously there will be some memory creep, but this shouldn't be massive. It's almost as if the File API is holding the whole file it slices in memory.

You are correct. The new Blobs created by .slice() are being held in memory.

The solution is to call Blob.prototype.close() on the Blob reference when processing of the Blob or File object is complete.
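For example, in uploadFileInBlocks from the question, the slice could be released once FileReader has consumed it. A minimal sketch, assuming the current slice is kept in a variable the onloadend handler can reach (currentSlice is introduced here for illustration); the call is feature-detected because Blob.prototype.close() never shipped beyond the draft File API specification:

var currentSlice = null; // hypothetical: holds the Blob returned by .slice()

reader.onloadend = function(evt) {
    if (evt.target.readyState === FileReader.DONE && !state.cancelled) {
        // ... PUT the block as in the original code, then:
        if (currentSlice && typeof currentSlice.close === "function") {
            currentSlice.close(); // release the slice's backing storage
        }
    }
};

// in uploadFileInBlocks:
currentSlice = state.file.slice(state.currentFilePointer,
    state.currentFilePointer + state.maxBlockSize);
reader.readAsArrayBuffer(currentSlice);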

Note also that the JavaScript at the question creates a new instance of FileReader each time the upload function is called.
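A minimal sketch of reusing one reader, assuming it is hoisted to factory scope so a single instance serves every upload() call:

function blobUploadService($http, stringUtilities) {
    var reader = new FileReader(); // one instance for the service's lifetime

    var upload = function(config) {
        state = initializeState(config);
        // only the handler is reassigned per upload
        reader.onloadend = function(evt) { /* ... as in the question ... */ };
        uploadFileInBlocks(reader, state);
        return {
            cancel: function() { state.cancelled = true; }
        };
    };
    // ... remainder of the service unchanged ...
}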

4.3.1. The slice method

The slice() method returns a new Blob object with bytes ranging from the optional start parameter up to but not including the optional end parameter, and with a type attribute that is the value of the optional contentType parameter.

Blob instances exist for the life of the document, though a Blob should be garbage collected once removed from the Blob URL Store:

9.6. Lifetime of Blob URLs

Note: User agents are free to garbage collect resources removed from the Blob URL Store.

Each Blob must have an internal snapshot state, which must be initially set to the state of the underlying storage, if any such underlying storage exists, and must be preserved through StructuredClone. Further normative definition of snapshot state can be found for Files.

4.3.2. The close method

The close() method is said to close a Blob, and must act as follows:

  1. If the readability state of the context object is CLOSED, terminate this algorithm.
  2. Otherwise, set the readability state of the context object to CLOSED.
  3. If the context object has an entry in the Blob URL Store, remove the entry that corresponds to the context object.

If the Blob object is passed to URL.createObjectURL(), call URL.revokeObjectURL() on the Blob or File object, then call .close().
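A short sketch of that ordering; the close() call is guarded because the method never shipped beyond the draft specification:

var url = URL.createObjectURL(blob);
// ... use the Blob URL ...
URL.revokeObjectURL(url);               // remove the Blob URL Store entry first
if (typeof blob.close === "function") {
    blob.close();                       // then close the Blob itself
}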

The revokeObjectURL(url) static method

Revokes the Blob URL provided in the string url by removing the corresponding entry from the Blob URL Store. This method must act as follows:

  1. If the url refers to a Blob that has a readability state of CLOSED, OR if the value provided for the url argument is not a Blob URL, OR if the value provided for the url argument does not have an entry in the Blob URL Store, this method call does nothing. User agents may display a message on the error console.
  2. Otherwise, user agents must remove the entry from the Blob URL Store for url.

You can view the result of these calls by opening

chrome://blob-internals 

and reviewing the details before and after the calls which create and close the Blob.

For example, from

xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Refcount: 1
Content Type: text/plain
Type: data
Length: 3

to

xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Refcount: 1
Content Type: text/plain

following the call to .close(). Similarly, the entry for the Blob URL

blob:http://example.com/c2823f75-de26-46f9-a4e5-95f57b8230bd
Uuid: 29e430a6-f093-40c2-bc70-2b6838a713bc

is removed from the list once URL.revokeObjectURL() has been called.


An alternative approach could be to send the file as an ArrayBuffer, or in chunks of array buffers, and then re-assemble the file at the server.
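A sketch of that alternative, assuming a hypothetical /upload endpoint that accepts raw chunk bodies with index and total query parameters so the server can reassemble the file in order; the endpoint and its parameters are illustrative, not part of the Azure API used in the question:

const uploadInChunks = (file, chunkSize) => {
  const total = Math.ceil(file.size / chunkSize);
  let sequence = Promise.resolve();
  for (let i = 0; i < total; i++) {
    // each slice is sent as the raw request body; the server
    // concatenates the received chunks in index order
    sequence = sequence.then(() =>
      fetch(`/upload?index=${i}&total=${total}`, {
        method: "PUT",
        body: file.slice(i * chunkSize, (i + 1) * chunkSize)
      })
    );
  }
  return sequence; // resolves after the last chunk has been PUT
};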

Or you can call the FileReader constructor, FileReader.prototype.readAsArrayBuffer(), and the load event of FileReader each only once.

At the load event of FileReader, pass the ArrayBuffer to a Uint8Array, then use ReadableStream, TypedArray.prototype.subarray(), .getReader(), and .read() to pull N chunks of the ArrayBuffer from the Uint8Array as TypedArrays. When N chunks totalling the .byteLength of the ArrayBuffer have been processed, pass the array of Uint8Arrays to the Blob constructor to recombine the file parts into a single file at the browser; then send the Blob to the server.

<!DOCTYPE html>
<html>

<head>
</head>

<body>
  <input id="file" type="file">
  <br>
  <progress value="0"></progress>
  <br>
  <output for="file"><img alt="preview"></output>
  <script type="text/javascript">
    const [input, output, img, progress, fr, handleError, CHUNK] = [
      document.querySelector("input[type='file']")
      , document.querySelector("output[for='file']")
      , document.querySelector("output img")
      , document.querySelector("progress")
      , new FileReader
      , (err) => console.log(err)
      , 1024 * 1024
    ];

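    // a custom "progress" event updates the bar, then resolves the
    // promise that gates reading the next chunk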
    progress.addEventListener("progress", e => {
      progress.value = e.detail.value;
      e.detail.promise();
    });

    let [chunks, NEXT, CURR, url, blob] = [Array(), 0, 0];

    input.onchange = () => {
      NEXT = CURR = progress.value = progress.max = chunks.length = 0;
      if (url) {
        URL.revokeObjectURL(url);
        // feature-detect close(); it would live on Blob.prototype,
        // so hasOwnProperty("close") would never find it
        if (typeof blob.close === "function") {
          blob.close();
        }
      }

      if (input.files.length) {
        console.log(input.files[0]);
        progress.max = input.files[0].size;
        progress.step = progress.max / CHUNK;
        fr.readAsArrayBuffer(input.files[0]);
      }

    }

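    // the whole file has been read into a single ArrayBuffer; stream it
    // back out in CHUNK-sized subarray views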
    fr.onload = () => {
      const VIEW = new Uint8Array(fr.result);
      const LEN = VIEW.byteLength;
      const {type, name:filename} = input.files[0];
      const stream = new ReadableStream({
          pull(controller) {
            if (NEXT < LEN) {
              controller
              .enqueue(VIEW.subarray(NEXT, !NEXT ? CHUNK : CHUNK + NEXT));
               NEXT += CHUNK;
            } else {
              controller.close();
            }
          },
          cancel(reason) {
            console.log(reason);
            throw new Error(reason);
          }
      });

      const [reader, processData] = [
        stream.getReader()
        , ({value, done}) => {
            if (done) {
              return reader.closed.then(() => chunks);
            }
            chunks.push(value);
            return new Promise(resolve => {
              progress.dispatchEvent(
                new CustomEvent("progress", {
                  detail:{
                    value:CURR += value.byteLength,
                    promise:resolve
                  }
                })
              );                
            })
            .then(() => reader.read().then(data => processData(data)))
            .catch(e => reader.cancel(e))
        }
      ];

      reader.read()
      .then(data => processData(data))
      .then(data => {
        blob = new Blob(data, {type});
        console.log("complete", data, blob);
        if (/image/.test(type)) {
          url = URL.createObjectURL(blob);
          img.onload = () => {
            img.title = filename;
            input.value = "";
          }
          img.src = url;
        } else {
          input.value = "";
        }             
      })
      .catch(e => handleError(e))

    }
  </script>

</body>

</html>

plnkr http://plnkr.co/edit/AEZ7iQce4QaJOKut71jk?p=preview


You can also utilize fetch():

fetch(new Request("/path/to/server/", {method:"PUT", body:blob}))
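For instance, the Blob assembled by the snippet above could be sent from its completion handler; /path/to/server/ is a placeholder, as in the one-liner:

fetch(new Request("/path/to/server/", { method: "PUT", body: blob }))
  .then(response => {
    if (!response.ok) throw new Error(response.statusText);
    console.log("upload complete", response.status);
  })
  .catch(e => handleError(e));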

To transmit body for a request request, run these steps:

  1. Let body be request’s body.
  2. If body is null, then queue a fetch task on request to process request end-of-body for request and abort these steps.

  3. Let read be the result of reading a chunk from body’s stream.

    • When read is fulfilled with an object whose done property is false and whose value property is a Uint8Array object, run these substeps:

      1. Let bytes be the byte sequence represented by the Uint8Array object.
      2. Transmit bytes.

      3. Increase body’s transmitted bytes by bytes’s length.

      4. Run the above step again.

    • When read is fulfilled with an object whose done property is true, queue a fetch task on request to process request end-of-body for request.

    • When read is fulfilled with a value that matches with neither of the above patterns, or read is rejected, terminate the ongoing fetch with reason fatal.

