Issue in blob storage to fileShare big file transfer: using fileRange (Node.js cloud function), it transfers partial files


Problem description

Issue in blob storage to fileShare big file transfer: using fileRange (Node.js cloud function). It transfers partial files.

When we transfer a file of size 10 MB, only 9.7 MB is transferred. When we transfer a file of size 50 MB, only 49.5 MB is transferred.

It fails with: Stack: RangeError: contentLength must be > 0 and <= 4194304 bytes

Code snippet:

const fileName = path.basename('master/test/myTestXml.xml')
const fileClient = directoryClient.getFileClient(fileName);
const fileContent = await streamToString(downloadBlockBlobResponse.readableStreamBody)
await fileClient.uploadRange(fileContent, 0, fileContent.length, {
    rangeSize: 50 * 1024 * 1024, // 4MB range size
    parallelism: 20, // 20 concurrency
    onProgress: (ev) => console.log(ev)
});

After transferring a partial file it gives an error. Any suggestion on how we can transfer big files using rangeSize?
Stack: RangeError: contentLength must be > 0 and <= 4194304 bytes
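
For context, 4194304 bytes is 4 MiB: the Azure Files uploadRange operation writes a single range of at most 4 MiB per call, so passing the whole file's length as contentLength (or a rangeSize above 4 MiB) exceeds that limit. If the content must be written range by range, a minimal sketch would split the data into 4 MiB chunks, assuming the downloaded content is held in a Buffer (e.g. via a streamToBuffer helper rather than streamToString):

const RANGE_LIMIT = 4 * 1024 * 1024; // uploadRange accepts at most 4 MiB per call

async function uploadBufferInRanges(fileClient, buffer) {
    // The Azure file must be created at its full size before ranges are written.
    await fileClient.create(buffer.length);
    for (let offset = 0; offset < buffer.length; offset += RANGE_LIMIT) {
        const end = Math.min(offset + RANGE_LIMIT, buffer.length);
        // Write one range of at most 4 MiB starting at `offset`.
        await fileClient.uploadRange(buffer.slice(offset, end), offset, end - offset);
    }
}

The SDK's higher-level upload helpers (uploadStream, uploadData) do this splitting internally, but their rangeSize option is subject to the same 4 MiB cap.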

Solution

If you just want to transfer some files from Azure Blob storage to an Azure file share, you can generate a blob URL with a SAS token on your server side and use the startCopyFromURL function to have the file share copy the file, instead of downloading it from Blob storage and re-uploading it to the file share. This eases the load on your server, and the copy is quick because it runs over Azure's internal network.

Just try the code below:

const { ShareServiceClient } = require("@azure/storage-file-share");
const storage = require('azure-storage');

const connStr = "";           // storage account connection string
const shareName = "";         // destination file share name
const sharePath = "";         // destination directory path in the share
const srcBlobContainer = "";  // source blob container name
const srcBlob = "";           // source blob name

const blobService = storage.createBlobService(connStr);

// Create a SAS token that expires in 1 hour
// Set start time to five minutes ago to avoid clock skew.
var startDate = new Date();
startDate.setMinutes(startDate.getMinutes() - 5);
var expiryDate = new Date(startDate);
expiryDate.setMinutes(startDate.getMinutes() + 60);

// grant read permission
const permissions = storage.BlobUtilities.SharedAccessPermissions.READ;

var sharedAccessPolicy = {
    AccessPolicy: {
        Permissions: permissions,
        Start: startDate,
        Expiry: expiryDate
    }
};

var srcBlobURL = blobService.getUrl(srcBlobContainer, srcBlob);
var sasToken = blobService.generateSharedAccessSignature(srcBlobContainer, srcBlob, sharedAccessPolicy);
var srcCopyURL = srcBlobURL + "?" + sasToken;

const serviceClient = ShareServiceClient.fromConnectionString(connStr);
const fileClient = serviceClient.getShareClient(shareName).getDirectoryClient(sharePath).getFileClient(srcBlob);

fileClient.startCopyFromURL(srcCopyURL).then(function () { console.log("done"); });
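
Note that startCopyFromURL only starts a server-side copy, and for larger files it can return while the copy status is still pending. If the caller needs to wait until the copy completes, a sketch (reusing the fileClient above, and polling once per second) would replace the bare then(...) with something like:

fileClient.startCopyFromURL(srcCopyURL).then(async function () {
    // copyStatus is one of: pending, success, aborted, failed
    let props = await fileClient.getProperties();
    while (props.copyStatus === "pending") {
        await new Promise(function (resolve) { setTimeout(resolve, 1000); }); // poll every second
        props = await fileClient.getProperties();
    }
    console.log("copy finished with status: " + props.copyStatus);
});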

I have tested this on my side in my storage account: copying a file of about 11 MB took about 5 seconds.
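
One more note: the azure-storage package used above for SAS generation is the legacy SDK. If the project is already on the v12 SDKs, an equivalent read-only SAS can be built with @azure/storage-blob instead. The sketch below is an assumed equivalent, and it needs the account name and key rather than a connection string:

const { StorageSharedKeyCredential, BlobSASPermissions, generateBlobSASQueryParameters } = require("@azure/storage-blob");

// accountName / accountKey are assumed to be available (not parsed from connStr here)
const credential = new StorageSharedKeyCredential(accountName, accountKey);

// Read-only SAS valid from five minutes ago (to allow clock skew) to one hour from now
const sasToken = generateBlobSASQueryParameters({
    containerName: srcBlobContainer,
    blobName: srcBlob,
    permissions: BlobSASPermissions.parse("r"),
    startsOn: new Date(Date.now() - 5 * 60 * 1000),
    expiresOn: new Date(Date.now() + 60 * 60 * 1000)
}, credential).toString();

const srcCopyURL = "https://" + accountName + ".blob.core.windows.net/" + srcBlobContainer + "/" + srcBlob + "?" + sasToken;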
