How to upload big files (2GB+) to a .NET Core API controller from a form?


Problem description


While uploading a big file via Postman (I have the same issue from a frontend with a form written in PHP), I am getting a 502 Bad Gateway error message back from the Azure Web App:

502 - Web server received an invalid response while acting as a gateway or proxy server. There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server.

The error I see in Azure application insights:

Microsoft.AspNetCore.Connections.ConnectionResetException: The client has disconnected <--- An operation was attempted on a nonexistent network connection. (Exception from HRESULT: 0x800704CD)

This is happening while trying to upload a 2GB test file. With a 1GB file it is working fine but it needs to work up to ~5GB.

I have optimized the part that writes the file streams to Azure blob storage by using a block write approach (credits to: https://www.red-gate.com/simple-talk/cloud/platform-as-a-service/azure-blob-storage-part-4-uploading-large-blobs/), but it looks to me like the connection to the client (Postman in this case) is being closed: since this is a single HTTP POST request, the underlying Azure network stack (e.g. the load balancer) closes the connection because it takes too long until my API returns HTTP 200 OK for the POST request.

Is my assumption correct? If yes, how can I achieve that the upload from my frontend (or Postman) happens in chunks (e.g. 15MB) which the API can acknowledge faster than the whole 2GB? Even creating a SAS URL for uploading to Azure blob storage and returning that URL to the browser would be fine, but I am not sure how I can integrate that easily. Also, as far as I know there are maximum block sizes, so for a 2GB file I would probably need to create multiple blocks. If this is the suggested approach it would be great to get a good sample here, but other ideas are welcome too!

This is the relevant part in my API controller endpoint in C# .Net Core 2.2:

            [AllowAnonymous]
            [HttpPost("DoPost")]
            public async Task<IActionResult> InsertFile([FromForm]List<IFormFile> files, [FromForm]string msgTxt)
            {
                 ...

                        // use generated container name
                        CloudBlobContainer container = blobClient.GetContainerReference(SqlInsertId);

                        // create container within blob
                        if (await container.CreateIfNotExistsAsync())
                        {
                            await container.SetPermissionsAsync(
                                new BlobContainerPermissions
                                {
                                    // PublicAccess = BlobContainerPublicAccessType.Blob
                                    PublicAccess = BlobContainerPublicAccessType.Off
                                }
                                );
                        }

                        // loop through all files for upload
                        foreach (var asset in files)
                        {
                            if (asset.Length > 0)
                            {

                                // replace invalid chars in filename
                                CleanFileName = String.Empty;
                                CleanFileName = Utils.ReplaceInvalidChars(asset.FileName);

                                // get name and upload file
                                CloudBlockBlob blockBlob = container.GetBlockBlobReference(CleanFileName);


                                // START of block write approach

                                //int blockSize = 256 * 1024; //256 kb
                                //int blockSize = 4096 * 1024; //4MB
                                int blockSize = 15360 * 1024; //15MB

                                using (Stream inputStream = asset.OpenReadStream())
                                {
                                    long fileSize = inputStream.Length;

                                    //block count is the number of blocks + 1 for the last one
                                    int blockCount = (int)((float)fileSize / (float)blockSize) + 1;

                                    //List of block ids; the blocks will be committed in the order of this list 
                                    List<string> blockIDs = new List<string>();

                                    //starting block number - 1
                                    int blockNumber = 0;

                                    try
                                    {
                                        int bytesRead = 0; //number of bytes read so far
                                        long bytesLeft = fileSize; //number of bytes left to read and upload

                                        //do until all of the bytes are uploaded
                                        while (bytesLeft > 0)
                                        {
                                            blockNumber++;
                                            int bytesToRead;
                                            if (bytesLeft >= blockSize)
                                            {
                                                //more than one block left, so put up another whole block
                                                bytesToRead = blockSize;
                                            }
                                            else
                                            {
                                                //less than one block left, read the rest of it
                                                bytesToRead = (int)bytesLeft;
                                            }

                                            //create a blockID from the block number, add it to the block ID list
                                            //the block ID is a base64 string
                                            string blockId =
                                              Convert.ToBase64String(ASCIIEncoding.ASCII.GetBytes(string.Format("BlockId{0}",
                                                blockNumber.ToString("0000000"))));
                                            blockIDs.Add(blockId);
                                            //set up new buffer with the right size, and fill it completely;
                                            //Stream.Read may return fewer bytes than requested, so read in a loop
                                            byte[] bytes = new byte[bytesToRead];
                                            int offset = 0;
                                            while (offset < bytesToRead)
                                            {
                                                int read = inputStream.Read(bytes, offset, bytesToRead - offset);
                                                if (read == 0) break; //end of stream reached early
                                                offset += read;
                                            }

                                            //calculate the MD5 hash of the byte array
                                            string blockHash = Utils.GetMD5HashFromStream(bytes);

                                            //upload the block, provide the hash so Azure can verify it
                                            blockBlob.PutBlock(blockId, new MemoryStream(bytes), blockHash);

                                            //increment/decrement counters
                                            bytesRead += bytesToRead;
                                            bytesLeft -= bytesToRead;
                                        }

                                        //commit the blocks
                                        blockBlob.PutBlockList(blockIDs);

                                    }
                                    catch (Exception ex)
                                    {
                                        System.Diagnostics.Debug.Print("Exception thrown = {0}", ex);
                                        // return BadRequest(ex.StackTrace);
                                    }
                                }

                                // END of block write approach
...

And this is a sample HTTP POST via Postman (screenshot omitted).

I set maxAllowedContentLength & requestTimeout in web.config for testing already:

    <requestLimits maxAllowedContentLength="4294967295" />

and

    <aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" requestTimeout="00:59:59" hostingModel="InProcess" />
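
These two settings only raise the IIS request-filtering and ANCM limits; ASP.NET Core itself also enforces its own multipart form limits (128 MB by default). As a minimal sketch (not from the original post, and assuming these limits are not already raised in the startup code elided above), the corresponding per-action attributes would look roughly like this, with the values simply mirroring the web.config limit:

    [AllowAnonymous]
    [HttpPost("DoPost")]
    [RequestSizeLimit(4294967295)]                             // per-action request body size limit (~4 GB)
    [RequestFormLimits(MultipartBodyLengthLimit = 4294967295)] // multipart form limit (default is 128 MB)
    public async Task<IActionResult> InsertFile([FromForm] List<IFormFile> files, [FromForm] string msgTxt)
    {
        // ... existing upload logic from the question ...
        return Ok();
    }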

Solution

If you want to upload a large blob file to Azure storage, I think a better solution is to get a SAS token from your backend and upload the file directly from the client side, as this will not add to your backend workload. You can use the code below to get a SAS token with write permission, valid for 2 hours only, for your client:

    var containerName = "<container name>";
    var accountName = "<storage account name>";
    var key = "<storage account key>";
    var cred = new StorageCredentials(accountName, key);
    var account = new CloudStorageAccount(cred,true);
    var container = account.CreateCloudBlobClient().GetContainerReference(containerName);

    var writeOnlyPolicy = new SharedAccessBlobPolicy() { 
        SharedAccessStartTime = DateTime.Now,
        SharedAccessExpiryTime = DateTime.Now.AddHours(2),
        Permissions = SharedAccessBlobPermissions.Write
    };

    var sas = container.GetSharedAccessSignature(writeOnlyPolicy);
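
To tie this into the original setup, the snippet above could be exposed through a small endpoint so the browser can fetch a short-lived upload URL instead of posting the file to the API. A rough sketch only, assuming the same Microsoft.WindowsAzure.Storage usings as above plus Microsoft.AspNetCore.Mvc; the route, response shape and controller name are illustrative and not part of the original answer:

    [ApiController]
    [Route("api/[controller]")]
    public class UploadTokenController : ControllerBase
    {
        // GET api/uploadtoken/<container name>
        [HttpGet("{containerName}")]
        public IActionResult GetUploadUrl(string containerName)
        {
            var accountName = "<storage account name>"; // normally read from configuration
            var key = "<storage account key>";          // normally read from configuration / Key Vault

            var cred = new StorageCredentials(accountName, key);
            var account = new CloudStorageAccount(cred, true);
            var container = account.CreateCloudBlobClient().GetContainerReference(containerName);

            // write-only policy, valid for 2 hours, as in the snippet above
            var writeOnlyPolicy = new SharedAccessBlobPolicy
            {
                SharedAccessStartTime = DateTime.Now,
                SharedAccessExpiryTime = DateTime.Now.AddHours(2),
                Permissions = SharedAccessBlobPermissions.Write
            };

            // the client appends the SAS query string to the container URL before uploading
            var sas = container.GetSharedAccessSignature(writeOnlyPolicy);
            return Ok(new { containerUrl = container.Uri.ToString(), sasToken = sas });
        }
    }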

After you get this SAS token, you can use it to upload files with the Storage JS SDK on your client side. This is an HTML sample:

<!DOCTYPE html> 
<html> 
<head> 
    <title> 
        upload demo
    </title> 

    <script src= 
"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"> 
    </script> 


    <script src= "./azure-storage-blob.min.js"> </script> 
</head> 

<body> 
    <div align="center"> 
        <form method="post" action="" enctype="multipart/form-data"
                id="myform"> 

            <div > 
                <input type="file" id="file" name="file" /> 
                <input type="button" class="button" value="Upload"
                        id="but_upload"> 
            </div> 
        </form> 
        <div id="status"></div>


    </div>   

    <script type="text/javascript"> 
        $(document).ready(function() { 


            var sasToken = '?sv=2018-11-09&sr=c&sig=XXXXXXXXXXXXXXXXXXXXXXXXXOuqHSrH0Fo%3D&st=2020-01-27T03%3A58%3A20Z&se=2020-01-28T03%3A58%3A20Z&sp=w'
            var containerURL = 'https://stanstroage.blob.core.windows.net/container1/'


            $("#but_upload").click(function() { 

                var file = $('#file')[0].files[0]; 
                const container = new azblob.ContainerURL(containerURL + sasToken, azblob.StorageURL.newPipeline(new azblob.AnonymousCredential));
                try {
                    $("#status").wrapInner("uploading .... pls wait");


                    const blockBlobURL = azblob.BlockBlobURL.fromContainerURL(container, file.name);
                    var result  = azblob.uploadBrowserDataToBlockBlob(
                            azblob.Aborter.none, file, blockBlobURL);

                    result.then(function(result) {
                        document.getElementById("status").innerHTML = "Done"
                        }, function(err) {
                            document.getElementById("status").innerHTML = "Error"
                            console.log(err); 
                        });


                } catch (error) {
                    console.log(error);
                }


            });
        }); 
    </script> 
</body> 

</html> 

I uploaded a 3.6GB .zip file (it took about 20 minutes) and it works perfectly for me; the SDK opens multiple threads and uploads your large file part by part.

Note: in this case, please make sure you have enabled CORS for your storage account so that the static HTML page can post requests to the Azure Storage service.
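
If you would rather enable CORS from code than from the portal, the same storage SDK can configure it on the Blob service. A rough sketch, assuming System.Collections.Generic and Microsoft.WindowsAzure.Storage.Shared.Protocol are imported and that the allowed origin is replaced with wherever the static page is actually served from (this snippet is not part of the original answer):

    // enable a CORS rule on the Blob service so the static page can PUT blocks directly against Azure Storage
    public async Task EnableBlobCorsAsync(string accountName, string key)
    {
        var account = new CloudStorageAccount(new StorageCredentials(accountName, key), true);
        var blobClient = account.CreateCloudBlobClient();

        ServiceProperties serviceProperties = await blobClient.GetServicePropertiesAsync();
        serviceProperties.Cors.CorsRules.Add(new CorsRule
        {
            AllowedOrigins = new List<string> { "https://<your-frontend-domain>" }, // assumption: origin of the static page
            AllowedMethods = CorsHttpMethods.Put | CorsHttpMethods.Get | CorsHttpMethods.Head | CorsHttpMethods.Options,
            AllowedHeaders = new List<string> { "*" },
            ExposedHeaders = new List<string> { "*" },
            MaxAgeInSeconds = 3600
        });

        await blobClient.SetServicePropertiesAsync(serviceProperties);
    }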

Hope it helps.
