Upload performance from PHP (can only get 10 MB/s, could get 123 MB/s on the pipe)


Problem Description

Hello,

I am updating the cloud storage support in the FileSender project (docs.filesender.org) to allow the use of Azure in the back end to store user file data. FileSender allows users to upload and share files and is used by many National Research and Education Networks (NRENs).

I have the functionality working using the PHP API (https://github.com/Azure/azure-storage-php). While my local testing has been fine, mostly limited by the speed of my network, when used on larger installations with fast network hardware, performance is not as good as the hardware can offer.

FileSender handles uploads in "chunks" of a sysadmin-configurable size (default 5 MB). This allows many possibilities, including upload resume, and upload resume after reboots, etc. Those chunks make their way server side, where the first version of the code uploads them to Azure using createBlockBlob: https://github.com/filesender/filesender/blob/master/classes/storage/StorageCloudAzure.class.php#L142
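For reference, the per-chunk path described above boils down to something like the following minimal sketch. It assumes the azure-storage-php SDK installed via Composer; the connection string, container, and blob names are illustrative placeholders, not FileSender's actual naming.

<?php
// Minimal sketch of the per-chunk upload: every chunk arriving server
// side becomes one createBlockBlob call (one blob per chunk).
require_once 'vendor/autoload.php';

use MicrosoftAzure\Storage\Blob\BlobRestProxy;

$client = BlobRestProxy::createBlobService(getenv('AZURE_STORAGE_CONNECTION_STRING'));

// e.g. a 5 MB chunk body posted by the FileSender client
$chunkData = file_get_contents('php://input');

// The chunk index is encoded in the (hypothetical) blob name.
$client->createBlockBlob('filesender-chunks', 'file-1234.chunk.0007', $chunkData);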

When the FileSender chunk size is increased to 10 MB, the upload speed to Azure can reach 10-11 MB/s. Using larger chunks of 20 MB seems to reduce speed. I have a small test script to try to work out how to improve speed without fully implementing changes. Exploring the chunk size there, one can upload a 200 MB chunk in 1.6 s, though I haven't seen that sort of linear "larger chunks = better speed" relationship in the main code path (both using createBlockBlob).
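A test script like the one mentioned can be as small as a timing loop around a single call. A hypothetical sketch, assuming each probe is one createBlockBlob call against a scratch container:

<?php
// Hypothetical micro-benchmark: time one createBlockBlob call per size.
require_once 'vendor/autoload.php';

use MicrosoftAzure\Storage\Blob\BlobRestProxy;

$client = BlobRestProxy::createBlobService(getenv('AZURE_STORAGE_CONNECTION_STRING'));

$sizes = [5, 10, 20, 200]; // chunk sizes in MB
foreach ($sizes as $mb) {
    $data = random_bytes($mb * 1024 * 1024); // random payload of $mb megabytes
    $start = microtime(true);
    $client->createBlockBlob('benchmark', "chunk-{$mb}mb.bin", $data);
    $elapsed = microtime(true) - $start;
    printf("%d MB in %.2f s (%.1f MB/s)\n", $mb, $elapsed, $mb / $elapsed);
}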

I am hoping somebody has some recommendations for storing a file in chunks in Azure, given that I am offloading data at a server that can be handed configurable chunks of data and that could reach Azure at 120 MB/s if given the chance.

Recommended Answer

If you want to upload files in chunks and then only commit the chunks when the file is fully uploaded (so that others only see the full/updated file), you should look at block blobs. If you don't care about the transaction semantics and are willing to have partially uploaded files visible to others, then you can also use page blobs or files.
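In azure-storage-php terms, that means staging each chunk as an uncommitted block with createBlobBlock and calling commitBlobBlocks once at the end. A minimal sketch, assuming the whole file is available locally; the file path, container, blob name, and 5 MB chunk size are illustrative:

<?php
// Stage chunks as uncommitted blocks, then commit once: readers only
// ever see the complete file.
require_once 'vendor/autoload.php';

use MicrosoftAzure\Storage\Blob\BlobRestProxy;
use MicrosoftAzure\Storage\Blob\Models\BlockList;

$client    = BlobRestProxy::createBlobService(getenv('AZURE_STORAGE_CONNECTION_STRING'));
$container = 'filesender';
$blobName  = 'user-file.bin';
$blockIds  = [];

$handle = fopen('/tmp/incoming-file.bin', 'rb');
for ($i = 0; !feof($handle); $i++) {
    $chunk = fread($handle, 5 * 1024 * 1024);
    if ($chunk === false || $chunk === '') {
        break;
    }
    // Block IDs must be base64 encoded and the same length for every
    // block in a blob.
    $blockId = base64_encode(sprintf('block-%08d', $i));
    $client->createBlobBlock($container, $blobName, $blockId, $chunk);
    $blockIds[] = $blockId;
}
fclose($handle);

// Nothing is visible in the container until the block list is committed.
$blockList = new BlockList();
foreach ($blockIds as $blockId) {
    $blockList->addUncommittedEntry($blockId);
}
$client->commitBlobBlocks($container, $blobName, $blockList);

Uncommitted blocks persist between requests (Azure garbage-collects them after about a week if never committed), so this also maps naturally onto FileSender's resume-after-reboot requirement: each HTTP chunk upload can stage one block, and the final request commits the list.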

The critical thing for optimizing upload performance is chunking and parallelization. You have already noticed that you need to pick a chunk size which is big, but not too big. Small chunks are inefficient because they have too much overhead; however, very large chunks are more likely to hit a transient error and need to be resent. Medium-size chunks are best, as they have minimal overhead and the cost to resend them is relatively small. (What constitutes small, medium, and large depends on your needs.)

Azure handles extremely large numbers of systems uploading large amounts of data. As we understand it, you want to parallelize the code to upload multiple chunks at a time. However, most of the Azure tools (AzCopy, Azure Storage Explorer) and Azure libraries have some level of parallelization built in. Some tuning may be required to get the right number of chunks to upload in parallel; again, it's going to vary based on your needs, data sizes, network pipeline, and the system that's doing the uploading. That said, it's probably somewhere between 4 and 20 chunks per blob (so if you want to upload 10 blobs at a time, you could have 16 chunks per blob uploading at the same time, or ~160 parallel chunk uploads going on at once).
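Since the SDK is built on Guzzle, one way to get that parallelism from PHP is the promise-returning Async variants of each call (e.g. createBlobBlockAsync), assuming guzzlehttp/promises >= 1.4 for the Utils helper. The sketch below stages blocks in waves of 8 concurrent requests; the concurrency, names, and 8 MB chunk size are assumptions to be tuned, not recommendations.

<?php
// Parallel block staging via Guzzle promises, then a single commit.
require_once 'vendor/autoload.php';

use GuzzleHttp\Promise\Utils;
use MicrosoftAzure\Storage\Blob\BlobRestProxy;
use MicrosoftAzure\Storage\Blob\Models\BlockList;

$client    = BlobRestProxy::createBlobService(getenv('AZURE_STORAGE_CONNECTION_STRING'));
$container = 'filesender';
$blobName  = 'user-file.bin';

// Split the file into 8 MB chunks keyed by base64 block ID.
// (Reading everything into memory keeps the sketch short; a real
// implementation would stream.)
$chunks = [];
$handle = fopen('/tmp/incoming-file.bin', 'rb');
for ($i = 0; !feof($handle); $i++) {
    $data = fread($handle, 8 * 1024 * 1024);
    if ($data === false || $data === '') {
        break;
    }
    $chunks[base64_encode(sprintf('block-%08d', $i))] = $data;
}
fclose($handle);

// Stage blocks in waves of 8 concurrent requests.
foreach (array_chunk($chunks, 8, true) as $wave) {
    $promises = [];
    foreach ($wave as $blockId => $data) {
        $promises[] = $client->createBlobBlockAsync($container, $blobName, $blockId, $data);
    }
    Utils::unwrap($promises); // wait for the wave; throws on first failure
}

// Commit once every block has been staged.
$blockList = new BlockList();
foreach (array_keys($chunks) as $blockId) {
    $blockList->addUncommittedEntry($blockId);
}
$client->commitBlobBlocks($container, $blobName, $blockList);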


