Does AWS S3 GetObject provide random access?


Question

I can provide HTTP Range headers to AWS S3's GetObject to request a specified byte range of an object.

Is this truly random access, or does S3 have to process the entire object up to that range before returning the requested bytes?

Is the Range header simply reducing the bytes transferred, or does it also provide efficient random access?
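For reference, a ranged GetObject might be sketched as follows in Python with boto3 (the bucket and key names are placeholders, and the network call requires AWS credentials; the `Range` parameter of `get_object` is real, and HTTP byte ranges are inclusive on both ends):

```python
def byte_range(offset, length):
    """Format an HTTP Range header value for `length` bytes at `offset`.

    Byte ranges are inclusive on both ends, so 8 bytes starting at
    offset 0 is "bytes=0-7".
    """
    if offset < 0 or length < 1:
        raise ValueError("offset must be >= 0 and length >= 1")
    return f"bytes={offset}-{offset + length - 1}"

def get_object_range(bucket, key, offset, length):
    """Fetch a byte range of an S3 object (placeholder bucket/key;
    needs AWS credentials to actually run)."""
    import boto3  # imported lazily so byte_range stays dependency-free
    s3 = boto3.client("s3")
    resp = s3.get_object(Bucket=bucket, Key=key,
                         Range=byte_range(offset, length))
    return resp["Body"].read()
```

Whatever S3 does internally, the response to such a request carries only the requested bytes (HTTP 206 Partial Content), so the transfer cost is bounded by the range size.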

Answer

I did a quick test with a 2 GB file and executed ranged gets for 8 bytes at various offsets in the file (including the start, middle, and end). The total time, measured with `time` from my Mac to us-east-1, was consistently about 250 ms of user time (including starting node.js, loading packages, and executing the ranged GetObject).

I wasn't able to find a definitive statement in the AWS documentation about the expected behavior here (though I'd hope and expect it to be close to O(1) constant time).

I'd encourage you to investigate further before committing to a design. And maybe update us here.

[Update] Here are the results of a slightly more extensive experiment: S3, Lambda, a 2 GB file, and 100 reads of 100 bytes at random offsets in the file:
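The experiment described above could be reproduced with something like the following sketch (the bucket, key, and 2 GB object size are assumptions; the offset-picking helper is pure and runs offline, while the benchmark itself needs a boto3 S3 client and credentials):

```python
import random
import time

def random_offsets(object_size, n_reads, read_len, seed=None):
    """Pick n_reads random offsets so [offset, offset + read_len) fits
    inside an object of object_size bytes."""
    rng = random.Random(seed)
    return [rng.randrange(0, object_size - read_len + 1)
            for _ in range(n_reads)]

def benchmark_ranged_gets(s3, bucket, key, object_size,
                          n_reads=100, read_len=100):
    """Time n_reads ranged GetObject calls at random offsets and
    return the per-call latencies in seconds."""
    timings = []
    for offset in random_offsets(object_size, n_reads, read_len):
        start = time.perf_counter()
        s3.get_object(
            Bucket=bucket, Key=key,
            Range=f"bytes={offset}-{offset + read_len - 1}",
        )["Body"].read()
        timings.append(time.perf_counter() - start)
    return timings
```

If latencies stay flat regardless of offset, that is consistent with constant-time random access rather than a scan of the preceding bytes.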

