S3 - What Exactly Is A Prefix? And What Rate Limits Apply?


Question


I was wondering if anyone knew what exactly an S3 prefix is and how it interacts with Amazon's published S3 rate limits:

Amazon S3 automatically scales to high request rates. For example, your application can achieve at least 3,500 PUT/POST/DELETE and 5,500 GET requests per second per prefix in a bucket. There are no limits to the number of prefixes in a bucket.

While that's really clear, I'm not quite certain what a prefix is.

Does a prefix require a delimiter?

If we have a bucket where we store all files at the "root" level (completely flat, without any prefixes/delimiters), does that count as a single "prefix", and is it subject to the rate limits posted above?

The way I'm interpreting Amazon's documentation suggests to me that this IS the case, and that the flat structure would be considered a single "prefix" (i.e. it would be subject to the published rate limits above).

Suppose that your bucket (admin-created) has four objects with the following object keys:

Development/Projects1.xls

Finance/statement1.pdf

Private/taxdocument.pdf

s3-dg.pdf

The s3-dg.pdf key does not have a prefix, so its object appears directly at the root level of the bucket. If you open the Development/ folder, you see the Projects.xlsx object in it.

In the above example would s3-dg.pdf be subject to a different rate limit (5500 GET requests /second) than each of the other prefixes (Development/Finance/Private)?


What's more confusing is that I've read a couple of blogs about Amazon using the first N bytes of the key as a partition key and encouraging the use of high-cardinality prefixes; I'm just not sure how that interacts with a bucket that has a "flat file structure".

Solution

You're right, the announcement seems to contradict itself. It's just not written properly, but the information is correct. In short:

  1. Each prefix can achieve up to 3,500/5,500 requests per second, so for many purposes, the assumption is that you wouldn't need to use several prefixes.
  2. Prefixes are considered to be the whole path (up to the last '/') of an object's location, and are no longer hashed only by the first 6-8 characters. Therefore it is enough to split the data between any two "folders" to double the maximum requests per second (assuming requests are divided evenly between the two).
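As a rough illustration of point 2, here is a minimal sketch of the idea (the helper name `shard_key` is hypothetical, not an AWS API): object keys are deterministically assigned to one of N top-level "folders", so evenly distributed traffic spreads over N distinct prefixes and can approach N times the per-prefix limits.

```python
import hashlib

def shard_key(key: str, num_prefixes: int = 2) -> str:
    """Prepend a deterministic shard 'folder' to an object key so that
    requests for different keys spread across num_prefixes S3 prefixes."""
    shard = int(hashlib.md5(key.encode()).hexdigest(), 16) % num_prefixes
    return f"{shard}/{key}"

# With 2 shards, each prefix gets its own 3,500 PUT / 5,500 GET per-second
# budget, so evenly distributed traffic can reach roughly 2x those limits.
```

Because the shard is derived from the key itself, the same key always maps to the same prefix, so later reads know where to look without a lookup table.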

For reference, here is a response from AWS support to my clarification request:

Hello Oren,

Thank you for contacting AWS Support.

I understand that you read AWS post on S3 request rate performance being increased and you have additional questions regarding this announcement.

Before this upgrade, S3 supported 100 PUT/LIST/DELETE requests per second and 300 GET requests per second. To achieve higher performance, a random hash / prefix schema had to be implemented. Since last year the request rate limits increased to 3,500 PUT/POST/DELETE and 5,500 GET requests per second. This increase is often enough for applications to mitigate 503 SlowDown errors without having to randomize prefixes.

However, if the new limits are not sufficient, prefixes would need to be used. A prefix has no fixed number of characters. It is any string between a bucket name and an object name, for example:

  • bucket/folder1/sub1/file
  • bucket/folder1/sub2/file
  • bucket/1/file
  • bucket/2/file

The prefixes of the object 'file' would be: /folder1/sub1/, /folder1/sub2/, /1/, /2/. In this example, if you spread reads evenly across all four prefixes, you can achieve 22,000 requests per second.
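Following the support answer's definition, a small sketch (plain Python, no AWS calls; the helper name `prefix_of` is hypothetical) that derives the prefix of each key, i.e. everything up to the last '/', and the aggregate GET rate those prefixes allow:

```python
def prefix_of(key: str) -> str:
    """Return the prefix of an object key: everything up to and including
    the last '/'. Flat keys (no '/') share the empty root prefix."""
    idx = key.rfind("/")
    return key[: idx + 1]  # "" when the key has no '/'

keys = ["folder1/sub1/file", "folder1/sub2/file", "1/file", "2/file"]
prefixes = {prefix_of(k) for k in keys}

# 4 distinct prefixes x 5,500 GET/s each = 22,000 GET/s in aggregate,
# matching the figure in the support reply above.
max_get_rate = 5500 * len(prefixes)
```

Note that by this rule every key in a completely flat bucket maps to the same empty prefix, which is why a flat layout is capped at a single prefix's limits.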
