How long does it take for AWS S3 to save and load an item?



S3 FAQ mentions that "Amazon S3 buckets in all Regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES." However, I don't know how long it takes to get eventual consistency. I tried to search for this but couldn't find an answer in S3 documentation.

Situation:

We have a website that consists of 7 steps. When the user clicks save in each step, we want to save a JSON document (containing information for all 7 steps) to Amazon S3. Currently we plan to:

  1. Create a single S3 bucket to store all JSON documents.
  2. When the user saves step 1, we create a new item in S3.
  3. When the user saves steps 2-7, we overwrite the existing item.
  4. After the user saves a step and refreshes the page, he should be able to see the information he just saved. I.e., we want to make sure that we always read after write.

The full JSON document (all 7 steps completed) is around 20 KB. After a user clicks the save button, we can freeze the page for some time so they cannot make other changes until the save is finished.

Question:

  1. How long does it take for AWS S3 to save and load an item? (We can freeze our website when document is being saved to S3)
  2. Is there a function to calculate save/load time based on item size?
  3. Is the save/load time going to be different if I choose another S3 region? If so, which is the best region for Seattle?

Solution

I wanted to add to @error2007s's answer.

How long does it take for AWS S3 to save and load an item? (We can freeze our website when document is being saved to S3)

It's not only that you will not find the exact time anywhere; there's actually no such thing as an exact time. That's just what "eventual consistency" is all about: consistency will be achieved eventually. You can't know when.

If somebody gave you an upper bound for how long a system would take to achieve consistency, then you wouldn't call it "eventually consistent" anymore. It would be "consistent within X amount of time".


The problem now becomes, "How do I deal with eventual consistency?" (instead of trying to "beat it")

To really find the answer to that question, you need to first understand what kind of consistency you truly need, and how exactly the eventual consistency of S3 could affect your workflow.

Based on your description, I understand that you would write a total of 7 times to S3, once for each step you have. For the first write, as you correctly cited the FAQs, you get strong consistency for any reads after that. For all the subsequent writes (which are really "replacing" the original object), you might observe eventual consistency - that is, if you try to read the overwritten object, you might get the most recent version, or you might get an older version. This is what is referred to as "eventual consistency" on S3 in this scenario.
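One practical way to live with that uncertainty is to make stale reads detectable: embed a monotonically increasing step counter in the JSON document itself, so a read that returns an older version after an overwrite can be recognized and retried. This is a minimal sketch, not part of the original design; the field name `last_saved_step` is an assumption for illustration:

```python
import json

def is_stale(document_json: str, expected_step: int) -> bool:
    """Return True if a document read back from S3 predates the step we
    just saved. Assumes each save bumps 'last_saved_step' in the JSON."""
    doc = json.loads(document_json)
    return doc.get("last_saved_step", 0) < expected_step
```

On a stale read, the client can simply retry the GET after a short delay until the counter catches up to the step it just saved.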

A few alternatives for you to consider:

  • don't write to S3 on every single step; instead, keep the data for each step on the client side, and then only write 1 single object to S3 after the 7th step. This way, there's only 1 write, no "overwrites", so no "eventual consistency". This might or might not be possible for your specific scenario, you need to evaluate that.

  • alternatively, write S3 objects with different names for each step. E.g., something like: after step 1, save it to bruno-preferences-step-1.json; then, after step 2, save the results to bruno-preferences-step-2.json; and so on, then save the final preferences file to bruno-preferences.json, or maybe even bruno-preferences-step-7.json, giving yourself the flexibility to add more steps in the future. Note that the idea here is to avoid overwrites, which could cause eventual-consistency issues. Using this approach, you only write new objects; you never overwrite them.

  • finally, you might want to consider Amazon DynamoDB. It's a NoSQL database that you can securely connect to directly from the browser or from your server. It provides you with replication, automatic scaling, and load distribution (just like S3). You also have the option to tell DynamoDB that you want to perform strongly consistent reads (the default is eventually consistent reads; you have to change a parameter to get strongly consistent reads). DynamoDB is typically used for "small" records, and 20 KB is definitely within range -- the maximum size of a record is 400 KB as of today. You might want to check this out: DynamoDB FAQs: What is the consistency model of Amazon DynamoDB?
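The second alternative above (a fresh key per step, so nothing is ever overwritten) can be sketched like this. The S3 client is passed in, and the bucket name and key pattern are assumptions for illustration, not fixed by the question:

```python
import json

def step_key(user_id: str, step: int) -> str:
    """Build a unique object key per step, e.g. 'bruno-preferences-step-1.json',
    so every PUT creates a brand-new object and eventual consistency
    for overwrites never comes into play."""
    return f"{user_id}-preferences-step-{step}.json"

def save_step(s3_client, bucket: str, user_id: str, step: int, data: dict) -> str:
    """PUT this step's document under a new key (a first write, so reads
    after it are strongly consistent). s3_client is a boto3 S3 client."""
    key = step_key(user_id, step)
    s3_client.put_object(
        Bucket=bucket,
        Key=key,
        Body=json.dumps(data).encode("utf-8"),
        ContentType="application/json",
    )
    return key
```

On page refresh, the client reads back the key for the highest step number it knows it has saved.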
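For the DynamoDB option, the parameter alluded to above is `ConsistentRead`, which boto3's Table resource accepts on `get_item`. A minimal sketch, assuming a table keyed on a hypothetical `user_id` attribute:

```python
def load_preferences(table, user_id: str):
    """Read the user's preferences document with a strongly consistent read.
    'table' is a boto3 DynamoDB Table resource; ConsistentRead=True
    overrides the eventually consistent default."""
    resp = table.get_item(Key={"user_id": user_id}, ConsistentRead=True)
    return resp.get("Item")
```

One trade-off worth noting: a strongly consistent read consumes twice the read capacity of an eventually consistent one, which belongs in any cost comparison with S3.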
