Terraform: state management for multi-tenancy

Question

As we're evaluating Terraform to (partially) replace our Ansible provisioning process for a multi-tenant SaaS, we've come to appreciate Terraform's convenience, performance and reliability: we can handle infrastructure changes (adding/removing) smoothly while keeping track of the infrastructure state (which is very cool).

Our application is a multi-tenant SaaS for which we provision separate instances for our customers - in Ansible we have our own dynamic inventory (much like the EC2 dynamic inventory). We've gone through lots of Terraform books/tutorials and best practices, many of which suggest that multi-environment state should be managed separately and remotely in Terraform, but they all assume static environments (like Dev/Staging/Prod).

Is there any best practice or real example of managing a dynamic inventory of states for multi-tenant apps? We would like to track the state of each customer's set of instances and roll out changes to them easily.

One approach might be to create a directory for each customer and place *.tf files inside, which would call modules hosted somewhere central. State files could be put in S3, so that we can roll out changes to each individual customer if needed.
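A minimal sketch of what one such customer directory could contain, assuming a recent Terraform version with the S3 backend (the bucket, state key and module path below are made up for illustration):

# production/customer1/main.tf - hypothetical sketch of the per-customer layout
terraform {
  backend "s3" {
    bucket = "mycompany-terraform-state"               # hypothetical bucket
    key    = "production/customer1/terraform.tfstate"  # one state key per customer
    region = "us-east-1"
  }
}

module "customer_stack" {
  source        = "../../modules/customer_stack"       # hypothetical shared module
  customer_name = "customer1"
  instance_type = "t3.medium"
}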

Answer

Terraform works on a folder level, pulling in all .tf files (and by default a terraform.tfvars file).

We do something similar to Anton's answer but do away with some of the complexity around templating things with sed. As a basic example, your structure might look like this:

$ tree -a --dirsfirst
.
├── components
│   ├── application.tf
│   ├── common.tf
│   ├── global_component1.tf
│   └── global_component2.tf
├── modules
│   ├── module1
│   ├── module2
│   └── module3
├── production
│   ├── customer1
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   ├── customer2
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   └── global
│       ├── common.tf -> ../../components/common.tf
│       ├── global_component1.tf -> ../../components/global_component1.tf
│       ├── global_component2.tf -> ../../components/global_component2.tf
│       └── terraform.tfvars
├── staging
│   ├── customer1
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   ├── customer2
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   └── global
│       ├── common.tf -> ../../components/common.tf
│       ├── global_component1.tf -> ../../components/global_component1.tf
│       └── terraform.tfvars
├── apply.sh
├── destroy.sh
├── plan.sh
└── remote.sh

Here you run your plan/apply/destroy from the root level, where the wrapper shell scripts handle things like cd'ing into the directory and running terraform get -update=true, and also run terraform init for the folder so you get a unique state file key in S3, allowing you to track the state of each folder independently.
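As a rough sketch of what one of those wrappers could look like (the bucket name, the partial backend configuration and the exact flags are assumptions and will vary with your Terraform version), plan.sh might be along these lines:

#!/usr/bin/env bash
# plan.sh <environment>/<folder>, e.g. ./plan.sh production/customer1
# Hypothetical wrapper sketch; the state bucket and backend flags are assumptions.
set -euo pipefail

FOLDER="$1"
STATE_BUCKET="mycompany-terraform-state"   # hypothetical S3 bucket

cd "$FOLDER"

# Pull down/refresh the modules referenced by the *.tf files in this folder.
terraform get -update=true

# Initialise the S3 backend with a state key unique to this folder, so e.g.
# production/customer1 and staging/customer1 are tracked independently.
terraform init \
  -backend-config="bucket=${STATE_BUCKET}" \
  -backend-config="key=${FOLDER}/terraform.tfstate"

terraform plan

This sketch assumes common.tf declares a (possibly empty) backend "s3" block that terraform init can complete; older Terraform releases configured remote state with terraform remote config instead, which is presumably what remote.sh wraps.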

The above solution has generic modules that wrap resources to provide a common interface (for example, our EC2 instances are tagged in a specific way depending on some input variables and are also given a private Route53 record), and then "implemented components" built on top of them.
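A wrapper module of that kind might look roughly like the following; the variable names, tagging scheme, zone and record naming are illustrative assumptions, not the actual module from the answer:

# modules/module1/main.tf - hypothetical sketch of a generic wrapper module that
# tags an EC2 instance consistently and registers a private Route53 record for it.
variable "name" {}
variable "customer" {}
variable "environment" {}
variable "ami" {}
variable "private_zone_id" {}

variable "instance_type" {
  default = "t3.micro"
}

resource "aws_instance" "this" {
  ami           = var.ami
  instance_type = var.instance_type

  tags = {
    Name        = var.name
    Customer    = var.customer
    Environment = var.environment
  }
}

resource "aws_route53_record" "private" {
  zone_id = var.private_zone_id
  name    = "${var.name}.${var.customer}.internal"
  type    = "A"
  ttl     = 300
  records = [aws_instance.this.private_ip]
}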

These components contain a bunch of modules/resources that Terraform applies together from the same folder. So we might put an ELB, some application servers and a database under application.tf, and symlinking that into a location gives us a single place to control with Terraform. Where a location needs different resources, those are separated off into their own component file. In the tree above you can see that production/global has a global_component2.tf that isn't present in staging; this might be something that is only applied in certain environments, such as some network control restricting internet access to the environment.
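For illustration, application.tf as such a component might compose wrapper modules roughly like this (the module names, paths and variables are hypothetical; the relative source paths resolve from the customer folder the file is symlinked into):

# components/application.tf - hypothetical sketch of an "implemented component":
# an ELB, some application servers and a database, built from wrapper modules.
module "elb" {
  source      = "../../modules/elb"          # hypothetical wrapper module
  customer    = var.customer
  environment = var.environment
}

module "app_servers" {
  source         = "../../modules/app_server"
  customer       = var.customer
  environment    = var.environment
  instance_count = var.app_server_count
}

module "database" {
  source      = "../../modules/database"
  customer    = var.customer
  environment = var.environment
}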

The real benefit here is that everything is easily viewable in source control for developers directly rather than having a templating step that produces the Terraform code you want.

It also helps keep things DRY: the only real differences between environments are in each location's terraform.tfvars file, and it makes changes easier to test before putting them live because each folder is pretty much the same as the others.
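For example, two customer folders might differ only in values like these (the keys below are illustrative):

# production/customer1/terraform.tfvars - illustrative keys and values
customer         = "customer1"
environment      = "production"
app_server_count = 4
instance_type    = "m5.large"

production/customer2/terraform.tfvars would hold the same keys with that customer's values, so the diff between any two customer folders is essentially just this file.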
