Terraform: state management for multi-tenancy


Problem description

As we're in the process of evaluating Terraform to (partially) replace our Ansible provisioning process for a multi-tenancy SaaS, we've come to appreciate its convenience, performance and reliability: infrastructure changes (adding/removing resources) are handled smoothly, and the state of the infrastructure is tracked (which is very cool).

Our application is a multi-tenancy SaaS for which we provision separate instances per customer - in Ansible we have our own dynamic inventory (much like the EC2 dynamic inventory). We have gone through lots of Terraform books/tutorials and best-practice guides, and many of them suggest that multi-environment state should be managed separately and remotely in Terraform, but they all assume static environments (like Dev/Staging/Prod).

Is there any best practice or real-world example of managing a dynamic inventory of states for a multi-tenancy app? We would like to track the state of each customer's set of instances and propagate changes to them easily.

One approach might be to create a directory for each customer and place *.tf scripts inside it that call a module hosted somewhere central. The state files could be put in S3 so that we can propagate changes to each individual customer when needed.
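A minimal sketch of that per-customer layout might look like the following (the bucket name, module source and variables here are all hypothetical, not from the question):

```hcl
# customers/customer1/main.tf -- hypothetical per-customer root configuration

terraform {
  backend "s3" {
    bucket = "acme-terraform-state"                    # assumed bucket name
    key    = "customers/customer1/terraform.tfstate"   # unique key per customer
    region = "eu-west-1"
  }
}

# Call the shared module hosted somewhere central
module "app" {
  source        = "git::https://example.com/terraform-modules.git//app"  # hypothetical source
  customer_name = "customer1"
  instance_type = "t3.medium"
}
```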

Recommended answer

Terraform works at the folder level, pulling in all .tf files (and, by default, a terraform.tfvars file).

So we do something similar to Anton's answer, but do away with some of the complexity around templating things with sed. As a basic example, your structure might look like this:

$ tree -a --dirsfirst
.
├── components
│   ├── application.tf
│   ├── common.tf
│   ├── global_component1.tf
│   └── global_component2.tf
├── modules
│   ├── module1
│   ├── module2
│   └── module3
├── production
│   ├── customer1
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   ├── customer2
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   └── global
│       ├── common.tf -> ../../components/common.tf
│       ├── global_component1.tf -> ../../components/global_component1.tf
│       └── terraform.tfvars
├── staging
│   ├── customer1
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   ├── customer2
│   │   ├── application.tf -> ../../components/application.tf
│   │   ├── common.tf -> ../../components/common.tf
│   │   └── terraform.tfvars
│   └── global
│       ├── common.tf -> ../../components/common.tf
│       ├── global_component1.tf -> ../../components/global_component1.tf
│       ├── global_component2.tf -> ../../components/global_component2.tf
│       └── terraform.tfvars
├── apply.sh
├── destroy.sh
├── plan.sh
└── remote.sh

Here you run your plan/apply/destroy from the root level, where wrapper shell scripts handle things like cd'ing into the directory and running terraform get -update=true, but also run terraform init for the folder so that you get a unique state file key for S3, allowing you to track the state of each folder independently.
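A rough sketch of what such a wrapper could look like (the script names come from the tree above, but the state-key scheme and the exact commands are assumptions, not the answer's actual scripts):

```shell
#!/usr/bin/env bash
# plan.sh -- hypothetical wrapper: ./plan.sh production/customer1
set -euo pipefail

# Derive a unique S3 state key from the folder path (assumed naming scheme)
state_key_for() {
  printf 'terraform/%s/terraform.tfstate' "$1"
}

if [ "$#" -ge 1 ]; then
  target="$1"
  (
    cd "$target"
    terraform get -update=true                                        # refresh modules
    terraform init -backend-config="key=$(state_key_for "$target")"   # per-folder state
    terraform plan -out=plan.tfplan
  )
fi
```

An apply.sh or destroy.sh would differ only in the final command, so each folder's state stays isolated under its own S3 key.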

The above solution uses generic modules that wrap resources to provide a common interface (for example, our EC2 instances are tagged in a specific way depending on some input variables and are also given a private Route53 record), which are then composed into "implemented components".
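As a rough illustration of such a wrapper module (the variable names, tagging scheme and record naming here are hypothetical, not the answer's actual module):

```hcl
# modules/app_instance/main.tf -- hypothetical module wrapping an EC2 instance

variable "customer_name" {}
variable "environment" {}
variable "ami_id" {}
variable "instance_type" { default = "t3.micro" }
variable "zone_id" {}

resource "aws_instance" "this" {
  ami           = var.ami_id
  instance_type = var.instance_type

  # Consistent tagging scheme driven by the input variables
  tags = {
    Name        = "${var.customer_name}-${var.environment}-app"
    Customer    = var.customer_name
    Environment = var.environment
  }
}

# Every instance also gets a private Route53 record
resource "aws_route53_record" "this" {
  zone_id = var.zone_id
  name    = "${var.customer_name}-app.${var.environment}.internal"
  type    = "A"
  ttl     = 300
  records = [aws_instance.this.private_ip]
}
```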

These components contain a set of modules/resources that Terraform applies together from the same folder. So we might put an ELB, some application servers and a database under application.tf, and then symlinking that file into a location gives us a single place to control with Terraform. Where there are differences in resources between locations, they are split out into separate component files. In the example above you can see that staging/global has a global_component2.tf that isn't present in production. This might be something that is only applied in the non-production environments, such as a network control that prevents internet access to the environment.

The real benefit here is that everything is directly and easily viewable by developers in source control, rather than having a templating step that produces the Terraform code you want.

It also helps keep things DRY: the only real differences between environments are in the terraform.tfvars files in each location, and it is easier to test changes before putting them live because each folder is much the same as the others.
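For example, two customers' terraform.tfvars files might differ only in a few values (the variable names and values here are made up for illustration):

```hcl
# production/customer1/terraform.tfvars
customer_name  = "customer1"
environment    = "production"
instance_type  = "t3.large"
instance_count = 4
```

```hcl
# production/customer2/terraform.tfvars
customer_name  = "customer2"
environment    = "production"
instance_type  = "t3.medium"
instance_count = 2
```

Everything else in each folder is a symlink to the shared components, so a review of a new customer directory only has to look at one small file.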
