Using terraform_remote_state over common data filters


Question

I would like to understand when it is recommended to use terraform_remote_state over common data filter approaches.

I see cases like images, which are not managed by another Terraform state, in which case the obvious (and only) choice is data filters. However, in most cases I could choose between terraform_remote_state and other data filters. I could not find an official recommendation on that matter.

Let's take an example (the following code does not run as-is and is simplified to show only the main idea):

Let us assume we have a central component with its own state/workspace:

vault/main.tf:

terraform {
  backend "azurerm" {
    storage_account_name = "tfstates"
    container_name       = "tfstates"
    key                  = "vault/all.tfstate"
  }
}

provider "openstack" {
  version     = "1.19"
  cloud       = "openstack"
}

resource "openstack_networking_subnetpool_v2" "vault" {
  name              = "vault"
  prefixes          = ["10.1.0.0/16"]
  min_prefixlen     = 24
  default_prefixlen = 24
}

resource "openstack_networking_network_v2" "vault" {
  name           = "vault"
}

resource "openstack_networking_subnet_v2" "vault" {
  name            = "vault"
  network_id      = openstack_networking_network_v2.vault.id
  subnetpool_id   = openstack_networking_subnetpool_v2.vault.id
}

// Make cidr available for terraform_remote_state approach
output "cidr" {
  value = openstack_networking_subnet_v2.vault.cidr
}

....


Option 1: Whitelist the vault CIDR from another tf workspace using data filters

postgres/main.tf:

terraform {
  backend "azurerm" {
    storage_account_name = "tfstates"
    container_name       = "tfstates"
    key                  = "postgres/all.tfstate"
  }
}

provider "openstack" {
  version     = "1.19"
  cloud       = "openstack"
}

data "openstack_identity_project_v3" "vault" {
  // assuming vault is setup in its own project
  name = "vault"
}

data "openstack_networking_network_v2" "vault" {
  name      = "vault"
  tenant_id = data.openstack_identity_project_v3.vault.id
}

data "openstack_networking_subnet_v2" "vault" {
  name      = "vault"
  tenant_id = data.openstack_identity_project_v3.vault.id
}

resource "openstack_networking_secgroup_v2" "postgres" {
  name        = "postgres"
  description = "Allow vault connection"
}

resource "openstack_networking_secgroup_rule_v2" "allow-vault" {
  direction         = "ingress"
  ethertype         = "IPv4"
  security_group_id = openstack_networking_secgroup_v2.postgres.id
  remote_ip_prefix  = data.openstack_networking_subnet_v2.vault.cidr 
}


Option 2: Whitelist the vault CIDR from another tf workspace using terraform_remote_state

postgres/main.tf:

terraform {
  backend "azurerm" {
    storage_account_name = "tfstates"
    container_name       = "tfstates"
    key                  = "postgres/all.tfstate"
  }
}

provider "openstack" {
  version     = "1.19"
  cloud       = "openstack"
}

data "terraform_remote_state" "vault" {
  backend "azurerm" {
    storage_account_name = "tfstates"
    container_name       = "tfstates"
    key                  = "vault/all.tfstate"
  }
}

resource "openstack_networking_secgroup_v2" "postgres" {
  name        = "postgres"
  description = "Allow vault connection"
}

resource "openstack_networking_secgroup_rule_v2" "allow-vault" {
  direction         = "ingress"
  ethertype         = "IPv4"
  security_group_id = openstack_networking_secgroup_v2.postgres.id
  remote_ip_prefix  = data.terraform_remote_state.vault.outputs.cidr
}

Personally, I prefer terraform_remote_state because it feels less ambiguous and more declarative from a module perspective (i.e., you consciously declare the output variables that should be consumed by other workspaces). However, I'm interested in whether there are solid reasons against it, or best practices I'm not aware of.

Is there an officially recommended way for scenarios like that?

Recommended answer

@martin-atkins gave a great answer in the HashiCorp discussion forum.

However, he did not respond in this thread, so for the sake of completeness I am summarizing the gist in my own words.

The proper HashiCorp way would be a third option: write the parameters to a configuration store such as Consul (a sketch of this approach follows the list of benefits below).

This has several benefits:

  1. You control what gets exposed to other tools (the benefit of option 2 from the question: explicit publishing).
  2. There is no tight coupling between Terraform workspaces (the benefit of option 1 from the question: decoupling), i.e. the producer of a value is decoupled from its consumers.
  3. Other tools (including Terraform itself) can consume the values later.
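
To make that third option concrete, here is a minimal sketch (not from the original answer) of how the vault workspace could publish the CIDR to Consul and how the postgres workspace could read it back. The Consul address and the key path are hypothetical placeholders, and the sketch assumes the official Consul provider's consul_keys resource and data source:

// vault/main.tf (addition): publish the subnet CIDR to Consul
provider "consul" {
  address = "consul.example.com:8500"   // hypothetical Consul endpoint
}

resource "consul_keys" "vault" {
  key {
    path  = "config/vault/cidr"         // hypothetical key path
    value = openstack_networking_subnet_v2.vault.cidr
  }
}

// postgres/main.tf (addition): read the CIDR back from Consul
data "consul_keys" "vault" {
  key {
    name = "cidr"
    path = "config/vault/cidr"
  }
}

resource "openstack_networking_secgroup_rule_v2" "allow-vault" {
  direction         = "ingress"
  ethertype         = "IPv4"
  security_group_id = openstack_networking_secgroup_v2.postgres.id
  remote_ip_prefix  = data.consul_keys.vault.var.cidr
}

With this pattern the postgres workspace no longer needs access to the vault workspace's state file, and any other tool that can talk to Consul can read the same value.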


I recommend reading the original answer, as it goes into more depth.
