Terraform cycle when altering a count

Question
I have some resources whose count is parameterised by a variable. This is used to create VM resources as well as null_resources, e.g. for running deployment scripts on them. When I reduce the value of the count from 2 to 1 and apply, I get an error.
Terraform executes plan with no complaints. But when I apply, it tells me there is a cycle:
Error: Cycle: null_resource.network_connection_configuration[7] (destroy), null_resource.network_connection_configuration[8] (destroy), null_resource.network_connection_configuration[3] (destroy), null_resource.network_connection_configuration[4] (destroy), null_resource.network_connection_configuration[0] (destroy), null_resource.network_connection_configuration[6] (destroy), null_resource.network_connection_configuration[1] (destroy), null_resource.network_connection_configuration[9] (destroy), null_resource.network_connection_configuration[2] (destroy), null_resource.network_connection_configuration[10] (destroy), hcloud_server.kafka[2] (destroy), local.all_machine_ips, null_resource.network_connection_configuration (prepare state), null_resource.network_connection_configuration[5] (destroy)
Here is the relevant part of the file:
variable kafka_count {
  default = 3
}

resource "hcloud_server" "kafka" {
  count       = "${var.kafka_count}"
  name        = "kafka-${count.index}"
  image       = "ubuntu-18.04"
  server_type = "cx21"
}

locals {
  all_machine_ips = "${hcloud_server.kafka.*.ipv4_address}"
}

resource "null_resource" "network_connection_configuration" {
  count = "${length(local.all_machine_ips)}"

  triggers = {
    ips = "${join(",", local.all_machine_ips)}"
  }

  depends_on = [
    "hcloud_server.kafka"
  ]

  connection {
    type = "ssh"
    user = "deploy"
    host = "${element(local.all_machine_ips, count.index)}"
    port = 22
  }

  // ... some file provisioners
}
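(A sketch of one commonly suggested workaround, not part of the original question: stash each instance's IP in triggers and reference it through self inside the connection block, so that at destroy time the null_resource depends only on its own saved state rather than on local.all_machine_ips. The trigger names ip and all_ips are my own; untested against the hcloud provider.)

```hcl
resource "null_resource" "network_connection_configuration" {
  count = "${var.kafka_count}"

  # Save everything the connection needs in triggers; at destroy time
  # Terraform reads these back via self instead of re-evaluating locals.
  triggers = {
    ip      = "${hcloud_server.kafka[count.index].ipv4_address}"
    all_ips = "${join(",", hcloud_server.kafka.*.ipv4_address)}"
  }

  connection {
    type = "ssh"
    user = "deploy"
    host = "${self.triggers.ip}"
    port = 22
  }

  // ... some file provisioners
}
```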
When I try to find the cycle using the visualisation:
terraform graph -verbose -draw-cycles
No cycle is visible.

When I use TF_LOG=1, the debug log doesn't show any errors.
So the issue is that I can increase the count but not decrease it. I don't want to manually hack the state file, as it means I won't be able to scale down in future! I'm using Terraform v0.12.1.
Are there any strategies for debugging this situation?
Answer
I had a similar issue with 0.12.x - I was calling a provisioner within an aws_instance resource, which was giving the same error you had when increasing the count for the resource.
I got around it by using the self object (self.private_ip) to reference the resource rather than using count.index or element().