aws_msk_cluster updated when referencing an aws_msk_configuration causes cluster delete/recreate #8953
Doing a secondary `terraform apply` results in Terraform wanting to destroy the cluster and create a new cluster when using `aws_msk_configuration` for `configuration_info`.

Hi again @cdenneen 👋 In this case, the plan difference is showing a change to the `encryption_info` configuration block, which is what forces the replacement. Can you confirm that this configuration was not changed? Thanks.
Yes, this wasn’t changed.
Thanks for the confirmation. I was able to reproduce this issue using an `encryption_info` configuration block that only declares `encryption_at_rest_kms_key_arn`. This behavior is not observed when the `encryption_in_transit` block is declared explicitly. This will require either confirmation from the AWS MSK team that this is expected behavior (in which case we can remove our default expectation in the provider) or a code fix on our side. To work around this behavior for now, explicitly declare the `encryption_in_transit` configuration block in the `aws_msk_cluster` resource, so that the value returned by the API matches what is stored in state:

```hcl
resource "aws_msk_cluster" "kafka" {
  # ... other configuration ...

  encryption_info {
    encryption_at_rest_kms_key_arn = "${aws_kms_key.kms.arn}"

    encryption_in_transit {
      client_broker = "TLS" # or TLS_PLAINTEXT or PLAINTEXT
    }
  }
}
```

Full reproduction configuration used for testing:
```hcl
terraform {
  required_providers {
    aws = "2.14.0"
  }

  required_version = "0.12.1"
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "vpc" {
  cidr_block = "192.168.0.0/22"

  tags = {
    Name = "tf-testacc-msk-cluster-vpc"
  }
}

data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_subnet" "subnet_az1" {
  vpc_id            = "${aws_vpc.vpc.id}"
  cidr_block        = "192.168.0.0/24"
  availability_zone = "${data.aws_availability_zones.available.names[0]}"

  tags = {
    Name = "tf-testacc-msk-cluster-subnet-az1"
  }
}

resource "aws_subnet" "subnet_az2" {
  vpc_id            = "${aws_vpc.vpc.id}"
  cidr_block        = "192.168.1.0/24"
  availability_zone = "${data.aws_availability_zones.available.names[1]}"

  tags = {
    Name = "tf-testacc-msk-cluster-subnet-az2"
  }
}

resource "aws_subnet" "subnet_az3" {
  vpc_id            = "${aws_vpc.vpc.id}"
  cidr_block        = "192.168.2.0/24"
  availability_zone = "${data.aws_availability_zones.available.names[2]}"

  tags = {
    Name = "tf-testacc-msk-cluster-subnet-az3"
  }
}

resource "aws_security_group" "sg" {
  vpc_id = "${aws_vpc.vpc.id}"
}

resource "aws_msk_configuration" "config1" {
  kafka_versions = ["2.1.0"]
  name           = "mskconfig"

  server_properties = <<PROPERTIES
auto.create.topics.enable = true
delete.topic.enable = true
log.retention.ms = 259200000
PROPERTIES
}

resource "aws_kms_key" "kms" {
  description = "msk kms key"
}

resource "aws_kms_alias" "a" {
  name          = "alias/msk-key"
  target_key_id = "${aws_kms_key.kms.key_id}"
}

resource "aws_msk_cluster" "kafka" {
  depends_on = [aws_msk_configuration.config1]

  cluster_name           = "test-kafka-cluster"
  kafka_version          = "2.1.0"
  number_of_broker_nodes = 3

  broker_node_group_info {
    instance_type = "kafka.m5.large"

    client_subnets = [
      "${aws_subnet.subnet_az1.id}",
      "${aws_subnet.subnet_az2.id}",
      "${aws_subnet.subnet_az3.id}",
    ]

    security_groups = ["${aws_security_group.sg.id}"]
    ebs_volume_size = 1000
  }

  configuration_info {
    arn      = "${aws_msk_configuration.config1.arn}"
    revision = "${aws_msk_configuration.config1.latest_revision}"
  }

  encryption_info {
    encryption_at_rest_kms_key_arn = "${aws_kms_key.kms.arn}"
  }
}
```
@bflad thanks, I've raised this with the account team. With the configurations not being able to be deleted, I would be stuck again. Curious, with your CI/CD are you just enumerating thousands of these configurations? Also noticed an issue with the import resulting in a plan difference, from what I believe to be the missing revision. I mentioned this in #8898.
Is the expectation that you would put a UUID or something on the configuration name since it can't be deleted?
Yep. We'll likely hit an account limit soon until the deletion API is available. 😄

The solution to this problem will be environment-specific for your needs. You have a few options; one of them, including a unique suffix in the configuration name, is sketched below.
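A minimal sketch of that option, assuming the `random` provider is acceptable in your setup (the `random_id` resource and the `local.server_properties` helper are illustrative, not from the thread):

```hcl
locals {
  server_properties = <<PROPERTIES
auto.create.topics.enable = true
delete.topic.enable = true
PROPERTIES
}

# Generate a fresh suffix whenever the server properties change, so each
# revision of the settings lands in a distinctly named configuration
# (MSK configurations could not be deleted at the time of this thread).
resource "random_id" "config_suffix" {
  byte_length = 4

  keepers = {
    server_properties = local.server_properties
  }
}

resource "aws_msk_configuration" "config1" {
  kafka_versions    = ["2.1.0"]
  name              = "mskconfig-${random_id.config_suffix.hex}"
  server_properties = local.server_properties
}
```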
There appears to be a similar problem if you don't specify `configuration_info`: besides showing changes to `encryption_info`, the plan also shows changes to `configuration_info`.
Seems like I'm hitting this bug too: with no change, each apply results in a plan difference. Terraform 0.12.8, in the eu-west-1 region. The problem seems to be fixed, however, when passing a reference to a single key.
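Not suggested in the thread itself, but as a generic Terraform stopgap for a perpetual diff like this, a `lifecycle` block can suppress the spurious change while waiting for a provider fix, at the cost of also ignoring real edits to the block:

```hcl
resource "aws_msk_cluster" "kafka" {
  # ... other configuration ...

  lifecycle {
    # Stopgap: ignore the perpetually diffing block entirely.
    ignore_changes = [encryption_info]
  }
}
```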
Hi folks 👋 For the original issue report with the `encryption_info` plan difference forcing cluster recreation, this has since been addressed. Separately, but mentioned above, it is also worth mentioning that the MSK API did just release support for in-place cluster configuration updates (`UpdateClusterConfiguration`). If you are still running into other issues with the `aws_msk_cluster` resource, please open a new issue.