Terraform plan showing differences after importing resources from a deleted state

The infrastructure was built in AWS from Terraform source code. The state files are gone, and I'm now trying to import the existing infrastructure into Terraform, rebuilding the state and syncing it with the source code.
For every resource I run terraform import on, the import completes without errors. But when I run terraform plan (without making any modifications, right after the import), Terraform shows that it needs to modify or even destroy resources. I used terraform refresh and checked all the IDs and resource names/ARNs, but got the same result.
For example, I have a security group with the ID sg-12345678910111213. This resource needs to be imported, so I used the command below:
terraform import -var-file=secrets.tfvars aws_security_group.sg-rds sg-12345678910111213
aws_security_group.sg-rds: Importing from ID "sg-12345678910111213"...
aws_security_group.sg-rds: Import prepared!
Prepared aws_security_group for import
aws_security_group.sg-rds: Refreshing state... [id=sg-12345678910111213]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
When I run terraform plan -var-file=secrets.tfvars, I get the following output:
# aws_security_group.sg-rds will be updated in-place
~ resource "aws_security_group" "sg-rds" {
id = "sg-12345678910111213"
~ ingress = [
- {
- cidr_blocks = [
- "10.123.0.40/32",
]
- description = ""
- from_port = 3306
- ipv6_cidr_blocks = []
- prefix_list_ids = []
- protocol = "tcp"
- security_groups = [
- "sg-12345678910111213",
]
- self = false
- to_port = 3306
},
+ {
+ cidr_blocks = [
+ "10.123.0.40/32",
]
+ description = ""
+ from_port = 3306
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "tcp"
+ security_groups = []
+ self = false
+ to_port = 3306
},
+ {
+ cidr_blocks = []
+ description = ""
+ from_port = 3306
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "tcp"
+ security_groups = [
+ "sg-12345678910111213",
]
+ self = false
+ to_port = 3306
},
]
name = "SG_RDS"
+ revoke_rules_on_delete = false
tags = {
"Name" = "SG_RDS"
}
# (5 unchanged attributes hidden)
# (1 unchanged block hidden)
}
This is my security group resource source code:
resource "aws_security_group" "sg-rds" {
name = "SG_RDS"
description = "Allows incoming database connections"
ingress {
from_port = 3306
to_port = 3306
protocol = "tcp"
security_groups = [aws_security_group.sg-ec2.id]
}
ingress {
from_port = 3306
to_port = 3306
protocol = "tcp"
cidr_blocks = ["10.123.0.40/32"]
}
tags = {
Name = "SG_RDS"
}
}
The rules in the existing SG are shown in a screenshot of the AWS console (omitted here).
The source code has not changed in a way that should produce configuration drift (yet the diff apparently shows some), and this happens with every resource I imported.
I cannot destroy or change any resource without negatively impacting the project.
This is my current terraform version and providers:
Terraform v0.14.5
provider registry.terraform.io/hashicorp/aws v3.26.0
provider registry.terraform.io/hashicorp/random v3.0.1
provider registry.terraform.io/hashicorp/tls v3.0.0
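For reference, a quick way to compare what the import actually recorded against the HCL is to dump the resource from the new state; this is a minimal sketch using the standard Terraform CLI and the address from this question:

terraform state show aws_security_group.sg-rds
# prints every attribute Terraform recorded for the imported SG,
# including how the ingress rules were grouped, which can then be
# compared against the two separate ingress blocks in the config

In this case it would show whether AWS reports the CIDR source and the SG source as a single combined ingress rule, which is what the plan output above suggests Terraform is trying to split into the two rules defined in the configuration.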

I also encountered the same issue. The solution for me was to remove the resource from the state with terraform state rm resource_name.example.
After that I changed the name of the resource to resource_name.example2 in my TF configuration and imported it with terraform import resource_name.example2 azure-id.
Then I repeated the same operation: I removed resource_name.example2 from the state, changed my TF configuration back to resource_name.example, ran terraform import resource_name.example azure-id, and it worked! I'm assuming it's a bug, because I didn't change any of the resource configuration; I just removed it from the state and imported it again.
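Spelled out as commands, the workaround described above looks roughly like this (the resource address and ID are the placeholders from the answer, not real values):

# 1. Drop the drifting resource from state, rename it in the config, and re-import
terraform state rm resource_name.example
# edit the .tf file: rename the resource to resource_name.example2
terraform import resource_name.example2 azure-id

# 2. Repeat once more, renaming back to the original address
terraform state rm resource_name.example2
# edit the .tf file: rename the resource back to resource_name.example
terraform import resource_name.example azure-id

# 3. Verify the plan is now clean
terraform plan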

Related

How to reach PostgresCluster in Azure Cloud with a client from local?

I want to access my Postgres cluster running in Kubernetes on Azure with a client (e.g. pgAdmin) to search manually through the data.
At the moment my cluster only has one ingress, which points to a self-written API gateway.
I found a few ideas online and tried to add a load balancer in Kubernetes, without success.
My Postgres cluster in Terraform:
resource "helm_release" "postgres-cluster" {
name = "postgres-cluster"
repository = "https://charts.bitnami.com/bitnami"
chart = "postgresql-ha"
namespace = var.kube_namespace
set {
name = "global.postgresql.username"
value = var.postgresql_username
}
set {
name = "global.postgresql.password"
value = var.postgresql_password
}
}
This results in a running cluster (screenshot omitted).
Now my attempt to add a load balancer:
resource "kubernetes_manifest" "postgresql-loadbalancer" {
manifest = {
"apiVersion" = "v1"
"kind" = "Service"
"metadata" = {
"name" = "postgres-db-lb"
"namespace" = "${var.kube_namespace}"
}
"spec" = {
"selector" = {
"app.kubernetes.io/name" = "postgresql-ha"
}
"type" = "LoadBalancer"
"ports" = [{
"port" = "5432"
"targetPort" = "5432"
}]
}
}
}
This results in a service with an external IP (screenshot omitted), but I still have no success when I try to connect to that external IP and port.
Found the answer: it was an internal firewall I had never thought of. The code is absolutely correct; a load balancer does work here.
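If the blocking firewall turns out to be an Azure network security group rather than a corporate one, a rule along these lines could open the Postgres port toward the internal load balancer. This is only a sketch; the resource group, NSG, priority, and source range are assumptions and need to match your environment:

resource "azurerm_network_security_rule" "allow_postgres" {
  name                        = "allow-postgres-5432"                        # hypothetical name
  priority                    = 310                                          # assumption: any free priority
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "5432"
  source_address_prefix       = "10.0.0.0/8"                                 # assumption: your client network
  destination_address_prefix  = "*"
  resource_group_name         = azurerm_resource_group.example.name          # hypothetical reference
  network_security_group_name = azurerm_network_security_group.example.name  # hypothetical reference
}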

ERROR controller.provisioning Could not schedule pod, incompatible with provisioner "default", incompatible requirements, key karpenter.sh/provisioner

I read through the Karpenter documentation at https://karpenter.sh/v0.16.1/getting-started/getting-started-with-terraform/#install-karpenter-helm-chart. I followed the instructions step by step, but got errors at the end.
kubectl logs -f -n karpenter -l app.kubernetes.io/name=karpenter -c controller
DEBUG controller.provisioning Relaxing soft constraints for pod since it previously failed to schedule, removing: spec.topologySpreadConstraints = {"maxSkew":1,"topologyKey":"topology.kubernetes.io/zone","whenUnsatisfiable":"ScheduleAnyway","labelSelector":{"matchLabels":{"app.kubernetes.io/instance":"karpenter","app.kubernetes.io/name":"karpenter"}}} {"commit": "b157d45", "pod": "karpenter/karpenter-5755bb5b54-rh65t"}
2022-09-10T00:13:13.122Z
ERROR controller.provisioning Could not schedule pod, incompatible with provisioner "default", incompatible requirements, key karpenter.sh/provisioner-name, karpenter.sh/provisioner-name DoesNotExist not in karpenter.sh/provisioner-name In [default] {"commit": "b157d45", "pod": "karpenter/karpenter-5755bb5b54-rh65t"}
Below is the source code:
cat main.tf
terraform {
required_version = "~> 1.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.0"
}
helm = {
source = "hashicorp/helm"
version = "~> 2.5"
}
kubectl = {
source = "gavinbunney/kubectl"
version = "~> 1.14"
}
}
}
provider "aws" {
region = "us-east-1"
}
locals {
cluster_name = "karpenter-demo"
# Used to determine correct partition (i.e. - `aws`, `aws-gov`, `aws-cn`, etc.)
partition = data.aws_partition.current.partition
}
data "aws_partition" "current" {}
module "vpc" {
# https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/latest
source = "terraform-aws-modules/vpc/aws"
version = "3.14.4"
name = local.cluster_name
cidr = "10.0.0.0/16"
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
one_nat_gateway_per_az = false
public_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = 1
}
}
module "eks" {
# https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest
source = "terraform-aws-modules/eks/aws"
version = "18.29.0"
cluster_name = local.cluster_name
cluster_version = "1.22"
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
# Required for Karpenter role below
enable_irsa = true
node_security_group_additional_rules = {
ingress_nodes_karpenter_port = {
description = "Cluster API to Node group for Karpenter webhook"
protocol = "tcp"
from_port = 8443
to_port = 8443
type = "ingress"
source_cluster_security_group = true
}
}
node_security_group_tags = {
# NOTE - if creating multiple security groups with this module, only tag the
# security group that Karpenter should utilize with the following tag
# (i.e. - at most, only one security group should have this tag in your account)
"karpenter.sh/discovery/${local.cluster_name}" = local.cluster_name
}
# Only need one node to get Karpenter up and running.
# This ensures core services such as VPC CNI, CoreDNS, etc. are up and running
# so that Karpenter can be deployed and start managing compute capacity as required
eks_managed_node_groups = {
initial = {
instance_types = ["m5.large"]
# Not required nor used - avoid tagging two security groups with same tag as well
create_security_group = false
min_size = 1
max_size = 1
desired_size = 1
iam_role_additional_policies = [
"arn:${local.partition}:iam::aws:policy/AmazonSSMManagedInstanceCore", # Required by Karpenter
"arn:${local.partition}:iam::aws:policy/AmazonEKSWorkerNodePolicy",
"arn:${local.partition}:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly", #for access to ECR images
"arn:${local.partition}:iam::aws:policy/CloudWatchAgentServerPolicy"
]
tags = {
# This will tag the launch template created for use by Karpenter
"karpenter.sh/discovery/${local.cluster_name}" = local.cluster_name
}
}
}
}
#The EKS module creates an IAM role for the EKS managed node group nodes. We’ll use that for Karpenter.
#We need to create an instance profile we can reference.
#Karpenter can use this instance profile to launch new EC2 instances and those instances will be able to connect to your cluster.
resource "aws_iam_instance_profile" "karpenter" {
name = "KarpenterNodeInstanceProfile-${local.cluster_name}"
role = module.eks.eks_managed_node_groups["initial"].iam_role_name
}
#Create the KarpenterController IAM Role
#Karpenter requires permissions like launching instances, which means it needs an IAM role that grants it access. The config
#below will create an AWS IAM Role, attach a policy, and authorize the Service Account to assume the role using IRSA. We will
#create the ServiceAccount and connect it to this role during the Helm chart install.
module "karpenter_irsa" {
source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
version = "5.3.3"
role_name = "karpenter-controller-${local.cluster_name}"
attach_karpenter_controller_policy = true
karpenter_tag_key = "karpenter.sh/discovery/${local.cluster_name}"
karpenter_controller_cluster_id = module.eks.cluster_id
karpenter_controller_node_iam_role_arns = [
module.eks.eks_managed_node_groups["initial"].iam_role_arn
]
oidc_providers = {
ex = {
provider_arn = module.eks.oidc_provider_arn
namespace_service_accounts = ["karpenter:karpenter"]
}
}
}
#Install Karpenter Helm Chart
#Use helm to deploy Karpenter to the cluster. We are going to use the helm_release Terraform resource to do the deploy and pass in the
#cluster details and IAM role Karpenter needs to assume.
provider "helm" {
kubernetes {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
args = ["eks", "get-token", "--cluster-name", local.cluster_name]
}
}
}
resource "helm_release" "karpenter" {
namespace = "karpenter"
create_namespace = true
name = "karpenter"
repository = "https://charts.karpenter.sh"
chart = "karpenter"
version = "v0.16.1"
set {
name = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
value = module.karpenter_irsa.iam_role_arn
}
set {
name = "clusterName"
value = module.eks.cluster_id
}
set {
name = "clusterEndpoint"
value = module.eks.cluster_endpoint
}
set {
name = "aws.defaultInstanceProfile"
value = aws_iam_instance_profile.karpenter.name
}
}
#Provisioner
#Create a default provisioner using the command below. This provisioner configures instances to connect to your cluster’s endpoint and
#discovers resources like subnets and security groups using the cluster’s name.
#This provisioner will create capacity as long as the sum of all created capacity is less than the specified limit.
provider "kubectl" {
apply_retry_count = 5
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
load_config_file = false
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
args = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
}
}
resource "kubectl_manifest" "karpenter_provisioner" {
yaml_body = <<-YAML
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
name: default
spec:
requirements:
- key: karpenter.sh/capacity-type
operator: In
values: ["spot"]
limits:
resources:
cpu: 1000
provider:
subnetSelector:
Name: "*private*"
securityGroupSelector:
karpenter.sh/discovery/${module.eks.cluster_id}: ${module.eks.cluster_id}
tags:
karpenter.sh/discovery/${module.eks.cluster_id}: ${module.eks.cluster_id}
ttlSecondsAfterEmpty: 30
YAML
depends_on = [
helm_release.karpenter
]
}
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.2
          resources:
            requests:
              cpu: 1
EOF
kubectl scale deployment inflate --replicas 5
kubectl logs -f -n karpenter -l app.kubernetes.io/name=karpenter -c controller
DEBUG controller.provisioning Relaxing soft constraints for pod since it previously failed to schedule, removing: spec.topologySpreadConstraints = {"maxSkew":1,"topologyKey":"topology.kubernetes.io/zone","whenUnsatisfiable":"ScheduleAnyway","labelSelector":{"matchLabels":{"app.kubernetes.io/instance":"karpenter","app.kubernetes.io/name":"karpenter"}}} {"commit": "b157d45", "pod": "karpenter/karpenter-5755bb5b54-rh65t"}
2022-09-10T00:13:13.122Z
ERROR controller.provisioning Could not schedule pod, incompatible with provisioner "default", incompatible requirements, key karpenter.sh/provisioner-name, karpenter.sh/provisioner-name DoesNotExist not in karpenter.sh/provisioner-name In [default] {"commit": "b157d45", "pod": "karpenter/karpenter-5755bb5b54-rh65t"}
I believe this is due to the pod topology spread constraints defined in the Karpenter deployment here:
https://github.com/aws/karpenter/blob/main/charts/karpenter/values.yaml#L73-L77
You can read more about what pod topologySpreadConstraints does here:
https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
If you increase the desired_size to 2, which matches the deployment's default replica count, that should resolve the error.
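In the Terraform config above, that change is a small edit to the managed node group inside module "eks"; a sketch, assuming the rest of the eks_managed_node_groups block stays as posted:

  # inside module "eks" { ... }
  eks_managed_node_groups = {
    initial = {
      instance_types        = ["m5.large"]
      create_security_group = false

      # two nodes, so both default Karpenter replicas can be spread across zones/hosts
      min_size     = 2
      max_size     = 2
      desired_size = 2

      # ...iam_role_additional_policies and tags exactly as in the original config...
    }
  }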

How to inject mapRoles into an EKS aws-auth ConfigMap via Terraform?

I want to automate inserting the ARNs of specific roles into an EKS aws-auth ConfigMap right after deploying the cluster. However, it seems that Terraform recommends using kubectl instead.
I have tried the following, but I'm getting an error saying that a data block is not expected here.
data "aws_eks_cluster_auth" "cluster_auth" {
name = "my_cluster"
}
provider "kubernetes" {
host = aws_eks_cluster.my_cluster.endpoint
cluster_ca_certificate = base64decode(aws_eks_cluster.my_cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster_auth.token
}
resource "kubernetes_config_map" "aws_auth_configmap" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data {
mapRoles = <<YAML
- rolearn: arn:aws:iam::111111111111:role/MyRole
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
- rolearn: arn:aws:iam::111111111111:role/MyRole
username: kubectl
groups:
- system:masters
YAML
}
}
You can give the aidanmelen/eks-auth/aws module a try ;)
module "eks" {
source = "terraform-aws-modules/eks/aws"
# insert the 15 required variables here
}
module "eks_auth" {
source = "aidanmelen/eks-auth/aws"
eks = module.eks
map_roles = [
{
rolearn = "arn:aws:iam::66666666666:role/role1"
username = "role1"
groups = ["system:masters"]
},
]
map_users = [
{
userarn = "arn:aws:iam::66666666666:user/user1"
username = "user1"
groups = ["system:masters"]
},
{
userarn = "arn:aws:iam::66666666666:user/user2"
username = "user2"
groups = ["system:masters"]
},
]
map_accounts = [
"777777777777",
"888888888888",
]
}
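Separately, if you want to keep the plain kubernetes_config_map approach from the question, the "data block is not expected here" error is most likely because data is an argument (a map) on that resource, not a nested block. A minimal sketch of the fix, under that assumption:

resource "kubernetes_config_map" "aws_auth_configmap" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  # "data" is assigned as a map, not opened as a block
  data = {
    mapRoles = <<-YAML
      - rolearn: arn:aws:iam::111111111111:role/MyRole
        username: system:node:{{EC2PrivateDNSName}}
        groups:
          - system:bootstrappers
          - system:nodes
    YAML
  }
}

Note that an EKS cluster with managed node groups already ships with an aws-auth ConfigMap, so creating it with Terraform can conflict with the existing object; that is part of why modules like the one above (or kubectl patching) are often preferred.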

Terraform: retrieve the nginx ingress controller Load Balancer IP

I'm trying to get the nginx ingress controller load balancer IP in Azure AKS. I figured I would use the Kubernetes provider via:
data "kubernetes_service" "nginx_service" {
metadata {
name = "${local.ingress_name}-ingress-nginx-controller"
namespace = local.ingress_ns
}
depends_on = [helm_release.ingress]
}
However, I'm not seeing the IP address; this is what I get back:
nginx_service = [
+ {
+ cluster_ip = "10.0.165.249"
+ external_ips = []
+ external_name = ""
+ external_traffic_policy = "Local"
+ health_check_node_port = 31089
+ load_balancer_ip = ""
+ load_balancer_source_ranges = []
+ port = [
+ {
+ name = "http"
+ node_port = 30784
+ port = 80
+ protocol = "TCP"
+ target_port = "http"
},
+ {
+ name = "https"
+ node_port = 32337
+ port = 443
+ protocol = "TCP"
+ target_port = "https"
},
]
+ publish_not_ready_addresses = false
+ selector = {
+ "app.kubernetes.io/component" = "controller"
+ "app.kubernetes.io/instance" = "nginx-ingress-internal"
+ "app.kubernetes.io/name" = "ingress-nginx"
}
+ session_affinity = "None"
+ type = "LoadBalancer"
},
]
However, when I pull down the service via kubectl, I can get the IP address:
kubectl get svc nginx-ingress-internal-ingress-nginx-controller -n nginx-ingress -o json | jq -r '.status.loadBalancer.ingress[].ip'
10.141.100.158
Is this a limitation of the Kubernetes provider on AKS? If so, what workaround have other people used? My end goal is to use the IP to configure the application gateway backend.
I guess I could use local-exec, but that seems hacky. However, this might be my only option at the moment.
Thanks,
Jerry
Although I strongly advise against creating resources inside Kubernetes with Terraform, you can do it like this:
Create a public IP with Terraform -> create the ingress-nginx service inside Kubernetes with Terraform and pass the annotations and loadBalancerIP with data from your Terraform resources. The final manifest should look like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup
  name: ingress-nginx-controller
spec:
  loadBalancerIP: <YOUR_STATIC_IP>
  type: LoadBalancer
Terraform could look like this:
resource "kubernetes_service" "ingress_nginx" {
metadata {
name = "tingress-nginx-controller"
annotations {
"service.beta.kubernetes.io/azure-load-balancer-resource-group" = "${azurerm_resource_group.YOUR_RG.name}"
}
spec {
selector = {
app = <PLACEHOLDER>
}
port {
port = <PLACEHOLDER>
target_port = <PLACEHOLDER>
}
type = "LoadBalancer"
load_balancer_ip = "${azurerm_public_ip.YOUR_IP.ip_address}"
}
}
Unfortunately, this is an internal ingress, not public facing, and the IP is allocated dynamically. We currently don't want to use static IPs.
This is what I came up with:
module "load_balancer_ip" {
count = local.create_ingress ? 1 : 0
source = "github.com/matti/terraform-shell-resource?ref=v1.5.0"
command = "./scripts/get_load_balancer_ip.sh"
environment = {
KUBECONFIG = base64encode(module.aks.kube_admin_config_raw)
}
depends_on = [local_file.load_balancer_ip_script]
}
resource "local_file" "load_balancer_ip_script" {
count = local.create_ingress ? 1 : 0
filename = "./scripts/get_load_balancer_ip.sh"
content = <<-EOT
#!/bin/bash
echo $KUBECONFIG | base64 --decode > kubeconfig
kubectl get svc -n ${local.ingress_ns} ${local.ingress_name}-ingress-nginx-controller --kubeconfig kubeconfig -o=jsonpath='{.status.loadBalancer.ingress[0].ip}'
rm -f kubeconfig 2>&1 >/dev/null
EOT
}
output nginx_ip {
description = "IP address of the internal nginx controller"
value = local.create_ingress ? module.load_balancer_ip[0].content : null
}
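As an alternative to shelling out, newer releases of the hashicorp/kubernetes provider (roughly 2.4 and later; worth checking against your version's docs) expose a status attribute on the kubernetes_service data source that mirrors what kubectl shows under status.loadBalancer. If that attribute is available in your provider version, a sketch like this could replace the script:

output "nginx_ip_from_data_source" {
  description = "LB address reported by the service status (assumes the kubernetes provider exposes status)"
  value       = data.kubernetes_service.nginx_service.status.0.load_balancer.0.ingress.0.ip
}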

Terraform: prevent forced updates of AWS EKS worker nodes

I am using the Terraform AWS EKS registry module:
https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/12.1.0?tab=inputs
Today, with a new change to the TF configs (unrelated to EKS), I saw that my EKS worker nodes are going to be rebuilt due to an AMI update, which I am trying to prevent.
# module.kubernetes.module.eks-cluster.aws_launch_configuration.workers[0] must be replaced
+/- resource "aws_launch_configuration" "workers" {
~ arn = "arn:aws:autoscaling:us-east-2:555065427312:launchConfiguration:6c59fac6-5912-4079-8cc9-268a7f7fc98b:launchConfigurationName/edna-dev-eks-02020061119383942580000000b" -> (known after apply)
associate_public_ip_address = false
ebs_optimized = true
enable_monitoring = true
iam_instance_profile = "edna-dev-eks20200611193836418800000007"
~ id = "edna-dev-eks-02020061119383942580000000b" -> (known after apply)
~ image_id = "ami-05fc7ae9bc84e5708" -> "ami-073f227b0cd9507f9" # forces replacement
instance_type = "t3.medium"
+ key_name = (known after apply)
~ name = "edna-dev-eks-02020061119383942580000000b" -> (known after apply)
name_prefix = "edna-dev-eks-0"
security_groups = [
"sg-09b14dfce82015a63",
]
The rebuild happens because EKS published an updated version of the worker node AMI for the cluster.
This is my EKS Terraform config:
###################################################################################
# EKS CLUSTER #
# #
# This module contains configuration for EKS cluster running various applications #
###################################################################################
module "eks_label" {
source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
namespace = var.project
environment = var.environment
attributes = [var.component]
name = "eks"
}
data "aws_eks_cluster" "cluster" {
name = module.eks-cluster.cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks-cluster.cluster_id
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
load_config_file = false
version = "~> 1.9"
}
module "eks-cluster" {
source = "terraform-aws-modules/eks/aws"
cluster_name = module.eks_label.id
cluster_version = "1.16"
subnets = var.subnets
vpc_id = var.vpc_id
worker_groups = [
{
instance_type = var.cluster_node_type
asg_max_size = var.cluster_node_count
}
]
tags = var.tags
}
If I try to add a lifecycle block in the module config:
  lifecycle {
    ignore_changes = [image_id]
  }
I get this error:
➜ terraform plan
Error: Reserved block type name in module block
on modules/kubernetes/main.tf line 45, in module "eks-cluster":
45: lifecycle {
The block type name "lifecycle" is reserved for use by Terraform in a future
version.
Any ideas?
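For context, lifecycle (and ignore_changes) is only valid inside a resource block, not a module block, which is why Terraform rejects it above; the launch configuration it would need to attach to lives inside the EKS module, out of reach of the root configuration. A sketch of where the block would be legal, using a hypothetical standalone resource:

resource "aws_launch_configuration" "workers" {
  # hypothetical example; the real launch configuration is created inside terraform-aws-modules/eks/aws
  name_prefix   = "edna-dev-eks-"
  image_id      = "ami-05fc7ae9bc84e5708"
  instance_type = "t3.medium"

  lifecycle {
    ignore_changes = [image_id]  # allowed here, at the resource level
  }
}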
What about trying to use the worker_ami_name_filter variable for terraform-aws-modules/eks/aws to specifically find only your current AMI?
For example:
module "eks-cluster" {
source = "terraform-aws-modules/eks/aws"
cluster_name = module.eks_label.id
<...snip...>
worker_ami_name_filter = "amazon-eks-node-1.16-v20200531"
}
You can use the AWS web console or CLI to map the AMI IDs to their names:
user@localhost:~$ aws ec2 describe-images --filters "Name=name,Values=amazon-eks-node-1.16*" --region us-east-2 --output json | jq '.Images[] | "\(.Name) \(.ImageId)"'
"amazon-eks-node-1.16-v20200423 ami-01782c0e32657accf"
"amazon-eks-node-1.16-v20200531 ami-05fc7ae9bc84e5708"
"amazon-eks-node-1.16-v20200609 ami-073f227b0cd9507f9"
"amazon-eks-node-1.16-v20200507 ami-0edc51bc2f03c9dc2"
But why are you trying to prevent the Auto Scaling Group from using a newer AMI? It will only apply the newer AMI to new nodes. It won't terminate existing nodes just to update them.
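If you do go the worker_ami_name_filter route, one way to make the pin explicit is to hold the AMI name in a variable, so that rolling the workers becomes a deliberate change to that value; a sketch, with the default taken from the listing above and the rest of the module arguments assumed unchanged:

variable "worker_ami_name" {
  description = "Pinned EKS worker AMI name; bump this deliberately to roll the nodes"
  type        = string
  default     = "amazon-eks-node-1.16-v20200531"
}

module "eks-cluster" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = module.eks_label.id
  # ...other arguments exactly as in the original config...

  worker_ami_name_filter = var.worker_ami_name
}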