IBM Cloud and Terraform: How to identify a key ring in ibm_iam_authorization_policy?

I am using Terraform with IBM Cloud and want to create a service-to-service authorization with ibm_iam_authorization_policy.
I know how to create the policy between cloud-object-storage and kms in general. But how do I scope it to a specific key ring? I can do it in the IBM Cloud console, but haven't seen anything for it in the provider.
resource "ibm_iam_authorization_policy" "testpolicy" {
source_resource_instance_id = data.ibm_resource_instance.cos_resource_instance.guid
source_service_name = "cloud-object-storage"
target_resource_instance_id = data.ibm_resource_instance.kms_resource_instance.guid
target_service_name = "kms"
roles = ["Reader"]
description = "TF-based test"
}

After some more testing with the Policy Management API and then with Terraform, the following seems to work:
resource "ibm_iam_authorization_policy" "team_testpolicy" {
provider = ibm.team_account
source_service_account = data.ibm_iam_account_settings.dev_iam_account_settings.account_id
source_resource_instance_id = data.ibm_resource_instance.cos_resource_instance.guid
source_service_name = "cloud-object-storage"
resource_attributes {
name = "accountId"
operator = "stringEquals"
value = data.ibm_iam_account_settings.team_iam_account_settings.account_id
}
resource_attributes {
name = "serviceName"
operator = "stringEquals"
value = "kms"
}
resource_attributes {
name = "serviceInstance"
operator = "stringEquals"
value = ibm_resource_instance.kms_instance.guid
}
resource_attributes {
name = "keyRing"
operator = "stringEquals"
value = ibm_kms_key_rings.key_ring.key_ring_id
}
roles = ["Reader"]
description = "reverse policy in other account"
}
Using resource_attributes blocks, including one whose name is keyRing, creates the right authorization policy.
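For a single-account setup (no cross-account provider alias), a minimal sketch of the same idea, under the assumption that the provider accepts this combination of source_* arguments and resource_attributes; the account-settings data source name here is hypothetical, the other references come from the snippets above:
resource "ibm_iam_authorization_policy" "cos_to_kms_keyring" {
  source_service_name         = "cloud-object-storage"
  source_resource_instance_id = data.ibm_resource_instance.cos_resource_instance.guid

  # Describe the target via resource_attributes so the keyRing attribute can be added.
  resource_attributes {
    name     = "accountId"
    operator = "stringEquals"
    value    = data.ibm_iam_account_settings.iam_account_settings.account_id # hypothetical data source
  }
  resource_attributes {
    name     = "serviceName"
    operator = "stringEquals"
    value    = "kms"
  }
  resource_attributes {
    name     = "serviceInstance"
    operator = "stringEquals"
    value    = data.ibm_resource_instance.kms_resource_instance.guid
  }
  resource_attributes {
    name     = "keyRing"
    operator = "stringEquals"
    value    = ibm_kms_key_rings.key_ring.key_ring_id
  }

  roles       = ["Reader"]
  description = "COS to KMS, scoped to one key ring"
}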

Related

How to import multiple terraform resource instances?

I am trying to import already-existing resources; they cannot be recreated, and there are many.
There is configuration that is common to all resources, and for some there are small changes.
I would like to import all the resources with a single command; doing it one by one is tedious and error-prone.
Currently importing single resources with:
terraform import 'github_repository.repo_config["repo2"]' repo2
What would the import command look like if it were to import all of the resources?
The configuration is as follows:
terraform {
  required_providers {
    github = {
      source  = "integrations/github"
      version = "~> 5.0"
    }
  }
}

provider "github" {
  owner = "medecau"
}

variable "repo_config" {
  type = map(object({
    description  = string
    homepage_url = string
    topics       = list(string)
  }))
  default = {
    "repo1" = {
      description  = "Repo 1"
      homepage_url = "https://medecau.github.io/repo1/"
      topics       = ["topic1", "topic2", "topic3"]
    }
    "repo2" = {
      description  = "Repo 2"
      homepage_url = null
      topics       = null
    }
  }
}

variable "default_repo_config" {
  type = object({
    description  = string
    homepage_url = string
    topics       = list(string)
  })
  default = {
    description  = ""
    homepage_url = ""
    topics       = []
  }
}

data "github_repositories" "medecau_repos" {
  query           = "user:medecau"
  include_repo_id = true
}

resource "github_repository" "repo_config" {
  # cast to set to remove duplicates
  for_each = toset(data.github_repositories.medecau_repos.names)

  name         = each.value
  description  = lookup(var.repo_config, each.value, var.default_repo_config).description
  homepage_url = lookup(var.repo_config, each.value, var.default_repo_config).homepage_url
  topics       = lookup(var.repo_config, each.value, var.default_repo_config).topics

  has_issues           = true
  has_projects         = false
  has_wiki             = false
  vulnerability_alerts = true

  security_and_analysis {
    advanced_security {
      status = "enabled"
    }
    secret_scanning {
      status = "enabled"
    }
    secret_scanning_push_protection {
      status = "enabled"
    }
  }
}
Importing multiple resources with a single terraform import call is not natively supported by Terraform at the moment.
However, terraformer may reduce your effort. Please go through the Terraformer GitHub documentation to verify whether it works for you.
Note: it supports only organizational resources.
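As a side note, if you can use a newer Terraform release, import blocks (added in 1.5, with for_each support in 1.7) let you declare the imports alongside the configuration instead of running one CLI command per resource. A sketch, assuming (as your single-resource command suggests) that the github_repository import ID is simply the repository name:
# Sketch only: declarative imports (Terraform 1.7+), driven by the same data source.
import {
  for_each = toset(data.github_repositories.medecau_repos.names)
  to       = github_repository.repo_config[each.key]
  id       = each.key
}
A terraform plan followed by terraform apply then performs all the imports in one run.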

How can I create multiple Fargate profiles with a single namespace and different labels using Terraform?

I am trying to create Fargate profiles for EKS using Terraform. The requirement is to create multiple Fargate profiles bound to a single namespace but with different labels.
I have defined the selectors variable as below:
variable "selectors" {
description = "description"
type = list(object({
namespace = string
labels = any
}))
default = []
}
and the Fargate module block as below:
resource "aws_eks_fargate_profile" "eks_fargate_profile" {
  for_each = { for namespace in var.selectors : namespace.namespace => namespace }

  cluster_name           = var.cluster_name
  fargate_profile_name   = format("%s-%s", "fargate", each.value.namespace)
  pod_execution_role_arn = aws_iam_role.eks_fargate_role.arn
  subnet_ids             = var.vpc_subnets

  selector {
    namespace = each.value.namespace
    labels    = each.value.labels
  }
}
and calling the module as below:
selectors = [
  {
    namespace = "ns"
    labels = {
      Application = "fargate-1"
    }
  },
  {
    namespace = "ns"
    labels = {
      Application = "fargate-2"
    }
  }
]
When I try to run terraform plan, I get the error below:
Two different items produced the key "jenkinsbuild" in this 'for' expression. If duplicates are expected, use the ellipsis (...) after the value expression to enable grouping by key.
I tried adding (...) at the end of the for expression, but then I get another error:
each.value is tuple with 1 element
│
│ This value does not have any attributes.
I also tried defining the selectors variable with type any, and tried casting the output to string (namespace) and object (labels), but no luck.
Could you please help me achieve this? It seems I am close but missing something here.
Thanks and regards,
Sandeep.
In Terraform, when using for_each, the keys must be unique. If you do not have unique keys, then use count:
resource "aws_eks_fargate_profile" "eks_fargate_profile" {
count = length(var.selectors)
selector {
namespace = var.selectors[count.index].namespace
labels = var.selectors[count.index].labels
}
...
}
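Alternatively, if you prefer to keep for_each (so that removing one selector later does not renumber the remaining profiles, which count would do), you can build a unique key from the namespace plus a label value. A sketch, assuming every selector carries an Application label as in your example:
resource "aws_eks_fargate_profile" "eks_fargate_profile" {
  # Key each profile by namespace plus the Application label so the keys stay unique.
  for_each = { for s in var.selectors : "${s.namespace}-${s.labels.Application}" => s }

  cluster_name           = var.cluster_name
  fargate_profile_name   = format("fargate-%s", each.key)
  pod_execution_role_arn = aws_iam_role.eks_fargate_role.arn
  subnet_ids             = var.vpc_subnets

  selector {
    namespace = each.value.namespace
    labels    = each.value.labels
  }
}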

nodeSelector for "helm_release" resource in Terraform

I'm using Terraform to deploy cert-manager and Ambassador.
I'm trying to understand how to use nodeSelector in the Terraform deployment, so that the Helm chart for both services is assigned to a specific node group I have (selected by a label key and value).
resource "helm_release" "cert_manager" {
namespace = var.cert_manager_namespace
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
version = var.cert_manager_release_version
create_namespace = true
count = var.enable
set {
name = "controller."
}
set {
name = "controller.nodeselector"
value = ""
}
set {
name = "installCRDs" # Should only happen on the first attempt
value = "true"
}
set {
name = "securityContext.enabled"
value = "true"
}
The example above is my attempt to assign it.
Any ideas?
Thanks!
If your nodeSelector location in values.yaml looks like this:
controller:
  nodeSelector: {}
You should be setting it up this way:
set {
  name  = "controller.nodeSelector.dedicated"
  value = "workloads"
}
where dedicated is the key and workloads is the value.
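If the chart nests nodeSelector differently, or the label key contains dots (which would have to be escaped in set paths), it can be simpler to pass the whole structure through values. A sketch, assuming the same controller.nodeSelector location as above; other arguments omitted:
resource "helm_release" "cert_manager" {
  name       = "cert-manager"
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"

  # Render the nested structure to YAML instead of addressing it with set paths.
  values = [yamlencode({
    controller = {
      nodeSelector = {
        dedicated = "workloads"
      }
    }
  })]
}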

Unable to create a namespace for an AKS cluster using Terraform: reports "no such host"

I have a module definition as below:
providers.tf
provider "kubernetes" {
#load_config_file = "false"
host = azurerm_kubernetes_cluster.aks.kube_config.0.host
username = azurerm_kubernetes_cluster.aks.kube_config.0.username
password = azurerm_kubernetes_cluster.aks.kube_config.0.password
client_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
client_key = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}
outputs.tf
output "node_resource_group" {
value = azurerm_kubernetes_cluster.aks.node_resource_group
description = "The name of resource group where the AKS Nodes are created"
}
output "kubeConfig" {
value = azurerm_kubernetes_cluster.aks.kube_config_raw
description = "Kubeconfig of AKS Cluster"
}
output "host" {
value = azurerm_kubernetes_cluster.aks.kube_config.0.host
}
output "client_key" {
value = azurerm_kubernetes_cluster.aks.kube_config.0.client_key
}
output "client_certificate" {
value = azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate
}
output "kube_config" {
value = azurerm_kubernetes_cluster.aks.kube_config_raw
}
output "cluster_ca_certificate" {
value = azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate
}
main.tf
resource "azurerm_log_analytics_workspace" "law" {
name = "${var.tla}-la-${local.lookup_result}-${var.identifier}"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
sku = var.la_sku
retention_in_days = 30
}
resource "azurerm_kubernetes_cluster" "aks" {
name = "${var.tla}-aks-${local.lookup_result}-${var.identifier}"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
dns_prefix = var.dns_prefix
kubernetes_version = var.kubernetes_version
sku_tier = var.sku_tier
private_cluster_enabled = var.enable_private_cluster
#api_server_authorized_ip_ranges = ""
default_node_pool {
name = "syspool001"
orchestrator_version = var.orchestrator_version
availability_zones = var.agents_availability_zones
enable_auto_scaling = true
node_count = var.default_pool_node_count
max_count = var.default_pool_max_node_count
min_count = var.default_pool_min_node_count
max_pods = var.default_pool_max_pod_count
vm_size = var.agents_size
enable_node_public_ip = false
os_disk_size_gb = var.default_pool_os_disk_size_gb
type = "VirtualMachineScaleSets"
vnet_subnet_id = var.vnet_subnet_id
node_labels = var.agents_labels
tags = merge(local.tags, var.agents_tags)
}
network_profile {
network_plugin = var.network_plugin
network_policy = var.network_policy
dns_service_ip = var.net_profile_dns_service_ip
docker_bridge_cidr = var.net_profile_docker_bridge_cidr
service_cidr = var.net_profile_service_cidr
}
role_based_access_control {
enabled = true
azure_active_directory {
managed = true
admin_group_object_ids = var.rbac_aad_admin_group_object_ids
}
}
identity {
type = "SystemAssigned"
}
addon_profile {
azure_policy {
enabled = true
}
http_application_routing {
enabled = false
}
oms_agent {
enabled = true
log_analytics_workspace_id = data.azurerm_log_analytics_workspace.log_analytics.id
}
}
tags = local.tags
lifecycle {
ignore_changes = [
default_node_pool
]
}
}
resource "azurerm_kubernetes_cluster_node_pool" "aksnp" {
lifecycle {
ignore_changes = [
node_count
]
}
for_each = var.additional_node_pools
kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id
name = each.value.node_os == "Windows" ? substr(each.key, 0, 6) : substr(each.key, 0, 12)
node_count = each.value.node_count
vm_size = each.value.vm_size
availability_zones = each.value.zones
max_pods = each.value.max_pods
os_disk_size_gb = each.value.os_disk_size_gb
os_type = each.value.node_os
vnet_subnet_id = var.vnet_subnet_id
node_taints = each.value.taints
enable_auto_scaling = each.value.cluster_auto_scaling
min_count = each.value.cluster_auto_scaling_min_count
max_count = each.value.cluster_auto_scaling_max_count
}
resource "kubernetes_namespace" "aks-namespace" {
metadata {
name = var.namespace
}
}
data.tf
data "azurerm_resource_group" "rg" {
name = var.resource_group_name
}
lookups.tf
locals {
environment_lookup = {
dev = "d"
test = "t"
int = "i"
prod = "p"
prd = "p"
uat = "a"
poc = "d"
dr = "r"
lab = "l"
}
lookup_result = lookup(local.environment_lookup, var.environment)
tags = merge(
data.azurerm_resource_group.rg.tags, {
Directory = "tectcompany.com",
PrivateDNSZone = var.private_dns_zone,
Immutable = "False",
ManagedOS = "True",
}
)
}
data "azurerm_log_analytics_workspace" "log_analytics" {
name = "abc-az-lad2"
resource_group_name = "abc-dev-aae"
}
variables.tf
variable "secondary_region" {
description = "Is this resource being deployed into the secondary (pair) region?"
default = false
type = bool
}
variable "override_log_analytics_workspace" {
description = "Override the vm log analytics workspace"
type = string
default = null
}
variable "override_log_analytics_resource_group_name" {
description = "Overrides the log analytics resource group name"
type = string
default = null
}
variable "environment" {
description = "The name of environment for the AKS Cluster"
type = string
default = "dev"
}
variable "identifier" {
description = "The identifier for the AKS Cluster"
type = number
default = "001"
}
variable "kubernetes_version" {
description = "Specify which Kubernetes release to use. The default used is the latest Kubernetes version available in the region"
type = string
default = "1.19.9"
}
variable "dns_prefix" {
description = "The dns prefix for the AKS Cluster"
type = string
default = "odessa-sandpit"
}
variable "orchestrator_version" {
description = "Specify which Kubernetes release to use for the orchestration layer. The default used is the latest Kubernetes version available in the region"
type = string
default = null
}
variable "agents_availability_zones" {
description = "(Optional) A list of Availability Zones across which the Node Pool should be spread. Changing this forces a new resource to be created."
type = list(string)
default = null
}
variable "agents_size" {
default = "Standard_D4s_v3"
description = "The default virtual machine size for the Kubernetes agents"
type = string
}
variable "vnet_subnet_id" {
description = "(Optional) The ID of a Subnet where the Kubernetes Node Pool should exist. Changing this forces a new resource to be created."
type = string
default = null
}
variable "agents_labels" {
description = "(Optional) A map of Kubernetes labels which should be applied to nodes in the Default Node Pool. Changing this forces a new resource to be created."
type = map(string)
default = {}
}
variable "agents_tags" {
description = "(Optional) A mapping of tags to assign to the Node Pool."
type = map(string)
default = {}
}
variable "net_profile_dns_service_ip" {
description = "(Optional) IP address within the Kubernetes service address range that will be used by cluster service discovery (kube-dns). Changing this forces a new resource to be created."
type = string
default = null
}
variable "net_profile_docker_bridge_cidr" {
description = "(Optional) IP address (in CIDR notation) used as the Docker bridge IP address on nodes. Changing this forces a new resource to be created."
type = string
default = null
}
variable "net_profile_service_cidr" {
description = "(Optional) The Network Range used by the Kubernetes service. Changing this forces a new resource to be created."
type = string
default = null
}
variable "rbac_aad_admin_group_object_ids" {
description = "Object ID of groups with admin access."
type = list(string)
default = null
}
variable "network_policy" {
description = "(Optional) The Network Policy to be used by the network profile of Azure Kubernetes Cluster."
type = string
default = "azure"
}
variable "network_plugin" {
description = "(Optional) The Network Plugin to be used by the network profile of Azure Kubernetes Cluster."
type = string
default = "azure"
}
variable "enable_private_cluster" {
description = "(Optional) Set this variable to true if you want Azure Kubernetes Cluster to be private."
default = true
}
variable "default_pool_node_count" {
description = "(Optional) The initial node count for the default pool of AKS Cluster"
type = number
default = 3
}
variable "default_pool_max_node_count" {
description = "(Optional) The max node count for the default pool of AKS Cluster"
type = number
default = 6
}
variable "default_pool_min_node_count" {
description = "(Optional) The min node count for the default pool of AKS Cluster"
type = number
default = 3
}
variable "default_pool_max_pod_count" {
description = "(Optional) The max pod count for the default pool of AKS Cluster"
type = number
default = 13
}
variable "default_pool_os_disk_size_gb" {
description = "(Optional) The size of os disk in gb for the nodes from default pool of AKS Cluster"
type = string
default = "64"
}
variable "additional_node_pools" {
type = map(object({
node_count = number
max_pods = number
os_disk_size_gb = number
vm_size = string
zones = list(string)
node_os = string
taints = list(string)
cluster_auto_scaling = bool
cluster_auto_scaling_min_count = number
cluster_auto_scaling_max_count = number
}))
}
variable "sku_tier" {
description = "(Optional)The SKU Tier that should be used for this Kubernetes Cluster, possible values Free or Paid"
type = string
default = "Paid"
validation {
condition = contains(["Free", "Paid"], var.sku_tier)
error_message = "SKU_TIER can only be either Paid or Free."
}
}
variable "la_sku" {
description = "(Optional)The SKU Tier that should be used for Log Analytics. Multiple values are possible."
type = string
default = "PerGB2018"
validation {
condition = contains(["Free", "PerNode", "Premium", "Standard", "Standalone", "Unlimited", "CapacityReservation", "PerGB2018"], var.la_sku)
error_message = "SKU_TIER for Log Analytics can be can only be either of Free, PerNode, Premium, Standard, Standalone, Unlimited, CapacityReservation and PerGB2018(Default Value)."
}
}
variable "resource_group_name" {
description = "Resource Group for deploying AKS Cluster"
type = string
}
variable "private_dns_zone" {
description = "DNS prefix for AKS Cluster"
type = string
default = "testcluster"
}
variable "tla" {
description = "Three Level acronym - three letter abbreviation for application"
type = string
default = ""
validation {
condition = length(var.tla) == 3
error_message = "The TLA should be precisely three characters."
}
}
variable "namespace"{
description = "AKS Namespace"
type = string
}
Finally, I am calling my module as below to create the AKS cluster, the Log Analytics workspace, and the namespace:
provider "azurerm" {
features {}
#version = "~> 2.53.0"
}
module "aks-cluster1" {
source = "../../"
resource_group_name = "pst-aks-sandpit-dev-1"
tla = "pqr"
additional_node_pools = {
pool1 = {
node_count = "1"
max_pods = "110"
os_disk_size_gb = "30"
vm_size = "Standard_D8s_v3"
zones = ["1","2","3"]
node_os = "Linux"
taints = ["kubernetes.io/os=windows:NoSchedule"]
cluster_auto_scaling = true
cluster_auto_scaling_min_count = "2"
cluster_auto_scaling_max_count = "4"
}
}
namespace = "sample-ns"
}
Problem:
I get a "no such host" error when Terraform attempts to create the namespace in the cluster.
I think it is not able to connect to the cluster, but I could be wrong; I do not know how this is handled internally.
Error: Post "https://testdns-05885a32.145f13c0-25ce-43e4-ae46-8cbef448ecf3.privatelink.australiaeast.azmk8s.io:443/api/v1/namespaces": dial tcp: lookup testdns-05885a32.145f13c0-25ce-43e4-ae46-8cbef448ecf3.privatelink.australiaeast.azmk8s.io: no such host
I'm one of the maintainers of the Terraform Kubernetes provider, and I see this particular issue pretty often. As a former devops person myself, I empathize with the struggle I keep seeing in this area. It's something I would really love to fix in the provider, if it were possible.
The issue you're facing is a limitation in Terraform core when passing an unknown value to a provider configuration block. To quote their docs:
You can use expressions in the values of these configuration arguments,
but can only reference values that are known before the configuration is applied.
When you make a change to the underlying infrastructure, such as the AKS cluster in this case, you're passing an unknown value into the Kubernetes provider configuration block, since the full scope of the cluster infrastructure is not known until after the change has been applied to the AKS cluster.
Although I did write the initial guide to show that it can be possible to work around some of these issues, as you've found from experience, there are many edge cases that make getting the Kubernetes provider working alongside the underlying infrastructure an unreliable and unintuitive process. This is due to a long-standing limitation in Terraform that can't be fixed in any provider, but we do have plans to smooth out the bumps a little by adding better error messages upfront, which would have saved you some headache in this case.
To solve this particular type of problem, the cluster infrastructure needs to be kept in a state separate from the Kubernetes and Helm provider resources. I have an example here which builds an AKS cluster in one apply and then manages the Kubernetes/Helm resources in a second apply. You can use this approach to build the most robust configuration for your particular use case:
https://github.com/hashicorp/terraform-provider-kubernetes/tree/e058e225e621f06e393bcb6407e7737fd43817bd/_examples/aks
I know this two-apply approach is inconvenient, which is why we continue to try and accommodate users in single-apply scenarios, and scenarios which contain the Kubernetes and cluster resources in the same Terraform state. However, until upstream Terraform can add support for this, the single-apply workflow will remain buggy and less reliable than separating cluster infrastructure from Kubernetes resources.
Most cases can be worked around using depends_on (to ensure the cluster is created before the Kubernetes resource), or by moving the cluster infrastructure into a separate module and running terraform state rm module.kubernetes-config or terraform apply -target=module.aks-cluster. But I think encouraging this kind of work-around will cause more headaches in the long run, as it puts the user in charge of figuring out when to use special one-off apply commands, rather than setting up Terraform to behave reliably and predictably from the start. Plus it can have unintended side-effects, like orphaning cloud resources.
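To make the two-apply layout concrete, here is a minimal sketch of the second configuration (not the linked example itself), assuming the first configuration exposes the outputs shown in outputs.tf above and keeps its state in a local backend at a hypothetical path:
# Second, separate state: read the cluster's connection details from the first apply.
data "terraform_remote_state" "aks" {
  backend = "local"
  config = {
    path = "../aks-cluster/terraform.tfstate" # hypothetical path to the first state
  }
}

provider "kubernetes" {
  host                   = data.terraform_remote_state.aks.outputs.host
  client_certificate     = base64decode(data.terraform_remote_state.aks.outputs.client_certificate)
  client_key             = base64decode(data.terraform_remote_state.aks.outputs.client_key)
  cluster_ca_certificate = base64decode(data.terraform_remote_state.aks.outputs.cluster_ca_certificate)
}

resource "kubernetes_namespace" "aks_namespace" {
  metadata {
    name = "sample-ns"
  }
}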
Thanks for the additional detail. I see a few problems here. The first one is at the heart of your immediate problem:
variable "enable_private_cluster" {
description = "(Optional) Set this variable to true if you want Azure Kubernetes Cluster to be private."
default = true
}
Your cluster deployment is taking the default here, so your API endpoint is a private DNS entry in the zone privatelink.australiaeast.azmk8s.io:
Post "https://testdns-05885a32.145f13c0-25ce-43e4-ae46-8cbef448ecf3.privatelink.australiaeast.azmk8s.io:443/api/v1/namespaces"
The terraform kubernetes provider must be able to reach the API endpoint in order to deploy the namespace. However, it is unable to resolve the domain. For this to work, you will need to ensure that:
The private DNS zone exists in Azure
The private DNS zone is linked to the relevant virtual networks, including the host where you're running Terraform
The DNS resolver on the Terraform host can resolve the privatelink domain through the endpoint defined at https://learn.microsoft.com/en-us/azure/virtual-network/what-is-ip-address-168-63-129-16 - note that this may require forwarding the private domain if your network uses on-premises internal DNS.
Your Terraform host can reach the privatelink endpoint deployed by the cluster on TCP port 443
Azure privatelink and private DNS can be non-trivial to configure correctly, especially in a complex networking environment. So, you may encounter additional hurdles that I haven't covered here.
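If you do need to keep the cluster private, the zone and virtual-network-link side of this can also be managed in Terraform. A sketch only, assuming you bring your own private DNS zone for the cluster and that the ID of the VNet your Terraform host (or its DNS forwarder) lives in is available as a hypothetical hub_vnet_id variable:
resource "azurerm_private_dns_zone" "aks" {
  name                = "privatelink.australiaeast.azmk8s.io"
  resource_group_name = data.azurerm_resource_group.rg.name
}

# Link the zone to the VNet that must resolve the cluster's API endpoint,
# e.g. the network where Terraform runs or where your DNS forwarders live.
resource "azurerm_private_dns_zone_virtual_network_link" "aks" {
  name                  = "aks-privatelink"
  resource_group_name   = data.azurerm_resource_group.rg.name
  private_dns_zone_name = azurerm_private_dns_zone.aks.name
  virtual_network_id    = var.hub_vnet_id # hypothetical
}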
Alternatively, you may wish to deploy this cluster without using privatelink by setting this module option to false. This may be undesirable for security and compliance reasons, so be sure you understand what you're doing here:
enable_private_cluster = false
The next issue I encountered is:
Error: creating Managed Kubernetes Cluster "pqr-aks-d-1" (Resource Group "pst-aks-sandpit-dev-1"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="InsufficientAgentPoolMaxPodsPerAgentPool" Message="The AgentPoolProfile 'syspool001' has an invalid total maxPods(maxPods per node * node count), the total maxPods(13 * 824668498368) should be larger than 30. Please refer to aka.ms/aks-min-max-pod for more detail." Target="agentPoolProfile.kubernetesConfig.kubeletConfig.maxPods"
I overcame that by setting:
default_pool_max_pod_count = 30
The last issue is that you need to configure the kubernetes provider to have sufficient privileges to deploy the namespace:
│ Error: Unauthorized
│
│ with module.aks-cluster1.kubernetes_namespace.aks-namespace,
│ on ../../main.tf line 103, in resource "kubernetes_namespace" "aks-namespace":
│ 103: resource "kubernetes_namespace" "aks-namespace" {
One way to accomplish that is to use kube_admin_config instead of kube_config:
provider "kubernetes" {
#load_config_file = "false"
host = azurerm_kubernetes_cluster.aks.kube_admin_config.0.host
username = azurerm_kubernetes_cluster.aks.kube_admin_config.0.username
password = azurerm_kubernetes_cluster.aks.kube_admin_config.0.password
client_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_certificate)
client_key = base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_key)
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.cluster_ca_certificate)
}
Difficult to say what the issue is since the code you posted is incomplete. For starters, you shouldn't be doing this:
provider "kubernetes" {
config_path = "~/.kube/config"
}
The AKS URL you posted doesn't exist, so I think that's pulling an old cluster default from your kube config.
You should get the cluster details using a data source and configure the provider from those; for a non-private cluster, the Kubernetes API can be accessed directly.
Step 1: data source
data "azurerm_kubernetes_cluster" "example" {
  name                = var.cluster_name
  resource_group_name = azurerm_resource_group.rg.name
}
Step 2: provider
provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.example.kube_config.0.host
  username               = data.azurerm_kubernetes_cluster.example.kube_config.0.username
  password               = data.azurerm_kubernetes_cluster.example.kube_config.0.password
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.example.kube_config.0.client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.example.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.example.kube_config.0.cluster_ca_certificate)
}

Terraform dynamic block

I'm having trouble creating a dynamic block in Terraform. I'm trying to create an ECS service using a module. In the module, I want the network_configuration block to be created only if a variable is present.
Here's my module code:
resource "aws_ecs_service" "service" {
name = var.name
cluster = var.cluster
task_definition = var.task_definition
desired_count = var.desired_count
launch_type = var.launch_type
load_balancer {
target_group_arn = var.lb_target_group
container_name = var.container_name
container_port = var.container_port
}
dynamic "network_configuration" {
for_each = var.network_config
content {
subnets = network_configuration.value["subnets"]
security_groups = network_configuration.value["security_groups"]
assign_public_ip = network_configuration.value["public_ip"]
}
}
}
Next is code for the actual service:
module "fargate_service" {
source = "./modules/ecs/service"
name = "fargate-service"
cluster = module.ecs_cluster.id
task_definition = module.fargate_task_definition.arn
desired_count = 2
launch_type = "FARGATE"
lb_target_group = module.target_group.arn
container_name = "fargate_definition"
container_port = 8000
network_config = local.fargate_network_config
}
Finally my locals file looks like this:
locals {
  fargate_network_config = {
    subnets         = module.ec2_vpc.private_subnet_ids
    public_ip       = "false"
    security_groups = [module.fargate_sg.id]
  }
}
With the above configuration, I wish to create one network_configuration block only when the network_config variable is present. If I don't define it, I want the module to skip creating the block.
I'm getting an Invalid index error:
network_configuration.value is tuple with 3 elements
The given key does not identify an element in this collection value: a number
is required.
What is wrong with my code? This is my first time using dynamic blocks in Terraform but I want to be able to understand it.
Thanks
So your locals should be as follows:
locals {
  fargate_network_config = [
    {
      subnets         = module.ec2_vpc.private_subnet_ids
      public_ip       = "false"
      security_groups = [module.fargate_sg.id]
    }
  ]
}
Then change your network_config variable to be a list.
Finally, your dynamic block:
dynamic "network_configuration" {
for_each = var.network_config
content {
subnets = lookup(network_configuration.value, "subnets", null)
security_groups = lookup(network_configuration.value, "security_groups", null)
assign_public_ip = lookup(network_configuration.value, "public_ip", null)
}
}
Hope that helps.
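A variation on the same idea, sketched here in case you would rather keep network_config as a single optional object instead of a list: default the variable to null and wrap it in a one-element list only when it is set.
variable "network_config" {
  type    = any
  default = null
}

dynamic "network_configuration" {
  # Emit the block once when network_config is set, and not at all when it is null.
  for_each = var.network_config == null ? [] : [var.network_config]
  content {
    subnets          = network_configuration.value.subnets
    security_groups  = network_configuration.value.security_groups
    assign_public_ip = network_configuration.value.public_ip
  }
}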