How can I create multiple Fargate profiles with a single namespace and different labels using Terraform?

I am trying to create Fargate profiles for EKS using Terraform. The requirement is to create multiple Fargate profiles bound to a single namespace but with different labels.
I have defined the selectors variable as below:
variable "selectors" {
  description = "description"
  type = list(object({
    namespace = string
    labels    = any
  }))
  default = []
}
and the Fargate profile resource in the module as below:
resource "aws_eks_fargate_profile" "eks_fargate_profile" {
  for_each               = { for namespace in var.selectors : namespace.namespace => namespace }
  cluster_name           = var.cluster_name
  fargate_profile_name   = format("%s-%s", "fargate", each.value.namespace)
  pod_execution_role_arn = aws_iam_role.eks_fargate_role.arn
  subnet_ids             = var.vpc_subnets
  selector {
    namespace = each.value.namespace
    labels    = each.value.labels
  }
}
and I am calling the module as below:
selectors = [
  {
    namespace = "ns"
    labels = {
      Application = "fargate-1"
    }
  },
  {
    namespace = "ns"
    labels = {
      Application = "fargate-2"
    }
  }
]
When I try to run terraform plan, I get the error below:
Two different items produced the key "jenkinsbuild" in this 'for' expression. If duplicates are expected, use the ellipsis (...) after the value expression to enable grouping by key.
I tried adding the ellipsis (...) at the end of the for expression, but then I get another error:
each.value is tuple with 1 element
│
│ This value does not have any attributes.
I also defined the selectors variable type as any, and tried type-casting the output to string (namespace) and object (labels), but no luck.
Could you please help me achieve this? It seems I am close but am missing something here.
Thanks and Regards,
Sandeep.

In Terraform, when using for_each, the keys must be unique. If you do not have unique keys, use count instead:
resource "aws_eks_fargate_profile" "eks_fargate_profile" {
  count = length(var.selectors)
  selector {
    namespace = var.selectors[count.index].namespace
    labels    = var.selectors[count.index].labels
  }
  ...
}
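Alternatively, if you would rather keep for_each (so resource addresses stay stable when items are reordered), you can build unique keys by combining the namespace with the list index. This is a sketch using the variable and attribute names from the question:

```hcl
resource "aws_eks_fargate_profile" "eks_fargate_profile" {
  # Key on namespace plus list position so two selectors for the
  # same namespace no longer produce duplicate for_each keys.
  for_each = { for idx, s in var.selectors : "${s.namespace}-${idx}" => s }

  cluster_name           = var.cluster_name
  fargate_profile_name   = format("fargate-%s", each.key)
  pod_execution_role_arn = aws_iam_role.eks_fargate_role.arn
  subnet_ids             = var.vpc_subnets

  selector {
    namespace = each.value.namespace
    labels    = each.value.labels
  }
}
```

Note that a single Fargate profile also supports multiple selector blocks, so depending on your requirements you may be able to use one profile with several selectors instead of several profiles.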

Related

Using dynamic Kubernetes annotations in Terraform based on a variable

I am trying to add a set of annotations based on a variable value.
So far, I am trying to add something like:
annotations = merge({
  annotation1 = value
  annotation2 = value
  try(var.omitAnnotations == false) == null ? {} : {
    annotation3 = value
    annotation4 = value
  }
})
However, this doesn't work as expected; in fact, the annotations are always added, regardless of the value of var.omitAnnotations.
How can I get this logic working, so that annotation3 and annotation4 are only added when var.omitAnnotations is false?
Thanks in advance!
I would suggest moving any expression that is too complicated into a local value, for readability:
locals {
  annotations = var.omitAnnotations ? {} : {
    "annotation3" = "value3"
    "annotation4" = "value4"
  }
  add_annotations = try(local.annotations, null)
}
What this will do is set the local.annotations variable to an empty map if var.omitAnnotations is true or to a map with annotations annotation3 and annotation4 defined if var.omitAnnotations is false. The local variable add_annotations will then have the value assigned based on using the try built-in function [1]. In the last step, you would just do the following:
annotations = merge(
  {
    "annotation1" = "value1"
    "annotation2" = "value2"
  },
  local.add_annotations
)
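If you don't need the intermediate local, the same condition can be inlined into merge directly. This is a minimal sketch using the same variable and annotation names:

```hcl
annotations = merge(
  {
    "annotation1" = "value1"
    "annotation2" = "value2"
  },
  # merge() with an empty map adds nothing, so annotation3 and
  # annotation4 only appear when var.omitAnnotations is false.
  var.omitAnnotations ? {} : {
    "annotation3" = "value3"
    "annotation4" = "value4"
  }
)
```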

Using Terraform to fetch entity name under alias

I am trying to fetch all the entity names using the vault_identity_entity data source, but I am unable to fetch the name of the entity located under aliases.
Sample code:
data "vault_identity_group" "group" {
  group_name = "vaultadmin"
}
data "vault_identity_entity" "entity" {
  for_each  = toset(data.vault_identity_group.group.member_entity_ids)
  entity_id = each.value
}
data "null_data_source" "values" {
  for_each = data.vault_identity_entity.entity
  inputs = {
    ssh_user_details = lookup(jsondecode(data.vault_identity_entity.entity[each.key].data_json), "name", {})
  }
}
"data_json": "{\"aliases\":[{\"canonical_id\":\"37b4c764-a4ec-dcb7-c3c7-31cf9c51e456\",\"creation_time\":\"2022-07-20T08:53:36.553988277Z\",\"custom_metadata\":null,\"id\":\"59fb8a9c-1c0c-0591-0f6e-1a153233e456\",\"last_update_time\":\"2022-07-20T08:53:36.553988277Z\",\"local\":false,\"merged_from_canonical_ids\":null,\"metadata\":null,\"mount_accessor\":\"auth_approle_12d1d8af\",\"mount_path\":\"auth/approle/\",\"mount_type\":\"approle\",\"name\":\"name.user#test.com\"}],\"creation_time\":\"2022-07-20T08:53:36.553982983Z\",\"direct_group_ids\":[\"e456cb46-2b51-737c-3277-64082352f47e\"],\"disabled\":false,\"group_ids\":[\"e456cb46-2b51-737c-3277-64082352f47e\"],\"id\":\"37b4c764-a4ec-dcb7-c3c7-31cf9c51e456\",\"inherited_group_ids\":[],\"last_update_time\":\"2022-07-20T08:53:36.553982983Z\",\"merged_entity_ids\":null,\"metadata\":null,\"name\":\"entity_ec5c123\",\"namespace_id\":\"root\",\"policies\":[]}",
The script above returns the top-level name entity_ec5c123. Any suggestions on how to retrieve the name field under aliases, which holds the user's email id?
Maybe something like this?
data "vault_identity_group" "group" {
  group_name = "vaultadmin"
}
data "vault_identity_entity" "entity" {
  for_each  = toset(data.vault_identity_group.group.member_entity_ids)
  entity_id = each.value
}
locals {
  mount_accessor = "auth_approle_12d1d8af"
  # mount_path = "auth/approle/"
  aliases = { for k, v in data.vault_identity_entity.entity : k => lookup(jsondecode(v.data_json), "aliases", []) }
}
data "null_data_source" "values" {
  for_each = data.vault_identity_entity.entity
  inputs = {
    ssh_user_details = lookup({ for alias in lookup(local.aliases, each.key, []) : alias.mount_accessor => alias.name }, local.mount_accessor, "ent_no_alias_on_auth_method")
  }
}
Basically you want to do a couple of lookups here. You can simplify this if you can guarantee that each entity will only ever have a single alias; otherwise you should look up the alias for a specific mount_accessor and discard the other entries.
I haven't done much testing with this code, but you should be able to run terraform console after an init on your workspace and inspect what the data structures look like if you run into issues.
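If every entity is guaranteed to have exactly one alias, the whole lookup collapses to a single expression. A sketch, untested against a real Vault backend, using the data source names from the question:

```hcl
locals {
  # Map each entity key to the name of its first alias;
  # null when the entity has no aliases at all.
  alias_names = {
    for k, v in data.vault_identity_entity.entity :
    k => try(jsondecode(v.data_json).aliases[0].name, null)
  }
}
```

With the sample data_json in the question, this would yield the alias name name.user#test.com rather than the top-level entity name.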

Unable to create a namespace for AKS cluster using Terraform: no such host

I have a module definition as below:
providers.tf
provider "kubernetes" {
  #load_config_file = "false"
  host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
  username               = azurerm_kubernetes_cluster.aks.kube_config.0.username
  password               = azurerm_kubernetes_cluster.aks.kube_config.0.password
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}
outputs.tf
output "node_resource_group" {
value = azurerm_kubernetes_cluster.aks.node_resource_group
description = "The name of resource group where the AKS Nodes are created"
}
output "kubeConfig" {
value = azurerm_kubernetes_cluster.aks.kube_config_raw
description = "Kubeconfig of AKS Cluster"
}
output "host" {
value = azurerm_kubernetes_cluster.aks.kube_config.0.host
}
output "client_key" {
value = azurerm_kubernetes_cluster.aks.kube_config.0.client_key
}
output "client_certificate" {
value = azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate
}
output "kube_config" {
value = azurerm_kubernetes_cluster.aks.kube_config_raw
}
output "cluster_ca_certificate" {
value = azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate
}
main.tf
resource "azurerm_log_analytics_workspace" "law" {
name = "${var.tla}-la-${local.lookup_result}-${var.identifier}"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
sku = var.la_sku
retention_in_days = 30
}
resource "azurerm_kubernetes_cluster" "aks" {
name = "${var.tla}-aks-${local.lookup_result}-${var.identifier}"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
dns_prefix = var.dns_prefix
kubernetes_version = var.kubernetes_version
sku_tier = var.sku_tier
private_cluster_enabled = var.enable_private_cluster
#api_server_authorized_ip_ranges = ""
default_node_pool {
name = "syspool001"
orchestrator_version = var.orchestrator_version
availability_zones = var.agents_availability_zones
enable_auto_scaling = true
node_count = var.default_pool_node_count
max_count = var.default_pool_max_node_count
min_count = var.default_pool_min_node_count
max_pods = var.default_pool_max_pod_count
vm_size = var.agents_size
enable_node_public_ip = false
os_disk_size_gb = var.default_pool_os_disk_size_gb
type = "VirtualMachineScaleSets"
vnet_subnet_id = var.vnet_subnet_id
node_labels = var.agents_labels
tags = merge(local.tags, var.agents_tags)
}
network_profile {
network_plugin = var.network_plugin
network_policy = var.network_policy
dns_service_ip = var.net_profile_dns_service_ip
docker_bridge_cidr = var.net_profile_docker_bridge_cidr
service_cidr = var.net_profile_service_cidr
}
role_based_access_control {
enabled = true
azure_active_directory {
managed = true
admin_group_object_ids = var.rbac_aad_admin_group_object_ids
}
}
identity {
type = "SystemAssigned"
}
addon_profile {
azure_policy {
enabled = true
}
http_application_routing {
enabled = false
}
oms_agent {
enabled = true
log_analytics_workspace_id = data.azurerm_log_analytics_workspace.log_analytics.id
}
}
tags = local.tags
lifecycle {
ignore_changes = [
default_node_pool
]
}
}
resource "azurerm_kubernetes_cluster_node_pool" "aksnp" {
lifecycle {
ignore_changes = [
node_count
]
}
for_each = var.additional_node_pools
kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id
name = each.value.node_os == "Windows" ? substr(each.key, 0, 6) : substr(each.key, 0, 12)
node_count = each.value.node_count
vm_size = each.value.vm_size
availability_zones = each.value.zones
max_pods = each.value.max_pods
os_disk_size_gb = each.value.os_disk_size_gb
os_type = each.value.node_os
vnet_subnet_id = var.vnet_subnet_id
node_taints = each.value.taints
enable_auto_scaling = each.value.cluster_auto_scaling
min_count = each.value.cluster_auto_scaling_min_count
max_count = each.value.cluster_auto_scaling_max_count
}
resource "kubernetes_namespace" "aks-namespace" {
metadata {
name = var.namespace
}
}
data.tf
data "azurerm_resource_group" "rg" {
name = var.resource_group_name
}
lookups.tf
locals {
environment_lookup = {
dev = "d"
test = "t"
int = "i"
prod = "p"
prd = "p"
uat = "a"
poc = "d"
dr = "r"
lab = "l"
}
lookup_result = lookup(local.environment_lookup, var.environment)
tags = merge(
data.azurerm_resource_group.rg.tags, {
Directory = "tectcompany.com",
PrivateDNSZone = var.private_dns_zone,
Immutable = "False",
ManagedOS = "True",
}
)
}
data "azurerm_log_analytics_workspace" "log_analytics" {
name = "abc-az-lad2"
resource_group_name = "abc-dev-aae"
}
variables.tf
variable "secondary_region" {
description = "Is this resource being deployed into the secondary (pair) region?"
default = false
type = bool
}
variable "override_log_analytics_workspace" {
description = "Override the vm log analytics workspace"
type = string
default = null
}
variable "override_log_analytics_resource_group_name" {
description = "Overrides the log analytics resource group name"
type = string
default = null
}
variable "environment" {
description = "The name of environment for the AKS Cluster"
type = string
default = "dev"
}
variable "identifier" {
description = "The identifier for the AKS Cluster"
type = number
default = "001"
}
variable "kubernetes_version" {
description = "Specify which Kubernetes release to use. The default used is the latest Kubernetes version available in the region"
type = string
default = "1.19.9"
}
variable "dns_prefix" {
description = "The dns prefix for the AKS Cluster"
type = string
default = "odessa-sandpit"
}
variable "orchestrator_version" {
description = "Specify which Kubernetes release to use for the orchestration layer. The default used is the latest Kubernetes version available in the region"
type = string
default = null
}
variable "agents_availability_zones" {
description = "(Optional) A list of Availability Zones across which the Node Pool should be spread. Changing this forces a new resource to be created."
type = list(string)
default = null
}
variable "agents_size" {
default = "Standard_D4s_v3"
description = "The default virtual machine size for the Kubernetes agents"
type = string
}
variable "vnet_subnet_id" {
description = "(Optional) The ID of a Subnet where the Kubernetes Node Pool should exist. Changing this forces a new resource to be created."
type = string
default = null
}
variable "agents_labels" {
description = "(Optional) A map of Kubernetes labels which should be applied to nodes in the Default Node Pool. Changing this forces a new resource to be created."
type = map(string)
default = {}
}
variable "agents_tags" {
description = "(Optional) A mapping of tags to assign to the Node Pool."
type = map(string)
default = {}
}
variable "net_profile_dns_service_ip" {
description = "(Optional) IP address within the Kubernetes service address range that will be used by cluster service discovery (kube-dns). Changing this forces a new resource to be created."
type = string
default = null
}
variable "net_profile_docker_bridge_cidr" {
description = "(Optional) IP address (in CIDR notation) used as the Docker bridge IP address on nodes. Changing this forces a new resource to be created."
type = string
default = null
}
variable "net_profile_service_cidr" {
description = "(Optional) The Network Range used by the Kubernetes service. Changing this forces a new resource to be created."
type = string
default = null
}
variable "rbac_aad_admin_group_object_ids" {
description = "Object ID of groups with admin access."
type = list(string)
default = null
}
variable "network_policy" {
description = "(Optional) The Network Policy to be used by the network profile of Azure Kubernetes Cluster."
type = string
default = "azure"
}
variable "network_plugin" {
description = "(Optional) The Network Plugin to be used by the network profile of Azure Kubernetes Cluster."
type = string
default = "azure"
}
variable "enable_private_cluster" {
description = "(Optional) Set this variable to true if you want Azure Kubernetes Cluster to be private."
default = true
}
variable "default_pool_node_count" {
description = "(Optional) The initial node count for the default pool of AKS Cluster"
type = number
default = 3
}
variable "default_pool_max_node_count" {
description = "(Optional) The max node count for the default pool of AKS Cluster"
type = number
default = 6
}
variable "default_pool_min_node_count" {
description = "(Optional) The min node count for the default pool of AKS Cluster"
type = number
default = 3
}
variable "default_pool_max_pod_count" {
description = "(Optional) The max pod count for the default pool of AKS Cluster"
type = number
default = 13
}
variable "default_pool_os_disk_size_gb" {
description = "(Optional) The size of os disk in gb for the nodes from default pool of AKS Cluster"
type = string
default = "64"
}
variable "additional_node_pools" {
type = map(object({
node_count = number
max_pods = number
os_disk_size_gb = number
vm_size = string
zones = list(string)
node_os = string
taints = list(string)
cluster_auto_scaling = bool
cluster_auto_scaling_min_count = number
cluster_auto_scaling_max_count = number
}))
}
variable "sku_tier" {
description = "(Optional)The SKU Tier that should be used for this Kubernetes Cluster, possible values Free or Paid"
type = string
default = "Paid"
validation {
condition = contains(["Free", "Paid"], var.sku_tier)
error_message = "SKU_TIER can only be either Paid or Free."
}
}
variable "la_sku" {
description = "(Optional)The SKU Tier that should be used for Log Analytics. Multiple values are possible."
type = string
default = "PerGB2018"
validation {
condition = contains(["Free", "PerNode", "Premium", "Standard", "Standalone", "Unlimited", "CapacityReservation", "PerGB2018"], var.la_sku)
error_message = "SKU_TIER for Log Analytics can be can only be either of Free, PerNode, Premium, Standard, Standalone, Unlimited, CapacityReservation and PerGB2018(Default Value)."
}
}
variable "resource_group_name" {
description = "Resource Group for deploying AKS Cluster"
type = string
}
variable "private_dns_zone" {
description = "DNS prefix for AKS Cluster"
type = string
default = "testcluster"
}
variable "tla" {
description = "Three Level acronym - three letter abbreviation for application"
type = string
default = ""
validation {
condition = length(var.tla) == 3
error_message = "The TLA should be precisely three characters."
}
}
variable "namespace"{
description = "AKS Namespace"
type = string
}
Finally, I am calling my module as below to create the AKS cluster, the Log Analytics workspace, and the namespace for the cluster:
provider "azurerm" {
  features {}
  #version = "~> 2.53.0"
}
module "aks-cluster1" {
  source              = "../../"
  resource_group_name = "pst-aks-sandpit-dev-1"
  tla                 = "pqr"
  additional_node_pools = {
    pool1 = {
      node_count                     = "1"
      max_pods                       = "110"
      os_disk_size_gb                = "30"
      vm_size                        = "Standard_D8s_v3"
      zones                          = ["1", "2", "3"]
      node_os                        = "Linux"
      taints                         = ["kubernetes.io/os=windows:NoSchedule"]
      cluster_auto_scaling           = true
      cluster_auto_scaling_min_count = "2"
      cluster_auto_scaling_max_count = "4"
    }
  }
  namespace = "sample-ns"
}
Problem:
I get a "no such host" error when Terraform attempts to create the namespace.
I think it is not able to connect to the cluster, but I could be wrong; I do not know how this is handled internally.
Error: Post "https://testdns-05885a32.145f13c0-25ce-43e4-ae46-8cbef448ecf3.privatelink.australiaeast.azmk8s.io:443/api/v1/namespaces": dial tcp: lookup testdns-05885a32.145f13c0-25ce-43e4-ae46-8cbef448ecf3.privatelink.australiaeast.azmk8s.io: no such host
I'm one of the maintainers of the Terraform Kubernetes provider, and I see this particular issue pretty often. As a former devops person myself, I empathize with the struggle I keep seeing in this area. It's something I would really love to fix in the provider, if it were possible.
The issue you're facing is a limitation in Terraform core when passing an unknown value to a provider configuration block. To quote their docs:
You can use expressions in the values of these configuration arguments,
but can only reference values that are known before the configuration is applied.
When you make a change to the underlying infrastructure, such as the AKS cluster in this case, you're passing an unknown value into the Kubernetes provider configuration block, since the full scope of the cluster infrastructure is not known until after the change has been applied to the AKS cluster.
Although I did write the initial guide to show that it can be possible to work around some of these issues, as you've found from experience, there are many edge cases that make it an unreliable and unintuitive process to get the Kubernetes provider working alongside the underlying infrastructure. This is due to a long-standing limitation in Terraform that can't be fixed in any provider, but we do have plans to smooth out the bumps a little by adding better error messages upfront, which would have saved you some headache in this case.
To solve this particular type of problem, the cluster infrastructure needs to be kept in a state separate from the Kubernetes and Helm provider resources. I have an example here which builds an AKS cluster in one apply and then manages the Kubernetes/Helm resources in a second apply. You can use this approach to build the most robust configuration for your particular use case:
https://github.com/hashicorp/terraform-provider-kubernetes/tree/e058e225e621f06e393bcb6407e7737fd43817bd/_examples/aks
I know this two-apply approach is inconvenient, which is why we continue to try and accommodate users in single-apply scenarios, and scenarios which contain the Kubernetes and cluster resources in the same Terraform state. However, until upstream Terraform can add support for this, the single-apply workflow will remain buggy and less reliable than separating cluster infrastructure from Kubernetes resources.
Most cases can be worked around using depends_on (to ensure the cluster is created before the Kubernetes resource), or by moving the cluster infrastructure into a separate module and running terraform state rm module.kubernetes-config or terraform apply -target=module.aks-cluster. But I think encouraging this kind of work-around will cause more headaches in the long run, as it puts the user in charge of figuring out when to use special one-off apply commands, rather than setting up Terraform to behave reliably and predictably from the start. Plus it can have unintended side-effects, like orphaning cloud resources.
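For completeness, the depends_on work-around mentioned above would look like this in the question's configuration. It is a sketch, not a recommendation, for the reasons just described:

```hcl
resource "kubernetes_namespace" "aks-namespace" {
  # Forces the namespace to wait until the cluster exists, but does
  # NOT fix the unknown-value problem in the provider block itself.
  depends_on = [azurerm_kubernetes_cluster.aks]

  metadata {
    name = var.namespace
  }
}
```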
Thanks for the additional detail. I see a few problems here. The first one is at the heart of your immediate problem:
variable "enable_private_cluster" {
  description = "(Optional) Set this variable to true if you want Azure Kubernetes Cluster to be private."
  default     = true
}
Your cluster deployment is taking the default here, so your API endpoint is a private DNS entry in the zone privatelink.australiaeast.azmk8s.io:
Post "https://testdns-05885a32.145f13c0-25ce-43e4-ae46-8cbef448ecf3.privatelink.australiaeast.azmk8s.io:443/api/v1/namespaces"
The Terraform Kubernetes provider must be able to reach the API endpoint in order to deploy the namespace; however, it is unable to resolve the domain. For this to work, you will need to ensure that:
- The private DNS zone exists in Azure.
- The private DNS zone is linked to the relevant virtual networks, including the network of the host where you're running Terraform.
- The DNS resolver on the Terraform host can resolve the privatelink domain through the endpoint defined at https://learn.microsoft.com/en-us/azure/virtual-network/what-is-ip-address-168-63-129-16 - note that this may require forwarding the private domain if your network uses on-premises internal DNS.
- Your Terraform host can reach the privatelink endpoint deployed by the cluster on TCP port 443.
Azure privatelink and private DNS can be non-trivial to configure correctly, especially in a complex networking environment. So, you may encounter additional hurdles that I haven't covered here.
Alternatively, you may wish to deploy this cluster without using privatelink by setting this module option to false. This may be undesirable for security and compliance reasons, so be sure you understand what you're doing here:
enable_private_cluster = false
The next issue I encountered is:
Error: creating Managed Kubernetes Cluster "pqr-aks-d-1" (Resource Group "pst-aks-sandpit-dev-1"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="InsufficientAgentPoolMaxPodsPerAgentPool" Message="The AgentPoolProfile 'syspool001' has an invalid total maxPods(maxPods per node * node count), the total maxPods(13 * 824668498368) should be larger than 30. Please refer to aka.ms/aks-min-max-pod for more detail." Target="agentPoolProfile.kubernetesConfig.kubeletConfig.maxPods"
I overcame that by setting:
default_pool_max_pod_count = 30
The last issue is that you need to configure the kubernetes provider to have sufficient privileges to deploy the namespace:
│ Error: Unauthorized
│
│ with module.aks-cluster1.kubernetes_namespace.aks-namespace,
│ on ../../main.tf line 103, in resource "kubernetes_namespace" "aks-namespace":
│ 103: resource "kubernetes_namespace" "aks-namespace" {
One way to accomplish that is to use kube_admin_config instead of kube_config:
provider "kubernetes" {
  #load_config_file = "false"
  host                   = azurerm_kubernetes_cluster.aks.kube_admin_config.0.host
  username               = azurerm_kubernetes_cluster.aks.kube_admin_config.0.username
  password               = azurerm_kubernetes_cluster.aks.kube_admin_config.0.password
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.cluster_ca_certificate)
}
It's difficult to say what the issue is, since the code you posted is incomplete. For starters, you shouldn't be doing this:
provider "kubernetes" {
  config_path = "~/.kube/config"
}
The AKS URL you posted doesn't exist, so I think Terraform is pulling an old cluster default from your kube config.
You should fetch the cluster details using a data source and configure the provider from it; for a non-private cluster, the Kubernetes API can be accessed directly.
Step 1: data source
data "azurerm_kubernetes_cluster" "example" {
  name                = var.cluster_name
  resource_group_name = azurerm_resource_group.rg.name
}
Step 2: provider
provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.example.kube_config.0.host
  username               = data.azurerm_kubernetes_cluster.example.kube_config.0.username
  password               = data.azurerm_kubernetes_cluster.example.kube_config.0.password
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.example.kube_config.0.client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.example.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.example.kube_config.0.cluster_ca_certificate)
}

Terraform dynamic block

I'm having trouble creating a dynamic block in Terraform. I'm trying to create an ECS service using a module. In the module, I want to specify that the network_configuration block should be created only if a variable is present.
Here's my module code:
resource "aws_ecs_service" "service" {
  name            = var.name
  cluster         = var.cluster
  task_definition = var.task_definition
  desired_count   = var.desired_count
  launch_type     = var.launch_type
  load_balancer {
    target_group_arn = var.lb_target_group
    container_name   = var.container_name
    container_port   = var.container_port
  }
  dynamic "network_configuration" {
    for_each = var.network_config
    content {
      subnets          = network_configuration.value["subnets"]
      security_groups  = network_configuration.value["security_groups"]
      assign_public_ip = network_configuration.value["public_ip"]
    }
  }
}
Next is code for the actual service:
module "fargate_service" {
  source          = "./modules/ecs/service"
  name            = "fargate-service"
  cluster         = module.ecs_cluster.id
  task_definition = module.fargate_task_definition.arn
  desired_count   = 2
  launch_type     = "FARGATE"
  lb_target_group = module.target_group.arn
  container_name  = "fargate_definition"
  container_port  = 8000
  network_config  = local.fargate_network_config
}
Finally my locals file looks like this:
locals {
  fargate_network_config = {
    subnets         = module.ec2_vpc.private_subnet_ids
    public_ip       = "false"
    security_groups = [module.fargate_sg.id]
  }
}
With the above configuration, I want to create one network_configuration block only when the network_config variable is present. If I don't define it, the module should not create the block.
I'm getting an Invalid index error:
network_configuration.value is tuple with 3 elements
The given key does not identify an element in this collection value: a number
is required.
What is wrong with my code? This is my first time using dynamic blocks in Terraform, and I want to understand them.
Thanks
So your locals should be as follows:
locals {
  fargate_network_config = [
    {
      subnets         = module.ec2_vpc.private_subnet_ids
      public_ip       = "false"
      security_groups = [module.fargate_sg.id]
    }
  ]
}
Then fix your variable network_config to be a list.
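For reference, the list-typed variable declaration could look like this. It is a sketch; the attribute types are assumptions based on the locals in the question:

```hcl
variable "network_config" {
  type = list(object({
    subnets         = list(string)
    security_groups = list(string)
    public_ip       = string
  }))
  # An empty list means the dynamic block iterates zero times,
  # so no network_configuration block is generated at all.
  default = []
}
```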
Finally your dynamic block:
dynamic "network_configuration" {
  for_each = var.network_config
  content {
    subnets          = lookup(network_configuration.value, "subnets", null)
    security_groups  = lookup(network_configuration.value, "security_groups", null)
    assign_public_ip = lookup(network_configuration.value, "public_ip", null)
  }
}
Hope that helps.

Grails: how to set a parameter for the whole group in UrlMappings

Our Grails 2.5 web application has REST API.
We need to start a new version of the API so a group was created in UrlMappings.groovy like this:
group("/v2") {
    "/documents"(resources: 'document') {
        "/meta"(resource: 'documentInfo', includes: ['show', 'update'])
    }
    "/folders"(resources: 'folder')
    apiVersion = 2
}
The thing is that the parameter apiVersion is not present in params in the controller actions.
The parameter is properly set if it is defined on each of the resources like this:
group("/v2") {
    "/documents"(resources: 'document') {
        "/meta"(resource: 'documentInfo', includes: ['show', 'update']) {
            apiVersion = 2
        }
        apiVersion = 2
    }
    "/folders"(resources: 'folder') {
        apiVersion = 2
    }
}
How can I properly define a parameter on the group level?
A workaround would be to use a namespace. My attempt to define it at the group level didn't work, but you can give it a try; defining it at the resource level works even for nested resources.
group("/v2") {
    "/documents"(resources: 'document', namespace: 'v2') {
        "/meta"(resource: 'documentInfo', includes: ['show', 'update'])
    }
    "/folders"(resources: 'folder', namespace: 'v2')
}