Terraform error when using dynamic block in for_each - Terraform 0.12+

I am trying to upgrade Terraform to v0.12 and to use for_each with dynamic blocks to implement more flexible resources, but I am getting some errors.
main.tf
resource "azurerm_eventhub" "entities" {
for_each = var.eventhub_ns_name
namespace_name = each.value.ns_name
resource_group_name = data.azurerm_resource_group.rg.name
dynamic "eh_name_associations"{
for_each = each.value.event_hub
content {
name = eh_name_associations.value.eh_name
partition_count = eh_name_associations.value.partition_count
message_retention = eh_name_associations.value.message_retention
}
}
}
vars.tf
variable "eventhub_ns_name" {
type = map(object({
ns_name = string,
event_hub = list(object({
eh_name = string
partition_count = number
message_retention = number
})),
}))
default ={
eventhub_ns_name001={
ns_name = "eventhub001",
event_hub = [{
eh_name = "survey"
partition_count = 1
message_retention = 1
},
{
eh_name = "wechat"
partition_count = 2
message_retention = 1
}],
},
}
}
The errors:
The argument "partition_count" is required, but no definition was found.
The argument "message_retention" is required, but no definition was found.
The argument "name" is required, but no definition was found.
Blocks of type "eh_name_associations" are not expected here.
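The last error is the important one: azurerm_eventhub has no eh_name_associations nested block, and name, partition_count and message_retention are required arguments of the resource itself (hence the other three errors). One way to create several event hubs per namespace from this variable, rather than using a dynamic block, is to flatten the map into one element per hub and use for_each over that. A minimal sketch, assuming Terraform 0.12.6+ and keeping the variable as declared:
locals {
  # One element per (namespace, event hub) pair, keyed for for_each.
  eventhubs = {
    for pair in flatten([
      for ns_key, ns in var.eventhub_ns_name : [
        for eh in ns.event_hub : {
          key               = "${ns_key}-${eh.eh_name}"
          ns_name           = ns.ns_name
          eh_name           = eh.eh_name
          partition_count   = eh.partition_count
          message_retention = eh.message_retention
        }
      ]
    ]) : pair.key => pair
  }
}

resource "azurerm_eventhub" "entities" {
  for_each            = local.eventhubs
  name                = each.value.eh_name
  namespace_name      = each.value.ns_name
  resource_group_name = data.azurerm_resource_group.rg.name
  partition_count     = each.value.partition_count
  message_retention   = each.value.message_retention
}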

Getting error 'Keyword "optional" is not a valid type constructor' in Terraform with version ">= 0.15"

In the current Terraform pipeline, I am passing topics as a list:
locals {
  test_topics = [
    {
      name                      = "topic1"
      is_public                 = true
      version                   = 1
      is_cleanup_policy_compact = true
      max_message_bytes         = "-1"
      partition_count           = 3
    },
    {
      name                      = "topic2"
      is_public                 = true
      version                   = 1
      is_cleanup_policy_compact = true
      max_message_bytes         = "-1"
      partition_count           = 4
    },
    {
      name                      = "topic3"
      is_public                 = true
      version                   = 1
      is_cleanup_policy_compact = true
      max_message_bytes         = "-1"
      partition_count           = 5
    },
    {
      name                      = "topic4"
      is_public                 = true
      version                   = 1
      is_cleanup_policy_compact = true
      max_message_bytes         = "-1"
    },
    {
      name                      = "topic5"
      is_public                 = true
      version                   = 1
      is_cleanup_policy_compact = true
      max_message_bytes         = "-1"
      partition_count           = 5
    }
  ]
}
# Example: create topics. This automatically assigns READ/WRITE access to the service account and READ access to all PUBLIC topics.
module "test_topics" {
  source          = "../kafka_topic"
  topics          = "${local.test_topics}"
  environment     = var.environment
  data_domain     = var.data_domain
  service_account = var.service_account
}
and declaring the variable in the child module like below:
variable "topics" {
type = list(object({
name = string
is_public = bool
is_cleanup_policy_compact = bool
version = number
max_message_bytes = number
partition_count = number
}))
description = "list of topics with their configuration"
default = null
}
and in the child module's main.tf we are creating the topics using the following code:
resource "kafka_topic" "topic" {
count = length(var.topics)
name = "${lookup(var.topics[count.index], "is_public") ? "public" :"private"}_${var.environment}_${var.data_domain}_${lookup(var.topics[count.index], "name")}_${lookup(var.topics[count.index], "version")}"
partitions = lookup(var.topics[count.index], "partition_count") == null ? 6 : "${lookup(var.topics[count.index], "partition_count")}"
replication_factor = 3
config = {
"cleanup.policy" = lookup(var.topics[count.index], "is_cleanup_policy_compact") ? "compact" : "delete"
"max.message.bytes" = lookup(var.topics[count.index], "max_message_bytes") != -1 ? "${lookup(var.topics[count.index], "max_message_bytes")}" : 1000012
}
}
but when running terraform plan I am getting the following error:
attribute "partition_count" is required.
Note: I also tried partition_count = optional(number) when declaring the variable in variable.tf (to make that attribute optional), but I get the following error:
Keyword "optional" is not a valid type constructor
I thought it might be due to the Terraform version constraint I am currently using, which is ">= 0.12", but when I tried with ">= 0.15" I got the same 'Keyword "optional" is not a valid type constructor' error.
Is there any way I can fix this issue?
Try adding this to enable the optional object attributes experiment:
terraform {
  experiments = [module_variable_optional_attrs]
}
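Note that the module_variable_optional_attrs experiment is only available in Terraform 0.14 and later (from Terraform 1.3 onward, optional object attributes are supported without the experiment), and ">= 0.12" / ">= 0.15" are only required_version constraints; what matters is the version of the Terraform CLI you actually run. The experiments block goes in the module that declares the variable. A minimal sketch of the child module's variable.tf with the experiment enabled, assuming a pre-1.3 CLI:
terraform {
  # Requires Terraform >= 0.14; on Terraform >= 1.3 remove the experiment,
  # since optional() is supported natively there.
  experiments = [module_variable_optional_attrs]
}

variable "topics" {
  type = list(object({
    name                      = string
    is_public                 = bool
    is_cleanup_policy_compact = bool
    version                   = number
    max_message_bytes         = number
    partition_count           = optional(number) # may now be omitted per topic
  }))
  description = "list of topics with their configuration"
  default     = null
}
With this, a topic that omits partition_count gets null for that attribute, which the existing lookup(var.topics[count.index], "partition_count") == null ? 6 : ... expression in the child main.tf already handles.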

OCI: Create nodes in Kubernetes nodepool with bastion agent configured

I'm trying to deploy a Kubernetes cluster in Oracle Cloud Infrastructure using Terraform.
I want every node deployed (in a private subnet) to have the Bastion agent plugin activated in Cloud Agent.
But I cannot see how to define the instance details (setting agent_config on the node pool instances).
My code so far is:
resource "oci_containerengine_cluster" "generated_oci_containerengine_cluster" {
compartment_id = var.cluster_compartment
endpoint_config {
is_public_ip_enabled = "true"
subnet_id = oci_core_subnet.oke_public_api.id
}
kubernetes_version = var.kubernetes_version
name = "josealbarran_labcloudnative_oke"
options {
kubernetes_network_config {
pods_cidr = "10.244.0.0/16"
services_cidr = "10.96.0.0/16"
}
service_lb_subnet_ids = [oci_core_subnet.oke_public_lb.id]
}
vcn_id = var.cluster_vcn
}
# Check doc: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/containerengine_node_pool
resource "oci_containerengine_node_pool" "node_pool01" {
cluster_id = "${oci_containerengine_cluster.generated_oci_containerengine_cluster.id}"
compartment_id = var.cluster_compartment
initial_node_labels {
key = "name"
value = "pool01"
}
kubernetes_version = var.kubernetes_version
name = "lab_cloud_native_oke_pool01"
node_config_details {
size = "${length(data.oci_identity_availability_domains.ads.availability_domains)}"
dynamic "placement_configs" {
for_each = data.oci_identity_availability_domains.ads.availability_domains[*].name
content {
availability_domain = placement_configs.value
subnet_id = oci_core_subnet.oke_private_worker.id
}
}
}
node_shape = "VM.Standard.A1.Flex"
node_shape_config {
memory_in_gbs = "16"
ocpus = "1"
}
node_source_details {
image_id = "ocid1.image.oc1.eu-frankfurt-1.aaaaaaaalgodii3qx3mfasp6ai22bja7mabfwsxiwkzxx7lhdfdbbuyqcznq"
source_type = "IMAGE"
}
ssh_public_key = "ssh-rsa AAAAB3xxxxxxxx......."
timeouts {
create = "60m"
delete = "90m"
}
}
You can use the cloudinit_config data source to run custom scripts on the OKE node pool in OCI. For example, render a script template into a local value:
second_script_template = templatefile("${path.module}/cloudinit/second.template.sh", {})
Then combine one or more scripts like this:
data "cloudinit_config" "worker" {
gzip = false
base64_encode = true
part {
filename = "worker.sh"
content_type = "text/x-shellscript"
content = local.worker_script_template
}
part {
filename = "second.sh"
content_type = "text/x-shellscript"
content = local.second_script_template
}
part {
filename = "third.sh"
content_type = "text/x-shellscript"
content = local.third_script_template
}
}
Refer to: https://github.com/oracle-terraform-modules/terraform-oci-oke/blob/main/docs/instructions.adoc#14-configuring-cloud-init-for-the-nodepools
If you just want to edit the default script, see: https://github.com/oracle-terraform-modules/terraform-oci-oke/blob/main/docs/cloudinit.adoc
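To wire the rendered cloud-init into the node pool, it can be passed as user data in the node pool's node metadata. A minimal sketch, assuming the cloudinit_config data source above and that node_metadata accepts a base64-encoded user_data key (the approach the linked oracle-terraform-modules docs describe); your scripts would still need to take care of enabling the Bastion plugin on the node's Oracle Cloud Agent:
resource "oci_containerengine_node_pool" "node_pool01" {
  # ... existing arguments from the question ...

  # Cloud-init for every node in the pool; rendered is already base64-encoded
  # because base64_encode = true is set on the data source.
  node_metadata = {
    user_data = data.cloudinit_config.worker.rendered
  }
}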

Terraform Cosmosdb multiple subnet

I am getting an error when adding additional subnets for Cosmos DB.
Terraform Error:
Error: Invalid value for module argument
on manifests/backend.tf line 289, in module "cosmosdb_1":
289: vnet_subnet_id = ["azurerm_subnet.backend.id", "azurerm_subnet.application.id", "azurerm_subnet.frontend.id"]
The given value is not suitable for child module variable "vnet_subnet_id"
defined at modules/cosmosdb/variable.tf:26,1-26: element 0: object required.
Configuration in the module's main.tf that uses the subnet variable:
dynamic "virtual_network_rule" {
for_each = var.virtual_network_rule != null ? toset(var.virtual_network_rule) : []
content {
id = [var.vnet_subnet_id]
ignore_missing_vnet_service_endpoint = virtual_network_rule.value.ignore_missing_vnet_service_endpoint
}
}
The variable.tf file defines the variable type:
variable "vnet_subnet_id" {
type = list(object({
id = string,
ignore_missing_vnet_service_endpoint = bool
}))
}
The main configuration for the vnet subnets is defined in backend.tf:
module "cosmosdb_1" {
depends_on = [module.vnet]
source = "./../modules/cosmosdb"
cosmodb_account_name = "${var.env}${var.cosmodb_account_name_1}"
resource_group_name = "${var.env}-bsai"
ip_range_filter = var.ip_range_filter
location = "${var.region}"
cosmosdb_name = var.cosmosdb_name_1
enable_automatic_failover = var.enable_automatic_failover
failover_location_secondary = var.failover_location_secondary
failover_priority_secondary = var.failover_priority_secondary
vnet_subnet_id = ["azurerm_subnet.backend.id", "azurerm_subnet.application.id", "azurerm_subnet.frontend.id"]
}
Solution:
Module: main.tf
dynamic "virtual_network_rule" {
for_each = var.vnet_subnet_id
content {
id = virtual_network_rule.value.id
ignore_missing_vnet_service_endpoint = true
}
}
variable.tf
variable "vnet_subnet_id" {
description = "List of subnets to be used in Cosmosdb."
type = list(object({
id = string
}))
}
Backend.tf
vnet_subnet_id = [
  { id = azurerm_subnet.backend.id },
  { id = azurerm_subnet.application.id },
  { id = azurerm_subnet.frontend.id }
]
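Note that in the original backend.tf the subnet IDs were wrapped in quotes, so they were being passed as literal strings rather than resource references; the fix above passes the actual attribute values inside objects. Equivalently, the list of objects could be built with a for expression over the same three subnets:
vnet_subnet_id = [
  for sid in [azurerm_subnet.backend.id, azurerm_subnet.application.id, azurerm_subnet.frontend.id] :
  { id = sid }
]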

Tags format on Packer ec2-ami deployment

I'm trying to create an Amazon EC2 AMI for the first time using HashiCorp Packer, but I'm getting this failure on the tag creation. I've already done some trial-and-error retries on the format, but still no luck.
[ec2-boy-oh-boy#ip-172-168-99-23 pogi]$ packer init .
Error: Missing item separator
on variables.pkr.hcl line 28, in variable "tags":
27: default = [
28: "environment" : "testing"
Expected a comma to mark the beginning of the next item.
My code ec2.pkr.hcl looks like this:
[ec2-boy-oh-boy#ip-172-168-99-23 pogi]$ cat ec2.pkr.hcl
packer {
  required_plugins {
    amazon = {
      version = ">= 0.0.2"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

source "amazon-ebs" "ec2" {
  ami_name           = "${var.ami_prefix}-${local.timestamp}"
  instance_type      = "t2.micro"
  region             = "us-east-1"
  vpc_id             = "${var.vpc}"
  subnet_id          = "${var.subnet}"
  security_group_ids = ["${var.sg}"]
  ssh_username       = "ec2-boy-oh-boy"

  source_ami_filter {
    filters = {
      name                = "amzn2-ami-hvm-2.0*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["12345567896"]
  }

  launch_block_device_mappings = [
    {
      "device_name": "/dev/xvda",
      "delete_on_termination": true
      "volume_size": 10
      "volume_type": "gp2"
    }
  ]

  run_tags        = "${var.tags}"
  run_volume_tags = "${var.tags}"
}

build {
  sources = [
    "source.amazon-ebs.ec2"
  ]
}
[ec2-boy-oh-boy#ip-172-168-99-23 pogi]$
Then my code variables.pkr.hcl looks like this:
[ec2-boy-oh-boy#ip-172-168-99-23 pogi]$ cat variables.pkr.hcl
locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

variable "ami_prefix" {
  type    = string
  default = "ec2-boy-oh-boy"
}

variable "vpc" {
  type    = string
  default = "vpc-asd957d"
}

variable "subnet" {
  type    = string
  default = "subnet-asd957d"
}

variable "sg" {
  type    = string
  default = "sg-asd957d"
}

variable "tags" {
  type = map
  default = [
    environment = "testing"
    type        = "none"
    production  = "later"
  ]
}
Your default value for the tags variable uses list syntax ([ ... ]), but both the run_tags and run_volume_tags directives expect a map of strings.
I was able to make the following changes to your variables file and run packer init successfully:
variable "tags" {
default = {
environment = "testing"
type = "none"
production = "later"
}
type = map(string)
}

Terraform: overwrite tag value in sub-modules

I'm using Terraform modules and passing a set of default_tags to sub-modules for consistent tagging. This is shown below, and it has the advantage that sub-modules inherit their parent's tags but can also add their own.
However, what I'd also like to do is overwrite some of the inherited tag values, especially "Name", but I can't seem to make this work.
In the example below (Terraform 0.13.5 with AWS), the first value specified for any tag, e.g. "scratch-test" in the root module, is cascaded down to the sub-modules and cannot be changed, so both the VPC Name tag and the subnet Name tag end up as "scratch-test".
How can I overwrite a tag value in a sub-module?
# root variables.tf
variable "default_tags" {
  type = map
  default = {
    environment_type = "Dev Environment"
  }
}

# root main.tf
provider "aws" {
  region = var.region
}

module "vpc" {
  source         = "../../../../scratch/tags-pattern"
  vpc_name       = "scratch-test"
  public_subnets = ["10.0.0.0/26"]
  default_tags = merge(map(
    "Name", "scratch-test"
  ), var.default_tags)
}
# ../../../../scratch/tags-pattern/main.tf
module "the_vpc" {
  source   = "../../terraform/tf-lib/network/vpc"
  vpc_name = var.vpc_name
  vpc_cidr = "10.0.0.0/24"
  default_tags = merge(map(
    "Name", "scratch-test-vpc",
    "vpc_tag", "vpc"
  ), var.default_tags)
}

# Add a subnet
module "public_subnets" {
  source  = "../../terraform/tf-lib/network/subnet"
  vpc_id  = module.the_vpc.output_vpc_id
  subnets = var.public_subnets
  default_tags = merge(map(
    "Name", "scratch-test-subnet",
    "subnet_tag", "public"
  ), var.default_tags)
}
# tf-lib/network/vpc/main.tf
resource "aws_vpc" "vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = merge(map(
    "module", "tf-lib/network/vpc"
  ), var.default_tags)
}
The variable.tf file for each module contains the statement:
variable "default_tags" {}
The merge function gives precedence to the maps that appear later in the argument sequence, as the documentation describes:
merge takes an arbitrary number of maps or objects, and returns a single map or object that contains a merged set of elements from all arguments.
If more than one given map or object defines the same key or attribute, then the one that is later in the argument sequence takes precedence. If the argument types do not match, the resulting type will be an object matching the type structure of the attributes after the merging rules have been applied.
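For example, when two maps define the same key, the later argument wins:
merge({ Name = "a" }, { Name = "b" }) # returns { Name = "b" }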
So, to allow yourself to override the default tags, specify the default tags first, like this:
# root variables.tf
variable "default_tags" {
  type = map
  default = {
    environment_type = "Dev Environment"
  }
}

# root main.tf
provider "aws" {
  region = var.region
}

module "vpc" {
  source         = "../../../../scratch/tags-pattern"
  vpc_name       = "scratch-test"
  public_subnets = ["10.0.0.0/26"]
  default_tags = merge(var.default_tags, map(
    "Name", "scratch-test"
  ))
}
The above might also look clearer using the {} map syntax rather than the map function too:
# root main.tf
provider "aws" {
  region = var.region
}

module "vpc" {
  source         = "../../../../scratch/tags-pattern"
  vpc_name       = "scratch-test"
  public_subnets = ["10.0.0.0/26"]
  default_tags = merge(var.default_tags, {
    Name = "scratch-test",
  })
}
As mentioned in the map function documentation, this function is deprecated and will eventually be removed:
This function is deprecated. From Terraform v0.12, the Terraform language has built-in syntax for creating maps using the { and } delimiters. Use the built-in syntax instead. The map function will be removed in a future version of Terraform.
As of AWS Provider v3.38.0, this can be done more simply. See this announcement blog. Here is your request implemented with the new provider and no tags variables.
# root main.tf
provider "aws" {
  region = var.region
  default_tags {
    tags = {
      environment_type = "Dev Environment"
    }
  }
}

module "vpc" {
  source         = "../../../../scratch/tags-pattern"
  vpc_name       = "scratch-test"
  public_subnets = ["10.0.0.0/26"]
}
# ../../../../scratch/tags-pattern/main.tf
module "the_vpc" {
  source   = "../../terraform/tf-lib/network/vpc"
  vpc_name = var.vpc_name
  vpc_cidr = "10.0.0.0/24"
}
# tf-lib/network/vpc/main.tf
resource "aws_vpc" "vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name    = "scratch-test-vpc"
    module  = "tf-lib/network/vpc"
    vpc_tag = "vpc"
  }
}
Here is the main documentation for default_tags.