Tags format on Packer ec2-ami deployment

I'm trying to create an Amazon EC2 AMI for the first time using HashiCorp Packer, but I'm getting the following failure on tag creation. I've already tried several variations of the format through trial and error, with no luck:
[ec2-boy-oh-boy#ip-172-168-99-23 pogi]$ packer init .
Error: Missing item separator

  on variables.pkr.hcl line 28, in variable "tags":
  27:   default = [
  28:     "environment" : "testing"

Expected a comma to mark the beginning of the next item.
My ec2.pkr.hcl looks like this:
[ec2-boy-oh-boy#ip-172-168-99-23 pogi]$ cat ec2.pkr.hcl
packer {
  required_plugins {
    amazon = {
      version = ">= 0.0.2"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

source "amazon-ebs" "ec2" {
  ami_name           = "${var.ami_prefix}-${local.timestamp}"
  instance_type      = "t2.micro"
  region             = "us-east-1"
  vpc_id             = "${var.vpc}"
  subnet_id          = "${var.subnet}"
  security_group_ids = ["${var.sg}"]
  ssh_username       = "ec2-boy-oh-boy"

  source_ami_filter {
    filters = {
      name                = "amzn2-ami-hvm-2.0*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["12345567896"]
  }

  launch_block_device_mappings = [
    {
      "device_name" : "/dev/xvda",
      "delete_on_termination" : true
      "volume_size" : 10
      "volume_type" : "gp2"
    }
  ]

  run_tags        = "${var.tags}"
  run_volume_tags = "${var.tags}"
}

build {
  sources = [
    "source.amazon-ebs.ec2"
  ]
}
[ec2-boy-oh-boy#ip-172-168-99-23 pogi]$
Then my variables.pkr.hcl looks like this:
[ec2-boy-oh-boy#ip-172-168-99-23 pogi]$ cat variables.pkr.hcl
locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

variable "ami_prefix" {
  type    = string
  default = "ec2-boy-oh-boy"
}

variable "vpc" {
  type    = string
  default = "vpc-asd957d"
}

variable "subnet" {
  type    = string
  default = "subnet-asd957d"
}

variable "sg" {
  type    = string
  default = "sg-asd957d"
}

variable "tags" {
  type = map
  default = [
    environment = "testing"
    type        = "none"
    production  = "later"
  ]
}

Your default value for the tags variable is written as a list. Both the run_tags and run_volume_tags arguments expect a map(string) instead.
I was able to make the following changes to your variables file and run packer init successfully:
variable "tags" {
default = {
environment = "testing"
type = "none"
production = "later"
}
type = map(string)
}
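If you later need different tag values per environment, one option is a variable definitions file, since Packer automatically loads any file in the working directory whose name ends in .auto.pkrvars.hcl. A minimal sketch (the file name and values are illustrative):

# prod.auto.pkrvars.hcl -- illustrative file name; Packer auto-loads
# *.auto.pkrvars.hcl files, and values here override variable defaults.
tags = {
  environment = "production"
  type        = "none"
  production  = "now"
}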


Terraform import error retrieving Virtual Machine Scale Set created from an image

I'm trying to import a Linux VM Scale Set that was deployed in the Azure Portal from a custom shared image, also created in the portal. I'm using the following command:
terraform import module.vm_scaleset.azurerm_linux_virtual_machine_scale_set.vmscaleset /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myrg/providers/Microsoft.Compute/virtualMachineScaleSets/vmss1
Import fails with the following error:
Error: retrieving Virtual Machine Scale Set "vmss1" (Resource Group "myrg"): properties.virtualMachineProfile.osProfile was nil
Below is my VM Scale Set module code:
data "azurerm_lb" "loadbalancer" {
name = var.lbName
resource_group_name = var.rgName
}
data "azurerm_lb_backend_address_pool" "addresspool" {
loadbalancer_id = data.azurerm_lb.loadbalancer.id
name = var.lbAddressPool
}
data "azurerm_shared_image" "scaleset_image" {
provider = azurerm.ist
name = var.scaleset_image_name
gallery_name = var.scaleset_image_gallery
resource_group_name = var.scaleset_image_rgname
}
resource "azurerm_linux_virtual_machine_scale_set" "vmscaleset" {
name = var.vmssName
resource_group_name = var.rgName
location = var.location
sku = var.vms_sku
instances = var.vm_instances
admin_username = azurerm_key_vault_secret.vmssusername.value
admin_password = azurerm_key_vault_secret.vmsspassword.value
disable_password_authentication = false
zones = var.vmss_zones
source_image_id = data.azurerm_shared_image.scaleset_image.id
tags = module.vmss_tags.tags
os_disk {
storage_account_type = var.vmss_osdisk_storage
caching = "ReadWrite"
create_option = "FromImage"
}
data_disk {
storage_account_type = "StandardSSD_LRS"
caching = "None"
disk_size_gb = 1000
lun = 10
create_option = "FromImage"
}
network_interface {
name = format("nic-%s-001", var.vmssName)
primary = true
enable_accelerated_networking = true
ip_configuration {
name = "internal"
load_balancer_backend_address_pool_ids = [data.azurerm_lb_backend_address_pool.addresspool.id]
primary = true
subnet_id = var.subnet_id
}
}
lifecycle {
ignore_changes = [
tags
]
}
}
The source image was created from a Linux RHEL 8.6 VM that included a custom node.js script.
Examination of the Scale Set in the portal does indeed show that the virtualMachineProfile.osProfile is absent.
I haven't been able to find a solution on any forum. Is there any way to ignore the error and import the Scale Set anyway?

OCI: Create nodes in Kubernetes nodepool with bastion agent configured

I'm trying to deploy a Kubernetes cluster in Oracle Cloud Infrastructure using Terraform.
I want every node deployed (in a private subnet) to have the Bastion agent plugin activated in Cloud Agent, but I cannot see how to define those instance details (setting agent_config on the node pool instances).
My code so far is:
resource "oci_containerengine_cluster" "generated_oci_containerengine_cluster" {
compartment_id = var.cluster_compartment
endpoint_config {
is_public_ip_enabled = "true"
subnet_id = oci_core_subnet.oke_public_api.id
}
kubernetes_version = var.kubernetes_version
name = "josealbarran_labcloudnative_oke"
options {
kubernetes_network_config {
pods_cidr = "10.244.0.0/16"
services_cidr = "10.96.0.0/16"
}
service_lb_subnet_ids = [oci_core_subnet.oke_public_lb.id]
}
vcn_id = var.cluster_vcn
}
# Check doc: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/containerengine_node_pool
resource "oci_containerengine_node_pool" "node_pool01" {
cluster_id = "${oci_containerengine_cluster.generated_oci_containerengine_cluster.id}"
compartment_id = var.cluster_compartment
initial_node_labels {
key = "name"
value = "pool01"
}
kubernetes_version = var.kubernetes_version
name = "lab_cloud_native_oke_pool01"
node_config_details {
size = "${length(data.oci_identity_availability_domains.ads.availability_domains)}"
dynamic "placement_configs" {
for_each = data.oci_identity_availability_domains.ads.availability_domains[*].name
content {
availability_domain = placement_configs.value
subnet_id = oci_core_subnet.oke_private_worker.id
}
}
}
node_shape = "VM.Standard.A1.Flex"
node_shape_config {
memory_in_gbs = "16"
ocpus = "1"
}
node_source_details {
image_id = "ocid1.image.oc1.eu-frankfurt-1.aaaaaaaalgodii3qx3mfasp6ai22bja7mabfwsxiwkzxx7lhdfdbbuyqcznq"
source_type = "IMAGE"
}
ssh_public_key = "ssh-rsa AAAAB3xxxxxxxx......."
timeouts {
create = "60m"
delete = "90m"
}
}
You can use the "cloudinit_config" data source to run custom scripts on the OKE node pool nodes in OCI. Each script is rendered with templatefile, for example:

second_script_template = templatefile("${path.module}/cloudinit/second.template.sh", {})

The rendered scripts are then combined as parts:
data "cloudinit_config" "worker" {
gzip = false
base64_encode = true
part {
filename = "worker.sh"
content_type = "text/x-shellscript"
content = local.worker_script_template
}
part {
filename = "second.sh"
content_type = "text/x-shellscript"
content = local.second_script_template
}
part {
filename = "third.sh"
content_type = "text/x-shellscript"
content = local.third_script_template
}
}
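To attach the rendered configuration to the node pool, it can be passed through node metadata. A minimal sketch, assuming OKE's standard user_data metadata key; the content is already base64-encoded because base64_encode = true above:

resource "oci_containerengine_node_pool" "node_pool01" {
  # ... arguments as in the question ...

  # Cloud-init for the pool's nodes; OKE reads it from the
  # "user_data" key of node_metadata.
  node_metadata = {
    user_data = data.cloudinit_config.worker.rendered
  }
}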
Refer to: https://github.com/oracle-terraform-modules/terraform-oci-oke/blob/main/docs/instructions.adoc#14-configuring-cloud-init-for-the-nodepools
If you just want to edit the default script, see: https://github.com/oracle-terraform-modules/terraform-oci-oke/blob/main/docs/cloudinit.adoc

Terraform Cosmosdb multiple subnet

I'm getting an error when adding an additional subnet for Cosmos DB.
Terraform Error:
Error: Invalid value for module argument

  on manifests/backend.tf line 289, in module "cosmosdb_1":
  289:   vnet_subnet_id = ["azurerm_subnet.backend.id", "azurerm_subnet.application.id", "azurerm_subnet.frontend.id"]

The given value is not suitable for child module variable "vnet_subnet_id"
defined at modules/cosmosdb/variable.tf:26,1-26: element 0: object required.
The module's main.tf configuration that consumes the subnet variable:
dynamic "virtual_network_rule" {
for_each = var.virtual_network_rule != null ? toset(var.virtual_network_rule) : []
content {
id = [var.vnet_subnet_id]
ignore_missing_vnet_service_endpoint = virtual_network_rule.value.ignore_missing_vnet_service_endpoint
}
}
The variable.tf file defines the variable type:
variable "vnet_subnet_id" {
type = list(object({
id = string,
ignore_missing_vnet_service_endpoint = bool
}))
}
The main configuration for the vnet subnets is defined in backend.tf:
module "cosmosdb_1" {
depends_on = [module.vnet]
source = "./../modules/cosmosdb"
cosmodb_account_name = "${var.env}${var.cosmodb_account_name_1}"
resource_group_name = "${var.env}-bsai"
ip_range_filter = var.ip_range_filter
location = "${var.region}"
cosmosdb_name = var.cosmosdb_name_1
enable_automatic_failover = var.enable_automatic_failover
failover_location_secondary = var.failover_location_secondary
failover_priority_secondary = var.failover_priority_secondary
vnet_subnet_id = ["azurerm_subnet.backend.id", "azurerm_subnet.application.id", "azurerm_subnet.frontend.id"]
}
Solution:
Module: main.tf
dynamic "virtual_network_rule" {
for_each = var.vnet_subnet_id
content {
id = virtual_network_rule.value.id
ignore_missing_vnet_service_endpoint = true
}
}
variable.tf
variable "vnet_subnet_id" {
description = "List of subnets to be used in Cosmosdb."
type = list(object({
id = string
}))
}
backend.tf
vnet_subnet_id = [
  { id = azurerm_subnet.backend.id },
  { id = azurerm_subnet.application.id },
  { id = azurerm_subnet.frontend.id },
]
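If the ignore_missing_vnet_service_endpoint flag stays hardcoded anyway, a simpler variant (a sketch, not part of the original solution) is to accept a plain list of subnet ID strings:

variable "vnet_subnet_id" {
  description = "List of subnet IDs to allow through to Cosmos DB."
  type        = list(string)
}

# Module: main.tf -- iterate over the plain strings.
dynamic "virtual_network_rule" {
  for_each = toset(var.vnet_subnet_id)
  content {
    id                                   = virtual_network_rule.value
    ignore_missing_vnet_service_endpoint = true
  }
}

# backend.tf then passes bare references (unquoted, so Terraform resolves
# them instead of treating them as literal strings, which was part of the
# original problem):
# vnet_subnet_id = [azurerm_subnet.backend.id, azurerm_subnet.application.id, azurerm_subnet.frontend.id]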

Terraform error when using dynamic block in for_each

I was trying to upgrade Terraform to v0.12 and to use for_each and dynamic blocks to implement more flexible resources, but I'm getting some errors.
main.tf
resource "azurerm_eventhub" "entities" {
for_each = var.eventhub_ns_name
namespace_name = each.value.ns_name
resource_group_name = data.azurerm_resource_group.rg.name
dynamic "eh_name_associations"{
for_each = each.value.event_hub
content {
name = eh_name_associations.value.eh_name
partition_count = eh_name_associations.value.partition_count
message_retention = eh_name_associations.value.message_retention
}
}
}
vars.tf
variable "eventhub_ns_name" {
type = map(object({
ns_name = string,
event_hub = list(object({
eh_name = string
partition_count = number
message_retention = number
})),
}))
default ={
eventhub_ns_name001={
ns_name = "eventhub001",
event_hub = [{
eh_name = "survey"
partition_count = 1
message_retention = 1
},
{
eh_name = "wechat"
partition_count = 2
message_retention = 1
}],
},
}
}
The errors:
The argument "partition_count" is required, but no definition was found.
The argument "message_retention" is required, but no definition was found.
The argument "name" is required, but no definition was found.
Blocks of type "eh_name_associations" are not expected here.
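The last error is the telling one: a dynamic block can only generate nested blocks that the resource schema actually defines, and azurerm_eventhub has no eh_name_associations block; each Event Hub is its own resource. A hedged sketch of the usual restructuring (reusing the variable above) flattens the nesting and creates one azurerm_eventhub per entry:

locals {
  # One element per event hub, carrying its namespace along.
  eventhubs = flatten([
    for ns_key, ns in var.eventhub_ns_name : [
      for eh in ns.event_hub : {
        key               = "${ns_key}-${eh.eh_name}"
        ns_name           = ns.ns_name
        eh_name           = eh.eh_name
        partition_count   = eh.partition_count
        message_retention = eh.message_retention
      }
    ]
  ])
}

resource "azurerm_eventhub" "entities" {
  for_each            = { for eh in local.eventhubs : eh.key => eh }
  name                = each.value.eh_name
  namespace_name      = each.value.ns_name
  resource_group_name = data.azurerm_resource_group.rg.name
  partition_count     = each.value.partition_count
  message_retention   = each.value.message_retention
}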

terraform overwrite tag value in sub modules

I'm using terraform modules and passing a set of default_tags to sub-modules for consistent tagging. This is shown below, and has the advantage that sub-modules inherit their parent's tags, but can also add their own.
However, I'd also like to be able to overwrite some of the inherited tag values, especially "Name", but I can't seem to make this work.
In the example below (Terraform 0.13.5 with AWS), the first value specified for any tag, e.g. "scratch-test" in the root module, is cascaded down to the sub-modules and cannot be changed, so both the VPC Name tag and the subnet Name tag end up as "scratch-test".
How can I overwrite a tag value in a sub-module?
# root variables.tf
variable "default_tags" {
  type = map
  default = {
    environment_type = "Dev Environment"
  }
}

# root main.tf
provider "aws" {
  region = var.region
}

module "vpc" {
  source         = "../../../../scratch/tags-pattern"
  vpc_name       = "scratch-test"
  public_subnets = ["10.0.0.0/26"]
  default_tags = merge(map(
    "Name", "scratch-test"
  ), var.default_tags)
}

# ../../../../scratch/tags-pattern/main.tf
module "the_vpc" {
  source   = "../../terraform/tf-lib/network/vpc"
  vpc_name = var.vpc_name
  vpc_cidr = "10.0.0.0/24"
  default_tags = merge(map(
    "Name", "scratch-test-vpc",
    "vpc_tag", "vpc"
  ), var.default_tags)
}

# Add a subnet
module "public_subnets" {
  source  = "../../terraform/tf-lib/network/subnet"
  vpc_id  = module.the_vpc.output_vpc_id
  subnets = var.public_subnets
  default_tags = merge(map(
    "Name", "scratch-test-subnet",
    "subnet_tag", "public"
  ), var.default_tags)
}

# tf-lib/network/vpc/main.tf
resource "aws_vpc" "vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = merge(map(
    "module", "tf-lib/network/vpc"
  ), var.default_tags)
}
The variable.tf file for each module contains the statement:
variable "default_tags" {}
The merge function gives precedence to maps that appear later in the argument sequence:
merge takes an arbitrary number of maps or objects, and returns a single map or object that contains a merged set of elements from all arguments.
If more than one given map or object defines the same key or attribute, then the one that is later in the argument sequence takes precedence. If the argument types do not match, the resulting type will be an object matching the type structure of the attributes after the merging rules have been applied.
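To make that precedence concrete, a small sketch using the values from this example:

locals {
  inherited = {
    Name             = "scratch-test"
    environment_type = "Dev Environment"
  }

  # The later argument wins on key collisions, so Name becomes
  # "scratch-test-vpc" while environment_type passes through unchanged.
  tags = merge(local.inherited, { Name = "scratch-test-vpc" })
}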
So to allow your explicit values to override the inherited defaults, specify the default tags first, like this:
# root variables.tf
variable "default_tags" {
  type = map
  default = {
    environment_type = "Dev Environment"
  }
}

# root main.tf
provider "aws" {
  region = var.region
}

module "vpc" {
  source         = "../../../../scratch/tags-pattern"
  vpc_name       = "scratch-test"
  public_subnets = ["10.0.0.0/26"]
  default_tags = merge(var.default_tags, map(
    "Name", "scratch-test"
  ))
}
The above might also look clearer using the {} map syntax rather than the map function:
# root main.tf
provider "aws" {
  region = var.region
}

module "vpc" {
  source         = "../../../../scratch/tags-pattern"
  vpc_name       = "scratch-test"
  public_subnets = ["10.0.0.0/26"]
  default_tags = merge(var.default_tags, {
    Name = "scratch-test",
  })
}
As mentioned in the map function documentation, this function is deprecated and will eventually be removed:

This function is deprecated. From Terraform v0.12, the Terraform language has built-in syntax for creating maps using the { and } delimiters. Use the built-in syntax instead. The map function will be removed in a future version of Terraform.
As of AWS Provider v3.38.0, this can be done more simply; see the announcement blog post. Here is your request implemented with the new provider functionality and no tags variables:
# root main.tf
provider "aws" {
  region = var.region

  default_tags {
    tags = {
      environment_type = "Dev Environment"
    }
  }
}

module "vpc" {
  source         = "../../../../scratch/tags-pattern"
  vpc_name       = "scratch-test"
  public_subnets = ["10.0.0.0/26"]
}

# ../../../../scratch/tags-pattern/main.tf
module "the_vpc" {
  source   = "../../terraform/tf-lib/network/vpc"
  vpc_name = var.vpc_name
  vpc_cidr = "10.0.0.0/24"
}

# tf-lib/network/vpc/main.tf
resource "aws_vpc" "vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name    = "scratch-test-vpc"
    module  = "tf-lib/network/vpc"
    vpc_tag = "vpc"
  }
}
Here is the main documentation for default_tags.
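One detail worth knowing: with default_tags, tags set directly on a resource take precedence over provider-level defaults with the same key, so per-resource overrides still work. A minimal sketch (the subnet resource is illustrative, not from the original modules):

# tf-lib/network/subnet/main.tf (illustrative)
resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.vpc.id
  cidr_block = "10.0.0.0/26"

  tags = {
    # Overrides a provider-level default "Name" tag, if one is defined;
    # environment_type still comes from the provider's default_tags.
    Name = "scratch-test-subnet"
  }
}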