I'm trying to create multiple K8s manifests based on VPC subnets, as the following code shows:
resource "aws_subnet" "pod_subnets" {
for_each = module.pods_subnet_addrs.network_cidr_blocks
depends_on = [
aws_vpc_ipv4_cidr_block_association.pod_cidr
]
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
availability_zone = each.key
cidr_block = each.value
tags = merge(
local.common_tags,
{
"Name" = format(
"${var.environment_name}-pods-network-%s",
each.key)
} )
}
resource "kubernetes_manifest" "ENIconfig" {
for_each = module.pods_subnet_addrs.network_cidr_blocks
manifest = {
"apiVersion" = "crd.k8s.amazonaws.com/v1alpha1"
"kind" = "ENIConfig"
"metadata" = {
"name" = each.key
}
"spec" = {
"securityGroups" = [
aws_security_group.worker_node.id,
]
"subnet" = aws_subnet.pod_subnets[each.key].id
}
}
}
However, when I run Terraform I get the following error:
Provider "registry.terraform.io/hashicorp/kubernetes" planned an invalid value for kubernetes_manifest.ENIconfig["eu-west-3a"].manifest: planned value cty.ObjectVal(map[string]cty.Value{"apiVersion":cty.StringVal("crd.k8s.amazonaws.com/v1alpha1"), "kind":cty.StringVal("ENIConfig"),"metadata":cty.ObjectVal(map[string]cty.Value{"name":cty.StringVal("eu-west-3a")}), "spec":cty.ObjectVal(map[string]cty.Value{"securityGroups":cty.TupleVal([]cty.Value{cty.StringVal("sg-07e264400925e9a4a")}),"subnet":cty.NullVal(cty.String)})}) does not match config value cty.ObjectVal(map[string]cty.Value{"apiVersion":cty.StringVal("crd.k8s.amazonaws.com/v1alpha1"),"kind":cty.StringVal("ENIConfig"),"metadata":cty.ObjectVal(map[string]cty.Value{"name":cty.StringVal("eu-west-3a")}), "spec":cty.ObjectVal(map[string]cty.Value{"securityGroups":cty.TupleVal([]cty.Value{cty.StringVal("sg-07e264400925e9a4a")}),"subnet":cty.UnknownVal(cty.String)})}).
Any idea what I'm doing wrong here?
It turns out that kubernetes_manifest cannot plan a manifest that references values which have not been created yet: the provider needs every value in the manifest to be known at plan time, so only values that already exist (static values, or attributes of already-created resources) work.
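A common workaround (an assumption on my part, not part of the original answer) is a two-phase apply: create the subnets first with -target so their IDs are known, then run the full apply:
terraform apply -target='aws_subnet.pod_subnets'
terraform apply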
My requirement is to create or delete a resource by specifying an enable flag of true or false (when false, the resource should be deleted; when true, the resource should be created).
Kindly refer to the code below: here I am creating a "confluent topic" resource and driving it dynamically with a for_each expression.
Confluent Topic creation
topic.tf file:
resource "confluent_kafka_topic" "topic" {
kafka_cluster {
id = confluent_kafka_cluster.dedicated.id
}
for_each = { for t in var.topic_name : t.topic_name => t }
topic_name = each.value["topic_name"]
partitions_count = each.value["partitions_count"]
rest_endpoint = confluent_kafka_cluster.dedicated.rest_endpoint
credentials {
key = confluent_api_key.app-manager-kafka-api-key.id
secret = confluent_api_key.app-manager-kafka-api-key.secret
}
}
Variable declared as:
variable "topic_name" {
type = list(map(string))
default = [{
"topic_name" = "default_topic"
}]
}
And finally it is executed through the DEV.tfvars file:
topic_name = [
  {
    topic_name       = "avro-topic-1"
    partitions_count = "6"
  },
  {
    topic_name       = "json-topic-1"
    partitions_count = "8"
  },
]
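For reference, a var-file like this is passed to Terraform on the command line:
terraform plan -var-file=DEV.tfvars
terraform apply -var-file=DEV.tfvars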
The above code works fine, and I am able to create and delete multiple resources. I want to modify it further and add a flag/toggle to create or delete a resource.
Example as shown below:
topic_name = [
  {
    topic_name       = "avro-topic-1"
    partitions_count = "6"
    enable           = true  # this flag will create the resource
  },
  {
    topic_name       = "json-topic-1"
    partitions_count = "8"
    enable           = false # this flag will delete the resource
  },
]
Kindly suggest how this can be achieved, and whether there is a different approach to follow.
As mentioned in my comment, I think this can be achieved with the following change:
resource "confluent_kafka_topic" "topic" {
for_each = { for t in var.topic_name : t.topic_name => t if t.enable }
kafka_cluster {
id = confluent_kafka_cluster.dedicated.id
}
topic_name = each.value["topic_name"]
partitions_count = each.value["partitions_count"]
rest_endpoint = confluent_kafka_cluster.dedicated.rest_endpoint
credentials {
key = confluent_api_key.app-manager-kafka-api-key.id
secret = confluent_api_key.app-manager-kafka-api-key.secret
}
}
Additionally, for_each should probably be at the top of the resource block so that it is immediately visible to the reader. The if t.enable part ensures that for_each creates a resource only for entries whose enable key is true.
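If some entries in var.topic_name might omit the enable key entirely, a defensive variant of the same expression can supply a default with try(); this is a sketch, and defaulting missing keys to true is my assumption:
for_each = { for t in var.topic_name : t.topic_name => t if try(t.enable, true) }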
I have this Terraform GKE cluster with 3 nodes. When I deploy the cluster, all nodes end up in the same zone, europe-west1-b.
gke-cluster.yml
resource "google_container_cluster" "primary" {
name = var.cluster_name
initial_node_count = var.initial_node_count
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
node_config {
//machine_type = "e2-medium"
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
metadata = {
disable-legacy-endpoints = "true"
}
labels = {
app = var.app_name
}
tags = ["app", var.app_name]
}
timeouts {
create = "30m"
update = "40m"
}
}
variables.tf
variable "cluster_name" {
default = "cluster"
}
variable "app_name" {
default = "my-app"
}
variable "initial_node_count" {
default = 3
}
variable "kubernetes_min_ver" {
default = "latest"
}
variable "kubernetes_max_ver" {
default = "latest"
}
variable "remove_default_node_pool" {
default = false
}
variable "project" {
default = "your-project-name"
}
variable "credentials" {
default = "terraform-key.json"
}
variable "region" {
default = "europe-west1"
}
variable "zone" {
type = list(string)
description = "The zones to host the cluster in."
default = ["europe-west1-b", "europe-west1-c", "europe-west1-d"]
}
I would like to know whether it is possible to deploy each node in a different zone. If so, how can I do it using Terraform?
Simply set location to a region instead of a zone:
resource "google_container_cluster" "primary" {
  name               = "cluster"
  location           = "us-central1"
  initial_node_count = "3"
}
This creates a regional cluster. The above will bring up 9 nodes, with each of the region's zones (us-central1-f, -a, and -b) containing 3 nodes. If you only want 1 node per zone, just change initial_node_count to 1.
More info at the Argument reference in the google_container_cluster docs.
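If you want to pin the exact zones instead of letting GKE choose the region's defaults, google_container_cluster also accepts node_locations; here is a minimal sketch reusing the asker's zone variable (the variable wiring is my assumption):
resource "google_container_cluster" "primary" {
  name               = var.cluster_name
  location           = var.region # a region, so the cluster is regional
  node_locations     = var.zone   # e.g. ["europe-west1-b", "europe-west1-c", "europe-west1-d"]
  initial_node_count = 1          # node count is per zone
}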
I have two public subnets declared in my VPC, and now I want to create an EC2 instance in each of the two public subnets, but Terraform doesn't properly resolve the subnet IDs.
Here is what I have defined:
resource "aws_subnet" "archer-public-1" {
vpc_id = aws_vpc.archer.id
cidr_block = "10.0.1.0/24"
map_public_ip_on_launch = "true"
availability_zone = "${var.AZ1}"
}
resource "aws_subnet" "archer-public-2" {
vpc_id = aws_vpc.archer.id
cidr_block = "10.0.2.0/24"
map_public_ip_on_launch = "true"
availability_zone = "${var.AZ2}"
}
Here is my EC2 resource definition with the subnet expression that I tried unsuccessfully.
resource "aws_instance" "nginx" {
count = 2
ami = var.AMIS[var.AWS_REGION]
instance_type = "t2.micro"
subnet_id = "aws_subnet.archer-public-${count.index+1}.id" <== why doesn't this work?!
}
The variable interpolation does produce the proper names for the two subnets, archer-public-1 and archer-public-2, yet Terraform produces these errors:
Error: Error launching source instance: InvalidSubnetID.NotFound: The subnet ID 'aws_subnet.archer-public-1.id' does not exist
status code: 400, request id: 26b4f710-e968-484d-a17a-6faa5a9d15d5
Yet when I invoke the terraform console, I can see that it properly resolves these objects as expected:
> aws_subnet.archer-public-1
{
  "arn" = "arn:aws:ec2:us-west-2:361879417564:subnet/subnet-0fb47d0d30f501585"
  "assign_ipv6_address_on_creation" = false
  "availability_zone" = "us-west-2a"
  "availability_zone_id" = "usw2-az1"
  "cidr_block" = "10.0.1.0/24"
  "id" = "subnet-0fb47d0d30f501585"
  "ipv6_cidr_block" = ""
  "ipv6_cidr_block_association_id" = ""
  "map_public_ip_on_launch" = true
  "outpost_arn" = ""
  "owner_id" = "361879417564"
  "tags" = {
    "Name" = "archer-public-1"
  }
  "vpc_id" = "vpc-074637b06747e227b"
}
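The error itself shows the cause: the quoted expression is just a string, and Terraform never resolves resource addresses embedded in string literals, so the literal text aws_subnet.archer-public-1.id is sent to AWS as a subnet ID. A minimal sketch of one common fix, indexing a list of real references by count.index:
resource "aws_instance" "nginx" {
  count         = 2
  ami           = var.AMIS[var.AWS_REGION]
  instance_type = "t2.micro"

  # A list of actual resource references, not a string, indexed per instance.
  subnet_id = [aws_subnet.archer-public-1.id, aws_subnet.archer-public-2.id][count.index]
}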
Our Hashicorp Vault deployment on k8s (on premise) seems to seal itself after a few days. I am unable to find a way to keep it always unsealed so that the applications that use it do not fail.
Activate auto-unsealing. Here's my GCP example, in Terraform (I am running Hashicorp Vault on Kubernetes).
resource "google_service_account" "hashicorp_vault" {
project = var.project
account_id = "hashicorp-vault"
display_name = "Hashicorp Vault Service Account"
}
resource "google_service_account_iam_member" "hashicorp_vault_iam_workload_identity_user_member" {
service_account_id = google_service_account.hashicorp_vault.name
role = "roles/iam.workloadIdentityUser"
member = "serviceAccount:${var.project}.svc.id.goog[${helm_release.hashicorp_vault.namespace}/hashicorp-vault]"
}
resource "google_project_iam_custom_role" "hashicorp_vault_role" {
project = var.project
role_id = "hashicorp_vault"
title = "Hashicorp Vault"
permissions = [
"cloudkms.cryptoKeyVersions.useToEncrypt",
"cloudkms.cryptoKeyVersions.useToDecrypt",
"cloudkms.cryptoKeys.get",
]
}
resource "google_project_iam_member" "cicd_bot_role_member" {
project = var.project
role = google_project_iam_custom_role.hashicorp_vault_role.name
member = "serviceAccount:${google_service_account.hashicorp_vault.email}"
}
resource "google_kms_key_ring" "hashicorp_vault" {
project = var.project
location = var.region
name = "hashicorp-vault"
}
resource "google_kms_crypto_key" "hashicorp_vault_recovery_key" {
name = "hashicorp-vault-recovery-key"
key_ring = google_kms_key_ring.hashicorp_vault.id
lifecycle {
prevent_destroy = true
}
}
resource "helm_release" "hashicorp_vault" {
name = "hashicorp-vault"
repository = "https://helm.releases.hashicorp.com"
chart = "vault"
version = var.hashicorp_vault_version
namespace = "hashicorp-vault"
create_namespace = true
set {
name = "server.extraEnvironmentVars.VAULT_SEAL_TYPE"
value = "gcpckms"
}
set {
name = "server.extraEnvironmentVars.GOOGLE_PROJECT"
value = var.project
}
set {
name = "server.extraEnvironmentVars.GOOGLE_REGION"
value = var.region
}
set {
name = "server.extraEnvironmentVars.VAULT_GCPCKMS_SEAL_KEY_RING"
value = google_kms_key_ring.hashicorp_vault.name
}
set {
name = "server.extraEnvironmentVars.VAULT_GCPCKMS_SEAL_CRYPTO_KEY"
value = google_kms_crypto_key.hashicorp_vault_recovery_key.name
}
set {
name = "server.serviceaccount.annotations.iam\\.gke\\.io/gcp-service-account"
value = google_service_account.hashicorp_vault.email
}
}
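For context, the extraEnvironmentVars set above express the same configuration as a gcpckms seal stanza in the Vault server config; a sketch of the equivalent (the placeholder values are assumptions matching the resources above):
seal "gcpckms" {
  project    = "your-project-id"              # GOOGLE_PROJECT
  region     = "your-region"                  # GOOGLE_REGION
  key_ring   = "hashicorp-vault"              # VAULT_GCPCKMS_SEAL_KEY_RING
  crypto_key = "hashicorp-vault-recovery-key" # VAULT_GCPCKMS_SEAL_CRYPTO_KEY
}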
After doing this, I noticed that my Hashicorp Vault pod was in an error state, so I deleted it so that it could pick up the new environment variables. Then it came online with a message indicating that it was ready for "migration" to the new unsealing strategy.
Then, use the operator to migrate to the new sealing strategy, supplying your existing unseal keys:
vault operator unseal -migrate
I want to create a Kubernetes cluster with Terraform, following this doc page: https://www.terraform.io/docs/providers/alicloud/r/cs_managed_kubernetes.html
variable "name" {
default = "my-first-k8s"
}
data "alicloud_zones" main {
available_resource_creation = "VSwitch"
}
data "alicloud_instance_types" "default" {
availability_zone = "${data.alicloud_zones.main.zones.0.id}"
cpu_core_count = 1
memory_size = 2
}
Where do I insert the vswitch id? And how do I set the region id?
You can insert the vswitch id in the resource definition:
resource "alicloud_cs_managed_kubernetes" "k8s" {
name = "${var.name}"
availability_zone = "${data.alicloud_zones.main.zones.0.id}"
new_nat_gateway = true
worker_instance_types = ["${data.alicloud_instance_types.default.instance_types.0.id}"]
worker_numbers = [2]
password = "Test12345"
pod_cidr = "172.20.0.0/16"
service_cidr = "172.21.0.0/20"
install_cloud_monitor = true
worker_disk_category = "cloud_efficiency"
vswitch_ids = ["your-alibaba-vswitch-id"]
}
For the zones (if you want to override the defaults), based on this and the docs, you need to do something like this:
data "alicloud_zones" "main" {
  available_resource_creation = "VSwitch"
  zones = [
    {
      id         = "..."
      local_name = "..."
      ...
    },
    {
      id         = "..."
      local_name = "..."
      ...
    },
    ...
  ]
}
To set the region:
You can set the region while configuring the Alicloud provider in Terraform itself:
provider "alicloud" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
For instance, let me consider Beijing as the region:
provider "alicloud" {
access_key = "accesskey"
secret_key = "secretkey"
region = "cn-beijing"
}
To set the vswitch IDs:
While defining the resource, we can insert the desired vswitches:
resource "alicloud_instance"{
# ...
instance_name = "in-the-vpc"
vswitch_id = "${data.alicloud_vswitches.vswitches_ds.vswitches.0.id}"
# ...
}
For instance, let me consider vsw-25naue4gz as the vswitch id:
resource "alicloud_instance"{
# ...
vswitch_id = "vsw-25naue4gz"
# ...
}
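For completeness, the data.alicloud_vswitches.vswitches_ds reference in the earlier snippet implies a data source declared elsewhere; a minimal sketch, where the name_regex filter and the VPC reference are assumptions:
data "alicloud_vswitches" "vswitches_ds" {
  name_regex = "my-vswitch.*"       # assumption: filter vswitches by name
  vpc_id     = alicloud_vpc.main.id # assumption: an existing VPC resource
}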