Terraform 0.12+ InvalidSubnetID.NotFound

I have two public subnets declared in my VPC and now I want to create an EC2 instance in each of the two public subnets, but Terraform doesn't properly resolve the subnet IDs.
Here is what I have defined:
resource "aws_subnet" "archer-public-1" {
vpc_id = aws_vpc.archer.id
cidr_block = "10.0.1.0/24"
map_public_ip_on_launch = "true"
availability_zone = "${var.AZ1}"
}
resource "aws_subnet" "archer-public-2" {
vpc_id = aws_vpc.archer.id
cidr_block = "10.0.2.0/24"
map_public_ip_on_launch = "true"
availability_zone = "${var.AZ2}"
}
Here is my EC2 resource definition with the subnet expression that I tried unsuccessfully.
resource "aws_instance" "nginx" {
count = 2
ami = var.AMIS[var.AWS_REGION]
instance_type = "t2.micro"
subnet_id = "aws_subnet.archer-public-${count.index+1}.id" <== why doesn't this work?!
}
The variable interpolation does produce the proper values for the two subnets, archer-public-1 and archer-public-2, yet Terraform produces these errors:
Error: Error launching source instance: InvalidSubnetID.NotFound: The subnet ID 'aws_subnet.archer-public-1.id' does not exist
status code: 400, request id: 26b4f710-e968-484d-a17a-6faa5a9d15d5
Yet when I invoke the terraform console, I can see that it properly resolves these objects as expected:
> aws_subnet.archer-public-1
{
  "arn" = "arn:aws:ec2:us-west-2:361879417564:subnet/subnet-0fb47d0d30f501585"
  "assign_ipv6_address_on_creation" = false
  "availability_zone" = "us-west-2a"
  "availability_zone_id" = "usw2-az1"
  "cidr_block" = "10.0.1.0/24"
  "id" = "subnet-0fb47d0d30f501585"
  "ipv6_cidr_block" = ""
  "ipv6_cidr_block_association_id" = ""
  "map_public_ip_on_launch" = true
  "outpost_arn" = ""
  "owner_id" = "361879417564"
  "tags" = {
    "Name" = "archer-public-1"
  }
  "vpc_id" = "vpc-074637b06747e227b"
}
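For context: the quoted expression above evaluates to the literal string "aws_subnet.archer-public-1.id", and that literal string is what gets sent to AWS (hence the InvalidSubnetID.NotFound error); Terraform cannot turn a string built at runtime into a resource reference. A minimal sketch of one common way to express the intent instead, indexing the real resource references by count.index (a suggested fix, not code from the original post):
resource "aws_instance" "nginx" {
  count         = 2
  ami           = var.AMIS[var.AWS_REGION]
  instance_type = "t2.micro"

  # pick the matching subnet reference per instance instead of
  # assembling the reference as a string
  subnet_id = element([aws_subnet.archer-public-1.id, aws_subnet.archer-public-2.id], count.index)
}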

Related

Cannot provide RDS subnet through different terraform modules

I am unable to create an RDS instance due to a failure in creating a subnet. I have different modules that I use to create an AWS infrastructure.
The main ones that I am having trouble with are RDS and VPC, where in the first one I create the database:
rds/main.tf
resource "aws_db_parameter_group" "education" {
name = "education"
family = "postgres14"
parameter {
name = "log_connections"
value = "1"
}
}
resource "aws_db_instance" "education" {
identifier = "education"
instance_class = "db.t3.micro"
allocated_storage = 5
engine = "postgres"
engine_version = "14.1"
username = "edu"
password = var.db_password
db_subnet_group_name = var.database_subnets
vpc_security_group_ids = var.rds_service_security_groups
parameter_group_name = aws_db_parameter_group.education.name
publicly_accessible = false
skip_final_snapshot = true
}
rds/variables.tf
variable "db_username" {
description = "RDS root username"
default = "someusername"
}
variable "db_password" {
description = "RDS root user password"
sensitive = true
}
variable "vpc_id" {
description = "VPC ID"
}
variable "rds_service_security_groups" {
description = "Comma separated list of security groups"
}
variable "database_subnets" {
description = "List of private subnets"
}
And the latter is where I create the subnets:
vpc/main.tf
resource "aws_subnet" "private" {
vpc_id = aws_vpc.main.id
cidr_block = element(var.private_subnets, count.index)
availability_zone = element(var.availability_zones, count.index)
count = length(var.private_subnets)
tags = {
Name = "${var.name}-private-subnet-${var.environment}-${format("%03d", count.index+1)}"
Environment = var.environment
}
}
resource "aws_subnet" "public" {
vpc_id = aws_vpc.main.id
cidr_block = element(var.public_subnets, count.index)
availability_zone = element(var.availability_zones, count.index)
count = length(var.public_subnets)
map_public_ip_on_launch = true
tags = {
Name = "${var.name}-public-subnet-${var.environment}-${format("%03d", count.index+1)}"
Environment = var.environment
}
}
resource "aws_subnet" "database" {
vpc_id = aws_vpc.main.id
cidr_block = element(var.database_subnets, count.index)
availability_zone = element(var.availability_zones, count.index)
count = length(var.database_subnets)
tags = {
Name = "Education"
Environment = var.environment
}
}
vpc/variables.tf
variable "name" {
description = "the name of the stack"
}
variable "environment" {
description = "the name of the environment "
}
variable "cidr" {
description = "The CIDR block for the VPC."
}
variable "public_subnets" {
description = "List of public subnets"
}
variable "private_subnets" {
description = "List of private subnets"
}
variable "database_subnets" {
description = "Database subnetes"
}
variable "availability_zones" {
description = "List of availability zones"
}
Then in the root directory I have a main.tf file where I create everything. In there I call the rds module:
main.tf
module "rds" {
source = "./rds"
vpc_id = module.vpc.id
database_subnets = module.vpc.database_subnets
rds_service_security_groups = [module.security_groups.rds]
db_password = var.db_password
}
The error that I keep getting is this:
Error: Incorrect attribute value type
│
│ on rds\main.tf line 19, in resource "aws_db_instance" "education":
│ 19: db_subnet_group_name = var.database_subnets
│ ├────────────────
│ │ var.database_subnets is tuple with 2 elements
│
│ Inappropriate value for attribute "db_subnet_group_name": string required.
Any idea how I can fix it?
You are trying to pass a list of DB Subnets into a parameter that takes a DB Subnet Group name.
You need to modify your RDS module to create a DB Subnet Group with the given subnet IDs, and then pass that group name to the instance:
resource "aws_db_subnet_group" "education" {
name = "education"
subnet_ids = var.database_subnets
}
resource "aws_db_instance" "education" {
identifier = "education"
db_subnet_group_name = aws_db_subnet_group.education.name
...
}
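For this to work, var.database_subnets has to be a list of subnet IDs produced by the VPC module. A hedged sketch of what the matching variable type and VPC output might look like (vpc/outputs.tf is not shown in the original post and is assumed here):
rds/variables.tf
variable "database_subnets" {
  description = "List of database subnet IDs"
  type        = list(string)
}
vpc/outputs.tf (assumed)
output "database_subnets" {
  # IDs of the database subnets created with count above
  value = aws_subnet.database[*].id
}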

Dynamic creation of kubernetes manifest in Terraform

I'm trying to create multiple K8s manifests based on VPC subnets as the following code suggests:
resource "aws_subnet" "pod_subnets" {
for_each = module.pods_subnet_addrs.network_cidr_blocks
depends_on = [
aws_vpc_ipv4_cidr_block_association.pod_cidr
]
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
availability_zone = each.key
cidr_block = each.value
tags = merge(
local.common_tags,
{
"Name" = format(
"${var.environment_name}-pods-network-%s",
each.key)
} )
}
resource "kubernetes_manifest" "ENIconfig" {
for_each = module.pods_subnet_addrs.network_cidr_blocks
manifest = {
"apiVersion" = "crd.k8s.amazonaws.com/v1alpha1"
"kind" = "ENIConfig"
"metadata" = {
"name" = each.key
}
"spec" = {
"securityGroups" = [
aws_security_group.worker_node.id,
]
"subnet" = aws_subnet.pod_subnets[each.key].id
}
}
}
However, when I'm running Terraform I'm getting the following error:
Provider "registry.terraform.io/hashicorp/kubernetes" planned an invalid value for kubernetes_manifest.ENIconfig["eu-west-3a"].manifest: planned value cty.ObjectVal(map[string]cty.Value{"apiVersion":cty.StringVal("crd.k8s.amazonaws.com/v1alpha1"), "kind":cty.StringVal("ENIConfig"),"metadata":cty.ObjectVal(map[string]cty.Value{"name":cty.StringVal("eu-west-3a")}), "spec":cty.ObjectVal(map[string]cty.Value{"securityGroups":cty.TupleVal([]cty.Value{cty.StringVal("sg-07e264400925e9a4a")}),"subnet":cty.NullVal(cty.String)})}) does not match config value cty.ObjectVal(map[string]cty.Value{"apiVersion":cty.StringVal("crd.k8s.amazonaws.com/v1alpha1"),"kind":cty.StringVal("ENIConfig"),"metadata":cty.ObjectVal(map[string]cty.Value{"name":cty.StringVal("eu-west-3a")}), "spec":cty.ObjectVal(map[string]cty.Value{"securityGroups":cty.TupleVal([]cty.Value{cty.StringVal("sg-07e264400925e9a4a")}),"subnet":cty.UnknownVal(cty.String)})}).
Any idea what I'm doing wrong here?
It turns out that kubernetes_manifest cannot be planned with values that are only known after the referenced resources have been created; only values that are already known at plan time can populate the manifest.
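One workaround, assuming a two-step workflow is acceptable, is to create the subnets with a targeted apply first so that their IDs are already known when the manifests are planned:
terraform apply -target=aws_subnet.pod_subnets
terraform apply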

Helm - Kubernetes cluster unreachable: the server has asked for the client to provide credentials

I'm trying to deploy an EKS cluster with self-managed node groups using Terraform. While I can deploy the cluster with addons, VPC, subnets and all other resources, it always fails at helm:
Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials
with module.eks-ssp-kubernetes-addons.module.ingress_nginx[0].helm_release.nginx[0]
on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/ingress-nginx/main.tf line 19, in resource "helm_release" "nginx":
resource "helm_release" "nginx" {
This error repeats for metrics_server, lb_ingress, argocd, but cluster-autoscaler throws:
Warning: Helm release "cluster-autoscaler" was created but has a failed status.
with module.eks-ssp-kubernetes-addons.module.cluster_autoscaler[0].helm_release.cluster_autoscaler[0]
on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/cluster-autoscaler/main.tf line 1, in resource "helm_release" "cluster_autoscaler":
resource "helm_release" "cluster_autoscaler" {
My main.tf looks like this:
terraform {
  backend "remote" {}

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.66.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.7.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.4.1"
    }
  }
}
data "aws_eks_cluster" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
  name = module.eks-ssp.eks_cluster_id
}
provider "aws" {
  access_key = "xxx"
  secret_key = "xxx"
  region     = "xxx"

  assume_role {
    role_arn = "xxx"
  }
}
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}
provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    token                  = data.aws_eks_cluster_auth.cluster.token
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  }
}
My eks.tf looks like this:
module "eks-ssp" {
source = "github.com/aws-samples/aws-eks-accelerator-for-terraform"
# EKS CLUSTER
tenant = "DevOpsLabs2b"
environment = "dev-test"
zone = ""
terraform_version = "Terraform v1.1.4"
# EKS Cluster VPC and Subnet mandatory config
vpc_id = "xxx"
private_subnet_ids = ["xxx","xxx", "xxx", "xxx"]
# EKS CONTROL PLANE VARIABLES
create_eks = true
kubernetes_version = "1.19"
# EKS SELF MANAGED NODE GROUPS
self_managed_node_groups = {
self_mg = {
node_group_name = "DevOpsLabs2b"
subnet_ids = ["xxx","xxx", "xxx", "xxx"]
create_launch_template = true
launch_template_os = "bottlerocket" # amazonlinux2eks or bottlerocket or windows
custom_ami_id = "xxx"
public_ip = true # Enable only for public subnets
pre_userdata = <<-EOT
yum install -y amazon-ssm-agent \
systemctl enable amazon-ssm-agent && systemctl start amazon-ssm-agent \
EOT
disk_size = 10
instance_type = "t2.small"
desired_size = 2
max_size = 10
min_size = 0
capacity_type = "" # Optional Use this only for SPOT capacity as capacity_type = "spot"
k8s_labels = {
Environment = "dev-test"
Zone = ""
WorkerType = "SELF_MANAGED_ON_DEMAND"
}
additional_tags = {
ExtraTag = "t2x-on-demand"
Name = "t2x-on-demand"
subnet_type = "public"
}
create_worker_security_group = false # Creates a dedicated sec group for this Node Group
},
}
}
# NOTE: the wrapping module block below is inferred from the module address in the
# error output (module.eks-ssp-kubernetes-addons); the source path is an assumption.
module "eks-ssp-kubernetes-addons" {
  source = "github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons"

  enable_amazon_eks_vpc_cni = true
  amazon_eks_vpc_cni_config = {
    addon_name               = "vpc-cni"
    addon_version            = "v1.7.5-eksbuild.2"
    service_account          = "aws-node"
    resolve_conflicts        = "OVERWRITE"
    namespace                = "kube-system"
    additional_iam_policies  = []
    service_account_role_arn = ""
    tags                     = {}
  }

  enable_amazon_eks_kube_proxy = true
  amazon_eks_kube_proxy_config = {
    addon_name               = "kube-proxy"
    addon_version            = "v1.19.8-eksbuild.1"
    service_account          = "kube-proxy"
    resolve_conflicts        = "OVERWRITE"
    namespace                = "kube-system"
    additional_iam_policies  = []
    service_account_role_arn = ""
    tags                     = {}
  }

  # K8s Add-ons
  enable_aws_load_balancer_controller = true
  enable_metrics_server               = true
  enable_cluster_autoscaler           = true
  enable_aws_for_fluentbit            = true
  enable_argocd                       = true
  enable_ingress_nginx                = true

  depends_on = [module.eks-ssp.self_managed_node_groups]
}
The OP has confirmed in the comments that the problem was resolved:
Of course. I think I found the issue. Doing "kubectl get svc" throws: "An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::xxx:user/terraform_deploy is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::xxx:user/terraform_deploy"
Solved it by using my actual role, that's crazy. No idea why it was calling itself.
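In other words, the assume_role target has to be an IAM role rather than the IAM user running Terraform. A minimal sketch of the corrected provider block, where the account ID and role name are placeholders:
provider "aws" {
  region = "xxx"

  assume_role {
    # must be an IAM *role* ARN, not the user performing the deployment
    role_arn = "arn:aws:iam::111122223333:role/terraform-deploy"
  }
}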
For a similar problem, see also this issue.
I solved this error by adding dependencies to the Helm installations.
depends_on waits for the dependent modules to complete successfully, and only then does the Helm module run.
module "nginx-ingress" {
depends_on = [module.eks, module.aws-load-balancer-controller]
source = "terraform-module/release/helm"
...}
module "aws-load-balancer-controller" {
depends_on = [module.eks]
source = "terraform-module/release/helm"
...}
module "helm_autoscaler" {
depends_on = [module.eks]
source = "terraform-module/release/helm"
...}

How to create routing for kubernetes with nginx ingress in terraform - scaleway

I have a Kubernetes setup with a cluster and two pools (nodes), and I have also set up an (nginx) ingress server for Kubernetes with Helm. All of this is written in Terraform for Scaleway. What I am struggling with is how to configure the ingress server to route to my Kubernetes pools/nodes depending on the URL path.
For example, I want [url]/api to go to my scaleway_k8s_pool.api and [url]/auth to go to my scaleway_k8s_pool.auth.
This is my Terraform code:
provider "scaleway" {
zone = "fr-par-1"
region = "fr-par"
}
resource "scaleway_registry_namespace" "main" {
name = "main_container_registry"
description = "Main container registry"
is_public = false
}
resource "scaleway_k8s_cluster" "main" {
name = "main"
description = "The main cluster"
version = "1.20.5"
cni = "calico"
tags = ["i'm an awsome tag"]
autoscaler_config {
disable_scale_down = false
scale_down_delay_after_add = "5m"
estimator = "binpacking"
expander = "random"
ignore_daemonsets_utilization = true
balance_similar_node_groups = true
expendable_pods_priority_cutoff = -5
}
}
resource "scaleway_k8s_pool" "api" {
cluster_id = scaleway_k8s_cluster.main.id
name = "api"
node_type = "DEV1-M"
size = 1
autoscaling = true
autohealing = true
min_size = 1
max_size = 5
}
resource "scaleway_k8s_pool" "auth" {
cluster_id = scaleway_k8s_cluster.main.id
name = "auth"
node_type = "DEV1-M"
size = 1
autoscaling = true
autohealing = true
min_size = 1
max_size = 5
}
resource "null_resource" "kubeconfig" {
depends_on = [scaleway_k8s_pool.api, scaleway_k8s_pool.auth] # at least one pool here
triggers = {
host = scaleway_k8s_cluster.main.kubeconfig[0].host
token = scaleway_k8s_cluster.main.kubeconfig[0].token
cluster_ca_certificate = scaleway_k8s_cluster.main.kubeconfig[0].cluster_ca_certificate
}
}
output "cluster_url" {
value = scaleway_k8s_cluster.main.apiserver_url
}
provider "helm" {
kubernetes {
host = null_resource.kubeconfig.triggers.host
token = null_resource.kubeconfig.triggers.token
cluster_ca_certificate = base64decode(
null_resource.kubeconfig.triggers.cluster_ca_certificate
)
}
}
resource "helm_release" "ingress" {
name = "ingress"
chart = "ingress-nginx"
repository = "https://kubernetes.github.io/ingress-nginx"
namespace = "kube-system"
}
How would I go about configuring the nginx ingress server to route to my Kubernetes pools?
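In Kubernetes, path-based routing is expressed with an Ingress resource that targets Services (workloads are pinned to particular pools via node selectors on their Deployments), not the Scaleway pools themselves. A hedged sketch of what that routing could look like with the kubernetes provider's kubernetes_ingress_v1 resource, assuming a kubernetes provider configured from the same kubeconfig and Services named api and auth listening on port 80 (those names and ports are placeholders):
resource "kubernetes_ingress_v1" "routes" {
  metadata {
    name = "routes"
  }

  spec {
    ingress_class_name = "nginx"

    rule {
      http {
        path {
          path      = "/api"
          path_type = "Prefix"
          backend {
            service {
              name = "api" # hypothetical Service fronting the api workload
              port {
                number = 80
              }
            }
          }
        }
        path {
          path      = "/auth"
          path_type = "Prefix"
          backend {
            service {
              name = "auth" # hypothetical Service fronting the auth workload
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}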

Alibaba Cloud Managed Kubernetes Terraform

I want to create a Kubernetes cluster with Terraform, following this doc page: https://www.terraform.io/docs/providers/alicloud/r/cs_managed_kubernetes.html
variable "name" {
default = "my-first-k8s"
}
data "alicloud_zones" main {
available_resource_creation = "VSwitch"
}
data "alicloud_instance_types" "default" {
availability_zone = "${data.alicloud_zones.main.zones.0.id}"
cpu_core_count = 1
memory_size = 2
}
Where do I insert the vswitch ID, and how do I set the region ID?
You can insert the vswitch id in the resource definition:
resource "alicloud_cs_managed_kubernetes" "k8s" {
name = "${var.name}"
availability_zone = "${data.alicloud_zones.main.zones.0.id}"
new_nat_gateway = true
worker_instance_types = ["${data.alicloud_instance_types.default.instance_types.0.id}"]
worker_numbers = [2]
password = "Test12345"
pod_cidr = "172.20.0.0/16"
service_cidr = "172.21.0.0/20"
install_cloud_monitor = true
worker_disk_category = "cloud_efficiency"
vswitch_ids = ["your-alibaba-vswitch-id"]
}
For the zones (if you want to override the defaults) based on this and the docs, you need to do something like this:
data "alicloud_zones" main {
available_resource_creation = "VSwitch"
zones = [
{
id = "..."
local_name = "..."
...
},
{
id = "..."
local_name = "..."
...
},
...
]
}
To set the region:
You can set the region while configuring the Alicloud provider in Terraform itself:
provider "alicloud" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
For instance, let me consider Beijing as the region:
provider "alicloud" {
access_key = "accesskey"
secret_key = "secretkey"
region = "cn-beijing"
}
To set vswitch IDs:
While defining the resource section, we can insert the desired vswitches:
resource "alicloud_instance"{
# ...
instance_name = "in-the-vpc"
vswitch_id = "${data.alicloud_vswitches.vswitches_ds.vswitches.0.id}"
# ...
}
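The data.alicloud_vswitches.vswitches_ds reference above assumes a matching data source is defined somewhere, along these lines (a sketch; the vpc_id filter is a placeholder):
data "alicloud_vswitches" "vswitches_ds" {
  # look up existing vswitches in a given VPC
  vpc_id = "vpc-xxxxxxxx"
}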
For instance, let me consider vsw-25naue4gz as the vswitch id:
resource "alicloud_instance"{
# ...
vswitch_id = "vsw-25naue4gz"
# ...
}