I'm provisioning multiple k8s clusters using Terraform.
On each cluster, I want to create namespaces.
My first attempt didn't work: the resources stayed in the "still creating" status forever.
Then I tried to define multiple kubernetes providers.
I'm now facing a problem because provider blocks don't support the "count" argument, and I rely on count for the conditional rules that decide whether or not to create each cluster.
See the logic below:
cluster_europe.tf
resource "azurerm_kubernetes_cluster" "k8sProjectNE" {
  count               = var.DeployK8sProjectEU == "true" ? 1 : 0
  name                = var.clustername_ne
  location            = local.rg.location
  resource_group_name = local.rg.name
  dns_prefix          = var.clustername_ne
}
clusterusa.tf
resource "azurerm_kubernetes_cluster" "k8sProjectUSA" {
  count               = var.DeployK8sProjectUSA == "true" ? 1 : 0
  name                = var.clustername_usa
  location            = local.rg.location
  resource_group_name = local.rg.name
  dns_prefix          = var.clustername_usa
}
The problem happened when creating namespaces.
resource "kubernetes_namespace" "NE-staging" {
  count = var.DeployK8sProjectEU == "true" ? 1 : 0

  metadata {
    labels = {
      mylabel = "staging"
    }
    name = "staging"
  }

  depends_on = [azurerm_kubernetes_cluster.k8sProjectNE]
  provider   = kubernetes.k8sProjectEU
}
resource "kubernetes_namespace" "USA-staging" {
  count = var.DeployK8sProjectUSA == "true" ? 1 : 0

  metadata {
    labels = {
      mylabel = "staging"
    }
    name = "staging"
  }

  depends_on = [azurerm_kubernetes_cluster.k8sProjectUSA]
  provider   = kubernetes.k8sProjectUSA
}
I define 2 kubernetes providers in main.tf and set provider = kubernetes.xxx in the resources that I want to create.
main.tf
provider "kubernetes" {
  # count = "${var.DeployK8sProjectUSA == "true" ? 1 : 0}"
  host                   = azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.host
  username               = azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.username
  password               = azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.password
  client_certificate     = base64decode(azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.cluster_ca_certificate)
  alias                  = "k8sProjectUSA"
}

provider "kubernetes" {
  # count = "${var.DeployK8sProjectEU == "true" ? 1 : 0}"
  host                   = azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.host
  username               = azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.username
  password               = azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.password
  client_certificate     = base64decode(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.cluster_ca_certificate)
  alias                  = "k8sProjectEU"
}
Problem:
The kubernetes provider block does not support the count argument, which breaks my conditional creation rule.
Error: Invalid index
│
│ on main.tf line 32, in provider "kubernetes":
│ 32: host = azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.host
│ ├────────────────
│ │ azurerm_kubernetes_cluster.k8sProjectNE is empty tuple
│
│ The given key does not identify an element in this collection value: the collection has no elements.
╵
I'm close to a solution, but it is failing at this final step.
The above was one attempt at a solution, but the real need is the following:
I want to conditionally create clusters (infra in the USA and/or infra in Europe and/or DRP infra). This is the reason why we need conditional rules.
Then, on each created cluster, we have to be able to create resources (a namespace is one example, but we also have secrets, etc.).
If I don't define multiple providers, Terraform cannot connect to the right cluster and errors out.
I implemented a solution like this:
provider "kubernetes" {
  host                   = try(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.host, "")
  username               = try(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.username, "")
  password               = try(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.password, "")
  client_certificate     = try(base64decode(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.client_certificate), "")
  client_key             = try(base64decode(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.client_key), "")
  cluster_ca_certificate = try(base64decode(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.cluster_ca_certificate), "")
  alias                  = "k8sProjectEU"
}
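An alternative that keeps the conditional in one place (a sketch, assuming Terraform 0.13 or newer and a hypothetical ./k8s-resources module that holds the namespace, secret, and other per-cluster resources) is to instantiate that module with count and hand it the matching aliased provider:

module "k8s_resources_eu" {
  source = "./k8s-resources"   # hypothetical module containing kubernetes_namespace, kubernetes_secret, ...
  count  = var.DeployK8sProjectEU == "true" ? 1 : 0

  providers = {
    kubernetes = kubernetes.k8sProjectEU
  }
}

The resources inside the module then use the module's default kubernetes provider, so they need no per-resource provider or count arguments.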
Related
I am unable to create an RDS instance due to a failure in creating a subnet. I have different modules that I use to create an AWS infrastructure.
The main ones that I am having trouble with are RDS and VPC. In the first one I create the database:
rds/main.tf
resource "aws_db_parameter_group" "education" {
  name   = "education"
  family = "postgres14"

  parameter {
    name  = "log_connections"
    value = "1"
  }
}

resource "aws_db_instance" "education" {
  identifier             = "education"
  instance_class         = "db.t3.micro"
  allocated_storage      = 5
  engine                 = "postgres"
  engine_version         = "14.1"
  username               = "edu"
  password               = var.db_password
  db_subnet_group_name   = var.database_subnets
  vpc_security_group_ids = var.rds_service_security_groups
  parameter_group_name   = aws_db_parameter_group.education.name
  publicly_accessible    = false
  skip_final_snapshot    = true
}
rds/variables.tf
variable "db_username" {
  description = "RDS root username"
  default     = "someusername"
}

variable "db_password" {
  description = "RDS root user password"
  sensitive   = true
}

variable "vpc_id" {
  description = "VPC ID"
}

variable "rds_service_security_groups" {
  description = "Comma separated list of security groups"
}

variable "database_subnets" {
  description = "List of private subnets"
}
And the latter is where I create the subnets, etc.
vpc/main.tf
resource "aws_subnet" "private" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = element(var.private_subnets, count.index)
  availability_zone = element(var.availability_zones, count.index)
  count             = length(var.private_subnets)

  tags = {
    Name        = "${var.name}-private-subnet-${var.environment}-${format("%03d", count.index + 1)}"
    Environment = var.environment
  }
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = element(var.public_subnets, count.index)
  availability_zone       = element(var.availability_zones, count.index)
  count                   = length(var.public_subnets)
  map_public_ip_on_launch = true

  tags = {
    Name        = "${var.name}-public-subnet-${var.environment}-${format("%03d", count.index + 1)}"
    Environment = var.environment
  }
}

resource "aws_subnet" "database" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = element(var.database_subnets, count.index)
  availability_zone = element(var.availability_zones, count.index)
  count             = length(var.database_subnets)

  tags = {
    Name        = "Education"
    Environment = var.environment
  }
}
vpc/variables.tf
variable "name" {
  description = "the name of the stack"
}

variable "environment" {
  description = "the name of the environment"
}

variable "cidr" {
  description = "The CIDR block for the VPC."
}

variable "public_subnets" {
  description = "List of public subnets"
}

variable "private_subnets" {
  description = "List of private subnets"
}

variable "database_subnets" {
  description = "Database subnets"
}

variable "availability_zones" {
  description = "List of availability zones"
}
Then in the root directory I have a main.tf file where I create everything. In there I call the rds module:
main.tf
module "rds" {
  source                      = "./rds"
  vpc_id                      = module.vpc.id
  database_subnets            = module.vpc.database_subnets
  rds_service_security_groups = [module.security_groups.rds]
  db_password                 = var.db_password
}
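For context, module.vpc.database_subnets presumably comes from an output in the VPC module along these lines (a sketch; the original vpc/outputs.tf isn't shown here):

# vpc/outputs.tf (assumed)
output "database_subnets" {
  description = "IDs of the database subnets"
  value       = aws_subnet.database[*].id
}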
The error that I keep getting is this:
Error: Incorrect attribute value type
│
│ on rds\main.tf line 19, in resource "aws_db_instance" "education":
│ 19: db_subnet_group_name = var.database_subnets
│ ├────────────────
│ │ var.database_subnets is tuple with 2 elements
│
│ Inappropriate value for attribute "db_subnet_group_name": string required.
Any idea how I can fix it?
You are trying to pass a list of DB Subnets into a parameter that takes a DB Subnet Group name.
You need to modify your RDS module to create a DB Subnet Group with the given subnet IDs, and then pass that group name to the instance:
resource "aws_db_subnet_group" "education" {
  name       = "education"
  subnet_ids = var.database_subnets
}

resource "aws_db_instance" "education" {
  identifier           = "education"
  db_subnet_group_name = aws_db_subnet_group.education.name
  ...
}
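Since var.database_subnets now carries a list of subnet IDs rather than a group name, it may also help to declare its type explicitly in rds/variables.tf (a small sketch, not part of the original answer):

variable "database_subnets" {
  description = "List of private subnet IDs for the DB subnet group"
  type        = list(string)
}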
I have built a Kubernetes cluster on Google Cloud and I am now trying to use the kubernetes_secret resource to create a secret. Here is my configuration:
resource "google_service_account" "default" {
  account_id   = "gke-service-account"
  display_name = "GKE Service Account"
}

resource "google_container_cluster" "cluster" {
  name                     = "${var.cluster-name}-${terraform.workspace}"
  location                 = var.region
  initial_node_count       = 1
  project                  = var.project-id
  remove_default_node_pool = true
}

resource "google_container_node_pool" "cluster_node_pool" {
  name       = "${var.cluster-name}-${terraform.workspace}-node-pool"
  location   = var.region
  cluster    = google_container_cluster.cluster.name
  node_count = 1

  node_config {
    preemptible     = true
    machine_type    = "e2-medium"
    service_account = google_service_account.default.email
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]
  }
}

provider "kubernetes" {
  host                   = "https://${google_container_cluster.cluster.endpoint}"
  client_certificate     = base64decode(google_container_cluster.cluster.master_auth.0.client_certificate)
  client_key             = base64decode(google_container_cluster.cluster.master_auth.0.client_key)
  cluster_ca_certificate = base64decode(google_container_cluster.cluster.master_auth.0.cluster_ca_certificate)
}
resource "kubernetes_secret" "cloudsql-credentials" {
  metadata {
    name = "database-credentials" # The name of the secret
  }

  data = {
    connection-name = var.database-connection-name
    username        = var.database-user
    password        = var.database-password
  }

  type = "kubernetes.io/basic-auth"
}
However I get the following error when creating the kubernetes_secret resource:
╷
│ Error: secrets is forbidden: User "system:anonymous" cannot create resource "secrets" in API group "" in the namespace "default"
│
│ with module.kubernetes-cluster.kubernetes_secret.cloudsql-credentials,
│ on gke/main.tf line 58, in resource "kubernetes_secret" "cloudsql-credentials":
│ 58: resource "kubernetes_secret" "cloudsql-credentials" {
│
╵
What am I missing here? I really don't understand. In the documentation I have found the following that could maybe help:
Depending on whether you have a current context set this may require `config_context_auth_info` and/or `config_context_cluster` and/or `config_context`
But it is not clear at all how this should be set and there are no examples provided. Any help will be appreciated. Thank you.
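For what it's worth, the quoted passage refers to pointing the provider at a kubeconfig context instead of inline credentials; a minimal sketch (the path and context name below are placeholders, not values from this setup) would look like:

provider "kubernetes" {
  config_path    = "~/.kube/config"   # kubeconfig file on the machine running terraform
  config_context = "my-gke-context"   # hypothetical context name from that kubeconfig
}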
Well, I'm new to Terraform and also to Kubernetes, and I faced an issue deploying images after creating a Kubernetes cluster.
I have created a module that creates a Kubernetes cluster and outputs the kubeconfig data.
I'm now using the code below, but I need to run terraform apply twice: the first time the local file is not yet created, so Terraform cannot connect to Kubernetes, Helm, or kubectl; if I run the command a second time it works as expected.
Any solution?
Note: I also tried the alternative shown in the commented-out section of the code, and that did not work either.
File : main.tf
module "deploy_lke" {
  source             = "./modules/linode/kubernetes"
  token              = var.token
  k8s_version        = var.k8s_version
  label              = var.label
  region             = var.region
  tags               = var.tags
  instance_type      = var.instance_type
  number_of_instance = var.number_of_instance
  min                = var.min
  max                = var.max
}

module "deploy_image" {
  source           = "./modules/kubernetes"
  kube_config_path = module.deploy_lke.kubeconfig
  dockerconfigjson = file("./secret/docker-sec.json")
  deploy_name      = var.deploy_name
  desire_replicas  = var.desire_replicas
  image_link       = var.image_link
  image_name       = var.image_name
  image_port       = var.image_port
  ip_type          = var.ip_type
  max_replicas_val = var.max_replicas_val
  min_replicas_val = var.min_replicas_val
  service_name     = var.service_name
}
File : ./module/linode/kubernetes
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "1.29.4"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.0"
    }
  }
}

provider "linode" {
  token = var.token
}

resource "linode_lke_cluster" "gaintplay-web-lke" {
  k8s_version = var.k8s_version
  label       = var.label
  region      = var.region
  tags        = var.tags

  pool {
    type  = var.instance_type
    count = var.number_of_instance

    autoscaler {
      min = var.min
      max = var.max
    }
  }

  lifecycle {
    ignore_changes = [
      pool.0.count
    ]
  }
}

output "kubeconfig" {
  value = linode_lke_cluster.gaintplay-web-lke.kubeconfig
}

output "api_endpoints" {
  value = linode_lke_cluster.gaintplay-web-lke.api_endpoints
}
File : ./module/kubernetes
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.0"
    }
  }
}

resource "local_file" "kube_config_file" {
  content  = var.kube_config_path
  filename = "${path.module}/config"
}

provider "kubernetes" {
  config_path = var.kube_config_path
}

provider "helm" {
  kubernetes {
    config_path = var.kube_config_path
  }
}

resource "kubernetes_secret" "docker_secret" {
  metadata {
    name = "docker-cfg"
  }

  data = {
    ".dockerconfigjson" = var.dockerconfigjson
  }

  type = "kubernetes.io/dockerconfigjson"
}

resource "kubernetes_deployment" "beta" {
  depends_on = [
    kubernetes_secret.docker_secret
  ]

  metadata {
    name      = var.deploy_name
    namespace = "default"
  }

  spec {
    replicas = var.desire_replicas

    selector {
      match_labels = {
        app = var.deploy_name
      }
    }

    template {
      metadata {
        labels = {
          app = var.deploy_name
        }
      }

      spec {
        image_pull_secrets {
          name = kubernetes_secret.docker_secret.metadata[0].name
        }

        container {
          image_pull_policy = "Always"
          image             = var.image_link
          name              = var.image_name

          port {
            container_port = var.image_port
          }
        }
      }
    }
  }
}

# provider "kubernetes" {
#   host                   = "${yamldecode(var.kube_config_path).clusters.0.cluster.server}"
#   client_certificate     = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-certificate-data)}"
#   client_key             = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-key-data)}"
#   cluster_ca_certificate = "${base64decode(yamldecode(var.kube_config_path).clusters.0.cluster.certificate-authority-data)}"
# }

# provider "helm" {
#   kubernetes {
#     host                   = "${yamldecode(var.kube_config_path).clusters.0.cluster.server}"
#     client_certificate     = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-certificate-data)}"
#     client_key             = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-key-data)}"
#     cluster_ca_certificate = "${base64decode(yamldecode(var.kube_config_path).clusters.0.cluster.certificate-authority-data)}"
#   }
# }
If I use the code as-is, terraform plan fails with this error saying the file is not found, and I need to run apply twice:
Invalid attribute in provider configuration
with module.deploy_image.provider["registry.terraform.io/hashicorp/kubernetes"],
on modules/kubernetes/main.tf line 13, in provider "kubernetes":
13: provider "kubernetes" {
'config_path' refers to an invalid path: "modules/kubernetes/config": stat modules/kubernetes/config: no such file or directory
And if I use the commented-out code I get errors like this:
│ Error: Unsupported attribute
│
│ on main.tf line 35, in provider "kubernetes":
│ 35: host = "${yamldecode(linode_lke_cluster.gaintplay-web-lke.kubeconfig).clusters.0.cluster.server}"
│
│ Can't access attributes on a primitive-typed value (string).
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 36, in provider "kubernetes":
│ 36: client_certificate = "${base64decode(yamldecode(linode_lke_cluster.gaintplay-web-lke.kubeconfig).users.0.user.client-certificate-data)}"
│
│ Can't access attributes on a primitive-typed value (string).
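One likely cause of the second error (an assumption worth checking against the Linode provider docs) is that the LKE kubeconfig attribute is base64-encoded, so yamldecode sees a plain base64 string rather than a YAML document. A sketch of configuring the provider directly from the decoded kubeconfig, without writing a local file (the token-based auth and key names assume a standard LKE kubeconfig):

locals {
  # Assumes the value passed in (module.deploy_lke.kubeconfig) is base64-encoded YAML.
  kubeconfig = yamldecode(base64decode(var.kube_config_path))
}

provider "kubernetes" {
  host                   = local.kubeconfig.clusters[0].cluster.server
  token                  = local.kubeconfig.users[0].user.token
  cluster_ca_certificate = base64decode(local.kubeconfig.clusters[0].cluster["certificate-authority-data"])
}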
I am attempting to run this module: https://registry.terraform.io/modules/azavea/postgresql-rds/aws/latest Here is the main.tf file created based on the information found there:
provider "aws" {
  region     = "us-east-2"
  access_key = "key_here"
  secret_key = "key_here"
}

module "postgresql_rds" {
  source                    = "github.com/azavea/terraform-aws-postgresql-rds"
  vpc_id                    = "vpc-2470994d"
  instance_type             = "db.t3.micro"
  database_name             = "tyler"
  database_username         = "admin"
  database_password         = "admin1234"
  subnet_group              = "tyler-subnet-1"
  project                   = "Postgres-ts"
  environment               = "Staging"
  alarm_actions             = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
  ok_actions                = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
  insufficient_data_actions = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
  database_identifier       = "jl23kj32sdf"
}
I am getting an error:
Error: Error creating DB Instance: DBSubnetGroupNotFoundFault: DBSubnetGroup 'tyler-subnet-1' not found.
│ status code: 404, request id: a95975dd-5456-444a-8f64-440fc4c1782f
│
│ with module.postgresql_rds.aws_db_instance.postgresql,
│ on .terraform/modules/postgresql_rds/main.tf line 46, in resource "aws_db_instance" "postgresql":
│ 46: resource "aws_db_instance" "postgresql" {
I have tried the example from the page:
subnet_group = aws_db_subnet_group.default.name
I used the default example from the page, under "Usage", i.e. subnet_group = aws_db_subnet_group.default.name. I have also used the subnet ID from AWS. I also assigned a name to the subnet and used that name ("tyler-subnet-1" in the above main.tf). I am getting the same basic error with all three attempted inputs. Is there something I'm not understanding about the information that is being requested here?
Assuming you have a default subnet group, you can just use it:
subnet_group = "default"
If not, you have to create a custom subnet group using aws_db_subnet_group:
resource "aws_db_subnet_group" "default" {
  name       = "my-subnet-group"
  subnet_ids = [<subnet-id-1>, <subnet-id-2>]

  tags = {
    Name = "My DB subnet group"
  }
}
and use the custom group:
subnet_group = aws_db_subnet_group.default.name
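If the subnet IDs aren't known up front, they can also be looked up instead of hard-coded; a sketch using the aws_subnets data source (assuming AWS provider v4+ and that the subnets live in the default VPC, which may not match this setup):

data "aws_vpc" "default" {
  default = true
}

data "aws_subnets" "default" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.default.id]
  }
}

resource "aws_db_subnet_group" "default" {
  name       = "my-subnet-group"
  subnet_ids = data.aws_subnets.default.ids
}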
I have been looking at implementing Kubernetes with Terraform over the past week and I seem to have a lifecycle issue.
While I can make a Kubernetes resource depend on a cluster being spun up, the KUBECONFIG file isn't updated in the middle of the terraform apply.
The Kubernetes resource looks like this:
resource "kubernetes_service" "example" {
  ...
  depends_on = ["digitalocean_kubernetes_cluster.example"]
}

resource "digitalocean_kubernetes_cluster" "example" {
  name    = "example"
  region  = "${var.region}"
  version = "1.12.1-do.2"

  node_pool {
    name       = "woker-pool"
    size       = "s-1vcpu-2gb"
    node_count = 1
  }

  provisioner "local-exec" {
    command = "sh ./get-kubeconfig.sh" // gets KUBECONFIG file from digitalocean API.
    environment = {
      digitalocean_kubernetes_cluster_id   = "${self.id}"
      digitalocean_kubernetes_cluster_name = "${self.name}"
      digitalocean_api_token               = "${var.digitalocean_token}"
    }
  }
}
While I can pull the kubeconfig file down using the API, Terraform won't use this file because the plan is already in motion.
I've seen some examples using ternary operators (resource ? 1 : 0), but I haven't found a workaround for clusters not created with count, other than -target.
Ideally, I'd like to create all of this in one Terraform repo.
It turns out that the digitalocean_kubernetes_cluster resource has an attribute which can be passed to the provider "kubernetes" {} like so:
resource "digitalocean_kubernetes_cluster" "k8s" {
  name    = "k8s"
  region  = "${var.region}"
  version = "1.12.1-do.2"

  node_pool {
    name       = "woker-pool"
    size       = "s-1vcpu-2gb"
    node_count = 1
  }
}

provider "kubernetes" {
  host                   = "${digitalocean_kubernetes_cluster.k8s.endpoint}"
  client_certificate     = "${base64decode(digitalocean_kubernetes_cluster.k8s.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(digitalocean_kubernetes_cluster.k8s.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(digitalocean_kubernetes_cluster.k8s.kube_config.0.cluster_ca_certificate)}"
}
It results in one provider being dependent on the other, and Terraform orders operations accordingly.
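With the provider wired directly to the cluster's own attributes, the original kubernetes_service no longer needs the local-exec kubeconfig step. A minimal sketch (using current provider syntax; the name, selector, and ports below are placeholders, not values from this setup):

resource "kubernetes_service" "example" {
  metadata {
    name = "example"
  }

  spec {
    selector = {
      app = "example"   # hypothetical label on the target pods
    }

    port {
      port        = 80
      target_port = 8080
    }
  }
}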