Terraform: module outputs not being recognised as variables - kubernetes

I think this is just a quick sanity check; maybe my eyes are getting confused. I'm breaking a monolithic Terraform file into modules.
My main.tf calls just two modules: gke, for Google Kubernetes Engine, and storage, which creates a persistent volume on the cluster created previously.
Module gke has an outputs.tf which outputs the following:
output "client_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_certificate}"
sensitive = true
}
output "client_key" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_key}"
sensitive = true
}
output "cluster_ca_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.cluster_ca_certificate}"
sensitive = true
}
output "host" {
value = "${google_container_cluster.kube-cluster.endpoint}"
sensitive = true
}
Then in the main.tf for the storage module, I have:
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
host = "${var.host}"
Then in the root main.tf I have the following:
client_certificate = "${module.gke.client_certificate}"
client_key = "${module.gke.client_key}"
cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
host = "${module.gke.host}"
From what I can see, it looks right. The values for the certs, key and host should be output from the gke module by outputs.tf, picked up by the root main.tf, and then delivered to storage as regular variables.
Have I got it the wrong way around? Or am I just going crazy? Something doesn't seem right.
When I run a plan, I get prompted because the variables aren't being filled.
EDIT:
Adding some additional information, including my code.
If I manually add dummy entries for the variables it's asking for, I get the following error:
Macbook: $ terraform plan
var.client_certificate
Enter a value: 1
var.client_key
Enter a value: 2
var.cluster_ca_certificate
Enter a value: 3
var.host
Enter a value: 4
...
(filtered out usual text)
...
* module.storage.data.google_container_cluster.kube-cluster: 1 error(s) occurred:
* module.storage.data.google_container_cluster.kube-cluster: data.google_container_cluster.kube-cluster: project: required field is not set
It looks like it's complaining that the data.google_container_cluster data source needs the project attribute. But it shouldn't, as far as I can tell: project isn't something I set on that data source, it's a provider setting, and it is already filled out on the provider block.
Code below:
Folder structure:
root-folder/
├── gke/
│ ├── main.tf
│ ├── outputs.tf
│ ├── variables.tf
├── storage/
│ ├── main.tf
│ └── variables.tf
├── main.tf
├── staging.json
├── terraform.tfvars
└── variables.tf
root-folder/gke/main.tf:
provider "google" {
credentials = "${file("staging.json")}"
project = "${var.project}"
region = "${var.region}"
zone = "${var.zone}"
}
resource "google_container_cluster" "kube-cluster" {
name = "kube-cluster"
description = "kube-cluster"
zone = "europe-west2-a"
initial_node_count = "2"
enable_kubernetes_alpha = "false"
enable_legacy_abac = "true"
master_auth {
username = "${var.username}"
password = "${var.password}"
}
node_config {
machine_type = "n1-standard-2"
disk_size_gb = "20"
oauth_scopes = [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring"
]
}
}
root-folder/gke/outputs.tf:
output "client_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_certificate}"
sensitive = true
}
output "client_key" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_key}"
sensitive = true
}
output "cluster_ca_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.cluster_ca_certificate}"
sensitive = true
}
output "host" {
value = "${google_container_cluster.kube-cluster.endpoint}"
sensitive = true
}
root-folder/gke/variables.tf:
variable "region" {
description = "GCP region, e.g. europe-west2"
default = "europe-west2"
}
variable "zone" {
description = "GCP zone, e.g. europe-west2-a (which must be in gcp_region)"
default = "europe-west2-a"
}
variable "project" {
description = "GCP project name"
}
variable "username" {
description = "Default admin username"
}
variable "password" {
description = "Default admin password"
}
/root-folder/storage/main.tf:
provider "kubernetes" {
host = "${var.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
}
data "google_container_cluster" "kube-cluster" {
name = "${var.cluster_name}"
zone = "${var.zone}"
}
resource "kubernetes_storage_class" "kube-storage-class" {
metadata {
name = "kube-storage-class"
}
storage_provisioner = "kubernetes.io/gce-pd"
parameters {
type = "pd-standard"
}
}
resource "kubernetes_persistent_volume_claim" "kube-claim" {
metadata {
name = "kube-claim"
}
spec {
access_modes = ["ReadWriteOnce"]
storage_class_name = "kube-storage-class"
resources {
requests {
storage = "10Gi"
}
}
}
}
/root-folder/storage/variables.tf:
variable "username" {
description = "Default admin username."
}
variable "password" {
description = "Default admin password."
}
variable "client_certificate" {
description = "Client certificate, output from the GKE/Provider module."
}
variable "client_key" {
description = "Client key, output from the GKE/Provider module."
}
variable "cluster_ca_certificate" {
description = "Cluster CA Certificate, output from the GKE/Provider module."
}
variable "cluster_name" {
description = "Cluster name."
}
variable "zone" {
description = "GCP Zone"
}
variable "host" {
description = "Host endpoint, output from the GKE/Provider module."
}
/root-folder/main.tf:
module "gke" {
source = "./gke"
project = "${var.project}"
region = "${var.region}"
username = "${var.username}"
password = "${var.password}"
}
module "storage" {
source = "./storage"
host = "${module.gke.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${module.gke.client_certificate}"
client_key = "${module.gke.client_key}"
cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
cluster_name = "${var.cluster_name}"
zone = "${var.zone}"
}
/root-folder/variables.tf:
variable "project" {}
variable "region" {}
variable "username" {}
variable "password" {}
variable "gc_disk_size" {}
variable "kpv_vol_size" {}
variable "host" {}
variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}
variable "cluster_name" {}
variable "zone" {}
I won't paste the contents of my staging.json and terraform.tfvars for obvious reasons :)

In your /root-folder/variables.tf, delete the following entries:
variable "host" {}
variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}
Those are not variables that the root-level Terraform code needs at all. Instead, those values are passed directly from one module's outputs to the second module's inputs.
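After deleting them, the root variables.tf only needs to declare the values that are actually supplied via terraform.tfvars, e.g.:
variable "project" {}
variable "region" {}
variable "username" {}
variable "password" {}
variable "gc_disk_size" {}
variable "kpv_vol_size" {}
variable "cluster_name" {}
variable "zone" {}
Terraform should then stop prompting for client_certificate, client_key, cluster_ca_certificate and host, because those are resolved entirely inside the root main.tf from the gke module's outputs.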

Related

Cannot provide RDS subnet through different terraform modules

I am unable to create an RDS instance due to a failure in creating a subnet. I have different modules that I use to create an AWS infrastructure.
The main ones that I am having trouble with are RDS and VPC; in the first one I create the database:
rds/main.tf
resource "aws_db_parameter_group" "education" {
name = "education"
family = "postgres14"
parameter {
name = "log_connections"
value = "1"
}
}
resource "aws_db_instance" "education" {
identifier = "education"
instance_class = "db.t3.micro"
allocated_storage = 5
engine = "postgres"
engine_version = "14.1"
username = "edu"
password = var.db_password
db_subnet_group_name = var.database_subnets
vpc_security_group_ids = var.rds_service_security_groups
parameter_group_name = aws_db_parameter_group.education.name
publicly_accessible = false
skip_final_snapshot = true
}
rds/variables.tf
variable "db_username" {
description = "RDS root username"
default = "someusername"
}
variable "db_password" {
description = "RDS root user password"
sensitive = true
}
variable "vpc_id" {
description = "VPC ID"
}
variable "rds_service_security_groups" {
description = "Comma separated list of security groups"
}
variable "database_subnets" {
description = "List of private subnets"
}
And the latter is where I create the subnets, etc.
vpc/main.tf
resource "aws_subnet" "private" {
vpc_id = aws_vpc.main.id
cidr_block = element(var.private_subnets, count.index)
availability_zone = element(var.availability_zones, count.index)
count = length(var.private_subnets)
tags = {
Name = "${var.name}-private-subnet-${var.environment}-${format("%03d", count.index+1)}"
Environment = var.environment
}
}
resource "aws_subnet" "public" {
vpc_id = aws_vpc.main.id
cidr_block = element(var.public_subnets, count.index)
availability_zone = element(var.availability_zones, count.index)
count = length(var.public_subnets)
map_public_ip_on_launch = true
tags = {
Name = "${var.name}-public-subnet-${var.environment}-${format("%03d", count.index+1)}"
Environment = var.environment
}
}
resource "aws_subnet" "database" {
vpc_id = aws_vpc.main.id
cidr_block = element(var.database_subnets, count.index)
availability_zone = element(var.availability_zones, count.index)
count = length(var.database_subnets)
tags = {
Name = "Education"
Environment = var.environment
}
}
vpc/variables.tf
variable "name" {
description = "the name of the stack"
}
variable "environment" {
description = "the name of the environment "
}
variable "cidr" {
description = "The CIDR block for the VPC."
}
variable "public_subnets" {
description = "List of public subnets"
}
variable "private_subnets" {
description = "List of private subnets"
}
variable "database_subnets" {
description = "Database subnetes"
}
variable "availability_zones" {
description = "List of availability zones"
}
Then in the root directory I have a main.tf file where I create everything. In there I call the rds module:
main.tf
module "rds" {
source = "./rds"
vpc_id = module.vpc.id
database_subnets = module.vpc.database_subnets
rds_service_security_groups = [module.security_groups.rds]
db_password = var.db_password
}
The error that I keep getting is this:
Error: Incorrect attribute value type
│
│ on rds\main.tf line 19, in resource "aws_db_instance" "education":
│ 19: db_subnet_group_name = var.database_subnets
│ ├────────────────
│ │ var.database_subnets is tuple with 2 elements
│
│ Inappropriate value for attribute "db_subnet_group_name": string required.
Any idea how I can fix it?
You are trying to pass a list of DB Subnets into a parameter that takes a DB Subnet Group name.
You need to modify your RDS module to create a DB Subnet Group with the given subnet IDs, and then pass that group name to the instance:
resource "aws_db_subnet_group" "education" {
name = "education"
subnet_ids = var.database_subnets
}
resource "aws_db_instance" "education" {
identifier = "education"
db_subnet_group_name = aws_db_subnet_group.education.name
...
}
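For this to work, the module.vpc.database_subnets value referenced in the root main.tf has to be a list of subnet IDs, so the VPC module needs a matching output. A minimal sketch, assuming the output keeps the name used above (e.g. in a vpc/outputs.tf):
output "database_subnets" {
  # IDs of the database subnets created in vpc/main.tf
  value = aws_subnet.database[*].id
}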

How to use terraform output to deploy an image inside a kubernetes cluster in a modular approach?

Well, I'm new to Terraform and also to Kubernetes, and I faced an issue deploying images after creating a Kubernetes cluster.
I have created a module that creates a Kubernetes cluster and provides an output for the kubeconfig data.
Now I'm using the code below, but I need to run terraform apply twice: the first time the local file is not created and Terraform cannot connect to kubernetes, helm or kubectl, but if I run the command a second time it works as expected.
Any solution?
Note: I also tried the solution shown in the commented-out section of the code, and that did not work either.
File : main.tf
module "deploy_lke" {
source = "./modules/linode/kubernetes"
token = var.token
k8s_version = var.k8s_version
label = var.label
region = var.region
tags = var.tags
instance_type = var.instance_type
number_of_instance = var.number_of_instance
min = var.min
max = var.max
}
module "deploy_image" {
source = "./modules/kubernetes"
kube_config_path = module.deploy_lke.kubeconfig
dockerconfigjson = file("./secret/docker-sec.json")
deploy_name = var.deploy_name
desire_replicas = var.desire_replicas
image_link = var.image_link
image_name = var.image_name
image_port = var.image_port
ip_type = var.ip_type
max_replicas_val = var.max_replicas_val
min_replicas_val = var.min_replicas_val
service_name = var.service_name
}
File : ./module/linode/kubernetes
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "1.29.4"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.0"
    }
  }
}
provider "linode" {
  token = var.token
}
resource "linode_lke_cluster" "gaintplay-web-lke" {
  k8s_version = var.k8s_version
  label       = var.label
  region      = var.region
  tags        = var.tags
  pool {
    type  = var.instance_type
    count = var.number_of_instance
    autoscaler {
      min = var.min
      max = var.max
    }
  }
  lifecycle {
    ignore_changes = [
      pool.0.count
    ]
  }
}
output "kubeconfig" {
  value = linode_lke_cluster.gaintplay-web-lke.kubeconfig
}
output "api_endpoints" {
  value = linode_lke_cluster.gaintplay-web-lke.api_endpoints
}
File : ./module/kubernetes
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.0"
    }
  }
}
resource "local_file" "kube_config_file" {
  content  = var.kube_config_path
  filename = "${path.module}/config"
}
provider "kubernetes" {
  config_path = var.kube_config_path
}
provider "helm" {
  kubernetes {
    config_path = var.kube_config_path
  }
}
resource "kubernetes_secret" "docker_secret" {
  metadata {
    name = "docker-cfg"
  }
  data = {
    ".dockerconfigjson" = var.dockerconfigjson
  }
  type = "kubernetes.io/dockerconfigjson"
}
resource "kubernetes_deployment" "beta" {
  depends_on = [
    kubernetes_secret.docker_secret
  ]
  metadata {
    name      = var.deploy_name
    namespace = "default"
  }
  spec {
    replicas = var.desire_replicas
    selector {
      match_labels = {
        app = var.deploy_name
      }
    }
    template {
      metadata {
        labels = {
          app = var.deploy_name
        }
      }
      spec {
        image_pull_secrets {
          name = kubernetes_secret.docker_secret.metadata[0].name
        }
        container {
          image_pull_policy = "Always"
          image             = var.image_link
          name              = var.image_name
          port {
            container_port = var.image_port
          }
        }
      }
    }
  }
}
# provider "kubernetes" {
#   host                   = "${yamldecode(var.kube_config_path).clusters.0.cluster.server}"
#   client_certificate     = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-certificate-data)}"
#   client_key             = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-key-data)}"
#   cluster_ca_certificate = "${base64decode(yamldecode(var.kube_config_path).clusters.0.cluster.certificate-authority-data)}"
# }
# provider "helm" {
#   kubernetes {
#     host                   = "${yamldecode(var.kube_config_path).clusters.0.cluster.server}"
#     client_certificate     = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-certificate-data)}"
#     client_key             = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-key-data)}"
#     cluster_ca_certificate = "${base64decode(yamldecode(var.kube_config_path).clusters.0.cluster.certificate-authority-data)}"
#   }
# }
If I use the code as it is, I get this error in terraform plan saying the file is not found, and I need to run it twice:
Invalid attribute in provider configuration
with module.deploy_image.provider["registry.terraform.io/hashicorp/kubernetes"],
on modules/kubernetes/main.tf line 13, in provider "kubernetes":
13: provider "kubernetes" {
'config_path' refers to an invalid path: "modules/kubernetes/config": stat modules/kubernetes/config: no such file or directory
and
If I use the commented code, I get an error like this:
│ Error: Unsupported attribute
│
│ on main.tf line 35, in provider "kubernetes":
│ 35: host = "${yamldecode(linode_lke_cluster.gaintplay-web-lke.kubeconfig).clusters.0.cluster.server}"
│
│ Can't access attributes on a primitive-typed value (string).
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 36, in provider "kubernetes":
│ 36: client_certificate = "${base64decode(yamldecode(linode_lke_cluster.gaintplay-web-lke.kubeconfig).users.0.user.client-certificate-data)}"
│
│ Can't access attributes on a primitive-typed value (string).

Failed to retrieve SA token using Terraform

I need to retrieve an SA token using an output in my pipeline. I found a solution here:
Retrieve token data from Kubernetes Service Account in Terraform
but it still doesn't work and I get this error:
│ Error: Invalid function argument
│
│ on access.tf line 51, in output "deploy_user_token":
│ 51: value = lookup(data.kubernetes_secret.deploy_user_secret.data, "token")
│ ├────────────────
│ │ data.kubernetes_secret.deploy_user_secret.data has a sensitive value
│
│ Invalid value for "inputMap" parameter: argument must not be null.
My code:
resource "kubernetes_service_account" "deploy_user" {
depends_on = [kubernetes_namespace.namespace]
metadata {
name = "deploy-user"
namespace = var.namespace
}
}
resource "kubernetes_role" "deploy_user_full_access" {
metadata {
name = "deploy-user-full-access"
namespace = var.namespace
}
rule {
api_groups = ["", "extensions", "apps", "networking.istio.io"]
resources = ["*"]
verbs = ["*"]
}
rule {
api_groups = ["batch"]
resources = ["jobs", "cronjobs"]
verbs = ["*"]
}
}
resource "kubernetes_role_binding" "deploy_user_view" {
metadata {
name = "deploy-user-view"
namespace = var.namespace
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "Role"
name = kubernetes_role.deploy_user_full_access.metadata.0.name
}
subject {
kind = "ServiceAccount"
name = kubernetes_service_account.deploy_user.metadata.0.name
namespace = var.namespace
}
}
data "kubernetes_secret" "deploy_user_secret" {
metadata {
name = kubernetes_service_account.deploy_user.default_secret_name
}
}
output "deploy_user_token" {
value = lookup(data.kubernetes_secret.deploy_user_secret.data, "token")
}
Does someone have an idea of what I'm doing wrong?
Thanks!
It seems that you're missing the namespace declaration on your data object; you need it to look like this:
data "kubernetes_secret" "deploy_user_secret" {
metadata {
name = kubernetes_service_account.deploy_user.default_secret_name
namespace = var.namespace
}
}
You also need to set sensitive = true on your output:
output "deploy_user_token" {
sensitive = true
value = lookup(data.kubernetes_secret.deploy_user_secret.data, "token")
}

Issue creating an RDS PostgreSQL instance with the AWS Terraform module

I am attempting to run this module: https://registry.terraform.io/modules/azavea/postgresql-rds/aws/latest Here is the main.tf file created based on the information found there:
provider "aws" {
region = "us-east-2"
access_key = "key_here"
secret_key = "key_here"
}
module postgresql_rds {
source = "github.com/azavea/terraform-aws-postgresql-rds"
vpc_id = "vpc-2470994d"
instance_type = "db.t3.micro"
database_name = "tyler"
database_username = "admin"
database_password = "admin1234"
subnet_group = "tyler-subnet-1"
project = "Postgres-ts"
environment = "Staging"
alarm_actions = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
ok_actions = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
insufficient_data_actions = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
database_identifier = "jl23kj32sdf"
}
I am getting an error:
Error: Error creating DB Instance: DBSubnetGroupNotFoundFault: DBSubnetGroup 'tyler-subnet-1' not found.
│ status code: 404, request id: a95975dd-5456-444a-8f64-440fc4c1782f
│
│ with module.postgresql_rds.aws_db_instance.postgresql,
│ on .terraform/modules/postgresql_rds/main.tf line 46, in resource "aws_db_instance" "postgresql":
│ 46: resource "aws_db_instance" "postgresql" {
I have tried the example from the page:
subnet_group = aws_db_subnet_group.default.name
I used the default example from the page, under "Usage", i.e. subnet_group = aws_db_subnet_group.default.name. I have also used the subnet ID from AWS. I also assigned a name to the subnet and used that name ("tyler-subnet-1" in the above main.tf). I am getting the same basic error with all three attempted inputs. Is there something I'm not understanding about the information that is being requested here?
Assuming you have a default subnet group, you can just use it:
subnet_group = "default"
If not, you have to create a custom subnet group using aws_db_subnet_group:
resource "aws_db_subnet_group" "default" {
name = "my-subnet-group"
subnet_ids = [<subnet-id-1>, <subnet-id-2>]
tags = {
Name = "My DB subnet group"
}
}
and use the custom group:
subnet_group = aws_db_subnet_group.default.name
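If the subnet IDs are not known up front, one option (an illustrative sketch, assuming AWS provider v4 or later where the aws_subnets data source is available) is to look them up from the VPC referenced in the module call and feed them into the group:
data "aws_subnets" "rds" {
  filter {
    name   = "vpc-id"
    values = ["vpc-2470994d"]
  }
}
resource "aws_db_subnet_group" "default" {
  name       = "my-subnet-group"
  subnet_ids = data.aws_subnets.rds.ids
}
Keep in mind that RDS requires the subnet group to span at least two availability zones.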

Terraform: GKE Cluster with Nodes in different zones

I have this Terraform GKE cluster with 3 nodes. When I deploy this cluster, all nodes are located in the same zone, which is europe-west1-b.
gke-cluster.yml
resource "google_container_cluster" "primary" {
name = var.cluster_name
initial_node_count = var.initial_node_count
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
node_config {
//machine_type = "e2-medium"
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
metadata = {
disable-legacy-endpoints = "true"
}
labels = {
app = var.app_name
}
tags = ["app", var.app_name]
}
timeouts {
create = "30m"
update = "40m"
}
}
variables.tf
variable "cluster_name" {
default = "cluster"
}
variable "app_name" {
default = "my-app"
}
variable "initial_node_count" {
default = 3
}
variable "kubernetes_min_ver" {
default = "latest"
}
variable "kubernetes_max_ver" {
default = "latest"
}
variable "remove_default_node_pool" {
default = false
}
variable "project" {
default = "your-project-name"
}
variable "credentials" {
default = "terraform-key.json"
}
variable "region" {
default = "europe-west1"
}
variable "zone" {
type = list(string)
description = "The zones to host the cluster in."
default = ["europe-west1-b", "europe-west1-c", "europe-west1-d"]
}
And I would like to know if it's possible to deploy each node in a different zone.
If yes, how can I do it using Terraform?
Simply add a location argument in order to create a regional cluster:
resource "google_container_cluster" "primary" {
  name               = "cluster"
  location           = "us-central1"
  initial_node_count = "3"
  ...
}
The above will bring up 9 nodes, with each zone (a, b, f) containing 3 nodes. If you only want 1 node per zone, just change initial_node_count to 1.
More info here at Argument reference.
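If the intent is to pin the nodes to the specific zones already defined in the question's zone variable, rather than letting GKE pick them, a minimal sketch (assuming a reasonably recent google provider that supports the node_locations argument) would be:
resource "google_container_cluster" "primary" {
  name               = var.cluster_name
  location           = var.region              # regional cluster, e.g. europe-west1
  node_locations     = var.zone                # ["europe-west1-b", "europe-west1-c", "europe-west1-d"]
  initial_node_count = var.initial_node_count  # node count per zone listed in node_locations
  ...
}
With initial_node_count = 1 this gives one node in each of the three listed zones.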