Kubernetes Manifest Terraform - kubernetes

I am trying to create a Kubernetes Ingress object with the kubernetes_manifest terraform resource. It is throwing the following error:
│ Error: Failed to morph manifest to OAPI type
│
│ with module.services.module.portal.module.appmesh.kubernetes_manifest.service_ingress_object,
│ on .terraform/modules/services.portal.appmesh/kubernetes_manifest.tf line 104, in resource "kubernetes_manifest" "service_ingress_object":
│ 104: resource "kubernetes_manifest" "service_ingress_object" {
│
│ AttributeName("spec"): [AttributeName("spec")] failed to morph object element into object element: AttributeName("spec").AttributeName("rules"): [AttributeName("spec").AttributeName("rules")] failed to
│ morph object element into object element: AttributeName("spec").AttributeName("rules"): [AttributeName("spec").AttributeName("rules")] unsupported morph of object value into type:
│ tftypes.List[tftypes.Object["host":tftypes.String, "http":tftypes.Object["paths":tftypes.List[tftypes.Object["backend":tftypes.Object["resource":tftypes.Object["apiGroup":tftypes.String,
│ "kind":tftypes.String, "name":tftypes.String], "serviceName":tftypes.String, "servicePort":tftypes.DynamicPseudoType], "path":tftypes.String, "pathType":tftypes.String]]]]]
My code is:
resource "kubernetes_manifest" "service_ingress_object" {
manifest = {
"apiVersion" = "networking.k8s.io/v1beta1"
"kind" = "Ingress"
"metadata" = {
"name" = "${var.service_name}-ingress"
"namespace" = "${var.kubernetes_namespace}"
"annotations" = {
"alb.ingress.kubernetes.io/actions.ssl-redirect" = "{'Type': 'redirect', 'RedirectConfig': { 'Protocol': 'HTTPS', 'Port': '443', 'StatusCode': 'HTTP_301'}}"
"alb.ingress.kubernetes.io/listen-ports" = "[{'HTTP': 80}, {'HTTPS':443}]"
"alb.ingress.kubernetes.io/certificate-arn" = "${data.aws_acm_certificate.enivronment_default_issued.arn}"
"alb.ingress.kubernetes.io/scheme" = "internal"
"alb.ingress.kubernetes.io/target-type" = "instance"
"kubernetes.io/ingress.class" = "alb"
}
}
"spec" = {
"rules" = {
"host" = "${aws_route53_record.service_dns.fqdn}"
"http" = {
"paths" = {
"backend" = {
"serviceName" = "${var.service_name}-svc"
"servicePort" = "${var.service_port}"
}
"path" = "/*"
}
}
}
}
}
}
I have tried adding brackets to the "spec" field, however when I do that, I just get the following error:
│ Error: Missing item separator
│
│ on .terraform/modules/services.portal.appmesh/kubernetes_manifest.tf line 121, in resource "kubernetes_manifest" "service_ingress_object":
│ 120: "spec" = {[
│ 121: "rules" = {
│
│ Expected a comma to mark the beginning of the next item.
Once I get that error, I have tried adding commas under "spec", but it just keeps throwing the same error.

I figured it out. You need to add the opening bracket ("[") before the "{", because "rules" and "paths" are lists of objects in the Ingress schema, so each of them has to be wrapped in [ ... ]. The code now looks like this:
resource "kubernetes_manifest" "service_ingress_object" {
manifest = {
"apiVersion" = "networking.k8s.io/v1beta1"
"kind" = "Ingress"
"metadata" = {
"name" = "${var.service_name}-ingress"
"namespace" = "${var.kubernetes_namespace}"
"annotations" = {
"alb.ingress.kubernetes.io/actions.ssl-redirect" = "{'Type': 'redirect', 'RedirectConfig': { 'Protocol': 'HTTPS', 'Port': '443', 'StatusCode': 'HTTP_301'}}"
"alb.ingress.kubernetes.io/listen-ports" = "[{'HTTP': 80}, {'HTTPS':443}]"
"alb.ingress.kubernetes.io/certificate-arn" = "${data.aws_acm_certificate.enivronment_default_issued.arn}"
"alb.ingress.kubernetes.io/scheme" = "internal"
"alb.ingress.kubernetes.io/target-type" = "instance"
"kubernetes.io/ingress.class" = "alb"
}
}
"spec" = {
"rules" = [{
"host" = "${aws_route53_record.service_dns.fqdn}"
"http" = {
"paths" = [{
"backend" = {
"serviceName" = "${var.service_name}-svc"
"servicePort" = "${var.service_port}"
}
"path" = "/*"
}]
}
}]
}
}
}
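As a side note, the networking.k8s.io/v1beta1 Ingress API was removed in Kubernetes 1.22. On a newer cluster the same manifest would look roughly like the sketch below, keeping the metadata and annotations from above; in v1 the backend moves under a "service" object and "pathType" is required:
resource "kubernetes_manifest" "service_ingress_object" {
  manifest = {
    "apiVersion" = "networking.k8s.io/v1"
    "kind" = "Ingress"
    "metadata" = {
      "name" = "${var.service_name}-ingress"
      "namespace" = var.kubernetes_namespace
      # annotations unchanged from the example above
    }
    "spec" = {
      "rules" = [{
        "host" = aws_route53_record.service_dns.fqdn
        "http" = {
          "paths" = [{
            "path" = "/"          # v1 expresses "/*"-style matching via pathType
            "pathType" = "Prefix"
            "backend" = {
              "service" = {
                "name" = "${var.service_name}-svc"
                "port" = { "number" = var.service_port }
              }
            }
          }]
        }
      }]
    }
  }
}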

Alternatively, Kubernetes YAML manifests can easily be transformed to HCL with this handy CLI tool:
https://github.com/jrhouston/tfk8s
Install the tool:
go install github.com/jrhouston/tfk8s@latest
and transform your Kubernetes YAML manifest:
tfk8s -f yaml_manifest.yaml
This will print the HCL to stdout.
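For example, for a simple ConfigMap manifest the output looks roughly like the following (a sketch: the resource label tfk8s generates and the key ordering may differ):
resource "kubernetes_manifest" "configmap_example" {
  manifest = {
    "apiVersion" = "v1"
    "kind" = "ConfigMap"
    "metadata" = {
      "name" = "example"
      "namespace" = "default"
    }
    "data" = {
      "foo" = "bar"
    }
  }
}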

Related

How to use terraform output to deploy an image inside a Kubernetes cluster in a modular approach?

Well, I'm new to Terraform and also to Kubernetes, and I faced an issue deploying images after creating a Kubernetes cluster.
I have created a module that creates a Kubernetes cluster and provides an output for the kubeconfig data.
Now I'm using the code below, but I need to run terraform apply two times: the first time the local file is not created yet, so Terraform cannot connect to kubernetes, helm, or kubectl, but if I run the command a second time it works as expected.
Any solution?
Note: I also applied the solution shown in the commented section of the code, and that did not work either.
File : main.tf
module "deploy_lke" {
source = "./modules/linode/kubernetes"
token = var.token
k8s_version = var.k8s_version
label = var.label
region = var.region
tags = var.tags
instance_type = var.instance_type
number_of_instance = var.number_of_instance
min = var.min
max = var.max
}
module "deploy_image" {
source = "./modules/kubernetes"
kube_config_path = module.deploy_lke.kubeconfig
dockerconfigjson = file("./secret/docker-sec.json")
deploy_name = var.deploy_name
desire_replicas = var.desire_replicas
image_link = var.image_link
image_name = var.image_name
image_port = var.image_port
ip_type = var.ip_type
max_replicas_val = var.max_replicas_val
min_replicas_val = var.min_replicas_val
service_name = var.service_name
}
File : ./module/linode/kubernetes
terraform {
  required_providers {
    linode = {
      source = "linode/linode"
      version = "1.29.4"
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = ">= 2.0.0"
    }
  }
}

provider "linode" {
  token = var.token
}

resource "linode_lke_cluster" "gaintplay-web-lke" {
  k8s_version = var.k8s_version
  label = var.label
  region = var.region
  tags = var.tags

  pool {
    type = var.instance_type
    count = var.number_of_instance

    autoscaler {
      min = var.min
      max = var.max
    }
  }

  lifecycle {
    ignore_changes = [
      pool.0.count
    ]
  }
}

output "kubeconfig" {
  value = linode_lke_cluster.gaintplay-web-lke.kubeconfig
}

output "api_endpoints" {
  value = linode_lke_cluster.gaintplay-web-lke.api_endpoints
}
File : ./module/kubernetes
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = ">= 2.0.0"
    }
  }
}

resource "local_file" "kube_config_file" {
  content = var.kube_config_path
  filename = "${path.module}/config"
}

provider "kubernetes" {
  config_path = var.kube_config_path
}

provider "helm" {
  kubernetes {
    config_path = var.kube_config_path
  }
}

resource "kubernetes_secret" "docker_secret" {
  metadata {
    name = "docker-cfg"
  }
  data = {
    ".dockerconfigjson" = var.dockerconfigjson
  }
  type = "kubernetes.io/dockerconfigjson"
}

resource "kubernetes_deployment" "beta" {
  depends_on = [
    kubernetes_secret.docker_secret
  ]
  metadata {
    name = var.deploy_name
    namespace = "default"
  }
  spec {
    replicas = var.desire_replicas
    selector {
      match_labels = {
        app = var.deploy_name
      }
    }
    template {
      metadata {
        labels = {
          app = var.deploy_name
        }
      }
      spec {
        image_pull_secrets {
          name = kubernetes_secret.docker_secret.metadata[0].name
        }
        container {
          image_pull_policy = "Always"
          image = var.image_link
          name = var.image_name
          port {
            container_port = var.image_port
          }
        }
      }
    }
  }
}

# provider "kubernetes" {
#   host = "${yamldecode(var.kube_config_path).clusters.0.cluster.server}"
#   client_certificate = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-certificate-data)}"
#   client_key = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-key-data)}"
#   cluster_ca_certificate = "${base64decode(yamldecode(var.kube_config_path).clusters.0.cluster.certificate-authority-data)}"
# }
# provider "helm" {
#   kubernetes {
#     host = "${yamldecode(var.kube_config_path).clusters.0.cluster.server}"
#     client_certificate = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-certificate-data)}"
#     client_key = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-key-data)}"
#     cluster_ca_certificate = "${base64decode(yamldecode(var.kube_config_path).clusters.0.cluster.certificate-authority-data)}"
#   }
# }
If I use the code as it is, terraform plan fails with this error saying the file is not found, and I need to run it twice:
Invalid attribute in provider configuration
with module.deploy_image.provider["registry.terraform.io/hashicorp/kubernetes"],
on modules/kubernetes/main.tf line 13, in provider "kubernetes":
13: provider "kubernetes" {
'config_path' refers to an invalid path: "modules/kubernetes/config": stat modules/kubernetes/config: no such file or directory
And if I use the commented code, I get an error like this:
│ Error: Unsupported attribute
│
│ on main.tf line 35, in provider "kubernetes":
│ 35: host = "${yamldecode(linode_lke_cluster.gaintplay-web-lke.kubeconfig).clusters.0.cluster.server}"
│
│ Can't access attributes on a primitive-typed value (string).
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 36, in provider "kubernetes":
│ 36: client_certificate = "${base64decode(yamldecode(linode_lke_cluster.gaintplay-web-lke.kubeconfig).users.0.user.client-certificate-data)}"
│
│ Can't access attributes on a primitive-typed value (string).
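One way to avoid the two-pass apply is to configure the kubernetes and helm providers directly from the module output instead of writing it to a local file first. A minimal sketch, assuming the LKE kubeconfig attribute is base64-encoded YAML (which would also explain the "primitive-typed value" error when yamldecode is given the raw string) and that the kubeconfig uses client certificates; adjust the attribute names to whatever your kubeconfig actually contains:
locals {
  # Assumption: the kubeconfig output is base64-encoded, so decode it before parsing.
  kubeconfig = yamldecode(base64decode(module.deploy_lke.kubeconfig))
}

provider "kubernetes" {
  host                   = local.kubeconfig.clusters[0].cluster.server
  client_certificate     = base64decode(local.kubeconfig.users[0].user["client-certificate-data"])
  client_key             = base64decode(local.kubeconfig.users[0].user["client-key-data"])
  cluster_ca_certificate = base64decode(local.kubeconfig.clusters[0].cluster["certificate-authority-data"])
}

provider "helm" {
  kubernetes {
    host                   = local.kubeconfig.clusters[0].cluster.server
    client_certificate     = base64decode(local.kubeconfig.users[0].user["client-certificate-data"])
    client_key             = base64decode(local.kubeconfig.users[0].user["client-key-data"])
    cluster_ca_certificate = base64decode(local.kubeconfig.clusters[0].cluster["certificate-authority-data"])
  }
}
This removes the dependency on a file that only exists after the first apply, which is what forces the second run.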

Error when adding tags to Snowflake resource (role)

I am using:
Terraform v1.2.9
on windows_amd64
provider registry.terraform.io/snowflake-labs/snowflake v0.42.1
My main.tf file is:
terraform {
  required_version = ">= 1.1.7"

  backend "http" {
  }

  required_providers {
    snowflake = {
      source = "Snowflake-Labs/snowflake"
      version = "~> 0.42"
    }
  }
}

provider "snowflake" {
  username = "xxxxxx"
  account = "yyyyyy"
  region = "canada-central.azure"
}
When I add the following tags to a Snowflake role, I get an error. Can you help?
resource "snowflake_role" "operations_manager" {
name = "OPERATIONS_MANAGER"
comment = "A comment"
tag = {
managed-with = "Terraform",
owner = "Joe Smith"
}
}
Error: Unsupported argument
│
│ on functional_roles.tf line 35, in resource "snowflake_role" "operations_manager":
│ 35: tag = {
│
│ An argument named "tag" is not expected here. Did you mean to define a block of type "tag"?
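The error message hints at the fix: in this provider version tag is a block, not a map argument. A sketch of what that could look like, assuming the tag objects already exist (for example created with snowflake_tag) and that the block takes name and value plus the database and schema where the tag lives; check the documentation for your exact provider version:
resource "snowflake_role" "operations_manager" {
  name    = "OPERATIONS_MANAGER"
  comment = "A comment"

  # Hypothetical tag objects and locations; adjust to your account.
  tag {
    name     = "managed-with"
    value    = "Terraform"
    database = "MY_DB"
    schema   = "MY_SCHEMA"
  }

  tag {
    name     = "owner"
    value    = "Joe Smith"
    database = "MY_DB"
    schema   = "MY_SCHEMA"
  }
}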

Terraform - kubernetes - create spec env_from if variable exists

I am trying to create resources based on variables.
variable.tf
variable "apps" {
default = null
type = map(object({
name = string
type = string
secrets = optional(map(string))
}))
}
terraform.tfvars
apps = {
  "myfirst" = {
    name = "myfirst"
    type = "deploy"
    secrets = {
      "FIRST_VAR" = "TestVariable",
      "SECOND_VAR" = "SecontTestVariable",
      "THIRD" = "NothingHere"
    }
  },
  "second" = {
    name = "second"
    type = "deploy"
    secrets = {
      "SECRET_VAR" = "SecretVar"
    }
  },
  "simlepod" = {
    name = "simplepod"
    type = "deploy"
  },
  "another" = {
    name = "another"
    type = "pod"
  }
}
And my main.tf
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "2.9.0"
    }
  }
  experiments = [module_variable_optional_attrs]
}

provider "kubernetes" {
  config_path = "~/.kube/config"
  # Configuration options
}

resource "kubernetes_secret" "secret" {
  for_each = { for k in compact([for k, v in var.apps : v.secrets != null ? k : ""]) : k => var.apps[k] }
  metadata {
    name = "${each.value.name}-secret"
  }
  data = each.value["secrets"]
}

resource "kubernetes_pod" "test" {
  for_each = { for k in compact([for k, v in var.apps : v.type == "deploy" ? k : ""]) : k => var.apps[k] }
  metadata {
    name = "app-${each.value.name}"
  }
  spec {
    container {
      image = "nginx:1.21.6"
      name = "test-${each.value.name}"
      env_from {
        secret_ref {
          name = kubernetes_secret.secret[each.value.name].metadata[0].name
        }
      }
      resources {
        limits = {
          cpu = "0.5"
          memory = "512Mi"
        }
        requests = {
          cpu = "250m"
          memory = "50Mi"
        }
      }
    }
  }
  timeouts {
    create = "60s"
  }
}
And this produces an error because not all objects in apps have the secrets attribute.
terraform plan
╷
│ Warning: Experimental feature "module_variable_optional_attrs" is active
│
│ on main.tf line 8, in terraform:
│ 8: experiments = [module_variable_optional_attrs]
│
│ Experimental features are subject to breaking changes in future minor or patch
│ releases, based on feedback.
│
│ If you have feedback on the design of this feature, please open a GitHub issue to
│ discuss it.
╵
╷
│ Error: Unsupported block type
│
│ on main.tf line 68, in resource "kubernetes_pod" "test":
│ 68: dynamic "secret" {
│
│ Blocks of type "secret" are not expected here.
How can I use an expression to create env_from only when the object has a secrets attribute?
I've found a solution:
...
dynamic "env_from" {
  for_each = each.value.secrets[*]
  content {
    secret_ref {
      name = kubernetes_secret.secret[each.value.name].metadata[0].name
    }
  }
}
...
and it seems to work.
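This works because of how Terraform's single-value splat behaves: applied to a value that is not a list, set, or tuple, [*] wraps it in a one-element tuple, and applied to null it yields an empty tuple, so the dynamic block renders either one env_from block or none. A small illustration with hypothetical locals, just to show the behaviour:
locals {
  with_secrets    = { "FIRST_VAR" = "TestVariable" }
  without_secrets = null

  one  = local.with_secrets[*]    # => [{ FIRST_VAR = "TestVariable" }]
  zero = local.without_secrets[*] # => []
}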

Failed to retrieve sa token using terraform

I need to retrieve an SA token using an output in my pipeline. I found a solution here:
Retrieve token data from Kubernetes Service Account in Terraform
but it still doesn't work and I get this error:
│ Error: Invalid function argument
│
│ on access.tf line 51, in output "deploy_user_token":
│ 51: value = lookup(data.kubernetes_secret.deploy_user_secret.data, "token")
│ ├────────────────
│ │ data.kubernetes_secret.deploy_user_secret.data has a sensitive value
│
│ Invalid value for "inputMap" parameter: argument must not be null.
My code:
resource "kubernetes_service_account" "deploy_user" {
depends_on = [kubernetes_namespace.namespace]
metadata {
name = "deploy-user"
namespace = var.namespace
}
}
resource "kubernetes_role" "deploy_user_full_access" {
metadata {
name = "deploy-user-full-access"
namespace = var.namespace
}
rule {
api_groups = ["", "extensions", "apps", "networking.istio.io"]
resources = ["*"]
verbs = ["*"]
}
rule {
api_groups = ["batch"]
resources = ["jobs", "cronjobs"]
verbs = ["*"]
}
}
resource "kubernetes_role_binding" "deploy_user_view" {
metadata {
name = "deploy-user-view"
namespace = var.namespace
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "Role"
name = kubernetes_role.deploy_user_full_access.metadata.0.name
}
subject {
kind = "ServiceAccount"
name = kubernetes_service_account.deploy_user.metadata.0.name
namespace = var.namespace
}
}
data "kubernetes_secret" "deploy_user_secret" {
metadata {
name = kubernetes_service_account.deploy_user.default_secret_name
}
}
output "deploy_user_token" {
value = lookup(data.kubernetes_secret.deploy_user_secret.data, "token")
}
Does someone have an idea what I am doing wrong?
Thanks!
It seems that you are missing the namespace declaration on your data object; it needs to look like this:
data "kubernetes_secret" "deploy_user_secret" {
  metadata {
    name = kubernetes_service_account.deploy_user.default_secret_name
    namespace = var.namespace
  }
}
You also need to set sensitive = true on your output:
output "deploy_user_token" {
  sensitive = true
  value = lookup(data.kubernetes_secret.deploy_user_secret.data, "token")
}

Terraform: module outputs not being recognised as variables

I think just a quick sanity check, maybe my eyes are getting confused. I'm breaking a monolithic terraform file into modules.
My main.tf calls just two modules: gke, for the Google Kubernetes Engine, and storage, which creates a persistent volume on the cluster created previously.
Module gke has an outputs.tf which outputs the following:
output "client_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_certificate}"
sensitive = true
}
output "client_key" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_key}"
sensitive = true
}
output "cluster_ca_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.cluster_ca_certificate}"
sensitive = true
}
output "host" {
value = "${google_container_cluster.kube-cluster.endpoint}"
sensitive = true
}
Then in the main.tf for the storage module, I have:
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
host = "${var.host}"
Then in the root main.tf I have the following:
client_certificate = "${module.gke.client_certificate}"
client_key = "${module.gke.client_key}"
cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
host = "${module.gke.host}"
From what I see, it looks right. The values for the certs, key, and host should be output from the gke module by its outputs.tf, picked up by the root main.tf, and then passed to storage as regular variables.
Have I got it the wrong way around? Or am I just going crazy? Something doesn't seem right.
When I run a plan, I get prompted for those variables as if they are not being filled.
EDIT:
Adding some additional information including my code.
If I manually add dummy entries for the variables it's asking for, I get the following error:
Macbook: $ terraform plan
var.client_certificate
Enter a value: 1
var.client_key
Enter a value: 2
var.cluster_ca_certificate
Enter a value: 3
var.host
Enter a value: 4
...
(filtered out usual text)
...
* module.storage.data.google_container_cluster.kube-cluster: 1 error(s) occurred:
* module.storage.data.google_container_cluster.kube-cluster: data.google_container_cluster.kube-cluster: project: required field is not set
It looks like it's complaining that the data.google_container_cluster data source needs the project attribute. But it doesn't; that's not a valid attribute for the resource. It is for the provider, and it's already filled out for the provider.
Code below:
Folder structure:
root-folder/
├── gke/
│ ├── main.tf
│ ├── outputs.tf
│ ├── variables.tf
├── storage/
│ ├── main.tf
│ └── variables.tf
├── main.tf
├── staging.json
├── terraform.tfvars
└── variables.tf
root-folder/gke/main.tf:
provider "google" {
credentials = "${file("staging.json")}"
project = "${var.project}"
region = "${var.region}"
zone = "${var.zone}"
}
resource "google_container_cluster" "kube-cluster" {
name = "kube-cluster"
description = "kube-cluster"
zone = "europe-west2-a"
initial_node_count = "2"
enable_kubernetes_alpha = "false"
enable_legacy_abac = "true"
master_auth {
username = "${var.username}"
password = "${var.password}"
}
node_config {
machine_type = "n1-standard-2"
disk_size_gb = "20"
oauth_scopes = [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring"
]
}
}
root-folder/gke/outputs.tf:
output "client_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_certificate}"
sensitive = true
}
output "client_key" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_key}"
sensitive = true
}
output "cluster_ca_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.cluster_ca_certificate}"
sensitive = true
}
output "host" {
value = "${google_container_cluster.kube-cluster.endpoint}"
sensitive = true
}
root-folder/gke/variables.tf:
variable "region" {
description = "GCP region, e.g. europe-west2"
default = "europe-west2"
}
variable "zone" {
description = "GCP zone, e.g. europe-west2-a (which must be in gcp_region)"
default = "europe-west2-a"
}
variable "project" {
description = "GCP project name"
}
variable "username" {
description = "Default admin username"
}
variable "password" {
description = "Default admin password"
}
/root-folder/storage/main.tf:
provider "kubernetes" {
host = "${var.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
}
data "google_container_cluster" "kube-cluster" {
name = "${var.cluster_name}"
zone = "${var.zone}"
}
resource "kubernetes_storage_class" "kube-storage-class" {
metadata {
name = "kube-storage-class"
}
storage_provisioner = "kubernetes.io/gce-pd"
parameters {
type = "pd-standard"
}
}
resource "kubernetes_persistent_volume_claim" "kube-claim" {
metadata {
name = "kube-claim"
}
spec {
access_modes = ["ReadWriteOnce"]
storage_class_name = "kube-storage-class"
resources {
requests {
storage = "10Gi"
}
}
}
}
/root-folder/storage/variables.tf:
variable "username" {
description = "Default admin username."
}
variable "password" {
description = "Default admin password."
}
variable "client_certificate" {
description = "Client certificate, output from the GKE/Provider module."
}
variable "client_key" {
description = "Client key, output from the GKE/Provider module."
}
variable "cluster_ca_certificate" {
description = "Cluster CA Certificate, output from the GKE/Provider module."
}
variable "cluster_name" {
description = "Cluster name."
}
variable "zone" {
description = "GCP Zone"
}
variable "host" {
description = "Host endpoint, output from the GKE/Provider module."
}
/root-folder/main.tf:
module "gke" {
source = "./gke"
project = "${var.project}"
region = "${var.region}"
username = "${var.username}"
password = "${var.password}"
}
module "storage" {
source = "./storage"
host = "${module.gke.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${module.gke.client_certificate}"
client_key = "${module.gke.client_key}"
cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
cluster_name = "${var.cluster_name}"
zone = "${var.zone}"
}
/root-folder/variables.tf:
variable "project" {}
variable "region" {}
variable "username" {}
variable "password" {}
variable "gc_disk_size" {}
variable "kpv_vol_size" {}
variable "host" {}
variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}
variable "cluster_name" {}
variable "zone" {}
I won't paste the contents of my staging.json and terraform.tfvars for obvious reasons :)
In your /root-folder/variables.tf, delete the following entries:
variable "host" {}
variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}
Those are not variables that the Terraform code at the root level needs. Instead, they are passed directly from one module's output to the second module's input.
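With those four entries removed, the root-level variables.tf keeps only the values that the root configuration itself supplies to the modules:
variable "project" {}
variable "region" {}
variable "username" {}
variable "password" {}
variable "gc_disk_size" {}
variable "kpv_vol_size" {}
variable "cluster_name" {}
variable "zone" {}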