I am trying to create resources based on variables.
variable.tf
variable "apps" {
default = null
type = map(object({
name = string
type = string
secrets = optional(map(string))
}))
}
terraform.tfvars
apps = {
  "myfirst" = {
    name = "myfirst"
    type = "deploy"
    secrets = {
      "FIRST_VAR"  = "TestVariable",
      "SECOND_VAR" = "SecontTestVariable",
      "THIRD"      = "NothingHere"
    }
  },
  "second" = {
    name = "second"
    type = "deploy"
    secrets = {
      "SECRET_VAR" = "SecretVar"
    }
  },
  "simlepod" = {
    name = "simplepod"
    type = "deploy"
  },
  "another" = {
    name = "another"
    type = "pod"
  }
}
And my main.tf
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.9.0"
    }
  }
  experiments = [module_variable_optional_attrs]
}

provider "kubernetes" {
  config_path = "~/.kube/config"
  # Configuration options
}

resource "kubernetes_secret" "secret" {
  for_each = { for k in compact([for k, v in var.apps : v.secrets != null ? k : ""]) : k => var.apps[k] }
  metadata {
    name = "${each.value.name}-secret"
  }
  data = each.value["secrets"]
}
resource "kubernetes_pod" "test" {
for_each = { for k in compact([for k, v in var.apps: v.type =="deploy" ? k : ""]): k => var.apps[k] }
metadata {
name = "app-${each.value.name}"
}
spec {
container {
image = "nginx:1.21.6"
name = "test-${each.value.name}"
env_from {
secret_ref {
name = kubernetes_secret.secret[each.value.name].metadata[0].name
}
}
resources {
limits = {
cpu = "0.5"
memory = "512Mi"
}
requests = {
cpu = "250m"
memory = "50Mi"
}
}
}
}
timeouts {
create = "60s"
}
}
And this produces an error, because not all objects in apps have a secrets attribute.
terraform plan
╷
│ Warning: Experimental feature "module_variable_optional_attrs" is active
│
│ on main.tf line 8, in terraform:
│ 8: experiments = [module_variable_optional_attrs]
│
│ Experimental features are subject to breaking changes in future minor or patch
│ releases, based on feedback.
│
│ If you have feedback on the design of this feature, please open a GitHub issue to
│ discuss it.
╵
╷
│ Error: Unsupported block type
│
│ on main.tf line 68, in resource "kubernetes_pod" "test":
│ 68: dynamic "secret" {
│
│ Blocks of type "secret" are not expected here.
How can I use an expression to create env_from only when the object has a secrets attribute?
I've found a solution:
...
dynamic "env_from" {
for_each = each.value.secrets[*]
content {
secret_ref {
name = kubernetes_secret.secret[each.value.name].metadata[0].name
}
}
}
...
and it seems to work.
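Why this works: applying the splat operator [*] to a single non-null value wraps it in a one-element tuple, and applying it to null yields an empty tuple, so the dynamic block is generated exactly once for apps with secrets and not at all for the others. An equivalent, more explicit sketch (one judgment call here: it indexes by each.key instead of each.value.name, since kubernetes_secret.secret is keyed by the map key, which can differ from the name attribute, e.g. "simlepod" vs "simplepod" in the tfvars above):

dynamic "env_from" {
  # Generate the block only for apps that actually define secrets.
  for_each = each.value.secrets != null ? [each.value.secrets] : []
  content {
    secret_ref {
      # Index by map key, matching how kubernetes_secret.secret's
      # for_each is keyed.
      name = kubernetes_secret.secret[each.key].metadata[0].name
    }
  }
}

As an aside, optional object type attributes were stabilized in Terraform 1.3, so on 1.3 and later the experiments = [module_variable_optional_attrs] line can be dropped entirely.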
I am unable to create an RDS instance due to a failure in creating a subnet. I have different modules that I use to create an AWS infrastructure.
The main ones that I am having trouble with are RDS and VPC, where in the first one I create the database:
rds/main.tf
resource "aws_db_parameter_group" "education" {
name = "education"
family = "postgres14"
parameter {
name = "log_connections"
value = "1"
}
}
resource "aws_db_instance" "education" {
identifier = "education"
instance_class = "db.t3.micro"
allocated_storage = 5
engine = "postgres"
engine_version = "14.1"
username = "edu"
password = var.db_password
db_subnet_group_name = var.database_subnets
vpc_security_group_ids = var.rds_service_security_groups
parameter_group_name = aws_db_parameter_group.education.name
publicly_accessible = false
skip_final_snapshot = true
}
rds/variables.tf
variable "db_username" {
description = "RDS root username"
default = "someusername"
}
variable "db_password" {
description = "RDS root user password"
sensitive = true
}
variable "vpc_id" {
description = "VPC ID"
}
variable "rds_service_security_groups" {
description = "Comma separated list of security groups"
}
variable "database_subnets" {
description = "List of private subnets"
}
And the latter, where I create the subnets, etc.:
vpc/main.tf
resource "aws_subnet" "private" {
vpc_id = aws_vpc.main.id
cidr_block = element(var.private_subnets, count.index)
availability_zone = element(var.availability_zones, count.index)
count = length(var.private_subnets)
tags = {
Name = "${var.name}-private-subnet-${var.environment}-${format("%03d", count.index+1)}"
Environment = var.environment
}
}
resource "aws_subnet" "public" {
vpc_id = aws_vpc.main.id
cidr_block = element(var.public_subnets, count.index)
availability_zone = element(var.availability_zones, count.index)
count = length(var.public_subnets)
map_public_ip_on_launch = true
tags = {
Name = "${var.name}-public-subnet-${var.environment}-${format("%03d", count.index+1)}"
Environment = var.environment
}
}
resource "aws_subnet" "database" {
vpc_id = aws_vpc.main.id
cidr_block = element(var.database_subnets, count.index)
availability_zone = element(var.availability_zones, count.index)
count = length(var.database_subnets)
tags = {
Name = "Education"
Environment = var.environment
}
}
vpc/variables.tf
variable "name" {
description = "the name of the stack"
}
variable "environment" {
description = "the name of the environment "
}
variable "cidr" {
description = "The CIDR block for the VPC."
}
variable "public_subnets" {
description = "List of public subnets"
}
variable "private_subnets" {
description = "List of private subnets"
}
variable "database_subnets" {
description = "Database subnetes"
}
variable "availability_zones" {
description = "List of availability zones"
}
Then in the root directory I have a main.tf file where I create everything. In there I call the rds module:
main.tf
module "rds" {
source = "./rds"
vpc_id = module.vpc.id
database_subnets = module.vpc.database_subnets
rds_service_security_groups = [module.security_groups.rds]
db_password = var.db_password
}
The error that I keep getting is this:
Error: Incorrect attribute value type
│
│ on rds\main.tf line 19, in resource "aws_db_instance" "education":
│ 19: db_subnet_group_name = var.database_subnets
│ ├────────────────
│ │ var.database_subnets is tuple with 2 elements
│
│ Inappropriate value for attribute "db_subnet_group_name": string required.
Any idea how I can fix it?
You are trying to pass a list of DB Subnets into a parameter that takes a DB Subnet Group name.
You need to modify your RDS module to create a DB Subnet Group with the given subnet IDs, and then pass that group name to the instance:
resource "aws_db_subnet_group" "education" {
name = "education"
subnet_ids = var.database_subnets
}
resource "aws_db_instance" "education" {
identifier = "education"
db_subnet_group_name = aws_db_subnet_group.education.name
...
}
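Note that this also assumes the vpc module exports the database subnet IDs under the name used in the root module call (module.vpc.database_subnets). A minimal sketch of such an output in the vpc module, given the counted aws_subnet.database resource above:

output "database_subnets" {
  description = "IDs of the database subnets"
  value       = aws_subnet.database[*].id
}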
Well, I'm new to Terraform and also to Kubernetes, and I faced an issue deploying images after creating a Kubernetes cluster.
I have created a module that creates a Kubernetes cluster and provides output for the kubeconfig data.
Now I'm using the code below, but I need to run terraform apply twice, because the first time the local file is not created and Terraform cannot connect to kubernetes, helm, or kubectl; if I run the command a second time it works as expected.
Any solution?
Note: I also applied the solution shown in the comment section of the code, and that did not work either.
File : main.tf
module "deploy_lke" {
source = "./modules/linode/kubernetes"
token = var.token
k8s_version = var.k8s_version
label = var.label
region = var.region
tags = var.tags
instance_type = var.instance_type
number_of_instance = var.number_of_instance
min = var.min
max = var.max
}
module "deploy_image" {
source = "./modules/kubernetes"
kube_config_path = module.deploy_lke.kubeconfig
dockerconfigjson = file("./secret/docker-sec.json")
deploy_name = var.deploy_name
desire_replicas = var.desire_replicas
image_link = var.image_link
image_name = var.image_name
image_port = var.image_port
ip_type = var.ip_type
max_replicas_val = var.max_replicas_val
min_replicas_val = var.min_replicas_val
service_name = var.service_name
}
File : ./module/linode/kubernetes
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "1.29.4"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.0"
    }
  }
}

provider "linode" {
  token = var.token
}

resource "linode_lke_cluster" "gaintplay-web-lke" {
  k8s_version = var.k8s_version
  label       = var.label
  region      = var.region
  tags        = var.tags

  pool {
    type  = var.instance_type
    count = var.number_of_instance

    autoscaler {
      min = var.min
      max = var.max
    }
  }

  lifecycle {
    ignore_changes = [
      pool.0.count
    ]
  }
}

output "kubeconfig" {
  value = linode_lke_cluster.gaintplay-web-lke.kubeconfig
}

output "api_endpoints" {
  value = linode_lke_cluster.gaintplay-web-lke.api_endpoints
}
File : ./module/kubernetes
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.0"
    }
  }
}

resource "local_file" "kube_config_file" {
  content  = var.kube_config_path
  filename = "${path.module}/config"
}

provider "kubernetes" {
  config_path = var.kube_config_path
}

provider "helm" {
  kubernetes {
    config_path = var.kube_config_path
  }
}
resource "kubernetes_secret" "docker_secret" {
metadata {
name = "docker-cfg"
}
data = {
".dockerconfigjson" = var.dockerconfigjson
}
type = "kubernetes.io/dockerconfigjson"
}
resource "kubernetes_deployment" "beta" {
depends_on = [
kubernetes_secret.docker_secret
]
metadata {
name = var.deploy_name
namespace = "default"
}
spec {
replicas = var.desire_replicas
selector {
match_labels = {
app = var.deploy_name
}
}
template {
metadata {
labels = {
app = var.deploy_name
}
}
spec {
image_pull_secrets {
name = kubernetes_secret.docker_secret.metadata[0].name
}
container {
image_pull_policy = "Always"
image = var.image_link
name = var.image_name
port {
container_port = var.image_port
}
}
}
}
}
}
# provider "kubernetes" {
# host = "${yamldecode(var.kube_config_path).clusters.0.cluster.server}"
# client_certificate = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-certificate-data)}"
# client_key = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-key-data)}"
# cluster_ca_certificate = "${base64decode(yamldecode(var.kube_config_path).clusters.0.cluster.certificate-authority-data)}"
# }
# provider "helm" {
# kubernetes {
# host = "${yamldecode(var.kube_config_path).clusters.0.cluster.server}"
# client_certificate = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-certificate-data)}"
# client_key = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-key-data)}"
# cluster_ca_certificate = "${base64decode(yamldecode(var.kube_config_path).clusters.0.cluster.certificate-authority-data)}"
# }
# }
If I use the code as it is, I get this error in terraform plan saying the file is not found, and I need to run it twice:
Invalid attribute in provider configuration
with module.deploy_image.provider["registry.terraform.io/hashicorp/kubernetes"],
on modules/kubernetes/main.tf line 13, in provider "kubernetes":
13: provider "kubernetes" {
'config_path' refers to an invalid path: "modules/kubernetes/config": stat modules/kubernetes/config: no such file or directory
And if I use the commented code I get an error like this:
│ Error: Unsupported attribute
│
│ on main.tf line 35, in provider "kubernetes":
│ 35: host = "${yamldecode(linode_lke_cluster.gaintplay-web-lke.kubeconfig).clusters.0.cluster.server}"
│
│ Can't access attributes on a primitive-typed value (string).
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 36, in provider "kubernetes":
│ 36: client_certificate = "${base64decode(yamldecode(linode_lke_cluster.gaintplay-web-lke.kubeconfig).users.0.user.client-certificate-data)}"
│
│ Can't access attributes on a primitive-typed value (string).
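Two things stand out here. First, the local_file approach is a chicken-and-egg problem: provider configurations are evaluated at plan time, before any resource (including the local_file) has been created, which is why only the second run succeeds. Second, the "Can't access attributes on a primitive-typed value (string)" error is what you'd expect because the Linode provider returns the kubeconfig base64-encoded, so it has to be decoded before yamldecode() can parse it. A sketch of a root-module provider configuration along those lines (untested; it assumes a standard single-cluster LKE kubeconfig layout):

locals {
  # linode_lke_cluster.kubeconfig is base64-encoded YAML, so decode it first.
  kubeconfig = yamldecode(base64decode(module.deploy_lke.kubeconfig))
}

provider "kubernetes" {
  host                   = local.kubeconfig.clusters[0].cluster.server
  client_certificate     = base64decode(local.kubeconfig.users[0].user["client-certificate-data"])
  client_key             = base64decode(local.kubeconfig.users[0].user["client-key-data"])
  cluster_ca_certificate = base64decode(local.kubeconfig.clusters[0].cluster["certificate-authority-data"])
}

Configuring the provider directly from the module output this way removes the need for the local_file resource altogether.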
I am using:
Terraform v1.2.9
on windows_amd64
provider registry.terraform.io/snowflake-labs/snowflake v0.42.1
My main.tf file is:
terraform {
  required_version = ">= 1.1.7"

  backend "http" {
  }

  required_providers {
    snowflake = {
      source  = "Snowflake-Labs/snowflake"
      version = "~> 0.42"
    }
  }
}

provider "snowflake" {
  username = "xxxxxx"
  account  = "yyyyyy"
  region   = "canada-central.azure"
}
When I add the following tags to a Snowflake role, I get an error. Can you help?
resource "snowflake_role" "operations_manager" {
name = "OPERATIONS_MANAGER"
comment = "A comment"
tag = {
managed-with = "Terraform",
owner = "Joe Smith"
}
}
Error: Unsupported argument
│
│ on functional_roles.tf line 35, in resource "snowflake_role" "operations_manager":
│ 35: tag = {
│
│ An argument named "tag" is not expected here. Did you mean to define a block of type "tag"?
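The error message itself points at the fix: in this provider, tag is a repeatable block, not a map argument. A sketch of the block form (hedged on the details: it assumes the tag block takes name/value attributes, plus optional database/schema arguments identifying where the tag object lives, and the referenced tags must already exist as Snowflake tag objects):

resource "snowflake_role" "operations_manager" {
  name    = "OPERATIONS_MANAGER"
  comment = "A comment"

  tag {
    name  = "managed-with"
    value = "Terraform"
  }

  tag {
    name  = "owner"
    value = "Joe Smith"
  }
}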
I need to retrieve an SA token using an output in my pipeline. I found a solution here:
Retrieve token data from Kubernetes Service Account in Terraform
but it still doesn't work, and I get this error:
│ Error: Invalid function argument
│
│ on access.tf line 51, in output "deploy_user_token":
│ 51: value = lookup(data.kubernetes_secret.deploy_user_secret.data, "token")
│ ├────────────────
│ │ data.kubernetes_secret.deploy_user_secret.data has a sensitive value
│
│ Invalid value for "inputMap" parameter: argument must not be null.
My code:
resource "kubernetes_service_account" "deploy_user" {
depends_on = [kubernetes_namespace.namespace]
metadata {
name = "deploy-user"
namespace = var.namespace
}
}
resource "kubernetes_role" "deploy_user_full_access" {
metadata {
name = "deploy-user-full-access"
namespace = var.namespace
}
rule {
api_groups = ["", "extensions", "apps", "networking.istio.io"]
resources = ["*"]
verbs = ["*"]
}
rule {
api_groups = ["batch"]
resources = ["jobs", "cronjobs"]
verbs = ["*"]
}
}
resource "kubernetes_role_binding" "deploy_user_view" {
metadata {
name = "deploy-user-view"
namespace = var.namespace
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "Role"
name = kubernetes_role.deploy_user_full_access.metadata.0.name
}
subject {
kind = "ServiceAccount"
name = kubernetes_service_account.deploy_user.metadata.0.name
namespace = var.namespace
}
}
data "kubernetes_secret" "deploy_user_secret" {
metadata {
name = kubernetes_service_account.deploy_user.default_secret_name
}
}
output "deploy_user_token" {
value = lookup(data.kubernetes_secret.deploy_user_secret.data, "token")
}
Does someone have an idea of what I'm doing wrong?
Thanks!
It seems that you're missing the namespace declaration on your data object; you need it to look like this:
data "kubernetes_secret" "deploy_user_secret" {
metadata {
name = kubernetes_service_account.deploy_user.default_secret_name
namespace = var.namespace
}
}
You also need to set sensitive = true on your output:
output "deploy_user_token" {
sensitive = true
value = lookup(data.kubernetes_secret.deploy_user_secret.data, "token")
}
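With sensitive = true, the value is redacted in plan/apply output but can still be read explicitly in a pipeline, for example:

terraform output -raw deploy_user_token

One more caveat: on Kubernetes 1.24 and later, service account token secrets are no longer auto-generated, so default_secret_name comes back empty and the token secret has to be created explicitly (e.g. as a kubernetes.io/service-account-token secret annotated with the service account name).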
I am trying to create a Kubernetes Ingress object with the kubernetes_manifest Terraform resource. It is throwing the following error:
│ Error: Failed to morph manifest to OAPI type
│
│ with module.services.module.portal.module.appmesh.kubernetes_manifest.service_ingress_object,
│ on .terraform/modules/services.portal.appmesh/kubernetes_manifest.tf line 104, in resource "kubernetes_manifest" "service_ingress_object":
│ 104: resource "kubernetes_manifest" "service_ingress_object" {
│
│ AttributeName("spec"): [AttributeName("spec")] failed to morph object element into object element: AttributeName("spec").AttributeName("rules"): [AttributeName("spec").AttributeName("rules")] failed to
│ morph object element into object element: AttributeName("spec").AttributeName("rules"): [AttributeName("spec").AttributeName("rules")] unsupported morph of object value into type:
│ tftypes.List[tftypes.Object["host":tftypes.String, "http":tftypes.Object["paths":tftypes.List[tftypes.Object["backend":tftypes.Object["resource":tftypes.Object["apiGroup":tftypes.String,
│ "kind":tftypes.String, "name":tftypes.String], "serviceName":tftypes.String, "servicePort":tftypes.DynamicPseudoType], "path":tftypes.String, "pathType":tftypes.String]]]]]
My code is:
resource "kubernetes_manifest" "service_ingress_object" {
manifest = {
"apiVersion" = "networking.k8s.io/v1beta1"
"kind" = "Ingress"
"metadata" = {
"name" = "${var.service_name}-ingress"
"namespace" = "${var.kubernetes_namespace}"
"annotations" = {
"alb.ingress.kubernetes.io/actions.ssl-redirect" = "{'Type': 'redirect', 'RedirectConfig': { 'Protocol': 'HTTPS', 'Port': '443', 'StatusCode': 'HTTP_301'}}"
"alb.ingress.kubernetes.io/listen-ports" = "[{'HTTP': 80}, {'HTTPS':443}]"
"alb.ingress.kubernetes.io/certificate-arn" = "${data.aws_acm_certificate.enivronment_default_issued.arn}"
"alb.ingress.kubernetes.io/scheme" = "internal"
"alb.ingress.kubernetes.io/target-type" = "instance"
"kubernetes.io/ingress.class" = "alb"
}
}
"spec" = {
"rules" = {
"host" = "${aws_route53_record.service_dns.fqdn}"
"http" = {
"paths" = {
"backend" = {
"serviceName" = "${var.service_name}-svc"
"servicePort" = "${var.service_port}"
}
"path" = "/*"
}
}
}
}
}
}
I have tried adding brackets to the "spec" field; however, when I do that, I just get the following error:
│ Error: Missing item separator
│
│ on .terraform/modules/services.portal.appmesh/kubernetes_manifest.tf line 121, in resource "kubernetes_manifest" "service_ingress_object":
│ 120: "spec" = {[
│ 121: "rules" = {
│
│ Expected a comma to mark the beginning of the next item.
Once I get that error, I have tried adding commas under "spec", but it just continuously throws the same error.
I figured it out. You need to put the opening square bracket before the "{", so that rules and paths become lists of objects. The code now looks like this:
resource "kubernetes_manifest" "service_ingress_object" {
manifest = {
"apiVersion" = "networking.k8s.io/v1beta1"
"kind" = "Ingress"
"metadata" = {
"name" = "${var.service_name}-ingress"
"namespace" = "${var.kubernetes_namespace}"
"annotations" = {
"alb.ingress.kubernetes.io/actions.ssl-redirect" = "{'Type': 'redirect', 'RedirectConfig': { 'Protocol': 'HTTPS', 'Port': '443', 'StatusCode': 'HTTP_301'}}"
"alb.ingress.kubernetes.io/listen-ports" = "[{'HTTP': 80}, {'HTTPS':443}]"
"alb.ingress.kubernetes.io/certificate-arn" = "${data.aws_acm_certificate.enivronment_default_issued.arn}"
"alb.ingress.kubernetes.io/scheme" = "internal"
"alb.ingress.kubernetes.io/target-type" = "instance"
"kubernetes.io/ingress.class" = "alb"
}
}
"spec" = {
"rules" = [{
"host" = "${aws_route53_record.service_dns.fqdn}"
"http" = {
"paths" = [{
"backend" = {
"serviceName" = "${var.service_name}-svc"
"servicePort" = "${var.service_port}"
}
"path" = "/*"
}]
}
}]
}
}
}
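One caveat on this fix: the networking.k8s.io/v1beta1 Ingress API was removed in Kubernetes 1.22, so on newer clusters the manifest needs apiVersion networking.k8s.io/v1, where pathType is required and the backend shape changed. A sketch of the equivalent v1 spec block (same values as above, untested):

"spec" = {
  "rules" = [{
    "host" = "${aws_route53_record.service_dns.fqdn}"
    "http" = {
      "paths" = [{
        "path"     = "/*"
        "pathType" = "ImplementationSpecific"
        "backend" = {
          "service" = {
            "name" = "${var.service_name}-svc"
            "port" = {
              "number" = var.service_port
            }
          }
        }
      }]
    }
  }]
}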
Alternatively, Kubernetes YAML manifests can be transformed easily to HCL format using this handy CLI tool:
https://github.com/jrhouston/tfk8s
Install the tool:
go install github.com/jrhouston/tfk8s@latest
and transform your k8s YAML manifest:
tfk8s -f yaml_manifest.yaml
This will output the HCL format on stdout.
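For instance, converting a minimal Service manifest saved as service.yaml (hypothetical file name; the generated resource label is roughly what the tool produces and may differ):

tfk8s -f service.yaml

resource "kubernetes_manifest" "service_example" {
  manifest = {
    "apiVersion" = "v1"
    "kind"       = "Service"
    "metadata" = {
      "name"      = "example"
      "namespace" = "default"
    }
    "spec" = {
      "ports" = [{
        "port"       = 80
        "targetPort" = 8080
      }]
      "selector" = {
        "app" = "example"
      }
    }
  }
}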