Error when adding tags to Snowflake resource (role) - tags

I am using:
Terraform v1.2.9
on windows_amd64
provider registry.terraform.io/snowflake-labs/snowflake v0.42.1
My main.tf file is:
terraform {
  required_version = ">= 1.1.7"

  backend "http" {
  }

  required_providers {
    snowflake = {
      source  = "Snowflake-Labs/snowflake"
      version = "~> 0.42"
    }
  }
}

provider "snowflake" {
  username = "xxxxxx"
  account  = "yyyyyy"
  region   = "canada-central.azure"
}
When I add the following tags to a Snowflake role, I get an error. Can you help?
resource "snowflake_role" "operations_manager" {
name = "OPERATIONS_MANAGER"
comment = "A comment"
tag = {
managed-with = "Terraform",
owner = "Joe Smith"
}
}
Error: Unsupported argument
│
│ on functional_roles.tf line 35, in resource "snowflake_role" "operations_manager":
│ 35: tag = {
│
│ An argument named "tag" is not expected here. Did you mean to define a block of type "tag"?
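The error message itself points at the fix: this provider expects tag as a repeatable block, not a map-typed argument. A minimal sketch of the block form, assuming the tag block in this provider version takes name and value attributes (verify against the Snowflake-Labs provider docs for v0.42.x):

resource "snowflake_role" "operations_manager" {
  name    = "OPERATIONS_MANAGER"
  comment = "A comment"

  # Assumption: one tag block per tag, with name/value attributes.
  tag {
    name  = "managed-with"
    value = "Terraform"
  }

  tag {
    name  = "owner"
    value = "Joe Smith"
  }
}

Note that Snowflake tags are themselves schema-level objects, so each tag generally has to exist already (for example, created via a snowflake_tag resource) before it can be attached to a role.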

Related

How to use a Terraform output to deploy an image inside a Kubernetes cluster in a modular approach?

I'm new to Terraform and to Kubernetes, and I ran into an issue deploying images after creating a Kubernetes cluster.
I have created a module that creates a Kubernetes cluster and outputs the kubeconfig data.
I'm now using the code below, but I need to run terraform apply twice: the first time the local file does not exist yet, so Terraform cannot connect to Kubernetes, Helm, or kubectl; if I run the command a second time it works as expected.
Any solution?
Note: I also tried the approach shown in the commented-out section of the code, and that did not work either.
File: main.tf
module "deploy_lke" {
source = "./modules/linode/kubernetes"
token = var.token
k8s_version = var.k8s_version
label = var.label
region = var.region
tags = var.tags
instance_type = var.instance_type
number_of_instance = var.number_of_instance
min = var.min
max = var.max
}
module "deploy_image" {
source = "./modules/kubernetes"
kube_config_path = module.deploy_lke.kubeconfig
dockerconfigjson = file("./secret/docker-sec.json")
deploy_name = var.deploy_name
desire_replicas = var.desire_replicas
image_link = var.image_link
image_name = var.image_name
image_port = var.image_port
ip_type = var.ip_type
max_replicas_val = var.max_replicas_val
min_replicas_val = var.min_replicas_val
service_name = var.service_name
}
File: ./modules/linode/kubernetes
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "1.29.4"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.0"
    }
  }
}

provider "linode" {
  token = var.token
}

resource "linode_lke_cluster" "gaintplay-web-lke" {
  k8s_version = var.k8s_version
  label       = var.label
  region      = var.region
  tags        = var.tags

  pool {
    type  = var.instance_type
    count = var.number_of_instance

    autoscaler {
      min = var.min
      max = var.max
    }
  }

  lifecycle {
    ignore_changes = [
      pool.0.count
    ]
  }
}

output "kubeconfig" {
  value = linode_lke_cluster.gaintplay-web-lke.kubeconfig
}

output "api_endpoints" {
  value = linode_lke_cluster.gaintplay-web-lke.api_endpoints
}
File: ./modules/kubernetes
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.0"
    }
  }
}

resource "local_file" "kube_config_file" {
  content  = var.kube_config_path
  filename = "${path.module}/config"
}

provider "kubernetes" {
  config_path = var.kube_config_path
}

provider "helm" {
  kubernetes {
    config_path = var.kube_config_path
  }
}

resource "kubernetes_secret" "docker_secret" {
  metadata {
    name = "docker-cfg"
  }
  data = {
    ".dockerconfigjson" = var.dockerconfigjson
  }
  type = "kubernetes.io/dockerconfigjson"
}

resource "kubernetes_deployment" "beta" {
  depends_on = [
    kubernetes_secret.docker_secret
  ]
  metadata {
    name      = var.deploy_name
    namespace = "default"
  }
  spec {
    replicas = var.desire_replicas
    selector {
      match_labels = {
        app = var.deploy_name
      }
    }
    template {
      metadata {
        labels = {
          app = var.deploy_name
        }
      }
      spec {
        image_pull_secrets {
          name = kubernetes_secret.docker_secret.metadata[0].name
        }
        container {
          image_pull_policy = "Always"
          image             = var.image_link
          name              = var.image_name
          port {
            container_port = var.image_port
          }
        }
      }
    }
  }
}

# provider "kubernetes" {
#   host                   = "${yamldecode(var.kube_config_path).clusters.0.cluster.server}"
#   client_certificate     = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-certificate-data)}"
#   client_key             = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-key-data)}"
#   cluster_ca_certificate = "${base64decode(yamldecode(var.kube_config_path).clusters.0.cluster.certificate-authority-data)}"
# }
# provider "helm" {
#   kubernetes {
#     host                   = "${yamldecode(var.kube_config_path).clusters.0.cluster.server}"
#     client_certificate     = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-certificate-data)}"
#     client_key             = "${base64decode(yamldecode(var.kube_config_path).users.0.user.client-key-data)}"
#     cluster_ca_certificate = "${base64decode(yamldecode(var.kube_config_path).clusters.0.cluster.certificate-authority-data)}"
#   }
# }
If I use the code as-is, terraform plan fails with the error below saying the file is not found, and I have to run apply twice:
Invalid attribute in provider configuration
with module.deploy_image.provider["registry.terraform.io/hashicorp/kubernetes"],
on modules/kubernetes/main.tf line 13, in provider "kubernetes":
13: provider "kubernetes" {
'config_path' refers to an invalid path: "modules/kubernetes/config": stat modules/kubernetes/config: no such file or directory
If I use the commented-out code instead, I get errors like this:
│ Error: Unsupported attribute
│
│ on main.tf line 35, in provider "kubernetes":
│ 35: host = "${yamldecode(linode_lke_cluster.gaintplay-web-lke.kubeconfig).clusters.0.cluster.server}"
│
│ Can't access attributes on a primitive-typed value (string).
╵
╷
│ Error: Unsupported attribute
│
│ on main.tf line 36, in provider "kubernetes":
│ 36: client_certificate = "${base64decode(yamldecode(linode_lke_cluster.gaintplay-web-lke.kubeconfig).users.0.user.client-certificate-data)}"
│
│ Can't access attributes on a primitive-typed value (string).
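For the second error, "Can't access attributes on a primitive-typed value (string)" usually means yamldecode received a string that is not YAML yet. If the linode_lke_cluster kubeconfig attribute is returned base64-encoded (worth checking in the Linode provider docs; treat this as an assumption), decoding it first and indexing the hyphenated keys with brackets would make the commented-out provider blocks viable, and it sidesteps the local_file chicken-and-egg problem because the providers no longer depend on a file written during the same apply. A minimal sketch:

locals {
  # Assumption: var.kube_config_path carries the raw kubeconfig output, base64-encoded by the Linode provider.
  kubeconfig = yamldecode(base64decode(var.kube_config_path))
}

provider "kubernetes" {
  host                   = local.kubeconfig.clusters[0].cluster.server
  client_certificate     = base64decode(local.kubeconfig.users[0].user["client-certificate-data"])
  client_key             = base64decode(local.kubeconfig.users[0].user["client-key-data"])
  cluster_ca_certificate = base64decode(local.kubeconfig.clusters[0].cluster["certificate-authority-data"])
}

# The helm provider's nested kubernetes block would take the same four arguments.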

Terraform - kubernetes - create spec env_from only if variable exists

I'm trying to create resources based on variables.
variable.tf
variable "apps" {
default = null
type = map(object({
name = string
type = string
secrets = optional(map(string))
}))
}
terraform.tfvars
apps = {
  "myfirst" = {
    name = "myfirst"
    type = "deploy"
    secrets = {
      "FIRST_VAR"  = "TestVariable",
      "SECOND_VAR" = "SecontTestVariable",
      "THIRD"      = "NothingHere"
    }
  },
  "second" = {
    name = "second"
    type = "deploy"
    secrets = {
      "SECRET_VAR" = "SecretVar"
    }
  },
  "simlepod" = {
    name = "simplepod"
    type = "deploy"
  },
  "another" = {
    name = "another"
    type = "pod"
  }
}
And my main.tf
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.9.0"
    }
  }
  experiments = [module_variable_optional_attrs]
}

provider "kubernetes" {
  config_path = "~/.kube/config"
  # Configuration options
}

resource "kubernetes_secret" "secret" {
  for_each = { for k in compact([for k, v in var.apps : v.secrets != null ? k : ""]) : k => var.apps[k] }
  metadata {
    name = "${each.value.name}-secret"
  }
  data = each.value["secrets"]
}

resource "kubernetes_pod" "test" {
  for_each = { for k in compact([for k, v in var.apps : v.type == "deploy" ? k : ""]) : k => var.apps[k] }
  metadata {
    name = "app-${each.value.name}"
  }
  spec {
    container {
      image = "nginx:1.21.6"
      name  = "test-${each.value.name}"
      env_from {
        secret_ref {
          name = kubernetes_secret.secret[each.value.name].metadata[0].name
        }
      }
      resources {
        limits = {
          cpu    = "0.5"
          memory = "512Mi"
        }
        requests = {
          cpu    = "250m"
          memory = "50Mi"
        }
      }
    }
  }
  timeouts {
    create = "60s"
  }
}
This produces an error because not all objects in apps have a secrets attribute.
terraform plan
╷
│ Warning: Experimental feature "module_variable_optional_attrs" is active
│
│ on main.tf line 8, in terraform:
│ 8: experiments = [module_variable_optional_attrs]
│
│ Experimental features are subject to breaking changes in future minor or patch
│ releases, based on feedback.
│
│ If you have feedback on the design of this feature, please open a GitHub issue to
│ discuss it.
╵
╷
│ Error: Unsupported block type
│
│ on main.tf line 68, in resource "kubernetes_pod" "test":
│ 68: dynamic "secret" {
│
│ Blocks of type "secret" are not expected here.
How can I use an expression to create env_from only when the object has a secrets attribute?
I've found a solution:
...
dynamic "env_from" {
  for_each = each.value.secrets[*]
  content {
    secret_ref {
      name = kubernetes_secret.secret[each.value.name].metadata[0].name
    }
  }
}
...
and it seems to work.
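For reference, the single-value splat is what makes this conditional: per the Terraform expression docs, [*] applied to a null value yields an empty tuple, and applied to a non-null value yields a one-element tuple, so the dynamic block renders either zero or one env_from blocks. An equivalent, more explicit form of the same for_each (a sketch using only identifiers already present above) would be:

dynamic "env_from" {
  # each.value.secrets[*] is [] when secrets is null, and [each.value.secrets] otherwise
  for_each = each.value.secrets != null ? [each.value.secrets] : []
  content {
    secret_ref {
      name = kubernetes_secret.secret[each.value.name].metadata[0].name
    }
  }
}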

Failed to retrieve SA token using Terraform

I need to retrieve a service account (SA) token via an output in my pipeline. I found a solution here:
Retrieve token data from Kubernetes Service Account in Terraform
but it still doesn't work and I get this error:
│ Error: Invalid function argument
│
│ on access.tf line 51, in output "deploy_user_token":
│ 51: value = lookup(data.kubernetes_secret.deploy_user_secret.data, "token")
│ ├────────────────
│ │ data.kubernetes_secret.deploy_user_secret.data has a sensitive value
│
│ Invalid value for "inputMap" parameter: argument must not be null.
My code:
resource "kubernetes_service_account" "deploy_user" {
depends_on = [kubernetes_namespace.namespace]
metadata {
name = "deploy-user"
namespace = var.namespace
}
}
resource "kubernetes_role" "deploy_user_full_access" {
metadata {
name = "deploy-user-full-access"
namespace = var.namespace
}
rule {
api_groups = ["", "extensions", "apps", "networking.istio.io"]
resources = ["*"]
verbs = ["*"]
}
rule {
api_groups = ["batch"]
resources = ["jobs", "cronjobs"]
verbs = ["*"]
}
}
resource "kubernetes_role_binding" "deploy_user_view" {
metadata {
name = "deploy-user-view"
namespace = var.namespace
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "Role"
name = kubernetes_role.deploy_user_full_access.metadata.0.name
}
subject {
kind = "ServiceAccount"
name = kubernetes_service_account.deploy_user.metadata.0.name
namespace = var.namespace
}
}
data "kubernetes_secret" "deploy_user_secret" {
metadata {
name = kubernetes_service_account.deploy_user.default_secret_name
}
}
output "deploy_user_token" {
value = lookup(data.kubernetes_secret.deploy_user_secret.data, "token")
}
Does someone have an idea what I'm doing wrong?
Thanks!
It seems you are missing the namespace declaration on your data object; it needs to look like this:
data "kubernetes_secret" "deploy_user_secret" {
metadata {
name = kubernetes_service_account.deploy_user.default_secret_name
namespace = var.namespace
}
}
You also need to set sensitive = true on your output:
output "deploy_user_token" {
sensitive = true
value = lookup(data.kubernetes_secret.deploy_user_secret.data, "token")
}
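With sensitive = true the token is redacted in plan and apply output, but the pipeline can still read it explicitly, for example:

terraform output -raw deploy_user_token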

Kubernetes Manifest Terraform

I am trying to create a Kubernetes Ingress object with the kubernetes_manifest Terraform resource. It is throwing the following error:
│ Error: Failed to morph manifest to OAPI type
│
│ with module.services.module.portal.module.appmesh.kubernetes_manifest.service_ingress_object,
│ on .terraform/modules/services.portal.appmesh/kubernetes_manifest.tf line 104, in resource "kubernetes_manifest" "service_ingress_object":
│ 104: resource "kubernetes_manifest" "service_ingress_object" {
│
│ AttributeName("spec"): [AttributeName("spec")] failed to morph object element into object element: AttributeName("spec").AttributeName("rules"): [AttributeName("spec").AttributeName("rules")] failed to
│ morph object element into object element: AttributeName("spec").AttributeName("rules"): [AttributeName("spec").AttributeName("rules")] unsupported morph of object value into type:
│ tftypes.List[tftypes.Object["host":tftypes.String, "http":tftypes.Object["paths":tftypes.List[tftypes.Object["backend":tftypes.Object["resource":tftypes.Object["apiGroup":tftypes.String,
│ "kind":tftypes.String, "name":tftypes.String], "serviceName":tftypes.String, "servicePort":tftypes.DynamicPseudoType], "path":tftypes.String, "pathType":tftypes.String]]]]]
My code is:
resource "kubernetes_manifest" "service_ingress_object" {
manifest = {
"apiVersion" = "networking.k8s.io/v1beta1"
"kind" = "Ingress"
"metadata" = {
"name" = "${var.service_name}-ingress"
"namespace" = "${var.kubernetes_namespace}"
"annotations" = {
"alb.ingress.kubernetes.io/actions.ssl-redirect" = "{'Type': 'redirect', 'RedirectConfig': { 'Protocol': 'HTTPS', 'Port': '443', 'StatusCode': 'HTTP_301'}}"
"alb.ingress.kubernetes.io/listen-ports" = "[{'HTTP': 80}, {'HTTPS':443}]"
"alb.ingress.kubernetes.io/certificate-arn" = "${data.aws_acm_certificate.enivronment_default_issued.arn}"
"alb.ingress.kubernetes.io/scheme" = "internal"
"alb.ingress.kubernetes.io/target-type" = "instance"
"kubernetes.io/ingress.class" = "alb"
}
}
"spec" = {
"rules" = {
"host" = "${aws_route53_record.service_dns.fqdn}"
"http" = {
"paths" = {
"backend" = {
"serviceName" = "${var.service_name}-svc"
"servicePort" = "${var.service_port}"
}
"path" = "/*"
}
}
}
}
}
}
I have tried adding brackets to the "spec" field; however, when I do that, I just get the following error:
│ Error: Missing item separator
│
│ on .terraform/modules/services.portal.appmesh/kubernetes_manifest.tf line 121, in resource "kubernetes_manifest" "service_ingress_object":
│ 120: "spec" = {[
│ 121: "rules" = {
│
│ Expected a comma to mark the beginning of the next item.
Once I get that error, I have tried adding commas under "spec", but it just keeps throwing the same error.
I figured it out. You need to put the bracket before the "{", so that rules and paths become lists of objects. The code now looks like this:
resource "kubernetes_manifest" "service_ingress_object" {
manifest = {
"apiVersion" = "networking.k8s.io/v1beta1"
"kind" = "Ingress"
"metadata" = {
"name" = "${var.service_name}-ingress"
"namespace" = "${var.kubernetes_namespace}"
"annotations" = {
"alb.ingress.kubernetes.io/actions.ssl-redirect" = "{'Type': 'redirect', 'RedirectConfig': { 'Protocol': 'HTTPS', 'Port': '443', 'StatusCode': 'HTTP_301'}}"
"alb.ingress.kubernetes.io/listen-ports" = "[{'HTTP': 80}, {'HTTPS':443}]"
"alb.ingress.kubernetes.io/certificate-arn" = "${data.aws_acm_certificate.enivronment_default_issued.arn}"
"alb.ingress.kubernetes.io/scheme" = "internal"
"alb.ingress.kubernetes.io/target-type" = "instance"
"kubernetes.io/ingress.class" = "alb"
}
}
"spec" = {
"rules" = [{
"host" = "${aws_route53_record.service_dns.fqdn}"
"http" = {
"paths" = [{
"backend" = {
"serviceName" = "${var.service_name}-svc"
"servicePort" = "${var.service_port}"
}
"path" = "/*"
}]
}
}]
}
}
}
Alternatively, Kubernetes YAML manifests can easily be transformed to HCL format using this handy CLI tool:
https://github.com/jrhouston/tfk8s
Install the tool:
go install github.com/jrhouston/tfk8s@latest
and transform your Kubernetes YAML manifest:
tfk8s -f yaml_manifest.yaml
This will print the HCL to standard output.
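For example, redirecting the output into a file gives you configuration you can use directly (file names here are only illustrative):

tfk8s -f yaml_manifest.yaml > ingress_manifest.tf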

Issue creating an RDS PostgreSQL instance with an AWS Terraform module

I am attempting to run this module: https://registry.terraform.io/modules/azavea/postgresql-rds/aws/latest. Here is the main.tf file I created based on the information found there:
provider "aws" {
region = "us-east-2"
access_key = "key_here"
secret_key = "key_here"
}
module postgresql_rds {
source = "github.com/azavea/terraform-aws-postgresql-rds"
vpc_id = "vpc-2470994d"
instance_type = "db.t3.micro"
database_name = "tyler"
database_username = "admin"
database_password = "admin1234"
subnet_group = "tyler-subnet-1"
project = "Postgres-ts"
environment = "Staging"
alarm_actions = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
ok_actions = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
insufficient_data_actions = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
database_identifier = "jl23kj32sdf"
}
I am getting an error:
Error: Error creating DB Instance: DBSubnetGroupNotFoundFault: DBSubnetGroup 'tyler-subnet-1' not found.
│ status code: 404, request id: a95975dd-5456-444a-8f64-440fc4c1782f
│
│ with module.postgresql_rds.aws_db_instance.postgresql,
│ on .terraform/modules/postgresql_rds/main.tf line 46, in resource "aws_db_instance" "postgresql":
│ 46: resource "aws_db_instance" "postgresql" {
I have tried the example from the page:
subnet_group = aws_db_subnet_group.default.name
I used the default example from the page, under "Usage", i.e. subnet_group = aws_db_subnet_group.default.name. I have also used the subnet ID from AWS, and I also assigned a name to the subnet and used that name ("tyler-subnet-1" in the main.tf above). I get the same basic error with all three attempted inputs. Is there something I'm not understanding about the information that is being requested here?
Assuming you have a default subnet group, you can just use it:
subnet_group = "default"
If not, you have to create a custom subnet group using aws_db_subnet_group:
resource "aws_db_subnet_group" "default" {
name = "my-subnet-group"
subnet_ids = [<subnet-id-1>, <subnet-id-2>]
tags = {
Name = "My DB subnet group"
}
}
and use the custom group:
subnet_group = aws_db_subnet_group.default.name
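Putting it together, a sketch of the wiring (the subnet IDs are placeholders; an RDS subnet group needs subnets that belong to the VPC passed as vpc_id and that span at least two availability zones):

resource "aws_db_subnet_group" "default" {
  name       = "tyler-subnet-group"                   # any name you like
  subnet_ids = ["subnet-0aaa1111", "subnet-0bbb2222"] # placeholders: subnets inside vpc-2470994d
}

module "postgresql_rds" {
  source       = "github.com/azavea/terraform-aws-postgresql-rds"
  vpc_id       = "vpc-2470994d"
  subnet_group = aws_db_subnet_group.default.name
  # ...remaining arguments as in the question above...
}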