Issue creating an RDS PostgreSQL instance with the AWS Terraform module - postgresql

I am attempting to run this module: https://registry.terraform.io/modules/azavea/postgresql-rds/aws/latest Here is the main.tf file I created based on the information found there:
provider "aws" {
region = "us-east-2"
access_key = "key_here"
secret_key = "key_here"
}
module "postgresql_rds" {
source = "github.com/azavea/terraform-aws-postgresql-rds"
vpc_id = "vpc-2470994d"
instance_type = "db.t3.micro"
database_name = "tyler"
database_username = "admin"
database_password = "admin1234"
subnet_group = "tyler-subnet-1"
project = "Postgres-ts"
environment = "Staging"
alarm_actions = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
ok_actions = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
insufficient_data_actions = ["arn:aws:sns:us-east-2:304831784377:tyler-test"]
database_identifier = "jl23kj32sdf"
}
I am getting an error:
Error: Error creating DB Instance: DBSubnetGroupNotFoundFault: DBSubnetGroup 'tyler-subnet-1' not found.
│ status code: 404, request id: a95975dd-5456-444a-8f64-440fc4c1782f
│
│ with module.postgresql_rds.aws_db_instance.postgresql,
│ on .terraform/modules/postgresql_rds/main.tf line 46, in resource "aws_db_instance" "postgresql":
│ 46: resource "aws_db_instance" "postgresql" {
I have tried the example from the page, under "Usage", i.e.:
subnet_group = aws_db_subnet_group.default.name
I have also used the subnet ID from AWS, and I have assigned a name to the subnet and used that name ("tyler-subnet-1" in the main.tf above). I get the same basic error with all three attempted inputs. Is there something I'm not understanding about the information that is being requested here?

Assuming you have a default subnet group, you can just use it:
subnet_group = "default"
If not, you have to create a custom subnet group using aws_db_subnet_group:
resource "aws_db_subnet_group" "default" {
name = "my-subnet-group"
subnet_ids = ["<subnet-id-1>", "<subnet-id-2>"]
tags = {
Name = "My DB subnet group"
}
}
and use the custom group:
subnet_group = aws_db_subnet_group.default.name
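The DBSubnetGroupNotFoundFault appears because "tyler-subnet-1" names a subnet (or a Name tag on a subnet), not a DB subnet group; RDS expects the name of an existing aws_db_subnet_group. A minimal sketch of wiring the two together (the subnet IDs are placeholders and must belong to the VPC passed to the module):
resource "aws_db_subnet_group" "default" {
  name       = "postgres-ts-subnet-group"
  subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222"]

  tags = {
    Name = "Postgres-ts DB subnet group"
  }
}

module "postgresql_rds" {
  source       = "github.com/azavea/terraform-aws-postgresql-rds"
  vpc_id       = "vpc-2470994d"
  subnet_group = aws_db_subnet_group.default.name
  # ... the remaining module inputs from the question stay unchanged
}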

Related

Terraform: can't create `kubernetes_secret` in `google_container_cluster` because "secrets is forbidden"

I have built a Kubernetes cluster on Google Cloud and I am now trying to use the kubernetes_secret resource to create a secret. Here is my configuration:
resource "google_service_account" "default" {
account_id = "gke-service-account"
display_name = "GKE Service Account"
}
resource "google_container_cluster" "cluster" {
name = "${var.cluster-name}-${terraform.workspace}"
location = var.region
initial_node_count = 1
project = var.project-id
remove_default_node_pool = true
}
resource "google_container_node_pool" "cluster_node_pool" {
name = "${var.cluster-name}-${terraform.workspace}-node-pool"
location = var.region
cluster = google_container_cluster.cluster.name
node_count = 1
node_config {
preemptible = true
machine_type = "e2-medium"
service_account = google_service_account.default.email
oauth_scopes = [
"https://www.googleapis.com/auth/cloud-platform"
]
}
}
provider "kubernetes" {
host = "https://${google_container_cluster.cluster.endpoint}"
client_certificate = base64decode(google_container_cluster.cluster.master_auth.0.client_certificate)
client_key = base64decode(google_container_cluster.cluster.master_auth.0.client_key)
cluster_ca_certificate = base64decode(google_container_cluster.cluster.master_auth.0.cluster_ca_certificate)
}
resource "kubernetes_secret" "cloudsql-credentials" {
metadata {
name = "database-credentials" # The name of the secret
}
data = {
connection-name = var.database-connection-name
username = var.database-user
password = var.database-password
}
type = "kubernetes.io/basic-auth"
However I get the following error when creating the kubernetes_secret resource:
╷
│ Error: secrets is forbidden: User "system:anonymous" cannot create resource "secrets" in API group "" in the namespace "default"
│
│ with module.kubernetes-cluster.kubernetes_secret.cloudsql-credentials,
│ on gke/main.tf line 58, in resource "kubernetes_secret" "cloudsql-credentials":
│ 58: resource "kubernetes_secret" "cloudsql-credentials" {
│
╵
What am I missing here? I really don't understand. In the documentation I found the following note that could maybe help:
Depending on whether you have a current context set this may require `config_context_auth_info` and/or `config_context_cluster` and/or `config_context`
But it is not at all clear how these should be set, and there are no examples provided. Any help will be appreciated. Thank you.
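For reference, those arguments point the provider at a context in an existing kubeconfig file instead of inline credentials. A minimal sketch, assuming a kubeconfig at the default path (the context name is hypothetical and would come from kubectl config get-contexts):
provider "kubernetes" {
  config_path    = "~/.kube/config"
  # Hypothetical context name; config_context_cluster and
  # config_context_auth_info can be set the same way if needed.
  config_context = "gke_my-project_europe-west1_my-cluster"
}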

Creating Issue Labels with Terraform using the Github Provider

I'm trying to automate my repository setup with Terraform. The first thing is creating issue labels for a bunch of repos using the Terraform GitHub provider.
It works when I explicitly state the repo and the labels:
terraform {
required_providers {
github = {
source = "integrations/github"
version = "~> 5.0"
}
}
}
# Use env-var
provider "github" {}
data "github_repository" "trashbox" {
full_name = "sebastian-sommerfeld-io/trashbox"
}
resource "github_issue_label" "bug" {
repository = data.github_repository.trashbox.id
name = "bug"
description = "Something is not working"
color = "B60205"
}
resource "github_issue_label" "security" {
repository = data.github_repository.trashbox.id
name = "security"
description = "CVEs, code scan violations, etc."
color = "cd3ad7"
}
But this would mean I would have to duplicate everything for every additional repo, or at least update my Terraform config manually whenever I add another repo. I'd prefer to have all relevant repos auto-detected.
Auto-detecting works with this snippet, which returns all the repos I want to configure:
data "github_repositories" "repos" {
query = "user:sebastian-sommerfeld-io archived:false"
include_repo_id = true
}
But now I cannot create the labels. When I run terraform apply I always get this error:
Error: POST https://api.github.com/repos/sebastian-sommerfeld-io/sebastian-sommerfeld-io/website-sommerfeld-io/labels: 404 Not Found []
with github_issue_label.bug["sebastian-sommerfeld-io/website-sommerfeld-io"],
on issues.tf line 1, in resource "github_issue_label" "bug":
1: resource "github_issue_label" "bug" {
The odd thing is that terraform plan does not hint at any error:
# github_issue_label.bug["sebastian-sommerfeld-io/website-sommerfeld-io"] will be created
+ resource "github_issue_label" "bug" {
+ color = "B60205"
+ description = "Something is not working"
+ etag = (known after apply)
+ id = (known after apply)
+ name = "bug"
+ repository = "sebastian-sommerfeld-io/website-sommerfeld-io"
+ url = (known after apply)
}
The complete Terraform config that produces the terraform plan and terraform apply output is this:
terraform {
required_providers {
github = {
source = "integrations/github"
version = "~> 5.0"
}
}
}
# Use env-var
provider "github" {}
data "github_repositories" "repos" {
query = "user:sebastian-sommerfeld-io archived:false"
include_repo_id = true
}
resource "github_issue_label" "bug" {
for_each = toset(data.github_repositories.repos.full_names)
repository = each.value
name = "bug"
description = "Something is not working"
color = "B60205"
}
The repositories are queried correctly. I confirmed this via:
output "affected_repos" {
value = data.github_repositories.repos.full_names
description = "Github Repos"
}
This lists all repos correctly:
affected_repos = tolist([
"sebastian-sommerfeld-io/website-sommerfeld-io",
"sebastian-sommerfeld-io/jarvis",
"sebastian-sommerfeld-io/github-action-generate-docs",
"sebastian-sommerfeld-io/configs",
"sebastian-sommerfeld-io/website-tafelboy-de",
"sebastian-sommerfeld-io/website-numero-uno-de",
"sebastian-sommerfeld-io/website-masterblender-de",
"sebastian-sommerfeld-io/monitoring",
"sebastian-sommerfeld-io/github-action-update-antora-yml",
"sebastian-sommerfeld-io/github-action-generate-readme",
"sebastian-sommerfeld-io/docker-image-tf-graph-beautifier",
"sebastian-sommerfeld-io/docker-image-jq",
"sebastian-sommerfeld-io/docker-image-git",
"sebastian-sommerfeld-io/docker-image-ftp-client",
"sebastian-sommerfeld-io/docker-image-folderslint",
"sebastian-sommerfeld-io/docker-image-adoc-antora",
"sebastian-sommerfeld-io/trashbox",
"sebastian-sommerfeld-io/provinzial",
])
I guess I don't get the for_each stuff right. Can anyone help me? I want to query all my repos that fit my criteria and add labels to them.
UPDATE: I just noticed that with my static approach I pass the id, not the full_name. I updated my code to this (snippet from above):
resource "github_issue_label" "bug" {
for_each = data.github_repositories.repos.repo_ids
repository = each.value
name = "bug"
description = "Something is not working"
color = "B60205"
}
Now at least the error message is different:
│ Error: Invalid for_each argument
│
│ on issues.tf line 2, in resource "github_issue_label" "bug":
│ 2: for_each = data.github_repositories.repos.repo_ids
│ ├────────────────
│ │ data.github_repositories.repos.repo_ids is list of number with 18 elements
│
│ The given "for_each" argument value is unsuitable: the "for_each" argument
│ must be a map, or set of strings, and you have provided a value of type
│ list of number.
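Two things seem to be at play here. The 404 in the first attempt suggests the provider already prepends the owner to the repository argument (note the duplicated owner in the POST URL), so the bare repository name is likely expected rather than the full_name. And, as the second error says, for_each needs a map or a set of strings, not a list of numbers. A sketch combining both fixes, assuming the data source's names attribute (bare repo names) fits the use case:
data "github_repositories" "repos" {
  query           = "user:sebastian-sommerfeld-io archived:false"
  include_repo_id = true
}

resource "github_issue_label" "bug" {
  # toset() converts the list of names into the set of strings for_each requires.
  for_each    = toset(data.github_repositories.repos.names)
  repository  = each.value
  name        = "bug"
  description = "Something is not working"
  color       = "B60205"
}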

Terraform: provision resources on multiple clusters

I'm provisioning multiple k8s clusters using terraform.
On each cluster, I want to create namespaces.
My first attempt didn't work: the resources stayed in the "still creating" state forever.
Then I tried to create multiple kubernetes providers.
I'm now facing a problem because they don't have the "count" field.
Indeed, I'm using conditional rules to decide whether or not to create the clusters.
See the logic below:
cluster_europe.tf
resource "azurerm_kubernetes_cluster" "k8sProjectNE" {
count = "${var.DeployK8sProjectEU == "true" ? 1 : 0}"
name = var.clustername_ne
location = local.rg.location
resource_group_name = local.rg.name
dns_prefix = var.clustername_ne
# ... (rest of the cluster configuration omitted)
}
clusterusa.tf
resource "azurerm_kubernetes_cluster" "k8sProjectUSA" {
count = "${var.DeployK8sProjectUSA == "true" ? 1 : 0}"
name = var.clustername_usa
location = local.rg.location
resource_group_name = local.rg.name
dns_prefix = var.clustername_usa
# ... (rest of the cluster configuration omitted)
}
The problem happened when creating namespaces.
resource "kubernetes_namespace" "NE-staging" {
count = "${var.DeployK8sProjectEU == "true" ? 1 : 0}"
metadata {
labels = {
mylabel = "staging"
}
name = "staging"
}
depends_on = [azurerm_kubernetes_cluster.k8sProjectNE]
provider = kubernetes.k8sProjectEU
}
resource "kubernetes_namespace" "USA-staging" {
count = "${var.DeployK8sProjectUSA == "true" ? 1 : 0}"
metadata {
labels = {
mylabel = "staging"
}
name = "staging"
}
depends_on = [azurerm_kubernetes_cluster.k8sProjectUSA]
provider = kubernetes.k8sProjectUSA
}
I define 2 kubernetes providers in main.tf and set provider = kubernetes.xxx in the resources that I want to create.
main.tf
provider "kubernetes" {
#count = "${var.DeployK8sProjectUSA == "true" ? 1 : 0}"
host = azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.host
username = azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.username
password = azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.password
client_certificate = "${base64decode(azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.client_certificate)}"
client_key = "${base64decode(azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.client_key)}"
cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.cluster_ca_certificate)}"
alias = "k8sProjectUSA"
}
provider "kubernetes" {
#count = "${var.DeployK8sProjectEU == "true" ? 1 : 0}"
host = azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.host
username = azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.username
password = azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.password
client_certificate = "${base64decode(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.client_certificate)}"
client_key = "${base64decode(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.client_key)}"
cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.cluster_ca_certificate)}"
alias = "k8sProjectEU"
}
Problem:
The kubernetes provider does not support the count field, which breaks my conditional creation rule.
Error: Invalid index
│
│ on main.tf line 32, in provider "kubernetes":
│ 32: host = azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.host
│ ├────────────────
│ │ azurerm_kubernetes_cluster.k8sProjectNE is empty tuple
│
│ The given key does not identify an element in this collection value: the collection has no elements.
╵
I'm close to a solution, but it is failing at this final step.
The above was an attempt at a solution, but the real need is the following:
I want to conditionally create clusters (infra in the USA and/or infra in Europe and/or DRP infra). This is why we need conditional rules.
Then, on each created cluster, we have to be able to create resources (a namespace is one example, but we also have secrets, etc.).
If I don't define multiple providers, Terraform cannot connect to the right clusters and generates errors.
I implemented a solution like this:
provider "kubernetes" {
host = try(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.host,"")
username = try(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.username,"")
password = try(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.password,"")
client_certificate = try("${base64decode(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.client_certificate)}","")
client_key = try("${base64decode(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.client_key)}","")
cluster_ca_certificate = try("${base64decode(azurerm_kubernetes_cluster.k8sProjectNE[0].kube_admin_config.0.cluster_ca_certificate)}","")
alias = "k8sProjectEU"
}
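The try() fallback means the [0] index no longer aborts evaluation when the cluster's count is 0; the provider configuration simply ends up empty, and since the namespace resources are themselves guarded by count, the empty configuration is never actually used. Presumably the USA provider gets the same treatment:
provider "kubernetes" {
  host                   = try(azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.host, "")
  username               = try(azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.username, "")
  password               = try(azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.password, "")
  client_certificate     = try(base64decode(azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.client_certificate), "")
  client_key             = try(base64decode(azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.client_key), "")
  cluster_ca_certificate = try(base64decode(azurerm_kubernetes_cluster.k8sProjectUSA[0].kube_admin_config.0.cluster_ca_certificate), "")
  alias                  = "k8sProjectUSA"
}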

Error when adding tags to Snowflake resource (role)

I am using:
Terraform v1.2.9
on windows_amd64
provider registry.terraform.io/snowflake-labs/snowflake v0.42.1
My main.tf file is:
terraform {
required_version = ">= 1.1.7"
backend "http" {
}
required_providers {
snowflake = {
source = "Snowflake-Labs/snowflake"
version = "~> 0.42"
}
}
}
provider "snowflake" {
username = "xxxxxx"
account = "yyyyyy"
region = "canada-central.azure"
}
When I add the following tags to a Snowflake role, I get an error. Can you help?
resource "snowflake_role" "operations_manager" {
name = "OPERATIONS_MANAGER"
comment = "A comment"
tag = {
managed-with = "Terraform",
owner = "Joe Smith"
}
}
Error: Unsupported argument
│
│ on functional_roles.tf line 35, in resource "snowflake_role" "operations_manager":
│ 35: tag = {
│
│ An argument named "tag" is not expected here. Did you mean to define a block of type "tag"?
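The error message itself hints at the fix: in this provider, tag appears to be a repeatable nested block rather than a map argument. A hedged sketch of what that could look like (the name/value attribute names are my assumption of the block schema; check the snowflake_role docs for your provider version):
resource "snowflake_role" "operations_manager" {
  name    = "OPERATIONS_MANAGER"
  comment = "A comment"

  # One tag block per tag; attribute names assumed, not verified.
  tag {
    name  = "managed-with"
    value = "Terraform"
  }

  tag {
    name  = "owner"
    value = "Joe Smith"
  }
}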

Terraform postgresql provider fails to create the role and database after provisioning in AWS

I'm trying to provision Postgres in AWS and then create the database and roles sequentially using Terraform.
But I'm getting the exception below and am unable to create the role/DB.
terraform {
required_providers {
# postgresql = {
# source = "cyrilgdn/postgresql"
# version = "1.15.0"
# }
postgresql = {
source = "terraform-providers/postgresql"
version = ">=1.7.2"
}
helm = {
source = "hashicorp/helm"
version = "2.4.1"
}
aws = {
source = "hashicorp/aws"
version = "4.0.0"
}
}
}
resource "aws_db_instance" "database" {
identifier = "dev-test"
allocated_storage = 100
storage_type = "gp2"
engine = "postgres"
engine_version = "13.4"
port = 5432
instance_class = "db.t3.micro"
username = "postgres"
performance_insights_enabled = true
password = "postgres$123"
db_subnet_group_name = "some_name"
vpc_security_group_ids = ["sg_name"]
parameter_group_name = "default.postgres13"
publicly_accessible = true
delete_automated_backups = false
storage_encrypted = true
tags = {
Name = "dev-test"
}
skip_final_snapshot = true
}
#To create the "raw" database
provider "postgresql" {
version = ">=1.4.0"
database = "raw"
host = aws_db_instance.database.address
port = aws_db_instance.database.port
username = aws_db_instance.database.username
password = aws_db_instance.database.password
sslmode = "require"
connect_timeout = 15
superuser = false
expected_version = aws_db_instance.database.engine_version
}
#creation of the role
resource "postgresql_role" "application_role" {
provider = postgresql
name = "test"
login = true
password = "test$123"
encrypted_password = true
create_database = false
depends_on = [aws_db_instance.database]
}
Error -
Error: dial tcp 18.221.183.66:5432: i/o timeout
│
│ with postgresql_role.application_role,
│ on main.tf line 79, in resource "postgresql_role" "application_role":
│ 79: resource "postgresql_role" "application_role" {
│
╵
I noticed a few people saying that including the expected_version attribute with the latest version should work.
However, even with the expected_version attribute included, the issue persists.
I need to provision Postgres in AWS and create the DB and roles.
What could be the issue with my script?
As per the documentation [1], you are missing the scheme argument in the postgresql provider:
provider "postgresql" {
scheme = "awspostgres"
database = "raw"
host = aws_db_instance.database.address
port = aws_db_instance.database.port
username = aws_db_instance.database.username
password = aws_db_instance.database.password
sslmode = "require"
connect_timeout = 15
superuser = false
expected_version = aws_db_instance.database.engine_version
}
Additionally, I am not sure whether you can use database = "raw" or whether it has to be database = "postgres", which is the default value and therefore does not have to be specified.
One other note: I do not think you need to specify the provider meta-argument in every resource. You just define the provider once in the required_providers block (like you did for the aws provider), and anything related to that provider will then use it by default. In other words, you should remove version = ">=1.4.0" from the provider "postgresql" block and provider = postgresql from the resource "postgresql_role" "application_role", and the code should still work.
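For completeness, the role resource from the question with the provider meta-argument dropped would then be simply:
resource "postgresql_role" "application_role" {
  name               = "test"
  login              = true
  password           = "test$123"
  encrypted_password = true
  create_database    = false
  depends_on         = [aws_db_instance.database]
}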
[1] https://registry.terraform.io/providers/cyrilgdn/postgresql/latest/docs#aws