Cannot delete an instance of a module in Terraform which contains a provider - postgresql

I have a module which contains resources for:
azure postgres server
azure postgres database
postgres role (user)
postgres provider (for the server and used to create the role)
In one of my env directories I can have 0-N .tf files, each of which is an instance of that module and specifies the database name etc. So if I add another .tf file with a new name, a new database server with a database is provisioned. All of this works fine.
However, if I now delete an existing database module (one of the .tf files in my env directory), I run into issues. Terraform tries to refresh the state of all the previously existing resources, and since that specific provider (for that postgres server) is now gone, Terraform cannot read the state of the created postgres role and fails with: a provider configuration block is required for all operations.
I understand why this happens, but I cannot figure out how to solve it. I want to "dynamically" create (and remove) postgres servers with a database on them, but this requires "dynamic" providers, which is where I get stuck.
Example of how it looks:
resource "azurerm_postgresql_server" "postgresserver" {
name = "${var.db_name}-server"
location = "${var.location}"
resource_group_name = "${var.resource_group}"
sku = ["${var.vmSize}"]
storage_profile = ["${var.storage}"]
administrator_login = "psqladminun"
administrator_login_password = "${random_string.db-password.result}"
version = "${var.postgres_version}"
ssl_enforcement = "Disabled"
}
provider "postgresql" {
version = "0.1.0"
host = "${azurerm_postgresql_server.postgresserver.fqdn}"
port = 5432
database = "postgres"
username = "${azurerm_postgresql_server.postgresserver.administrator_login}#${azurerm_postgresql_server.postgresserver.name}".
password = "${azurerm_postgresql_server.postgresserver.administrator_login_password}"
}
resource "azurerm_postgresql_database" "db" {
name = "${var.db_name}"
resource_group_name = "${var.resource_group}"
server_name = "${azurerm_postgresql_server.postgresserver.name}"
charset = "UTF8"
collation = "English_United States.1252"
}
resource "postgresql_role" "role" {
name = "${random_string.user.result}"
login = true
connection_limit = 100
password = "${random_string.pass.result}"
create_role = true
create_database = true
depends_on = ["azurerm_postgresql_database.db"]
}
Above you can see how, in the module, we create a postgres server, a postgres db and a postgres role (where only the role uses the postgres provider). So if I now define an instance datadb.tf such as:
module "datadb" {
source = "../../modules/postgres"
db_name = "datadb"
resource_group = "${azurerm_resource_group.resource-group.name}"
location = "${azurerm_resource_group.resource-group.location}"
}
then it will be provisioned successfully. The issue is that if I later delete that same file (datadb.tf), planning fails, because Terraform tries to read the state of the postgres role without the postgres provider being present.
The postgres provider is only needed for the postgres role, which will be destroyed as soon as the azure provider destroys the postgres db and postgres server, so actually removing that role through the provider is not necessary. Is there a way to tell Terraform "if this resource should be removed, you don't have to do anything, because it will disappear together with the server it lives on"? Or does anyone see any other solution?
I hope my goal and issue are clear, thanks!

I think the only solution is a two-step one, but it's still clean enough.
What I would do is have two files per database (name them how you want).
db-1-infra.tf
db-1-pgsql.tf
Put everything except your postgres resources in db-1-infra.tf
resource "azurerm_postgresql_server" "postgresserver" {
name = "${var.db_name}-server"
location = "${var.location}"
resource_group_name = "${var.resource_group}"
sku = ["${var.vmSize}"]
storage_profile = ["${var.storage}"]
administrator_login = "psqladminun"
administrator_login_password = "${random_string.db-password.result}"
version = "${var.postgres_version}"
ssl_enforcement = "Disabled"
}
provider "postgresql" {
version = "0.1.0"
host = "${azurerm_postgresql_server.postgresserver.fqdn}"
port = 5432
database = "postgres"
username = "${azurerm_postgresql_server.postgresserver.administrator_login}#${azurerm_postgresql_server.postgresserver.name}".
password = "${azurerm_postgresql_server.postgresserver.administrator_login_password}"
}
resource "azurerm_postgresql_database" "db" {
name = "${var.db_name}"
resource_group_name = "${var.resource_group}"
server_name = "${azurerm_postgresql_server.postgresserver.name}"
charset = "UTF8"
collation = "English_United States.1252"
}
Put your PostgreSQL resources in db-1-pgsql.tf
resource "postgresql_role" "role" {
name = "${random_string.user.result}"
login = true
connection_limit = 100
password = "${random_string.pass.result}"
create_role = true
create_database = true
depends_on = ["azurerm_postgresql_database.db"]
}
When you want to get rid of your database, first delete the file db-1-pgsql.tf and apply. Next, delete db-1-infra.tf and apply again.
The first step will destroy all postgres resources and free you up to run the second step, which will remove the postgres provider for that database.

Related

Terraform postgresql provider fails to create the role and database after provisioning in AWS

I'm trying to provision Postgres in AWS and also create the database and roles sequentially using Terraform.
But I am getting the below exception and am not able to create the role/db.
terraform {
  required_providers {
    # postgresql = {
    #   source  = "cyrilgdn/postgresql"
    #   version = "1.15.0"
    # }
    postgresql = {
      source = "terraform-providers/postgresql"
      version = ">=1.7.2"
    }
    helm = {
      source = "hashicorp/helm"
      version = "2.4.1"
    }
    aws = {
      source = "hashicorp/aws"
      version = "4.0.0"
    }
  }
}
resource "aws_db_instance" "database" {
identifier = "dev-test"
allocated_storage = 100
storage_type = "gp2"
engine = "postgres"
engine_version = "13.4"
port = 5432
instance_class = "db.t3.micro"
username = "postgres"
performance_insights_enabled = true
password = "postgres$123"
db_subnet_group_name = "some_name"
vpc_security_group_ids = ["sg_name"]
parameter_group_name = "default.postgres13"
publicly_accessible = true
delete_automated_backups = false
storage_encrypted = true
tags = {
Name = "dev-test"
}
skip_final_snapshot = true
}
# To create the "raw" database
provider "postgresql" {
  version = ">=1.4.0"
  database = "raw"
  host = aws_db_instance.database.address
  port = aws_db_instance.database.port
  username = aws_db_instance.database.username
  password = aws_db_instance.database.password
  sslmode = "require"
  connect_timeout = 15
  superuser = false
  expected_version = aws_db_instance.database.engine_version
}
# Creation of the role
resource "postgresql_role" "application_role" {
  provider = postgresql
  name = "test"
  login = true
  password = "test$123"
  encrypted_password = true
  create_database = false
  depends_on = [aws_db_instance.database]
}
Error -
Error: dial tcp 18.221.183.66:5432: i/o timeout
│
│ with postgresql_role.application_role,
│ on main.tf line 79, in resource "postgresql_role" "application_role":
│ 79: resource "postgresql_role" "application_role" {
│
╵
I noticed a few people saying that including the expected_version attribute in the latest version should work.
However, even with the expected_version attribute included, the issue still persists.
I need to provision Postgres in AWS and create the db and roles.
What could be the issue with my script?
As per documentation [1], you are missing the scheme in the postgresql provider:
provider "postgresql" {
scheme = "awspostgres"
database = "raw"
host = aws_db_instance.database.address
port = aws_db_instance.database.port
username = aws_db_instance.database.username
password = aws_db_instance.database.password
sslmode = "require"
connect_timeout = 15
superuser = false
expected_version = aws_db_instance.database.engine_version
}
Additionally, I am not sure if you can use database = "raw" or if it has to be database = "postgres", which is the default value, so it does not have to be specified.
One other note: I do not think you need to specify the provider in every resource. You just define it once in the required_providers block (like you did for the aws provider) and then anything related to that provider will use it by default. In other words, you should remove version = ">=1.4.0" from the provider "postgresql" block and provider = postgresql from the resource "postgresql_role" "application_role", and the code should still work.
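Putting those suggestions together, a rough sketch of the adjusted configuration could look like the one below. The provider source and version pin are assumptions (use whichever registry source you actually rely on), and the helm provider is omitted for brevity:
terraform {
  required_providers {
    postgresql = {
      source = "cyrilgdn/postgresql" # assumed source, per the documentation linked below
      version = ">= 1.15.0"          # assumed version pin
    }
    aws = {
      source = "hashicorp/aws"
      version = "4.0.0"
    }
  }
}
provider "postgresql" {
  scheme = "awspostgres"
  database = "postgres" # default maintenance database; "raw" may not exist yet
  host = aws_db_instance.database.address
  port = aws_db_instance.database.port
  username = aws_db_instance.database.username
  password = aws_db_instance.database.password
  sslmode = "require"
  connect_timeout = 15
  superuser = false
  expected_version = aws_db_instance.database.engine_version
}
resource "postgresql_role" "application_role" {
  # no explicit provider argument needed: the single postgresql provider is used by default
  name = "test"
  login = true
  password = "test$123"
  encrypted_password = true
  create_database = false
  depends_on = [aws_db_instance.database]
}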
[1] https://registry.terraform.io/providers/cyrilgdn/postgresql/latest/docs#aws

How can I add a custom variable to Sqitch, to be used in target postgres

I would like to add a few variables, "username" and "database", in my sqitch.conf for a defined target.
File sqitch.conf =>
engine = pg
[core "variables"]
  username = jv_root
  database = test
[target "dev_1"]
  uri = db:pg://username@sqlhost:5432/database
[target "dev_2"]
  uri = db:pg://username@sqlhost2:5432/database
When I run:
sqitch deploy -t dev_1
it throws an error =>
ERROR: no such user: username
You can add environment-specific variables like this:
[target.dev_1.variables]
  username = jv_root
  password = test
How you reference them in your SQL files depends on the SQL dialect.

Create schema for Google Cloud SQL PostgreSQL database using Terraform

I'm new to Terraform, and I want to create a schema for the postgres database created on a PostgreSQL 9.6 instance on Google Cloud SQL.
To create the PostgreSQL instance I have this in main.tf:
resource "google_sql_database_instance" "my-database" {
name = "my-${var.deployment_name}"
database_version = "POSTGRES_9_6"
region = "${var.deployment_region}"
settings {
tier = "db-f1-micro"
ip_configuration {
ipv4_enabled = true
}
}
}
Then I was trying to create a PostgreSQL provider like this:
provider "postgresql" {
host = "${google_sql_database_instance.my-database.ip_address}"
username = "postgres"
}
Finally creating the schema:
resource "postgresql_schema" "my_schema" {
name = "my_schema"
owner = "postgres"
}
However, this configuration does not work. When I run terraform plan:
Inappropriate value for attribute "host": string required.
If I remove the Postgres object:
Error: Error initializing PostgreSQL client: error detecting capabilities: error PostgreSQL version: dial tcp :5432: connect: connection refused
Additionally, I would like to add a password for the user postgres which is created by default when the PostgreSQL instance is created.
EDITED:
versions used
Terraform v0.12.10
+ provider.google v2.17.0
+ provider.postgresql v1.2.0
Any suggestions?
There are a few issues with the Terraform setup that you have above.
Your instance does not have any authorized networks defined. You should change your instance resource to look like this: (Note: I used 0.0.0.0/0 just for testing purposes)
resource "google_sql_database_instance" "my-database" {
name = "my-${var.deployment_name}"
database_version = "POSTGRES_9_6"
region = "${var.deployment_region}"
settings {
tier = "db-f1-micro"
ip_configuration {
ipv4_enabled = true
authorized_networks {
name = "all"
value = "0.0.0.0/0"
}
}
}
depends_on = [
"google_project_services.vpc"
]
}
As mentioned here, you need to create a user with a strong password
resource "google_sql_user" "user" {
name = "test_user"
instance = "${google_sql_database_instance.my-database.name}"
password = "VeryStrongPassword"
depends_on = [
"google_sql_database_instance.my-database"
]
}
You should use the "public_ip_address" or "ip_address.0.ip_address" attribute of your instance to access the ip address. Also, you should update your provider and schema resource to reflect the user created above.
provider "postgresql" {
host = "${google_sql_database_instance.my-database.public_ip_address}"
username = "${google_sql_user.user.name}"
password = "${google_sql_user.user.password}"
}
resource "postgresql_schema" "my_schema" {
name = "my_schema"
owner = "test_user"
}
Your postgres provider depends on the google_sql_database_instance resource being created before the provider can be set up:
All the providers are initialized at the beginning of plan/apply so if one has an invalid config (in this case an empty host) then Terraform will fail.
There is no way to define the dependency between a provider and a resource within another provider.
There is, however, a workaround by using the -target parameter:
terraform apply -target=google_sql_user.user
This will create the database user (as well as all its dependencies - in this case the database instance) and once that completes follow it with:
terraform apply
This should then succeed as the instance has already been created and the ip_address is available to be used by the postgres provider.
Final Note: Usage of public ip addresses without SSL to connect to Cloud SQL instances is not recommended for production instances.
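To illustrate that note, a sketch of the same provider block with SSL enforced could look like the following (a sketch only; depending on the instance configuration, Cloud SQL may additionally require client certificates):
provider "postgresql" {
  host = "${google_sql_database_instance.my-database.public_ip_address}"
  username = "${google_sql_user.user.name}"
  password = "${google_sql_user.user.password}"
  sslmode = "require" # refuse unencrypted connections; "verify-ca" / "verify-full" also validate the server certificate
}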
This was my solution; this way I just need to run terraform apply:
// POSTGRESQL INSTANCE
resource "google_sql_database_instance" "my-database" {
  database_version = "POSTGRES_9_6"
  region = var.deployment_region
  settings {
    tier = var.db_machine_type
    ip_configuration {
      ipv4_enabled = true
      authorized_networks {
        name = "my_ip"
        value = var.db_allowed_networks.my_network_ip
      }
    }
  }
}
// DATABASE USER
resource "google_sql_user" "user" {
  name = var.db_credentials.db_user
  instance = google_sql_database_instance.my-database.name
  password = var.db_credentials.db_password
  depends_on = [
    "google_sql_database_instance.my-database"
  ]
  provisioner "local-exec" {
    command = "psql postgresql://${google_sql_user.user.name}:${google_sql_user.user.password}@${google_sql_database_instance.my-database.public_ip_address}/postgres -c \"CREATE SCHEMA myschema;\""
  }
}

AWS Terraform postgresql provider: SSL is not enabled on the server

I am trying to create a database in the provisioned Postgres RDS instance in AWS with the postgresql provider. The Terraform script I have created is as follows:
resource "aws_db_instance" "test_rds" {
allocated_storage = "" # gigabytes
backup_retention_period = 7 # in days
engine = ""
engine_version = ""
identifier = ""
instance_class = ""
multi_az = ""
name = ""
username = ""
password = ""
port = ""
publicly_accessible = "false"
storage_encrypted = "false"
storage_type = ""
vpc_security_group_ids = ["${aws_security_group.test_sg.id}"]
db_subnet_group_name = "${aws_db_subnet_group.rds_subnet_group.name}"
}
The postgresql provider is as follows:
# Create databases in rds
provider "postgresql" {
  alias = "alias"
  host = "${aws_db_instance.test_rds.address}"
  port = 5432
  username =
  password =
  database =
  sslmode = "disable"
}
# Create user in rds
resource "postgresql_role" "test_role" {
  name =
  replication = true
  login = true
  password =
}
# Create database rds
resource "postgresql_database" "test_db" {
  name = "testdb"
  owner = "${postgresql_role.test_role.name}"
  lc_collate = "C"
  allow_connections = true
  provider = "postgresql.alias"
}
Anyway, I keep getting:
Error: Error initializing PostgreSQL client: error detecting capabilities: error PostgreSQL version: pq: SSL is not enabled on the server
Note: the empty fields are already filled in and the RDS instance is created successfully; the problem arises when trying to create the database in RDS with the postgresql provider.
We ran into this issue as well, and the problem was that the password was not defined. It seems that you get the "SSL is not enabled" error whenever the provider has trouble connecting. We also had the same problem when the db host name was missing. You will need to make sure you define all of the fields needed to connect in Terraform (probably database and username too).
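For example, a fully specified provider block might look like the sketch below; every value other than the host reference is a placeholder rather than the original poster's configuration:
provider "postgresql" {
  host = "${aws_db_instance.test_rds.address}" # must not be empty
  port = 5432
  database = "postgres"        # placeholder maintenance database
  username = "exampleadmin"    # placeholder; must match the RDS master username
  password = "examplepassword" # placeholder; a missing password can surface as the "SSL is not enabled" error
  sslmode = "disable"
}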
Ensuring there was a password set for the postgres user and disabling sslmode did it for me:
sslmode = "disable"

How to destroy Postgres databases owned by other users in RDS using Terraform?

I managed to get Terraform nicely creating databases and roles in an RDS Postgres database, but due to the stripped-down permissions of rds_superuser I can't see an easy way to destroy the created databases that are owned by another user.
Using the following config:
resource "postgresql_role" "app" {
name = "app"
login = true
password = "foo"
skip_reassign_owned = true
}
resource "postgresql_database" "database" {
name = "app_database"
owner = "${postgresql_role.app.name}"
}
(For reference, skip_reassign_owned is required because the rds_superuser group doesn't get the necessary permissions to reassign ownership.)
leads to this error:
Error applying plan:
1 error(s) occurred:
* postgresql_database.database (destroy): 1 error(s) occurred:
* postgresql_database.database: Error dropping database: pq: must be owner of database debug_db1
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Using local-exec provisioners I was able to grant the role that owned the database to the admin user and the application user:
resource "aws_db_instance" "database" {
...
}
provider "postgresql" {
host = "${aws_db_instance.database.address}"
port = 5432
username = "myadminuser"
password = "adminpassword"
sslmode = "require"
connect_timeout = 15
}
resource "postgresql_role" "app" {
name = "app"
login = true
password = "apppassword"
skip_reassign_owned = true
}
resource "postgresql_role" "group" {
name = "${postgresql_role.app.name}_group"
skip_reassign_owned = true
provisioner "local-exec" {
command = "PGPASSWORD=adminpassword psql -h ${aws_db_instance.database.address} -U myadminuser postgres -c 'GRANT ${self.name} TO myadminuser, ${postgresql_role.app.name};'"
}
}
resource "postgresql_database" "database" {
name = "mydatabase"
owner = "${postgresql_role.group.name}"
}
which seems to work compared to setting ownership only for the app user. I do wonder if there's a better way I can do this without having to shell out in a local-exec though?
After raising this question I managed to raise a pull request with the fix, which was released in version 0.1.1 of the PostgreSQL provider, so this now works fine in the latest release of the provider.