I am trying to create a database in a PostgreSQL RDS instance in AWS using the postgresql provider. The Terraform script I have created is as follows:
resource "aws_db_instance" "test_rds" {
allocated_storage = "" # gigabytes
backup_retention_period = 7 # in days
engine = ""
engine_version = ""
identifier = ""
instance_class = ""
multi_az = ""
name = ""
username = ""
password = ""
port = ""
publicly_accessible = "false"
storage_encrypted = "false"
storage_type = ""
vpc_security_group_ids = ["${aws_security_group.test_sg.id}"]
db_subnet_group_name = "${aws_db_subnet_group.rds_subnet_group.name}"
}
The postgresql provider is as following:
# Create databases in rds
provider "postgresql" {
alias = "alias"
host = "${aws_db_instance.test_rds.address}"
port = 5432
username =
password =
database =
sslmode = "disable"
}
# Create user in rds
resource "postgresql_role" "test_role" {
name =
replication = true
login = true
password =
}
# Create database rds
resource "postgresql_database" "test_db" {
name = "testdb"
owner = "${postgresql_role.test_role.name}"
lc_collate = "C"
allow_connections = true
provider = "postgresql.alias"
}
Anyway, I keep getting:
Error: Error initializing PostgreSQL client: error detecting capabilities: error PostgreSQL version: pq: SSL is not enabled on the server
Note: the empty fields are already filled in and the RDS instance is created successfully; the problem arises when trying to create the database in the RDS instance with the postgresql provider.
We ran into this issue as well, and the problem was that the password was not defined. It seems that you get the "SSL is not enabled" error whenever the provider has problems connecting. We also had the same problem when the DB host name was missing. You will need to make sure you define all of the fields needed to connect in Terraform (probably database and username too).
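For reference, a minimal provider block with every connection field populated might look like this (pulling the credentials from the RDS resource itself; the database here is just the default postgres maintenance database):
provider "postgresql" {
alias = "alias"
host = "${aws_db_instance.test_rds.address}"
port = 5432
username = "${aws_db_instance.test_rds.username}"
password = "${aws_db_instance.test_rds.password}"
database = "postgres" # default maintenance database
sslmode = "disable"
}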
Ensuring there was a password set for the postgres user and disabling sslmode did it for me:
sslmode = "disable"
Related
I'm trying to provision PostgreSQL in AWS and also create the database and roles sequentially using Terraform.
But I'm getting the exception below and am not able to create the role/db.
terraform {
required_providers {
# postgresql = {
# source = "cyrilgdn/postgresql"
# version = "1.15.0"
# }
postgresql = {
source = "terraform-providers/postgresql"
version = ">=1.7.2"
}
helm = {
source = "hashicorp/helm"
version = "2.4.1"
}
aws = {
source = "hashicorp/aws"
version = "4.0.0"
}
}
}
resource "aws_db_instance" "database" {
identifier = "dev-test"
allocated_storage = 100
storage_type = "gp2"
engine = "postgres"
engine_version = "13.4"
port = 5432
instance_class = "db.t3.micro"
username = "postgres"
performance_insights_enabled = true
password = "postgres$123"
db_subnet_group_name = "some_name"
vpc_security_group_ids = ["sg_name"]
parameter_group_name = "default.postgres13"
publicly_accessible = true
delete_automated_backups = false
storage_encrypted = true
tags = {
Name = "dev-test"
}
skip_final_snapshot = true
}
#To create the "raw" database
provider "postgresql" {
version = ">=1.4.0"
database = "raw"
host = aws_db_instance.database.address
port = aws_db_instance.database.port
username = aws_db_instance.database.username
password = aws_db_instance.database.password
sslmode = "require"
connect_timeout = 15
superuser = false
expected_version = aws_db_instance.database.engine_version
}
#creation of the role
resource "postgresql_role" "application_role" {
provider = postgresql
name = "test"
login = true
password = "test$123"
encrypted_password = true
create_database = false
depends_on = [aws_db_instance.database]
}
Error -
Error: dial tcp 18.221.183.66:5432: i/o timeout
│
│ with postgresql_role.application_role,
│ on main.tf line 79, in resource "postgresql_role" "application_role":
│ 79: resource "postgresql_role" "application_role" {
│
╵
I noticed a few people saying that including the expected_version attribute with the latest version should work.
However, even after including the expected_version attribute, the issue persists.
I need to provision PostgreSQL in AWS and create the DB and roles.
What could be the issue with my script?
As per documentation [1], you are missing the scheme in the postgresql provider:
provider "postgresql" {
scheme = "awspostgres"
database = "raw"
host = aws_db_instance.database.address
port = aws_db_instance.database.port
username = aws_db_instance.database.username
password = aws_db_instance.database.password
sslmode = "require"
connect_timeout = 15
superuser = false
expected_version = aws_db_instance.database.engine_version
}
Additionally, I am not sure whether you can use database = "raw" or whether it has to be database = "postgres", which is the default value and therefore does not have to be specified.
One other note: I do not think you need to specify the provider block in every resource. You just define it once in the required_providers block (like you did for the aws provider) and then anything related to that provider will use the provider defined there. In other words, you should remove version = ">=1.4.0" from the provider "postgresql" block and provider = postgresql from the resource "postgresql_role" "application_role", and the code should still work.
[1] https://registry.terraform.io/providers/cyrilgdn/postgresql/latest/docs#aws
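For completeness, a sketch of what that looks like with the cyrilgdn/postgresql source from [1]: the version constraint lives only in required_providers, and the provider block keeps just the connection settings.
terraform {
required_providers {
postgresql = {
source = "cyrilgdn/postgresql"
version = ">= 1.15.0"
}
}
}
provider "postgresql" {
scheme = "awspostgres"
host = aws_db_instance.database.address
port = aws_db_instance.database.port
username = aws_db_instance.database.username
password = aws_db_instance.database.password
sslmode = "require"
connect_timeout = 15
}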
I have a somewhat basic understanding of Cloud Architecture and thought I would try to spin up a PostgreSQL DB in Terraform. I am using Secret Manager to store credentials...
resource "random_password" "password" {
length = 16
special = true
override_special = "_%#"
}
resource "aws_secretsmanager_secret" "secret" {
name = "admin"
description = "Database admin user password"
}
resource "aws_secretsmanager_secret_version" "version" {
secret_id = aws_secretsmanager_secret.secret.id
secret_string = <<EOF
{
"username": "db_user",
"password": "${random_password.password.result}"
}
EOF
}
locals {
db_credentials = jsondecode(data.aws_secretsmanager_secret_version.credentials.secret_string)
}
And an Aurora DB instance, which should be publicly accessible, with the following code:
resource "aws_rds_cluster" "cluster-demo" {
cluster_identifier = "aurora-cluster-demo"
database_name = "test_db"
master_username = local.db_credentials["username"]
master_password = local.db_credentials["password"]
port = 5432
engine = "aurora-postgresql"
engine_version = "12.7"
apply_immediately = true
skip_final_snapshot = "true"
}
// child instances inherit the same config
resource "aws_rds_cluster_instance" "cluster_instance" {
identifier = "aurora-cluster-demo-instance"
cluster_identifier = aws_rds_cluster.cluster-demo.id
engine = aws_rds_cluster.cluster-demo.engine
engine_version = aws_rds_cluster.cluster-demo.engine_version
instance_class = "db.r4.large"
publicly_accessible = true # Remove
}
When I terraform apply this, everything gets created as expected, but when I run psql -h <ENDPOINT_TO_CLUSTER> I get prompted to enter the password for admin. Going to the Secrets Manager console, copying the password, and entering it yields:
FATAL: password authentication failed for user "admin"
Similarly, if I try:
psql --username=db_user --host=<ENDPOINT_TO_CLUSTER> --port=5432
I am prompted as expected, to enter the password for db_user, which yields:
psql: FATAL: database "db_user" does not exist
Edit 1
secrets.tf
resource "random_password" "password" {
length = 16
special = true
override_special = "_%#"
}
resource "aws_secretsmanager_secret" "secret" {
name = "admin"
description = "Database admin user password"
}
resource "aws_secretsmanager_secret_version" "version" {
secret_id = aws_secretsmanager_secret.secret.id
secret_string = <<EOF
{
"username": "db_user",
"password": "${random_password.password.result}"
}
EOF
}
database.tf
resource "aws_rds_cluster" "cluster-demo" {
cluster_identifier = "aurora-cluster-demo"
database_name = "test_db"
master_username = "db_user"
master_password = random_password.password.result
port = 5432
engine = "aurora-postgresql"
engine_version = "12.7"
apply_immediately = true
skip_final_snapshot = "true"
}
// child instances inherit the same config
resource "aws_rds_cluster_instance" "cluster_instance" {
identifier = "aurora-cluster-demo-instance"
cluster_identifier = aws_rds_cluster.cluster-demo.id
engine = aws_rds_cluster.cluster-demo.engine
engine_version = aws_rds_cluster.cluster-demo.engine_version
instance_class = "db.r4.large"
publicly_accessible = true # Remove
}
output "db_user" {
value = aws_rds_cluster.cluster-demo.master_username
}
You're doing a data lookup named data.aws_secretsmanager_secret_version.credentials, but you don't show the Terraform code for that. Terraform is going to do that lookup before it updates the aws_secretsmanager_secret_version. So the username and password it is configuring the DB with are going to be pulled from the previous version of the secret, not the new version you are creating when you run apply.
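For illustration, that lookup presumably looks something like the following, reading the same secret that is also managed as a resource above:
data "aws_secretsmanager_secret_version" "credentials" {
secret_id = aws_secretsmanager_secret.secret.id
}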
You should never have both a data and a resource in your Terraform that refer to the same thing. Always use the resource if you have it, and only use data for things that aren't being managed by Terraform.
Since you have the resource itself available in your Terraform code (and also the random_password resource), you shouldn't be using a data lookup at all. If you pull the value from one of the resources, then Terraform will handle the order of creation/updates correctly.
For example:
locals {
db_credentials = jsondecode(aws_secretsmanager_secret_version.version.secret_string)
}
resource "aws_rds_cluster" "cluster-demo" {
master_username = local.db_credentials["username"]
master_password = local.db_credentials["password"]
Or just simplify it and get rid of the jsondecode step:
resource "aws_rds_cluster" "cluster-demo" {
master_username = "db_user"
master_password = random_password.password.result
I also suggest adding a few Terraform outputs to help you diagnose this type of issue. The following will let you see exactly what username and password Terraform applied to the database:
output "db_user" {
value = aws_rds_cluster.cluster-demo.master_username
}
output "db_password" {
value = aws_rds_cluster.cluster-demo.master_password
sensitive = true
}
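After an apply you can read these outputs back from the CLI (the -raw flag assumes Terraform 0.14 or newer):
terraform output db_user
terraform output -raw db_password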
I am trying to create a PostgreSQL RDS instance using Terraform.
Here is how my configuration looks:
resource "aws_db_subnet_group" "postgres" {
name = "postgres-subnets"
subnet_ids = ["mysub1","mysub2"]
}
resource "aws_db_instance" "myrds" {
engine = "postgresql"
engine_version = "12.4"
instance_class = "db.t2.micro"
identifier = "myrds"
username = "myuser"
password = "*******"
allocated_storage = 10
storage_type = "gp2"
db_subnet_group_name = "${aws_db_subnet_group.postgres.id}"
}
It fails with the following error:
Error: Error creating DB Instance: InvalidParameterValue: Invalid DB engine
The Terraform documentation needs to list the supported engine names:
engine = "postgresql" is incorrect; the supported value is "postgres".
Hi, I'm importing a resource but it's failing. I'm not sure what the issue is. Can someone point out how to fix this error?
I tried setting sslmode = "require" and got the same error.
SSL is on in the database and force.ssl is off.
Terraform v0.12.20
provider.aws v2.58.0
provider.postgresql v1.5.0
Your version of Terraform is out of date! The latest version
My module:
locals.tf:
pgauth_dbs = var.env == "prod" ? var.prod_dbs : var.stage_dbs
variables.tf
variable "stage_dbs" {
type = list(string)
default = ["host_configs", "staging", "staging_preview"]
}
Provider
provider "postgresql" {
version = ">1.4.0"
alias = "pg1"
host = aws_db_instance.name.address
port = aws_db_instance.name.port
username = var.username
password = var.master_password
expected_version = aws_db_instance.name.engine_version
sslmode = "disable"
connect_timeout = 15
}
module:
resource "postgresql_database" "pgauth_dbs" {
provider = postgresql.pg1
for_each = toset(local.pgauth_dbs)
name = each.value
owner = "postgres"
}
Root-Module:
module "rds" {
source = "../../../../tf_module_rds"
username = "postgres"
master_password = data.aws_kms_secrets.secrets_password.plaintext["password"]
engine_version = "11.5"
instance_class = "db.m5.xlarge"
allocated_storage = "300"
storage_type = "gp2"
}
terraform import module.rds.postgresql_database.name_dbs["host_configs"] host_configs
module.rds.postgresql_database.name_dbs["host_configs"]: Importing from ID "host_configs"...
module.rds.postgresql_database.name_dbs["host_configs"]: Import prepared!
Prepared postgresql_database for import
module.rds.postgresql_database.name_dbs["host_configs"]: Refreshing state... [id=host_configs]
Error: could not start transaction: pq: no PostgreSQL user name specified in startup packet
The provider should point to the instance's username, not a variable:
provider "postgresql" {
version = ">1.4.0"
alias = "pg1"
host = aws_db_instance.name.address
port = aws_db_instance.name.port
username = aws_db_instance.name.username
password = aws_db_instance.name.password
database = aws_db_instance.name.name
expected_version = aws_db_instance.name.engine_version
sslmode = "disable"
connect_timeout = 15
}
I have a module which contains resources for:
azure postgres server
azure postgres database
postgres role (user)
postgres provider (for the server and used to create the role)
In one of my env directories I can have 0-N .tf files, each of which is an instance of that module and specifies the database name etc. So if I add another .tf file with a new name, then a new database server with a database will be provisioned. All of this works fine.
However, if I now delete an existing database module (one of the .tf files in my env directory) I run into issues. Terraform will now try to refresh the state of all the previously existing resources, and since that specific provider (for that postgres server) is now gone, Terraform cannot refresh the state of the created postgres role and fails with the output: a provider configuration block is required for all operations.
I understand why this happens but I cannot figure out how to solve it. I want to "dynamically" create (and remove) postgres servers with a database on them, but this requires "dynamic" providers, which is where I get stuck.
Example of how it looks
resource "azurerm_postgresql_server" "postgresserver" {
name = "${var.db_name}-server"
location = "${var.location}"
resource_group_name = "${var.resource_group}"
sku = ["${var.vmSize}"]
storage_profile = ["${var.storage}"]
administrator_login = "psqladminun"
administrator_login_password = "${random_string.db-password.result}"
version = "${var.postgres_version}"
ssl_enforcement = "Disabled"
}
provider "postgresql" {
version = "0.1.0"
host = "${azurerm_postgresql_server.postgresserver.fqdn}"
port = 5432
database = "postgres"
username = "${azurerm_postgresql_server.postgresserver.administrator_login}#${azurerm_postgresql_server.postgresserver.name}".
password = "${azurerm_postgresql_server.postgresserver.administrator_login_password}"
}
resource "azurerm_postgresql_database" "db" {
name = "${var.db_name}"
resource_group_name = "${var.resource_group}"
server_name = "${azurerm_postgresql_server.postgresserver.name}"
charset = "UTF8"
collation = "English_United States.1252"
}
resource "postgresql_role" "role" {
name = "${random_string.user.result}"
login = true
connection_limit = 100
password = "${random_string.pass.result}"
create_role = true
create_database = true
depends_on = ["azurerm_postgresql_database.db"]
}
Above you see how, in the module, we create a postgres server, a postgres db and also a postgres role (where only the role uses the postgres provider). So if I now define an instance datadb.tf such as:
module "datadb" {
source = "../../modules/postgres"
db_name = "datadb"
resource_group = "${azurerm_resource_group.resource-group.name}"
location = "${azurerm_resource_group.resource-group.location}"
}
then it will be provisioned successfully. The issue is that if I later delete that same file (datadb.tf), planning fails because Terraform will try to refresh the state of the postgres role without having the postgres provider present.
The postgres provider is only needed for the postgres role, which will be destroyed as soon as the azure provider destroys the postgres db and postgres server, so the actual removal of that role is not necessary. Is there a way to tell Terraform "if this resource should be removed, you don't have to do anything, because it will disappear along with the server it depends on"? Or does anyone see any other solutions?
I hope my goal and issue is clear, thanks!
I think the only solution is a two-step approach, but I think it's still clean enough.
What I would do is have two files per database (name them how you want).
db-1-infra.tf
db-1-pgsql.tf
Put everything except your postgres resources in db-1-infra.tf
resource "azurerm_postgresql_server" "postgresserver" {
name = "${var.db_name}-server"
location = "${var.location}"
resource_group_name = "${var.resource_group}"
sku = ["${var.vmSize}"]
storage_profile = ["${var.storage}"]
administrator_login = "psqladminun"
administrator_login_password = "${random_string.db-password.result}"
version = "${var.postgres_version}"
ssl_enforcement = "Disabled"
}
provider "postgresql" {
version = "0.1.0"
host = "${azurerm_postgresql_server.postgresserver.fqdn}"
port = 5432
database = "postgres"
username = "${azurerm_postgresql_server.postgresserver.administrator_login}#${azurerm_postgresql_server.postgresserver.name}".
password = "${azurerm_postgresql_server.postgresserver.administrator_login_password}"
}
resource "azurerm_postgresql_database" "db" {
name = "${var.db_name}"
resource_group_name = "${var.resource_group}"
server_name = "${azurerm_postgresql_server.postgresserver.name}"
charset = "UTF8"
collation = "English_United States.1252"
}
Put your PostgreSQL resources in db-1-pgsql.tf
resource "postgresql_role" "role" {
name = "${random_string.user.result}"
login = true
connection_limit = 100
password = "${random_string.pass.result}"
create_role = true
create_database = true
depends_on = ["azurerm_postgresql_database.db"]
}
When you want to get rid of your database, first delete the file db-1-pgsql.tf and apply. Next, delete db-1-infra.tf and apply again.
The first step will destroy all postgres resources and free you up to run the second step, which will remove the postgres provider for that database.