How to provision RDS Postgres DB users with AWS IAM auth using Terraform?

By checking this AWS blog: https://aws.amazon.com/premiumsupport/knowledge-center/users-connect-rds-iam/ I noticed that I need to create a DB user after logging in with the master username and password:
CREATE USER {dbusername} IDENTIFIED WITH AWSAuthenticationPlugin as 'RDS';
I can see Terraform has mysql_user to provision MySQL DB users: https://www.terraform.io/docs/providers/mysql/r/user.html
However, I couldn't find postgres_user. Is there a way to provision a Postgres user with IAM auth?

In Postgres, a user is called a "role". The Postgres docs say:
a role can be considered a "user", a "group", or both depending on how it is used
So, the TF resource to create is a postgresql_role
resource "postgresql_role" "my_replication_role" {
name = "replication_role"
replication = true
login = true
connection_limit = 5
password = "md5c98cbfeb6a347a47eb8e96cfb4c4b890"
}
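(For comparison with the MySQL statement quoted in the question: on RDS for PostgreSQL, IAM authentication is enabled by granting the rds_iam role to the database user. A minimal SQL sketch, with dbusername as a placeholder:
CREATE USER dbusername;
GRANT rds_iam TO dbusername;
The roles = ["rds_iam"] argument in the Terraform example further down does the same thing declaratively.)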
To enable an IAM user to assume the role, follow the steps in the AWS docs.
From those instructions, you would end up with TF code looking something like:
module "db" {
source = "terraform-aws-modules/rds/aws"
// ...
}
provider "postgresql" {
// ...
}
resource "postgresql_role" "pguser" {
login = true
name = var.pg_username
password = var.pg_password
roles = ["rds_iam"]
}
resource "aws_iam_user" "pguser" {
name = var.pg_username
}
resource "aws_iam_user_policy" "pguser" {
name = var.pg_username
user = aws_iam_user.pguser.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"rds-db:connect"
],
"Resource": [
"arn:aws:rds-db:${var.region}:${data.aws_caller_identity.current.account_id}:dbuser:${module.db.this_db_instance_resource_id}/${var.pg_username}"
]
}
]
}
EOF
}
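Once this is applied, the IAM user authenticates by requesting a short-lived token and using it as the password. A minimal connection sketch using the standard AWS CLI command (the endpoint, region, and username values are placeholders for your own):
export PGPASSWORD="$(aws rds generate-db-auth-token --hostname <db-endpoint> --port 5432 --username <pg_username> --region <region>)"
psql "host=<db-endpoint> port=5432 dbname=postgres user=<pg_username> sslmode=require"
IAM-authenticated connections must use SSL, hence sslmode=require.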

Related

Can someone help me with how to give the specific service principal name for an Azure custom policy in Terraform code

The scenario is: we need to create resources in the Azure portal through Terraform Cloud only by using service principals, and the Azure portal shouldn't allow creation of resources by any other means
(no manual creation in the Azure portal, and also not via tools like VS Code).
The following is the code I tried:
resource "azurerm_policy_definition" "policych" {
name = "PolicyTestSP"
policy_type = "Custom"
mode = "Indexed"
display_name = "Allow thru SP"
metadata = <<METADATA
{
"category": "General"
}
METADATA
policy_rule = <<POLICY_RULE
{
"if": {
"not": {
"field": "type",
"equals": "https://app.terraform.io/"
}
},
"then": {
"effect": "deny"
}
}
POLICY_RULE
}
resource "azurerm_policy_assignment" "policych" {
name = "Required servive-principles"
display_name = "Allow thru SP"
description = " Require SP Authentication'"
policy_definition_id = "${azurerm_policy_definition.policych.id}"
scope = "/subscriptions/xxxx-xxxx-xxxx-xxxx"
}

Unable to connect to RDS Aurora DB locally

I have a somewhat basic understanding of cloud architecture and thought I would try to spin up a PostgreSQL DB in Terraform. I am using Secrets Manager to store credentials...
resource "random_password" "password" {
length = 16
special = true
override_special = "_%#"
}
resource "aws_secretsmanager_secret" "secret" {
name = "admin"
description = "Database admin user password"
}
resource "aws_secretsmanager_secret_version" "version" {
secret_id = aws_secretsmanager_secret.secret.id
secret_string = <<EOF
{
"username": "db_user",
"password": "${random_password.password.result}"
}
EOF
}
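(A note on the heredoc: Terraform's built-in jsonencode function produces the same JSON and handles quoting of any special characters in the generated password; a minimal sketch:
secret_string = jsonencode({
  username = "db_user"
  password = random_password.password.result
})
)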
locals {
  db_credentials = jsondecode(data.aws_secretsmanager_secret_version.credentials.secret_string)
}
And an Aurora DB instance which should be publicly accessible with the following code:
resource "aws_rds_cluster" "cluster-demo" {
cluster_identifier = "aurora-cluster-demo"
database_name = "test_db"
master_username = local.db_credentials["username"]
master_password = local.db_credentials["password"]
port = 5432
engine = "aurora-postgresql"
engine_version = "12.7"
apply_immediately = true
skip_final_snapshot = "true"
}
// child instances inherit the same config
resource "aws_rds_cluster_instance" "cluster_instance" {
identifier = "aurora-cluster-demo-instance"
cluster_identifier = aws_rds_cluster.cluster-demo.id
engine = aws_rds_cluster.cluster-demo.engine
engine_version = aws_rds_cluster.cluster-demo.engine_version
instance_class = "db.r4.large"
publicly_accessible = true # Remove
}
When I terraform apply this, everything gets created as expected, but when I run psql -h <ENDPOINT_TO_CLUSTER> I get prompted to enter the password for admin. Going to the Secrets Manager console, copying the password, and entering it yields:
FATAL: password authentication failed for user "admin"
Similarly, if I try:
psql --username=db_user --host=<ENDPOINT_TO_CLUSTER> --port=5432
I am prompted, as expected, to enter the password for db_user, which yields:
psql: FATAL: database "db_user" does not exist
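(The second error is a psql default rather than an authentication problem: when no database name is given, psql tries to connect to a database named after the user. Passing the database explicitly avoids it; test_db is the database_name from the cluster config above:
psql --username=db_user --host=<ENDPOINT_TO_CLUSTER> --port=5432 --dbname=test_db
)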
Edit 1
secrets.tf
resource "random_password" "password" {
length = 16
special = true
override_special = "_%#"
}
resource "aws_secretsmanager_secret" "secret" {
name = "admin"
description = "Database admin user password"
}
resource "aws_secretsmanager_secret_version" "version" {
secret_id = aws_secretsmanager_secret.secret.id
secret_string = <<EOF
{
"username": "db_user",
"password": "${random_password.password.result}"
}
EOF
}
database.tf
resource "aws_rds_cluster" "cluster-demo" {
cluster_identifier = "aurora-cluster-demo"
database_name = "test_db"
master_username = "db_user"
master_password = random_password.password.result
port = 5432
engine = "aurora-postgresql"
engine_version = "12.7"
apply_immediately = true
skip_final_snapshot = "true"
}
// child instances inherit the same config
resource "aws_rds_cluster_instance" "cluster_instance" {
identifier = "aurora-cluster-demo-instance"
cluster_identifier = aws_rds_cluster.cluster-demo.id
engine = aws_rds_cluster.cluster-demo.engine
engine_version = aws_rds_cluster.cluster-demo.engine_version
instance_class = "db.r4.large"
publicly_accessible = true # Remove
}
output "db_user" {
value = aws_rds_cluster.cluster-demo.master_username
}
You're doing a data lookup named data.aws_secretsmanager_secret_version.credentials, but you don't show the Terraform code for that. Terraform is going to do that lookup before it updates the aws_secretsmanager_secret_version, so the username and password it configures the DB with will be pulled from the previous version of the secret, not the new version you are creating when you run apply.
You should never have both a data and a resource in your Terraform that refer to the same thing. Always use the resource if you have it, and only use data for things that aren't being managed by Terraform.
Since you have the resource itself available in your Terraform code (and also the random_password resource), you shouldn't be using a data lookup at all. If you pull the value from one of the resources, then Terraform will handle the order of creation/updates correctly.
For example:
locals {
  db_credentials = jsondecode(aws_secretsmanager_secret_version.version.secret_string)
}

resource "aws_rds_cluster" "cluster-demo" {
  master_username = local.db_credentials["username"]
  master_password = local.db_credentials["password"]
  # ...
}
Or just simplify it and get rid of the jsondecode step:
resource "aws_rds_cluster" "cluster-demo" {
master_username = "db_user"
master_password = random_password.password.result
I also suggest adding a few Terraform outputs to help you diagnose this type of issue. The following will let you see exactly what username and password Terraform applied to the database:
output "db_user" {
value = aws_rds_cluster.cluster-demo.master_username
}
output "db_password" {
value = aws_rds_cluster.cluster-demo.master_password
sensitive = true
}
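After an apply, you can read these back with terraform output; recent Terraform versions redact sensitive outputs in the list view but show them when queried by name:
terraform output db_user
terraform output db_password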

Create schema for Google Cloud SQL PostgreSQL database using Terraform

I'm new to Terraform, and I want to create a schema for the postgres database created on a PostgreSQL 9.6 instance on Google Cloud SQL.
To create the PostgreSQL instance I have this in main.tf:
resource "google_sql_database_instance" "my-database" {
name = "my-${var.deployment_name}"
database_version = "POSTGRES_9_6"
region = "${var.deployment_region}"
settings {
tier = "db-f1-micro"
ip_configuration {
ipv4_enabled = true
}
}
}
Then I was trying to create a PostgreSQL object like this:
provider "postgresql" {
host = "${google_sql_database_instance.my-database.ip_address}"
username = "postgres"
}
Finally creating the schema:
resource "postgresql_schema" "my_schema" {
name = "my_schema"
owner = "postgres"
}
However, this configuration does not work; when I run terraform plan:
Inappropriate value for attribute "host": string required.
If I remove the Postgres object:
Error: Error initializing PostgreSQL client: error detecting capabilities: error PostgreSQL version: dial tcp :5432: connect: connection refused
Additionally, I would like to add a password for the user postgres which is created by default when the PostgreSQL instance is created.
EDITED:
versions used
Terraform v0.12.10
+ provider.google v2.17.0
+ provider.postgresql v1.2.0
Any suggestions?
There are a few issues with the Terraform setup that you have above.
Your instance does not have any authorized networks defined. You should change your instance resource to look like this: (Note: I used 0.0.0.0/0 just for testing purposes)
resource "google_sql_database_instance" "my-database" {
name = "my-${var.deployment_name}"
database_version = "POSTGRES_9_6"
region = "${var.deployment_region}"
settings {
tier = "db-f1-micro"
ip_configuration {
ipv4_enabled = true
authorized_networks {
name = "all"
value = "0.0.0.0/0"
}
}
}
depends_on = [
"google_project_services.vpc"
]
}
As mentioned here, you need to create a user with a strong password
resource "google_sql_user" "user" {
name = "test_user"
instance = "${google_sql_database_instance.my-database.name}"
password = "VeryStrongPassword"
depends_on = [
"google_sql_database_instance.my-database"
]
}
You should use the "public_ip_address" or "ip_address.0.ip_address" attribute of your instance to access the IP address. Also, you should update your provider and schema resource to use the user created above.
provider "postgresql" {
host = "${google_sql_database_instance.my-database.public_ip_address}"
username = "${google_sql_user.user.name}"
password = "${google_sql_user.user.password}"
}
resource "postgresql_schema" "my_schema" {
name = "my_schema"
owner = "test_user"
}
Your postgres provider depends on the google_sql_database_instance resource being created before the provider itself can be initialized:
All the providers are initialized at the beginning of plan/apply, so if one has an invalid config (in this case an empty host) then Terraform will fail.
There is no way to define the dependency between a provider and a resource within another provider.
There is, however, a workaround using the -target parameter:
terraform apply -target=google_sql_user.user
This will create the database user (as well as all its dependencies - in this case the database instance) and once that completes follow it with:
terraform apply
This should then succeed as the instance has already been created and the ip_address is available to be used by the postgres provider.
Final Note: Usage of public ip addresses without SSL to connect to Cloud SQL instances is not recommended for production instances.
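(If a public IP is unavoidable, the postgresql provider accepts an sslmode setting, the same attribute used in the RDS example later on this page, so the connection is at least encrypted; a minimal sketch:
provider "postgresql" {
  host     = "${google_sql_database_instance.my-database.public_ip_address}"
  username = "${google_sql_user.user.name}"
  password = "${google_sql_user.user.password}"
  sslmode  = "require"
}
)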
This was my solution; this way I just need to run terraform apply once:
// POSTGRESQL INSTANCE
resource "google_sql_database_instance" "my-database" {
  database_version = "POSTGRES_9_6"
  region           = var.deployment_region

  settings {
    tier = var.db_machine_type

    ip_configuration {
      ipv4_enabled = true

      authorized_networks {
        name  = "my_ip"
        value = var.db_allowed_networks.my_network_ip
      }
    }
  }
}

// DATABASE USER
resource "google_sql_user" "user" {
  name     = var.db_credentials.db_user
  instance = google_sql_database_instance.my-database.name
  password = var.db_credentials.db_password

  depends_on = [
    "google_sql_database_instance.my-database"
  ]

  // create the schema once the user exists (note the user:password@host separator)
  provisioner "local-exec" {
    command = "psql postgresql://${self.name}:${self.password}@${google_sql_database_instance.my-database.public_ip_address}/postgres -c \"CREATE SCHEMA myschema;\""
  }
}
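(Sketch of an alternative to the local-exec, subject to the provider-initialization caveat described in the accepted answer: once the instance and user exist, the postgresql provider plus the postgresql_schema resource from the question keep the schema under Terraform management:
provider "postgresql" {
  host     = google_sql_database_instance.my-database.public_ip_address
  username = google_sql_user.user.name
  password = google_sql_user.user.password
}

resource "postgresql_schema" "my_schema" {
  name  = "myschema"
  owner = google_sql_user.user.name
}
)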

How to set aws cloudwatch retention via Terraform

Using Terraform to deploy API Gateway/Lambda, and I already have the appropriate logs in CloudWatch. However, I can't seem to find a way to set the retention on the logs via Terraform using my currently deployed resources (below). It looks like the log group resource is where I'd do it, but I'm not sure how to point the log stream from API Gateway at the new log group. I must be missing something obvious... any advice is very much appreciated!
resource "aws_api_gateway_account" "name" {
cloudwatch_role_arn = "${aws_iam_role.cloudwatch.arn}"
}
resource "aws_iam_role" "cloudwatch" {
name = "#{name}_APIGatewayCloudWatchLogs"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "apigateway.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_policy_attachment" "api_gateway_logs" {
name = "#{name}_api_gateway_logs_policy_attach"
roles = ["${aws_iam_role.cloudwatch.id}"]
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs"
}
resource "aws_api_gateway_method_settings" "name" {
rest_api_id = "${aws_api_gateway_rest_api.name.id}"
stage_name = "${aws_api_gateway_stage.name.stage_name}"
method_path = "${aws_api_gateway_resource.name.path_part}/${aws_api_gateway_method.name.http_method}"
settings {
metrics_enabled = true
logging_level = "INFO"
data_trace_enabled = true
}
}
Yes, you can use the Lambda log group name to create the log group resource before you create the Lambda function, or you can import the existing log groups.
resource "aws_cloudwatch_log_group" "lambda" {
name = "/aws/lambda/${var.env}-${join("", split("_",title(var.lambda_name)))}-Lambda"
retention_in_days = 7
lifecycle {
create_before_destroy = true
prevent_destroy = false
}
}
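The same approach works for API Gateway execution logs, because API Gateway writes them to a log group with a fixed naming convention (API-Gateway-Execution-Logs_{rest-api-id}/{stage-name}). A sketch assuming the rest API and stage resources from the question:
resource "aws_cloudwatch_log_group" "api_gateway_execution" {
  # must match the name API Gateway uses for this stage's execution logs
  name              = "API-Gateway-Execution-Logs_${aws_api_gateway_rest_api.name.id}/${aws_api_gateway_stage.name.stage_name}"
  retention_in_days = 7
}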

How to destroy Postgres databases owned by other users in RDS using Terraform?

I managed to get Terraform nicely creating databases and roles in an RDS Postgres database, but due to the stripped-down permissions of rds_superuser I can't see an easy way to destroy the created databases that are owned by another user.
Using the following config:
resource "postgresql_role" "app" {
name = "app"
login = true
password = "foo"
skip_reassign_owned = true
}
resource "postgresql_database" "database" {
name = "app_database"
owner = "${postgresql_role.app.name}"
}
(For reference, skip_reassign_owned is required because the rds_superuser group doesn't get the necessary permissions to reassign ownership.)
leads to this error:
Error applying plan:
1 error(s) occurred:
* postgresql_database.database (destroy): 1 error(s) occurred:
* postgresql_database.database: Error dropping database: pq: must be owner of database debug_db1
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Using local-exec provisioners I was able to grant the role that owned the database to the admin user and the application user:
resource "aws_db_instance" "database" {
...
}
provider "postgresql" {
host = "${aws_db_instance.database.address}"
port = 5432
username = "myadminuser"
password = "adminpassword"
sslmode = "require"
connect_timeout = 15
}
resource "postgresql_role" "app" {
name = "app"
login = true
password = "apppassword"
skip_reassign_owned = true
}
resource "postgresql_role" "group" {
name = "${postgresql_role.app.name}_group"
skip_reassign_owned = true
provisioner "local-exec" {
command = "PGPASSWORD=adminpassword psql -h ${aws_db_instance.database.address} -U myadminuser postgres -c 'GRANT ${self.name} TO myadminuser, ${postgresql_role.app.name};'"
}
}
resource "postgresql_database" "database" {
name = "mydatabase"
owner = "${postgresql_role.group.name}"
}
which seems to work, compared to setting ownership only for the app user. I do wonder if there's a better way to do this without having to shell out via a local-exec, though?
After raising this question I managed to raise a pull request with the fix, which was released in version 0.1.1 of the PostgreSQL provider, so this now works fine in the latest release of the provider.
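(To make sure you pick up the fix, the provider version can be constrained; a minimal sketch using the pre-0.13 style version argument that matches the Terraform era of this answer:
provider "postgresql" {
  version = ">= 0.1.1"
  # ... connection settings as above
}
)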