"#" in password and Airflow Postgres connection is failing

In my Postgres password there is a "#", something like dba#123.
In airflow.cfg I have specified my DB connection as
sql_alchemy_conn = postgresql+psycopg2://user:dba#123@postgresserver.com:5432/airflow
which throws the error
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "123@postgresserver.com" to address: Name or service not known
I tried to pass the password as a query parameter instead
sql_alchemy_conn = postgresql+psycopg2://user:dba#123@postgresserver.com:5432/airflow?password=dba#123
but that is not working either.
Can anyone help?

Maybe this can help: you can set the DB properties as environment variables and then fetch them via a function, so you will not get this error.
import os

def db_props():
    db_config = {
        'host': os.environ["_HOST"],
        'port': os.environ["_PORT"],
        'db': os.environ["_DATABASE"],
        'username': os.environ["_USERNAME"],
        'password': os.environ["_PASSWORD"]
    }
    return db_config
and later in the code you can use it while making a connection:
db_config = db_props()
server = db_config['host']
port = db_config['port']
database = db_config['db']
username = db_config['username']
password = db_config['password']
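Another common fix (my addition, not part of the answer above): percent-encode the special character so the URL parser never sees a bare "#" — "#" becomes "%23". A minimal sketch in Python:
from urllib.parse import quote_plus

password = "dba#123"  # the example password from the question

# quote_plus("dba#123") -> "dba%23123", which is safe inside a URL
conn = "postgresql+psycopg2://user:%s@postgresserver.com:5432/airflow" % quote_plus(password)
print(conn)  # postgresql+psycopg2://user:dba%23123@postgresserver.com:5432/airflow
The encoded string can be pasted into sql_alchemy_conn in airflow.cfg, or exported as the environment variable AIRFLOW__CORE__SQL_ALCHEMY_CONN, which overrides the config file entry.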

Related

How can I add a custom variable to Sqitch, to be used in target postgres

I would like to add a few variables, "username" and "database", to my sqitch.conf on a defined target.
File sqitch.conf:
engine = pg
[core "variables"]
username = jv_root
database = test
[target "dev_1"]
uri = db:pg://username@sqlhost:5432/database
[target "dev_2"]
uri = db:pg://username@sqlhost2:5432/database
When I run
sqitch deploy -t dev_1
it throws an error:
ERROR: no such user: username
You can add environment-specific variables like this:
[target.dev_1.variables]
username = jv_root
password = test
How you reference them in your SQL files depends on the SQL dialect.
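For the pg engine, Sqitch passes these variables to psql, so a deploy script can interpolate them with psql's :"var" syntax. A hedged sketch (the change script below is made up for illustration):
-- deploy/create_app_user.sql (hypothetical change script)
-- :"username" and :'password' are expanded by psql from the Sqitch variables
BEGIN;
CREATE ROLE :"username" LOGIN PASSWORD :'password';
COMMIT;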

AWS Terraform postgresql provider: SSL is not enabled on the server

I am trying to create a database in the created Postgres RDS in AWS with the postgresql provider. The Terraform script I have created is as follows:
resource "aws_db_instance" "test_rds" {
allocated_storage = "" # gigabytes
backup_retention_period = 7 # in days
engine = ""
engine_version = ""
identifier = ""
instance_class = ""
multi_az = ""
name = ""
username = ""
password = ""
port = ""
publicly_accessible = "false"
storage_encrypted = "false"
storage_type = ""
vpc_security_group_ids = ["${aws_security_group.test_sg.id}"]
db_subnet_group_name = "${aws_db_subnet_group.rds_subnet_group.name}"
}
The postgresql provider is as follows:
# Create databases in rds
provider "postgresql" {
  alias    = "alias"
  host     = "${aws_db_instance.test_rds.address}"
  port     = 5432
  username =
  password =
  database =
  sslmode  = "disable"
}

# Create user in rds
resource "postgresql_role" "test_role" {
  name        =
  replication = true
  login       = true
  password    =
}

# Create database rds
resource "postgresql_database" "test_db" {
  name              = "testdb"
  owner             = "${postgresql_role.test_role.name}"
  lc_collate        = "C"
  allow_connections = true
  provider          = "postgresql.alias"
}
Anyway, I keep getting
Error: Error initializing PostgreSQL client: error detecting capabilities: error PostgreSQL version: pq: SSL is not enabled on the server
Note: the empty fields are already filled in and the RDS is successfully created; the problem arises when trying to create the database in the RDS with the postgresql provider.
We ran into this issue as well, and the problem was that the password was not defined. It seems that you get the "SSL is not enabled" error whenever the provider has trouble connecting; we also saw it when the DB host name was missing. You need to make sure you define all of the fields needed to connect in Terraform (probably database and username too).
Ensuring there was a password set for the postgres user and disabling sslmode did it for me:
sslmode = "disable"

Cannot delete an instance of module in Terraform which contains a provider

I have a module which contains resources for:
azure postgres server
azure postgres database
postgres role (user)
postgres provider (for the server and used to create the role)
In one of my env directories I can have 0-N .tf files, each of which is an instance of that module and specifies the database name etc. So if I add another .tf file with a new name, a new database server with a database is provisioned. All this works fine.
However, if I now delete an existing database module (one of the .tf files in my env directory), I run into issues. Terraform will try to refresh the state of all the previously existing resources, and since that specific provider (for that Postgres server) is now gone, Terraform cannot get the state of the created postgres role and fails with "a provider configuration block is required for all operations".
I understand why this happens, but I cannot figure out how to solve it. I want to "dynamically" create (and remove) Postgres servers with a database on them, but this requires "dynamic" providers, which is where I get stuck.
Example of how it looks:
resource "azurerm_postgresql_server" "postgresserver" {
name = "${var.db_name}-server"
location = "${var.location}"
resource_group_name = "${var.resource_group}"
sku = ["${var.vmSize}"]
storage_profile = ["${var.storage}"]
administrator_login = "psqladminun"
administrator_login_password = "${random_string.db-password.result}"
version = "${var.postgres_version}"
ssl_enforcement = "Disabled"
}
provider "postgresql" {
version = "0.1.0"
host = "${azurerm_postgresql_server.postgresserver.fqdn}"
port = 5432
database = "postgres"
username = "${azurerm_postgresql_server.postgresserver.administrator_login}#${azurerm_postgresql_server.postgresserver.name}".
password = "${azurerm_postgresql_server.postgresserver.administrator_login_password}"
}
resource "azurerm_postgresql_database" "db" {
name = "${var.db_name}"
resource_group_name = "${var.resource_group}"
server_name = "${azurerm_postgresql_server.postgresserver.name}"
charset = "UTF8"
collation = "English_United States.1252"
}
resource "postgresql_role" "role" {
name = "${random_string.user.result}"
login = true
connection_limit = 100
password = "${random_string.pass.result}"
create_role = true
create_database = true
depends_on = ["azurerm_postgresql_database.db"]
}
Above you see how, in the module, we create a Postgres server, a Postgres database and a Postgres role (only the role uses the postgresql provider). So if I now define an instance datadb.tf such as:
module "datadb" {
source = "../../modules/postgres"
db_name = "datadb"
resource_group = "${azurerm_resource_group.resource-group.name}"
location = "${azurerm_resource_group.resource-group.location}"
}
then it is provisioned successfully. The issue is that if I later delete that same file (datadb.tf), planning fails, because Terraform tries to refresh the state of the postgres role without the postgres provider being present.
The postgres provider is only needed for the postgres role, which will be destroyed as soon as the azure provider destroys the postgres DB and postgres server, so actually removing that role is not necessary. Is there a way to tell Terraform "if this resource should be removed, you don't have to do anything, because it will disappear together with the server it lives on"? Or does anyone see any other solutions?
I hope my goal and issue are clear, thanks!
I think the only solution is a two-step approach, but I think it's still clean enough.
What I would do is have two files per database (name them how you want):
db-1-infra.tf
db-1-pgsql.tf
Put everything except your postgres resources in db-1-infra.tf:
resource "azurerm_postgresql_server" "postgresserver" {
name = "${var.db_name}-server"
location = "${var.location}"
resource_group_name = "${var.resource_group}"
sku = ["${var.vmSize}"]
storage_profile = ["${var.storage}"]
administrator_login = "psqladminun"
administrator_login_password = "${random_string.db-password.result}"
version = "${var.postgres_version}"
ssl_enforcement = "Disabled"
}
provider "postgresql" {
version = "0.1.0"
host = "${azurerm_postgresql_server.postgresserver.fqdn}"
port = 5432
database = "postgres"
username = "${azurerm_postgresql_server.postgresserver.administrator_login}#${azurerm_postgresql_server.postgresserver.name}".
password = "${azurerm_postgresql_server.postgresserver.administrator_login_password}"
}
resource "azurerm_postgresql_database" "db" {
name = "${var.db_name}"
resource_group_name = "${var.resource_group}"
server_name = "${azurerm_postgresql_server.postgresserver.name}"
charset = "UTF8"
collation = "English_United States.1252"
}
Put your PostgreSQL resources in db-1-pgsql.tf:
resource "postgresql_role" "role" {
name = "${random_string.user.result}"
login = true
connection_limit = 100
password = "${random_string.pass.result}"
create_role = true
create_database = true
depends_on = ["azurerm_postgresql_database.db"]
}
When you want to get rid of your database, first delete the file db-1-pgsql.tf and apply. Next, delete db-1-infra.tf and apply again.
The first step will destroy all postgres resources and free you up to run the second step, which will remove the postgres provider for that database.
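If the .tf file is already gone and the plan is stuck, another escape hatch (my addition, not part of the answer above) is to drop the orphaned role from state by hand so Terraform stops trying to refresh it:
# Hypothetical state address, assuming the module instance was named "datadb"
terraform state rm module.datadb.postgresql_role.role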

How to avoid having MongoDB as the default datasource when working with multiple datasources in Grails 3

I have my application.groovy set up as:
environments {
    development {
        mongo {
            host = 'localhost'
            port = 27107
            username = dbusername
            password = dbpassword
            databaseName = dbname
        }
        dataSources {
            dataSource {
                pooled = true
                jmxExport = true
                driverClassName = 'com.microsoft.sqlserver.jdbc.SQLServerDriver'
                dbCreate = ''
                username = dbusername
                password = dbpassword
                // double quotes so the ${...} placeholders are interpolated
                url = "jdbc:sqlserver://${dbserver}:${dbport};databaseName=${dbname}"
            }
        }
    }
}
But now it seems that all of my domains' data sources point to MongoDB, so I can no longer query the domains that are linked to the MSSQL database. How can I avoid this?
A secondary question, though not that important: the MongoDB plugin documentation says to put the connection config within environments -> development. I wonder why it can't go inside dataSources so it's much neater (in a domain I could then just point to the dataSource). I tried to move the config into dataSources and it didn't work!
In the debugger, if I run MyDomain.list() I get
result = {MongoQuery$MongoResultList#12334} size = 0
Any help will be much appreciated, thanks in advance
Dee
I was trying to use the "mongodb" plugin; I am not sure if it is supported in Grails 3. I have things working with GMongo instead. I added these two dependencies in my build.gradle:
compile "org.mongodb:mongo-java-driver:3.0.2"
compile "com.gmongo:gmongo:1.5"
and removed the mongo config:
environments {
    development {
        mongo {
            host = 'localhost'
            port = 27107
            username = dbusername
            password = dbpassword
            databaseName = dbname
        }
        ....
    }
}
GMongo seems to pick up the default database credentials. This is how I created a db instance to work with:
def mongo = new GMongo()
def db = mongo.getDB("dbName")
Hope this helps someone facing a similar problem.
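For reference, with the GORM MongoDB plugin itself, the usual way to stop MongoDB becoming the default for every domain is to map only selected domains to it with mapWith, so everything else stays on the SQL dataSource. A minimal sketch (the domain class is made up):
// Hypothetical domain class: only classes declaring mapWith = "mongo"
// are persisted to MongoDB; all other domains use the default dataSource.
class AuditEvent {
    String message
    static mapWith = "mongo"
}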

How to copy password protected database from remote server in mongodb?

I am running MongoDB 2.4.8 and need to copy a database from a remote server. The server has auth enabled, in combination with a user that has privileges on the database. I have tried copydb, but it didn't work; I guess it failed because the remote server uses auth in combination with a role-based user (mentioned under the authentication section of the documentation).
host = "myhost.com"
mynonce = db.runCommand( { copydbgetnonce : 1, fromhost: host } ).nonce
username = "myuser"
password = "mypassword"
password_hash = hex_md5(mynonce + username + hex_md5(username + ":mongo:" + password))
db.runCommand({
copydb: 1,
fromdb: "test",
todb: "test",
fromhost: host,
username: username,
key: password_hash
})
# output: { "ok" : 0, "errmsg" : "" }
# but nothing really gets copied
What other options do I have? I would prefer a solution that works from within the mongo shell, as I do not have SSH access to the server.
Try db.copyDatabase(fromdb, todb, fromhost, username, password), as the manual describes: http://docs.mongodb.org/manual/reference/method/db.copyDatabase/
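Using the values from the question, that looks like this (run from the mongo shell on the destination server; the helper performs the copydbgetnonce/key handshake internally):
// copies remote "test" to local "test", authenticating as myuser
db.copyDatabase("test", "test", "myhost.com", "myuser", "mypassword")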