Gradle Liquibase: creating an SSH tunnel to Aurora PostgreSQL - postgresql

Need your help:
I need to connect to AWS Aurora PostgreSQL using Liquibase. It is already configured for my local machine and works fine, but I am having trouble with the SSH configuration.
I'm using id 'org.hidetake.ssh' version '2.10.1' and id 'org.liquibase.gradle' version '2.0.4'.
I'm able to run commands directly on the host machine, like getting the date with execute('date') below, but I have no idea why Liquibase fails with:
Unexpected error running Liquibase: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: Connection could not be created to jdbc:postgresql://xxxx.rds.amazonaws.com:5432/postgres with driver org.postgresql.Driver. The connection attempt failed.
Here is my build.gradle configuration:
ssh.settings {
    knownHosts = allowAnyHosts
    logging = 'stdout'
    identity = file("${System.properties['user.home']}/myfolder/.ssh/id_rsa")
}
remotes {
    dev {
        host = 'xxx.xxx.xxx.xxx'
        port = 22
        user = 'ec2-user'
        identity = file("${System.properties['user.home']}/myfolder/.ssh/id_rsa")
    }
}
ssh.run {
    session(remotes.dev) {
        forwardLocalPort port: 5432, hostPort: 5432
        execute('date')
        liquibase {
            activities {
                main {
                    //changeLogFile changeLog
                    url 'jdbc:postgresql://xxxx.rds.amazonaws.com:5432/postgres'
                    username feedSqlUserDev
                    password feedSqlUserPasswordDev
                    logLevel 'debug'
                }
            }
        }
    }
}
Could you please help me with this? What am I doing wrong?

I also had to connect through an SSH bastion host before running Liquibase updates. My solution is based on the answer by the plugin author in https://github.com/int128/gradle-ssh-plugin/issues/246.
Here is my setup:
ssh.settings {
    knownHosts = allowAnyHosts
    logging = 'stdout'
    identity = file("${System.properties['user.home']}/.ssh/id_rsa")
}
remotes {
    bastion {
        host = '<hostname>'
        user = '<username>'
    }
}
liquibase {
    activities {
        main {
            changeLogFile '...'
            url 'jdbc:postgresql://localhost:5438/***'
            username '***'
            password '***'
            driver 'org.postgresql.Driver'
        }
    }
}
import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit

task('sshTunnelStart') {
    doFirst {
        project.ext.ready = new CountDownLatch(1)
        project.ext.done = new CountDownLatch(1)
        Thread.start {
            ssh.run {
                session(remotes.bastion) {
                    forwardLocalPort port: 5438,
                                     host: '<real db hostname>',
                                     hostPort: 5432
                    project.ready.countDown()
                    // keep the session (and with it the tunnel) open until the update finishes
                    project.done.await(5, TimeUnit.MINUTES) // liquibase update timeout
                }
            }
        }
        ready.await(10, TimeUnit.SECONDS) // start tunnel timeout
    }
}
task('sshTunnelStop') {
    doLast {
        // releasing the latch lets the ssh session close, tearing down the tunnel
        project.done.countDown()
    }
}
update.dependsOn(sshTunnelStart)
update.finalizedBy(sshTunnelStop)
Note that in the Liquibase config I use localhost:5438, since that is the local port forwarded to the remote. The same port is later used in forwardLocalPort as the 'port' parameter; the 'host' parameter is set to the remote database host, and 'hostPort' is accordingly the database port. The last part of the config adds task dependencies so that liquibase update starts and stops the tunnel.
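The same wiring works for any other Liquibase task that needs the tunnel. A minimal sketch, assuming you also use the plugin's rollback and status tasks (adjust to the task names you actually run):
[update, rollback, status].each { t ->
    t.dependsOn(sshTunnelStart)
    t.finalizedBy(sshTunnelStop)
}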

Related

Unable to connect to postgres using deno.js

Unable to connect to postgres in deno.js.
Here is the configuration:
const dbCreds = {
  applicationName: "appname",
  user: "user_sfhjwre",
  database: "d9iu8mve7nen",
  password: "68790f31eelkhlashdlkagsvADSDa52f9d8faed894c037ef6f9c9f09885603",
  hostname: "ec2-345-34-97-212.eu-east-1.xx.amazonaws.com",
  port: 5432,
};
export { dbCreds };
Usage:
import { Client } from "https://deno.land/x/postgres/mod.ts";
import { dbCreds } from "../config.ts";
const client = new Client(dbCreds);
await client.connect();
Also tried:
const config = "postgres://user@localhost:5432/test?application_name=my_custom_app";
const client = new Client(config);
await client.connect();
Same result:
Uncaught Error: Unknown auth message code 1397113172
Is there anything wrong with the syntax? I can connect to the same DB using Prisma.
I have the PostgreSQL server on a remote machine, and each time my public IP changes I need to update pg_hba.conf to authorize the new IP for remote access.
Hope this helps.
Best regards.
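For reference, a minimal sketch of such a pg_hba.conf entry, assuming a hypothetical client IP of 203.0.113.7 (after editing, reload the server, e.g. with SELECT pg_reload_conf();):
# TYPE  DATABASE  USER  ADDRESS          METHOD
host    all       all   203.0.113.7/32   md5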

Create schema for Google Cloud SQL PostgreSQL database using Terraform

I'm new to Terraform, and I want to create a schema for the postgres database created on a PostgreSQL 9.6 instance on Google Cloud SQL.
To create the PostgreSQL instance I have this in main.tf:
resource "google_sql_database_instance" "my-database" {
  name             = "my-${var.deployment_name}"
  database_version = "POSTGRES_9_6"
  region           = "${var.deployment_region}"
  settings {
    tier = "db-f1-micro"
    ip_configuration {
      ipv4_enabled = true
    }
  }
}
Then I tried to create the PostgreSQL provider like this:
provider "postgresql" {
  host     = "${google_sql_database_instance.my-database.ip_address}"
  username = "postgres"
}
Finally, creating the schema:
resource "postgresql_schema" "my_schema" {
  name  = "my_schema"
  owner = "postgres"
}
However, this configuration does not work; when I run terraform plan I get:
Inappropriate value for attribute "host": string required.
If I remove the Postgres object, I get:
Error: Error initializing PostgreSQL client: error detecting capabilities: error PostgreSQL version: dial tcp :5432: connect: connection refused
Additionally, I would like to add a password for the postgres user, which is created by default when the PostgreSQL instance is created.
EDIT: versions used:
Terraform v0.12.10
+ provider.google v2.17.0
+ provider.postgresql v1.2.0
Any suggestions?
There are a few issues with the Terraform setup you have above.
Your instance does not have any authorized networks defined. You should change your instance resource to look like this (note: I used 0.0.0.0/0 just for testing purposes):
resource "google_sql_database_instance" "my-database" {
name = "my-${var.deployment_name}"
database_version = "POSTGRES_9_6"
region = "${var.deployment_region}"
settings {
tier = "db-f1-micro"
ip_configuration {
ipv4_enabled = true
authorized_networks {
name = "all"
value = "0.0.0.0/0"
}
}
}
depends_on = [
"google_project_services.vpc"
]
}
As mentioned here, you need to create a user with a strong password
resource "google_sql_user" "user" {
name = "test_user"
instance = "${google_sql_database_instance.my-database.name}"
password = "VeryStrongPassword"
depends_on = [
"google_sql_database_instance.my-database"
]
}
You should use the "public_ip_address" or "ip_address.0.ip_address" attribute of your instance to access its IP address. Also, update your provider and schema resource to reflect the user created above:
provider "postgresql" {
host = "${google_sql_database_instance.my-database.public_ip_address}"
username = "${google_sql_user.user.name}"
password = "${google_sql_user.user.password}"
}
resource "postgresql_schema" "my_schema" {
name = "my_schema"
owner = "test_user"
}
Your postgres provider depends on the google_sql_database_instance resource being created before the provider can be set up:
All the providers are initialized at the beginning of plan/apply, so if one has an invalid config (in this case an empty host) then Terraform will fail. There is no way to define a dependency between a provider and a resource within another provider.
There is, however, a workaround using the -target parameter:
terraform apply -target=google_sql_user.user
This will create the database user (as well as all of its dependencies, in this case the database instance). Once that completes, follow it with:
terraform apply
This should then succeed, as the instance has already been created and its IP address is available to the postgres provider.
Final note: using public IP addresses without SSL to connect to Cloud SQL instances is not recommended for production instances.
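As a sketch of that note (assuming the same google provider version), you can force SSL on the instance's public IP with require_ssl inside ip_configuration:
ip_configuration {
  ipv4_enabled = true
  # reject unencrypted connections; clients must then connect with SSL certs
  require_ssl = true
}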
This was my solution; this way I just need to run terraform apply:
// POSTGRESQL INSTANCE
resource "google_sql_database_instance" "my-database" {
  database_version = "POSTGRES_9_6"
  region           = var.deployment_region
  settings {
    tier = var.db_machine_type
    ip_configuration {
      ipv4_enabled = true
      authorized_networks {
        name  = "my_ip"
        value = var.db_allowed_networks.my_network_ip
      }
    }
  }
}
// DATABASE USER
resource "google_sql_user" "user" {
  name     = var.db_credentials.db_user
  instance = google_sql_database_instance.my-database.name
  password = var.db_credentials.db_password
  depends_on = [
    "google_sql_database_instance.my-database"
  ]
  provisioner "local-exec" {
    command = "psql postgresql://${google_sql_user.user.name}:${google_sql_user.user.password}@${google_sql_database_instance.my-database.public_ip_address}/postgres -c \"CREATE SCHEMA myschema;\""
  }
}

How can I configure a play-slick-db connection with ssh config?

This is my current DB connection in Play Slick:
slick.dbs.default {
  driver = "utils.db.PostgresDriver$"
  db {
    driver = org.postgresql.Driver
    url = "jdbc:postgresql://127.0.0.1/testdb"
    user = "root"
    password = ""
    keepAliveConnection = true
  }
}
My question is: how do I connect to a remote DB that requires SSH authentication?
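One approach that mirrors the Gradle answer above: open an SSH tunnel outside the application and point the JDBC URL at the forwarded local port. A minimal sketch, with a hypothetical bastion host and local port 5438:
# in a shell, before starting the app (hypothetical hosts):
# ssh -N -L 5438:your-db-host:5432 user@bastion
slick.dbs.default {
  driver = "utils.db.PostgresDriver$"
  db {
    driver = org.postgresql.Driver
    # localhost:5438 is the local end of the tunnel
    url = "jdbc:postgresql://127.0.0.1:5438/testdb"
    user = "root"
    password = ""
    keepAliveConnection = true
  }
}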

how to avoid having mongodb as default datasource when working with multiple datasources in grails 3

I have my application.groovy set up as :
environments {
    development {
        mongo {
            host = 'localhost'
            port = 27107
            username = dbusername
            password = dbpassword
            databaseName = dbname
        }
        dataSources {
            dataSource {
                pooled = true
                jmxExport = true
                driverClassName = 'com.microsoft.sqlserver.jdbc.SQLServerDriver'
                dbCreate = ''
                username = dbusername
                password = dbpassword
                url = "jdbc:sqlserver://${dbserver}:${dbport};databaseName=${dbname}"
            }
        }
    }
}
But now it seems that all of my domains' data sources point to the MongoDB, so I can no longer query the domains that are linked to the MSSQL DB. How can I avoid this?
A secondary question, though not as important: the MongoDB plugin documentation says to put the connection config within environments -> development; I wonder why it can't go inside dataSources, which would be much neater (in the domain I could just point to the dataSource). I tried moving the config into dataSources and it didn't work!
In the debugger, if I run MyDomain.list() I get:
result = {MongoQuery$MongoResultList#12334} size = 0
Any help will be much appreciated. Thanks in advance,
Dee
I was trying to use the "mongodb" plugin; I am not sure if it is supported in Grails 3. I got things working with gmongo instead. I added these two dependencies in my build.gradle:
compile "org.mongodb:mongo-java-driver:3.0.2"
compile "com.gmongo:gmongo:1.5"
and removed the mongo config:
environments {
    development {
        mongo {
            host = 'localhost'
            port = 27107
            username = dbusername
            password = dbpassword
            databaseName = dbname
        }
        ....
    }
}
gmongo seems to pick up the default database credentials. This is how I created the db instance to work off of it:
def mongo = new GMongo()
def db = mongo.getDB("dbName")
Hope this helps someone facing a similar problem.
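If you do not want to rely on the defaults, GMongo can also be given the host and port explicitly; a sketch, assuming the GMongo(host, port) constructor from the gmongo README:
import com.gmongo.GMongo

// placeholders; replace with your real host, port and database name
def mongo = new GMongo('localhost', 27017)
def db = mongo.getDB('dbName')
println db.getCollectionNames()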

Cargo plugin for Gradle ignoring configured port

I have the following configuration :
cargo {
    containerId = deployContainerId
    port = jbossManagementPort
    deployable {
        file = tasks.getByPath(':frontend:war').archivePath
        context = 'xxxxxx'
    }
    remote {
        hostname = 'localhost'
        username = 'xxxxxxx'
        password = 'xxxxxxx'
    }
    local {
        homeDir = file(jbossHome)
        timeout = 60000
    }
}
When I invoke Gradle with
gradle -PjbossManagementPort=12345 -PdeployContainerId=jboss7x -PjbossHome=/opt/jboss cargoRedeployRemote
The configured port is ignored; Cargo still tries to connect to 9999. I have tried variants such as
gradle -Pcargo.port=12345 -PdeployContainerId=jboss7x -PjbossHome=/opt/jboss cargoRedeployRemote
And
gradle -Pcargo.jboss.management-native.port=12345 -PdeployContainerId=jboss7x -PjbossHome=/opt/jboss cargoRedeployRemote
But neither has any effect.
How do I tell Cargo to use a different port than the default?
The solution is to use -D for the Cargo property rather than -P:
gradle -Dcargo.jboss.management-native.port=12345 -PdeployContainerId=jboss7x -PjbossHome=/opt/jboss cargoRedeployRemote
A possible alternative you can define in your Gradle build to handle this issue:
remote {
    // you can define custom Cargo properties here
    containerProperties {
        property 'cargo.jboss.management-native.port', 12345
    }
    hostname = 'localhost'
    username = 'xxxxxxx'
    password = 'xxxxxxx'
}
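If you still want to drive the port from the command line, the project property from the question can be fed into the container property instead of a hard-coded value; a sketch assuming the same -PjbossManagementPort property:
remote {
    containerProperties {
        // hypothetical: reuse the -PjbossManagementPort project property
        property 'cargo.jboss.management-native.port', jbossManagementPort
    }
    hostname = 'localhost'
    username = 'xxxxxxx'
    password = 'xxxxxxx'
}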