Npgsql Provide Client Certificate and Key - postgresql

I am trying to establish a connection to a PostgreSQL server that requires both a client certificate and a client key.
First, I can verify that connections to the Postgres database work using SQLGate: provide host, user, password, port, and database, check Use SSL, then under SSL provide the certificate and key. The connection does not work without both of those items. Using Npgsql, I provide everything except the key, because NpgsqlConnectionStringBuilder does not seem to contain a definition for any kind of client key.
var connectionString = new NpgsqlConnectionStringBuilder();
connectionString.Host = rInfo.Host;
int portNumber = 5432;
int.TryParse(rInfo.Port, out portNumber);
connectionString.Port = portNumber;
connectionString.Database = rInfo.dbName;
connectionString.Username = rInfo.Username;
connectionString.Password = rInfo.Password;
connectionString.SslMode = SslMode.Prefer;
connectionString.TrustServerCertificate = true;
connectionString.ClientCertificate = rInfo.CertFilePath;

// Poke the database, see if we can get in.
try
{
    NpgsqlConnection npgsqlConnection = new NpgsqlConnection(connectionString.ToString());
    npgsqlConnection.ProvideClientCertificatesCallback += provideCertificates;
    npgsqlConnection.UserCertificateValidationCallback += validateCertificates;
    npgsqlConnection.Open();
    npgsqlConnection.Close();
    return connectionString.ToString();
}
The exception is:
Error 28000 : connection requires a valid client certificate
This is to be expected, since I'm not providing the key anywhere. I have tried forcing the key into the connection string by guessing at the keyword:
connectionString.Add(new KeyValuePair<string, object?>("Client Key", rInfo.KeyFilePath));
But that keyword is unrecognized. libpq's connection documentation labels it sslkey, but that comes back as unrecognized as well. My best guess is to use the ProvideClientCertificatesCallback to provide the certificate, but I don't know how to pair it with the key, since the callback only hands me an X509CertificateCollection.
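For what it's worth, the rough sketch below is what I have in mind, assuming we are on .NET 5+ (where X509Certificate2.CreateFromPemFile is available), that rInfo is reachable from the callback, and that CertFilePath and KeyFilePath point at PEM files:

// Requires: using System.Security.Cryptography.X509Certificates;
private void provideCertificates(X509CertificateCollection certificates)
{
    // Combine the PEM certificate and its private key into a single X509Certificate2,
    // then hand it to Npgsql through the collection the callback receives.
    var clientCert = X509Certificate2.CreateFromPemFile(rInfo.CertFilePath, rInfo.KeyFilePath);
    certificates.Add(clientCert);
}

I don't know whether this is the intended way to pair the key with the certificate, or whether the certificate and key first have to be merged into a PKCS#12/PFX file (which I gather is sometimes necessary on Windows).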
The previous tool we were using was provided by Devart, but we have lost the license. We will also be connecting to a range of databases (all with the same schema) instead of just one.
What are my options?

Related

Terraform issue when deleting a server parameter on a PostgreSQL Flexible Server

Getting error: waiting for deletion of Flexible Server Configuration (pgbouncer.enabled)
Once I change the configuration from the portal, the script works.
azurerm_postgresql_flexible_server_configuration" "pgbouncer"
{ name = "pgbouncer.enabled"
value = var.pgbouncer
server_id = azurerm_postgresql_flexible_server.pgsql.id
}

PostgreSQL 12.7 (Ubuntu 12.7-0ubuntu0.20.04.1): can't connect to the database?

This is on WSL 2, following the instructions from the official documentation.
I created a simple PostgreSQL database and try to connect to it like so:
const Sequelize = require("sequelize");

const sequelize = new Sequelize('postgres://postgres:w@localhost:5432/messenger', {
  logging: false,
  dialect: 'postgre'
});

async function test() {
  try {
    await sequelize.authenticate();
    console.log('Connection has been established successfully.');
  } catch (error) {
    console.error('Unable to connect to the database:', error);
  }
}

test();
This tells me whether the connection was successful, but for some reason I keep getting this error. The URI string seems correct; the only user is the default postgres user, whose password I changed to "w" for testing purposes.
I am not sure what the 12/main part is about, but the server is online, so I am really not sure what the problem is.
Either you specified the wrong password for the database user "postgres", or your server isn't configured to allow password authentication on localhost. See https://www.postgresql.org/docs/current/auth-methods.html for more information on the authentication methods PostgreSQL supports, and https://www.postgresql.org/docs/current/auth-pg-hba-conf.html for how to set up authentication on your PostgreSQL server.
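As a rough illustration (the exact file location, existing entries, and preferred method depend on your installation), a pg_hba.conf entry along these lines allows password authentication for TCP connections from localhost:

# TYPE  DATABASE  USER  ADDRESS       METHOD
host    all       all   127.0.0.1/32  scram-sha-256

Use md5 instead if your passwords were stored with the older hashing method, and reload the server afterwards (for example, sudo service postgresql reload on Ubuntu/WSL) so the change takes effect.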

The first part (before the first dot) of the PostgreSQL service host name is not validated by the IBM Cloud PostgreSQL service

We have an IBM Cloud PostgreSQL instance. We connect to it using the node-postgres client, or an ODBC client with the DataDirect driver. As we understand it, the PostgreSQL service instance should raise an error when an incorrect host is supplied while connecting. However, the instance does not raise any error when we provide an incorrect host (an incorrect value for the first part of the host string, before the first dot).
Steps to reproduce the issue:
Create an IBM Cloud PostgreSQL instance. The host string has the following format:
<<1st part of host>>.<<2nd part of host>>.databases.appdomain.cloud
Connect to the instance using the node-postgres client, or an ODBC client with the DataDirect driver, with an incorrect value for the first portion (<<1st part of host>>) of the host.
It connects successfully without any issue. If we instead provide an incorrect value for the remaining portion (<<2nd part of host>>.databases.appdomain.cloud), it throws an error.
We used the code snippet below to validate this scenario:
const pg = require('pg')

const connectionString = 'postgres://user:password@<<1st part of host>>.<<2nd part of host>>.databases.appdomain.cloud:31974/ibmclouddb?sslmode=verify-full'
const caCert = 'Self Signed CA Certificate'

const client = new pg.Client({
  connectionString: connectionString,
  ssl: {
    ca: caCert,
    rejectUnauthorized: false
  }
})

client.connect()
client.query('select * from test_pg.char_test4').then(res => {
  console.log('res.rows :::::: ', res)
}).finally(() => client.end())

Adding a PostgreSQL role to a database after the RDS instance was created via Terraform

I am trying to add a role to a PostgreSQL database after creating it in RDS via Terraform.
I have two separate modules: one creates the RDS instance, the other adds the new role to it. The database address is an output of the persistence module and an input of the persistenceApplicationRole module. The problem seems to be that the PostgreSQL provider is run before the RDS instance is created, so the address is empty.
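To illustrate, the wiring between the two modules is roughly this (simplified; surrounding settings omitted):

# root module (simplified)
module "persistence" {
  source = "../modules/persistence"
  # ... RDS instance settings ...
}

module "persistenceApplicationRole" {
  source          = "../modules/persistenceApplicationRole"
  # databaseAddress is declared as an output of the persistence module
  databaseAddress = module.persistence.databaseAddress
}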
The error I am getting is:
Error: Error initializing PostgreSQL client: error detecting capabilities: error PostgreSQL version: dial tcp :5432: connect: connection refused
on ../modules/persistenceApplicationRole/main.tf line 9, in provider "postgresql":
9: provider postgresql {
Running the modules separately via the -target=module.persistence flag works, since persistenceApplicationRole picks up the database address once it is created.
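That is, roughly:

terraform apply -target=module.persistence
terraform apply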
I have found an example with this exact scenario for the MySQL Provider in their documentation here.
# module.persistenceApplicationRole
provider postgresql {
  host      = var.databaseAddress
  username  = data.external.root_credentials.result["username"]
  password  = data.external.root_credentials.result["password"]
  superuser = false
}

resource "postgresql_role" "application_role" {
  name                = data.external.application_credentials.result["username"]
  password            = data.external.application_credentials.result["password"]
  login               = true
  encrypted_password  = true
  skip_reassign_owned = true
  skip_drop_role      = true
}
The 1.4.0 release of the PostgreSQL provider added expected_version, which you can use to avoid the feature detection at plan time that attempts to connect to the database. That feature detection was introduced back in the 0.1.1 release and broke the ability to create the underlying instance and configure the database in the same run.
To use expected_version you would do something like this:
provider "postgresql" {
  host             = var.databaseAddress
  username         = data.external.root_credentials.result["username"]
  password         = data.external.root_credentials.result["password"]
  superuser        = false
  expected_version = "10.1"
}
The more common use case would be creating an RDS instance or something else and interpolating that across:
resource "aws_db_instance" "database" {
# ...
}
provider "postgresql" {
version = ">=1.4.0"
host = aws_db_instance.database.address
port = aws_db_instance.database.port
username = aws_db_instance.database.user
password = aws_db_instance.database.password
sslmode = "require"
connect_timeout = 15
superuser = false
expected_version = aws_db_instance.database.engine_version
}

VOMongoRepository fails to connect to MongoDB replicaset with user credentials (Pharo/Voyage)

I am trying to save a root object (MyDocument) into a MongoDB with authentication enabled and a replica set consisting of 3 nodes (as listed in mongoUrls).
With this call:
(VOMongoRepository
    mongoUrls: { '127.0.0.1:27017' . '127.0.0.1:27018' . '127.0.0.1:27019' }
    database: 'myDB'
    username: 'myUser'
    password: 'myPass') enableReplication
I receive a VOMongoConnectionError without any deeper information.
Trying the same with this:
VOMongoRepository
    mongoUrls: { 'myUser:myPass@127.0.0.1:27017/?replicaSet=myRepl' }
    database: 'myDB'
I then receive a VOMongoError: "not authorized for Query on myDB.MyDocument".
The credentials have been double-checked with the mongo client and with Compass, as have the read/write permissions (the role is actually dbOwner).
Interestingly, my testDocumentLifeCycle test is able to create the object and send it a save message, which returns without signaling an error, although it does not actually create the document in MongoDB. The subsequent selectOne: then returns the VOMongoError:
| user |
MyDocument new
    identity: 'me@there.com';
    save.
user := MyDocument selectOne: [ :each | each identity = 'me@there.com' ].
Just to mention: the above test for the MyDocument class did work with a standalone mongod without authentication enabled. The only thing that changed is the repository.
So what am I doing wrong?
There is in fact a bug in the replicaSet part of VoyageMongo: it does not use the credentials provided. It has been reported at https://github.com/pharo-nosql/voyage/issues/104