The first part (before the first dot) of the PostgreSQL service host name is not validated by the IBM Cloud PostgreSQL service

We have an IBM Cloud PostgreSQL instance. We connect to it using the node-postgres client or an ODBC client with the DataDirect driver. As we understand it, the service instance should throw an error when an incorrect host is provided at connection time. However, no error is thrown when the host is incorrect (specifically, when the first part of the host string, before the first dot, is wrong).
Steps to reproduce the issue
Create an IBM Cloud PostgreSQL instance. The host string has the following format:
<<1st part of host>>.<<2nd part of host>>.databases.appdomain.cloud
Connect to the instance using the node-postgres client or an ODBC client with the DataDirect driver, supplying an incorrect value for the first portion (<<1st part of host>>) of the host.
The connection succeeds without any issue. If we instead provide an incorrect value for the remaining portion (<<2nd part of host>>.databases.appdomain.cloud), an error is thrown.
We used the code snippet below to validate this scenario:
const pg = require('pg')

const connectionString = 'postgres://user:password@<<1st part of host>>.<<2nd part of host>>.databases.appdomain.cloud:31974/ibmclouddb?sslmode=verify-full'
const caCert = 'Self Signed CA Certificate'

const client = new pg.Client({
  connectionString: connectionString,
  ssl: {
    ca: caCert,
    rejectUnauthorized: false // server certificate is not verified
  }
})

client.connect()
client.query('select * from test_pg.char_test4').then(res => {
  console.log('res.rows :::::: ', res.rows)
}).finally(() => client.end())
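
Note that rejectUnauthorized: false tells the TLS layer to accept any server certificate, so the certificate is never checked against the host name you typed; as far as I can tell, this explicit ssl object also takes precedence over the sslmode=verify-full hint in the URL. That would explain why a wrong first label still connects, as long as DNS resolves it (presumably via a wildcard record). A minimal sketch of the stricter client-side check, reusing the same host placeholders and assuming the IBM-provided CA certificate has been saved to a local file (the file name here is only illustrative):

const pg = require('pg')
const fs = require('fs')

const client = new pg.Client({
  connectionString: 'postgres://user:password@<<1st part of host>>.<<2nd part of host>>.databases.appdomain.cloud:31974/ibmclouddb',
  ssl: {
    ca: fs.readFileSync('ca-certificate.crt', 'utf8'), // CA certificate from the service credentials (illustrative path)
    rejectUnauthorized: true // verify the certificate chain and the host name
  }
})

client.connect()
  .then(() => {
    console.log('certificate and host name verified')
    return client.end()
  })
  .catch(err => console.error('connection rejected:', err.message))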

Related

(PostgreSQL) 12.7 (Ubuntu 12.7-0ubuntu0.20.04.1) can't connect to the database?

This is on WSL 2, following the instructions from the official documentation.
I created a simple PostgreSQL database and I try to connect to it like so:
const Sequelize = require("sequelize");

const sequelize = new Sequelize('postgres://postgres:w@localhost:5432/messenger', {
  logging: false,
  dialect: 'postgres'
});

async function test() {
  try {
    await sequelize.authenticate();
    console.log('Connection has been established successfully.');
  } catch (error) {
    console.error('Unable to connect to the database:', error);
  }
}

test();
This tells me whether the connection was successful, but for some reason I keep getting this error. The URI string seems correct, and the only user is the default postgres user, whose password I changed to "w" for testing purposes.
I'm not sure what that 12/main part is about, but the server is online, so I am really not sure what the problem is.
Either you specified the wrong password for the database user "postgres", or your server isn't configured to allow password authentication on localhost. See https://www.postgresql.org/docs/current/auth-methods.html for more information on the authentication methods PostgreSQL supports, or https://www.postgresql.org/docs/current/auth-pg-hba-conf.html for information on how to set up authentication on your PostgreSQL server.
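For example, to allow password logins over local TCP connections, pg_hba.conf needs a host entry roughly like the following (a sketch only; whether scram-sha-256 or md5 is appropriate depends on how the password was stored), after which the server configuration must be reloaded:

host    all    all    127.0.0.1/32    scram-sha-256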

Getting correct socketPath for TypeORM config

I'm trying to connect a Cloud Run service to Cloud SQL postgres instance. I believe I'm nearly there, but am having some trouble getting the deployed instance to connect properly. My local environment can connect (via SSL) to the database intended for production, but the deployed version can't...
I'm using TypeORM, and have everything setup properly in the configuration...
// imports assumed from the usual NestJS / TypeORM packages
import { Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { TypeOrmModule } from '@nestjs/typeorm';
import { SnakeNamingStrategy } from 'typeorm-naming-strategies';

@Module({
  imports: [
    TypeOrmModule.forRootAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      useFactory: (configService: ConfigService) => {
        const socketPath = configService.get('DB_SOCKET_PATH');
        const extra = socketPath ? {
          socketPath: socketPath,
          ssl: {
            rejectUnauthorized: false,
            ca: Buffer.from(process.env.DB_SSL_CA, 'base64').toString('ascii'),
            cert: Buffer.from(process.env.DB_SSL_CERT, 'base64').toString('ascii'),
            key: Buffer.from(process.env.DB_SSL_KEY, 'base64').toString('ascii'),
          }
        } : {};
        return ({
          type: 'postgres',
          host: socketPath || configService.get('DB_HOST'),
          port: configService.get('DB_PORT'),
          username: configService.get('DB_USER'),
          password: configService.get('DB_PASS'),
          database: configService.get('DB_NAME'),
          extra: extra,
          entities: [__dirname + '/../../modules/**/*.entity{.ts,.js}'],
          namingStrategy: new SnakeNamingStrategy(),
          synchronize: true,
        });
      }
    })
  ]
})
export class DatabaseModule { }
Despite that, I'm getting an error when I try to use the socketPath as the host rather than the actual host variable (which is necessary for GCP). It seems that TypeORM is adding extra characters, /.s.PGSQL.5432, at the end of my connection string that I don't want. And just to clarify, the socket path is in the form of /cloudsql/<PROJECT_ID>:<REGION>:<INSTANCE>.
[Nest] 28532 - 02/15/2021, 2:25:07 PM [ExceptionHandler] connect ENOENT <DB_SOCKET_PATH>/.s.PGSQL.5432 +3ms
Error: connect ENOENT <DB_SOCKET_PATH>/.s.PGSQL.5432
at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1141:16)
At some point in the past this used to work for me, but I guess something changed in the TypeORM library. Does anybody have any ideas on this? Thanks!
EDIT: As of now I've gotten it to connect to the server correctly, but it's now giving me an error that says the server doesn't support SSL connections, which makes no sense given that I can connect via SSL fine on my local machine...?
SOLUTION: The issue does not seem to be any code's fault, but rather some networking configuration on the GCP side. I configured the service and database to run through a VPC and then just used a private IP address for the host.
It seems that TypeORM is adding extra characters, /.s.PGSQL.5432
This is actually intended - the Postgres spec requires that the unix sockets end with this suffix.
[Nest] 28532 - 02/15/2021, 2:25:07 PM [ExceptionHandler] connect ENOENT <DB_SOCKET_PATH>/.s.PGSQL.5432 +3ms
The error means that the socket wasn't found - usually because of a misconfiguration that prevented the Cloud SQL proxy from starting. You can check your logs at instance startup to see if the proxy left any errors, but generally it comes down to the following:
The Cloud SQL Admin API needs to be enabled
Your service account needs to have Cloud SQL Connect IAM role (or equivalent)
The service needs to be configured for Cloud SQL.
For a full list of instructions, see the Connecting from Cloud Run to Cloud SQL page.
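Once the socket is actually mounted, the host only has to point at the socket directory; here is a minimal sketch with node-postgres (assuming the Cloud SQL socket directory is mounted under /cloudsql and the usual DB_* environment variables are set):

const { Client } = require('pg');

// pg treats a host that starts with '/' as a unix-socket directory and appends
// /.s.PGSQL.<port> itself - the same suffix that appears in the error above.
const client = new Client({
  host: '/cloudsql/<PROJECT_ID>:<REGION>:<INSTANCE>',
  user: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME,
  // no ssl block: traffic over the Cloud SQL unix socket is already encrypted by the proxy
});

client.connect()
  .then(() => console.log('connected over the unix socket'))
  .catch(console.error);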

Npgsql Provide Client Certificate and Key

I am trying to formulate a connection to a PGSQL server that requires both a client certificate and key to operate.
First, I can verify that connections to the Postgres database work using SQLGate: provide the host, user, password, port, and database, mark Use SSL, then under SSL provide the certificate and key. The connection does not work without both of those items. Using Npgsql, I provide everything but the key, since NpgsqlConnectionStringBuilder does not seem to contain a definition for any sort of client key.
var connectionString = new NpgsqlConnectionStringBuilder();
connectionString.Host = rInfo.Host;

int portNumber = 5432;
int.TryParse(rInfo.Port, out portNumber);
connectionString.Port = portNumber;

connectionString.Database = rInfo.dbName;
connectionString.Username = rInfo.Username;
connectionString.Password = rInfo.Password;
connectionString.SslMode = SslMode.Prefer;
connectionString.TrustServerCertificate = true;
connectionString.ClientCertificate = rInfo.CertFilePath;

// Poke the database, see if we can get in.
try
{
    NpgsqlConnection npgsqlConnection = new NpgsqlConnection(connectionString.ToString());
    npgsqlConnection.ProvideClientCertificatesCallback += provideCertificates;
    npgsqlConnection.UserCertificateValidationCallback += validateCertificates;
    npgsqlConnection.Open();
    npgsqlConnection.Close();
    return connectionString.ToString();
}
The exception is:
Error 28000 : connection requires a valid client certificate
Which is to be expected since I'm not providing the key anywhere. I have tried forcing the key to be added to the connection string via guessing:
connectionString.Add(new KeyValuePair<string, object?>("Client Key", rInfo.KeyFilePath));
But that's unrecognized. Libpq's PG Connect documentation labels it as sslkey, but that comes back as unrecognized as well. My best guess is to use the ProvideClientCertificatesCallback callback to provide the certificate, but I don't know how to pair it with a key, since the callback just asks for an X509CertificateCollection.
The previous tool we were using was provided by Devart, but we have lost the license. We also will be connecting to a range of databases (with the same schema) instead of just one.
What are my options?

Get the PostgreSQL server version from connection?

Is there anything in the modern PostgreSQL connection protocol that would indicate the server version?
And if not, is there a special low-level request that an endpoint can execute against an open connection to pull the server details that would contain the version?
I'm looking at a possible extension of node-postgres that would automatically provide the server version upon every fresh connection. And I want to know if this is at all possible.
Having to execute SELECT version() upon every new connection and then parsing it is too high-level for the base driver that manages the connection. It should be done on the protocol level.
After a bit of research, I found that PostgreSQL does provide the server version during connection, within the start-up message.
And specifically, within the node-postgres driver we can make Pool use a custom Client that handles the parameterStatus event on the connection and exposes the server version:
const {Client, Pool} = require('pg');

class MyClient extends Client {
  constructor(config) {
    super(config);
    this.connection.on('parameterStatus', msg => {
      if (msg.parameterName === 'server_version') {
        this.version = msg.parameterValue;
      }
    });
  }
}

const cn = {
  database: 'my-db',
  user: 'postgres',
  password: 'bla-bla',
  Client: MyClient // here's our custom Client type
};

const pool = new Pool(cn);

pool.connect()
  .then(client => {
    console.log('Server Version:', client.version);
    client.release(true);
  })
  .catch(console.error);
On my test PC, I use PostgreSQL v11.2, so this test outputs:
Server Version: 11.2
UPDATE - 1
Library pg-promise has been updated to support the same functionality in TypeScript. And you can find a complete example in this ticket.
UPDATE - 2
See example here:
// tests connection and returns Postgres server version,
// if successful; or else rejects with connection error:
async function testConnection() {
  const c = await db.connect(); // try to connect
  c.done(); // success, release connection
  return c.client.serverVersion; // return server version
}
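
A minimal usage sketch, assuming db is the pg-promise database object created elsewhere in the application:

testConnection()
  .then(version => console.log('Server Version:', version))
  .catch(console.error);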

Access a database running in an EC2 instance through an AWS Lambda function

I wrote a Lambda function in Python 3.6 to access a PostgreSQL database that is running in an EC2 instance.
psycopg2.connect(user="<USER NAME>",
                 password="<PASSWORD>",
                 host="<EC2 IP Address>",
                 port="<PORT NUMBER>",
                 database="<DATABASE NAME>")
I created a deployment package with the required dependencies as a zip file and uploaded it to AWS Lambda. To build the dependencies I followed THIS reference guide.
I also configured the Virtual Private Cloud (VPC) as the default one and included the EC2 instance details, but I couldn't get a connection to the database; trying to connect to the database from Lambda results in a timeout.
Lambda function:
from __future__ import print_function
import json
import ast, datetime
import psycopg2

def lambda_handler(event, context):
    received_event = json.dumps(event, indent=2)
    load = ast.literal_eval(received_event)
    try:
        connection = psycopg2.connect(user="<USER NAME>",
                                      password="<PASSWORD>",
                                      host="<EC2 IP Address>",
                                      # host="localhost",
                                      port="<PORT NUMBER>",
                                      database="<DATABASE NAME>")
        cursor = connection.cursor()
        postgreSQL_select_Query = "select * from test_table limit 10"
        cursor.execute(postgreSQL_select_Query)
        print("Selecting rows from mobile table using cursor.fetchall")
        mobile_records = cursor.fetchall()
        print("Print each row and its column values")
        for row in mobile_records:
            print("Id = ", row[0])
    except Exception as error:
        print("Error while fetching data from PostgreSQL", error)
    finally:
        # closing database connection.
        if connection:
            cursor.close()
            connection.close()
            print("PostgreSQL connection is closed")
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!'),
        'dt': str(datetime.datetime.now())
    }
I googled quite a lot, but I couldn't find any workaround for this. Is there any way to accomplish this requirement?
Your configuration would need to be:
A database in a VPC
The Lambda function configured to use the same VPC as the database
A security group on the Lambda function (Lambda-SG)
A security group on the database (DB-SG) that permits inbound connections from Lambda-SG on the relevant database port
That is, DB-SG refers to Lambda-SG.
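One way to express that rule, sketched with the AWS CLI (the group IDs are placeholders you would substitute, and 5432 assumes the default PostgreSQL port):

aws ec2 authorize-security-group-ingress \
    --group-id <DB-SG-ID> \
    --protocol tcp \
    --port 5432 \
    --source-group <Lambda-SG-ID>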
For Lambda to connect to any resources inside a VPC, it needs to set up ENIs in the related private subnets of the VPC. Have you set up the VPC association and the security groups of the EC2 instance correctly?
You can refer to https://docs.aws.amazon.com/lambda/latest/dg/vpc.html