PostgreSQL - Error: Connection terminated due to connection timeout

I have created a Google Cloud Function that connects to my PostgreSQL instance in Google Cloud.
I am using the 'pg' node module.
I have created a private IP for this instance.
I am getting the following error:
Error: Connection terminated due to connection timeout
    at Timeout.connectionTimeoutHandle.setTimeout (/workspace/node_modules/pg/lib/client.js:106:28)
    at ontimeout (timers.js:436:11)
    at tryOnTimeout (timers.js:300:5)
    at listOnTimeout (timers.js:263:5)
    at Timer.processTimers (timers.js:223:10)
when trying to query the database in Google Cloud.
This is the configuration I am using in the Cloud Function:
{
  "host": "",
  "user": "",
  "pw": "",
  "db": "<database_name>",
  "port": "5432",
  "table": "<table_name>",
  "max": 100,
  "idleTimeoutMillis": 30000,
  "connectionTimeoutMillis": 30000
}
Please help me with this.

Upgrading the pg npm package resolved the issue:
"pg": "^7.3.0",
to
"pg": "^8.7.1",
If the issue still persists, check your Node version and upgrade Node to >= 14.
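For reference, a minimal sketch of the connection with pg v8 in an HTTP Cloud Function (the host, credentials, and function name below are illustrative placeholders, not values from the question):

const { Pool } = require('pg');

const pool = new Pool({
  host: '10.0.0.3', // private IP of the Cloud SQL instance (placeholder)
  user: 'my-user', // placeholder
  password: 'my-password', // placeholder
  database: 'my-database', // placeholder
  port: 5432,
  max: 100,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 30000,
});

// HTTP Cloud Function: pool.query checks out a client, runs the query, and releases the client
exports.queryDb = async (req, res) => {
  const { rows } = await pool.query('SELECT NOW()');
  res.send(rows);
};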

According to the official documentation:
Connecting from Cloud Functions to Cloud SQL
To connect directly with private IP, you need to:
1. Make sure that the Cloud SQL instance created above has a private IP address. If you need to add one, see the Configuring private IP page for instructions.
2. Create a Serverless VPC Access connector in the same VPC network as your Cloud SQL instance. Unless you're using Shared VPC, a connector must be in the same project and region as the resource that uses it, but the connector can send traffic to resources in different regions.
3. Configure Cloud Functions to use the connector.
4. Connect using your instance's private IP and port 5432.
You can also find the Node.js code to establish the connection to the database:
const Knex = require('knex');

const connectWithTcp = config => {
  // Extract host and port from socket address
  const dbSocketAddr = process.env.DB_HOST.split(':'); // e.g. '127.0.0.1:5432'
  // Establish a connection to the database
  return Knex({
    client: 'pg',
    connection: {
      user: process.env.DB_USER, // e.g. 'my-user'
      password: process.env.DB_PASS, // e.g. 'my-user-password'
      database: process.env.DB_NAME, // e.g. 'my-database'
      host: dbSocketAddr[0], // e.g. '127.0.0.1'
      port: dbSocketAddr[1], // e.g. '5432'
    },
    // ... Specify additional properties here.
    ...config,
  });
};
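A brief usage sketch (the pool settings here are assumptions, not part of the original answer):

const knex = connectWithTcp({ pool: { min: 1, max: 5 } });

// Run a test query to verify the connection
knex.raw('SELECT NOW()')
  .then(result => console.log(result.rows))
  .catch(err => console.error('Connection failed:', err));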

Related

Connecting to MongoDB replicaSet with NestJS and TypeORM is not working

Problem
I'm trying to connect to MongoDB with NestJS (^8.2.3) and TypeORM (^0.2.28).
In the test environment, connecting to a MongoDB standalone server works. For your information, the node mongodb library version is ^3.6.2.
Production sample code (NestJS server)
I referred to the TypeORM code to write the MongoDB options:
import { TypeOrmModule } from '@nestjs/typeorm';
import { MongoConnectionOptions } from 'typeorm/driver/mongodb/MongoConnectionOptions';

export const configForOrmModule = TypeOrmModule.forRootAsync({
  imports: [],
  useFactory: async () => {
    const mongodbConfig: MongoConnectionOptions = {
      type: 'mongodb',
      username,
      // for replicaSet (production)
      hostReplicaSet: 'server1.example.com:20723,server2.example.com:20723,server.example.com:20723',
      replicaSet: 'replicaSetName',
      port: Number(port),
      password: encodeURIComponent(password),
      database,
      authSource,
      synchronize: true,
      useUnifiedTopology: true,
      entities: [Something],
    };
    return mongodbConfig;
  },
  inject: [],
});
But in the production environment, when the NestJS server tries to connect to the MongoDB replicaSet, it gets the server selection loop error below over and over again. The interesting thing was that the domain the server tried to connect to was different from the replicaSet hosts (e.g. another-hostname, not included in server1.example.com:20723,server2.example.com:20723,server.example.com:20723). (+ edited: the other hostname is the actual physical server indicated by the DNS (server.example.com))
01/28/2022, 2:39:16 AM ERROR [TypeOrmModule] Unable to connect to the database. Retrying (3)...
MongoServerSelectionError: getaddrinfo ENOTFOUND <another-hostname>
at Timeout._onTimeout (/home/node/app/node_modules/mongodb/lib/core/sdam/topology.js:430:30)
at listOnTimeout (node:internal/timers:557:17)
at processTimers (node:internal/timers:500:7)
What I've tried (none of it worked):
remove the useUnifiedTopology: true option
downgrade the mongodb library version to 3.5.11 (I've read in the MongoDB community that there are some bugs with topology after version 3.6)
use the host option instead of hostReplicaSet
If you need more information, please tell me. Thank you for your help.
It was a Kubernetes DNS issue. The hostReplicaSet server1.example.com:20723,... is resolved to host1 (the physical server name, without example.com), but k8s doesn't know that name, so the connection failed.
There are two options:
update the Kubernetes /etc/hosts setting to add host1 -> host1.example.com
or update the MongoDB hostname host1 -> host1.example.com
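In config terms, the second option amounts to putting only fully qualified, resolvable names in hostReplicaSet. A minimal sketch (host2/host3 are illustrative placeholders following the answer's host1 -> host1.example.com rename):

import { MongoConnectionOptions } from 'typeorm/driver/mongodb/MongoConnectionOptions';

// Every host listed here must be a name the cluster's DNS can resolve
const mongodbConfig: MongoConnectionOptions = {
  type: 'mongodb',
  hostReplicaSet: 'host1.example.com:20723,host2.example.com:20723,host3.example.com:20723',
  replicaSet: 'replicaSetName',
  useUnifiedTopology: true,
};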

Cannot connect to Atlas MongoDB from Azure Functions

I just created an Azure Function that should connect to my instance of MongoDB on Atlas, basically following this tutorial:
https://www.mongodb.com/blog/post/how-to-integrate-azure-functions-with-mongodb
From my local development environment with Visual Studio, everything works fine and I can connect to the Atlas environment, but when I deploy the code on Azure, the following exception is raised:
A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : "1", ConnectionMode : "ReplicaSet", Type : "ReplicaSet", State : "Disconnected", Servers : [{ ServerId: "{ ClusterId : 1, EndPoint : "Unspecified/ltdevcluster-shard-00-00.qkeby.mongodb.net:27017" }", EndPoint: "Unspecified/ltdevcluster-shard-00-00.qkeby.mongodb.net:27017", ReasonChanged: "Heartbeat", State: "Disconnected", ServerVersion: , TopologyVersion: , Type: "Unknown", HeartbeatException: "MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.
---> MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server.
---> System.IO.EndOfStreamException: Attempted to read past the end of the stream.
If I instead set the Network Access to "everywhere", again everything works fine.
Now, in the Network Access panel of Atlas, I added the IPs retrieved from the Azure Portal under my function app => Networking => Inbound traffic and Outbound traffic (a total of 1 IP for inbound and 3 IPs for outbound).
But adding those 4 IPs has not solved the issue.
What else should I do?
If you are using a static IP, as a workaround you can check this: How do I set a static IP in Functions?
You can also set up a Private Endpoint, and for the security of the database credentials, check secrets engine integration using Vault.
You can also refer to How to connect Azure Function with MongoDB Atlas and Azure functions unable to connect with Mongo Db Atlas M10.

Getting correct socketPath for TypeORM config

I'm trying to connect a Cloud Run service to Cloud SQL postgres instance. I believe I'm nearly there, but am having some trouble getting the deployed instance to connect properly. My local environment can connect (via SSL) to the database intended for production, but the deployed version can't...
I'm using TypeORM, and have everything set up properly in the configuration...
@Module({
  imports: [
    TypeOrmModule.forRootAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      useFactory: (configService: ConfigService) => {
        const socketPath = configService.get('DB_SOCKET_PATH');
        const extra = socketPath ? {
          socketPath: socketPath,
          ssl: {
            rejectUnauthorized: false,
            ca: Buffer.from(process.env.DB_SSL_CA, 'base64').toString('ascii'),
            cert: Buffer.from(process.env.DB_SSL_CERT, 'base64').toString('ascii'),
            key: Buffer.from(process.env.DB_SSL_KEY, 'base64').toString('ascii'),
          }
        } : {};
        return ({
          type: 'postgres',
          host: socketPath || configService.get('DB_HOST'),
          port: configService.get('DB_PORT'),
          username: configService.get('DB_USER'),
          password: configService.get('DB_PASS'),
          database: configService.get('DB_NAME'),
          extra: extra,
          entities: [__dirname + '/../../modules/**/*.entity{.ts,.js}'],
          namingStrategy: new SnakeNamingStrategy(),
          synchronize: true,
        });
      }
    })
  ]
})
export class DatabaseModule { }
Despite that, I'm getting an error when I try to use the socketPath as the host rather than the actual host variable (necessary for GCP). It seems that TypeORM is adding extra characters, /.s.PGSQL.5432, at the end of my connection string that I don't want. And just to clarify, the socket path is in the form /cloudsql/<PROJECT_ID>:<REGION>:<INSTANCE>.
[Nest] 28532 - 02/15/2021, 2:25:07 PM [ExceptionHandler] connect ENOENT <DB_SOCKET_PATH>/.s.PGSQL.5432 +3ms
Error: connect ENOENT <DB_SOCKET_PATH>/.s.PGSQL.5432
at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1141:16)
At an older point in time this used to work for me, but I guess something changed in the TypeORM library. Does anybody have any ideas on this? Thanks!
EDIT: As of now I've gotten it to connect to the server correctly, but it's now giving me an error that says the server doesn't support SSL connections, which makes no sense given that I can connect via SSL fine on my local machine...?
SOLUTION: The issue does not seem to any code's fault, but rather some networking stuff on the GCP side. I configured the service and database to run through a VPC then just used a private IP address for the host.
It seems that TypeORM is adding extra characters, /.s.PGSQL.5432
This is actually intended - the Postgres spec requires that unix sockets end with this suffix.
[Nest] 28532 - 02/15/2021, 2:25:07 PM [ExceptionHandler] connect ENOENT <DB_SOCKET_PATH>/.s.PGSQL.5432 +3ms
The error means that the socket wasn't found - usually because there was a misconfiguration and the Cloud SQL proxy couldn't start. You can check your logs at instance startup to see if the proxy left any errors, but generally it comes down to the following:
The Cloud SQL Admin API needs to be enabled
Your service account needs to have Cloud SQL Connect IAM role (or equivalent)
The service needs to be configured for Cloud SQL.
For a full list of instructions, see the Connecting from Cloud Run to Cloud SQL page.
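For comparison, a minimal sketch of a TypeORM Postgres config that targets the Cloud SQL unix socket once the proxy is up (the instance connection name and env var names are placeholders):

import { TypeOrmModule } from '@nestjs/typeorm';

// node-postgres appends /.s.PGSQL.5432 itself when the host is a unix socket directory
const socketPath = '/cloudsql/<PROJECT_ID>:<REGION>:<INSTANCE>'; // placeholder

export const dbModule = TypeOrmModule.forRoot({
  type: 'postgres',
  host: socketPath,
  username: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME,
  entities: [],
});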

Connect to Amazon RDS PostgreSQL Proxy with IAM Credentials using TypeORM

I'm trying to figure out how to connect to an RDS PG Proxy within a Lambda function using TypeORM (so there are no issues establishing connections). I'm able to connect to the RDS instance with the Lambda function successfully - however, when I point the configuration at the proxy (change the environment variables within the Lambda function), I am greeted with the following error:
{
"errorType": "Error",
"errorMessage": "read ECONNRESET",
"code": "ECONNRESET",
"errno": "ECONNRESET",
"syscall": "read",
"stack": [
"Error: read ECONNRESET",
" at TCP.onStreamRead (internal/stream_base_commons.js:205:27)"
]
}
Here is the code used to create the connection with TypeORM:
import * as fs from 'fs';
import { RDS } from 'aws-sdk';
import { createConnection, ConnectionOptions } from 'typeorm';

// Retrieve database connection options
const getDBConfig = (): ConnectionOptions => {
  // Use IAM-based authentication to connect
  const signer = new RDS.Signer({
    region: "us-east-1",
    username: process.env.USERNAME,
    hostname: process.env.HOSTNAME,
    port: 5432,
  });
  // Retrieve the password (auth token) dynamically from RDS
  const token = signer.getAuthToken({
    username: process.env.USERNAME,
  });
  // Return configuration object
  return {
    username: process.env.USERNAME,
    host: process.env.HOSTNAME,
    port: 5432,
    password: token,
    ssl: {
      ca: fs.readFileSync("./config/rds-ca-2019-root.pem").toString(),
    },
    type: "postgres",
    database: "postgres",
    synchronize: false,
    entities: [],
  };
};

const config = getDBConfig();
const connection = await createConnection(config);
In terms of the two environment variables, HOSTNAME is equal to the URL provided by the RDS Proxy, and USERNAME is the username assigned within the secret for the RDS Proxy. Both the Lambda function and RDS Proxy have been given admin access, just to ensure there's no interference there (I know this is horrible; I will reduce privileges once I get this working!). IAM authentication has been set to required for the proxy.
Update 8/14/2020
This article explains how to connect an RDS MySQL Proxy with TypeORM; I still have not figured out how to connect to an RDS PG Proxy though.
https://dev.to/vikasgarghb/rds-proxy-via-sam-15gn
I've finally found the instructions for setting up a DB user for PG in the AWS docs. Posting this here for anyone also having trouble finding them.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.DBAccounts.html#UsingWithRDS.IAMDBAuth.DBAccounts.PostgreSQL
Basically you just need to add the user to the existing rds_iam group.
CREATE USER lambda;
GRANT ALL PRIVILEGES ON DATABASE postgres TO lambda;
GRANT rds_iam TO lambda;
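One caveat worth noting: RDS IAM auth tokens expire after 15 minutes, so it is safest to call getDBConfig() (and thus regenerate the token) whenever a new connection is created, rather than caching the options. A sketch of a Lambda handler built on the question's own getDBConfig (the handler shape and test query are assumptions):

import { createConnection } from 'typeorm';

export const handler = async () => {
  // Regenerate the IAM token for each new connection; tokens expire after 15 minutes
  const config = getDBConfig();
  const connection = await createConnection(config);
  try {
    return await connection.query('SELECT 1');
  } finally {
    await connection.close();
  }
};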

"[NoHostAvailableException: All host(s) tried for query failed" exception occurs in connecting with cassandra cluster

import com.datastax.driver.core.{Cluster, Session}
import scala.collection.JavaConversions._

var cluster: Cluster = null
var session: Session = null
cluster = Cluster.builder().addContactPoints("192.168.1.3", "192.168.1.2").build()
val metadata = cluster.getMetadata()
printf("Connected to cluster: %s\n", metadata.getClusterName())
metadata.getAllHosts() map { case host =>
  printf("Datacenter: %s; Host: %s; Rack: %s\n",
    host.getDatacenter(), host.getAddress(), host.getRack())
}
I am not able to connect to the Cassandra cluster using this code. It is giving me this error:
[NoHostAvailableException: All host(s) tried for query failed (tried: /192.168.1.3 ([/192.168.1.3] Cannot connect), /192.168.1.2 ([/192.168.1.2] Cannot connect))]
What is my mistake in the above code?
Your code looks OK at first blush. The error suggests that Cassandra is not actually running on port 9042 (the default) on the IPs "192.168.1.3" and "192.168.1.2".
If Cassandra is running on those IPs but on another port, you will need to use:
val port = 19042 // Put the correct port here
cluster = Cluster.builder().addContactPoints("192.168.1.3", "192.168.1.2").withPort(port).build()
Remote access to Cassandra is via its Thrift port (although note that the JMX port can be used to perform some limited operations).
The Thrift port is defined in cassandra.yaml by the rpc_port parameter, which defaults to 9160. Your Cassandra node should be bound to the IP address of your server's network card - it shouldn't be 127.0.0.1 or localhost, which is the loopback interface's IP; binding to this will prevent direct remote access. You configure the bound address with the rpc_address parameter in cassandra.yaml. Setting this to 0.0.0.0 says "listen on all network interfaces", which may or may not be suitable for you.
To make a connection you can use:
The cassandra-cli in the Cassandra distribution's bin directory, which provides simple get / set / list operations and depends on Java
The cqlsh shell, which provides CQL access to Cassandra and depends on Python
A higher-level interface such as Apollo
You can use port 9042 and try to connect with the IP of localhost or another machine as follows:
public String serverIP = "127.0.0.1"; // change the IP to yours
//public String serverIP = "52.10.160.197"; // for prod
public String keyspace = "database name"; // for prod
//public String keyspace = "dbname_test"; // for staging
Cluster cluster = Cluster.builder().addContactPoint(serverIP).build();
Session session = cluster.connect(keyspace);
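Since most of this page is Node-centric, here is the same connection as a minimal sketch with the Node.js cassandra-driver package (the contact points, data center name, and keyspace are placeholders):

const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['192.168.1.3:9042', '192.168.1.2:9042'],
  localDataCenter: 'datacenter1', // placeholder; must match your DC name
  keyspace: 'my_keyspace', // placeholder
});

client.connect()
  .then(() => client.execute('SELECT release_version FROM system.local'))
  .then(result => console.log(result.rows[0]))
  .catch(err => console.error('Cannot connect:', err));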