Getting correct socketPath for TypeORM config - postgresql

I'm trying to connect a Cloud Run service to Cloud SQL postgres instance. I believe I'm nearly there, but am having some trouble getting the deployed instance to connect properly. My local environment can connect (via SSL) to the database intended for production, but the deployed version can't...
I'm using TypeORM, and have everything set up in the configuration...
import { Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { TypeOrmModule } from '@nestjs/typeorm';
import { SnakeNamingStrategy } from 'typeorm-naming-strategies';

@Module({
  imports: [
    TypeOrmModule.forRootAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      useFactory: (configService: ConfigService) => {
        const socketPath = configService.get('DB_SOCKET_PATH');
        const extra = socketPath ? {
          socketPath: socketPath,
          ssl: {
            rejectUnauthorized: false,
            ca: Buffer.from(process.env.DB_SSL_CA, 'base64').toString('ascii'),
            cert: Buffer.from(process.env.DB_SSL_CERT, 'base64').toString('ascii'),
            key: Buffer.from(process.env.DB_SSL_KEY, 'base64').toString('ascii'),
          },
        } : {};
        return {
          type: 'postgres',
          host: socketPath || configService.get('DB_HOST'),
          port: configService.get('DB_PORT'),
          username: configService.get('DB_USER'),
          password: configService.get('DB_PASS'),
          database: configService.get('DB_NAME'),
          extra: extra,
          entities: [__dirname + '/../../modules/**/*.entity{.ts,.js}'],
          namingStrategy: new SnakeNamingStrategy(),
          synchronize: true,
        };
      },
    }),
  ],
})
export class DatabaseModule {}
Despite that, I'm getting an error when I try to use the socketPath as the host rather than the actual host variable (necessary for GCP). It seems that TypeORM is adding extra characters, /.s.PGSQL.5432, to the end of my connection string that I don't want. And just to clarify, the socket path is in the form of /cloudsql/<PROJECT_ID>:<REGION>:<INSTANCE>.
[Nest] 28532 - 02/15/2021, 2:25:07 PM [ExceptionHandler] connect ENOENT <DB_SOCKET_PATH>/.s.PGSQL.5432 +3ms
Error: connect ENOENT <DB_SOCKET_PATH>/.s.PGSQL.5432
at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1141:16)
This used to work for me at one point, but I guess something changed in the TypeORM library. Does anybody have any ideas on this? Thanks!
EDIT: As of now I've gotten it to connect to the server correctly, but it's now giving me an error that says the server doesn't support SSL connections, which makes no sense given that I can connect via SSL fine on my local machine...?

SOLUTION: The issue does not seem to be any code's fault, but rather some networking stuff on the GCP side. I configured the service and database to run through a VPC, then just used a private IP address for the host.
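For reference, a minimal sketch of what the factory return value looks like with that private-IP setup (an illustration, not the exact code used; it assumes DB_HOST now holds the instance's private IP reachable over the VPC connector, so no socketPath or SSL extra is needed):

useFactory: (configService: ConfigService) => ({
  type: 'postgres',
  // DB_HOST assumed to hold the Cloud SQL private IP, e.g. 10.x.x.x
  host: configService.get('DB_HOST'),
  port: Number(configService.get('DB_PORT') ?? 5432),
  username: configService.get('DB_USER'),
  password: configService.get('DB_PASS'),
  database: configService.get('DB_NAME'),
  entities: [__dirname + '/../../modules/**/*.entity{.ts,.js}'],
  namingStrategy: new SnakeNamingStrategy(),
  synchronize: true,
}),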

It seems that TypeORM is adding extra characters, /.s.PGSQL.5432
This is actually intended - the Postgres spec requires that the unix sockets end with this suffix.
[Nest] 28532 - 02/15/2021, 2:25:07 PM [ExceptionHandler] connect ENOENT <DB_SOCKET_PATH>/.s.PGSQL.5432 +3ms
The error means that the socket wasn't found - usually because there was a misconfiguration and the Cloud SQL proxy couldn't start. You can check your logs at instance startup to see if the proxy left any errors, but generally it'll come down to the following:
The Cloud SQL Admin API needs to be enabled
Your service account needs to have Cloud SQL Connect IAM role (or equivalent)
The service needs to be configured for Cloud SQL.
For a full list of instructions, see the Connecting from Cloud Run to Cloud SQL page.
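If you do want to stay on the unix-socket path instead of a private IP, a minimal sketch of the factory would look like the following (assuming DB_SOCKET_PATH is the /cloudsql/<PROJECT_ID>:<REGION>:<INSTANCE> directory). The pg driver appends the .s.PGSQL.5432 suffix itself, and no ssl block is needed because traffic over the managed socket is already encrypted - which is likely also why the server reports that it doesn't support SSL connections when you request them:

useFactory: (configService: ConfigService) => ({
  type: 'postgres',
  // the driver turns this into <DB_SOCKET_PATH>/.s.PGSQL.5432 on its own
  host: configService.get('DB_SOCKET_PATH'), // /cloudsql/<PROJECT_ID>:<REGION>:<INSTANCE>
  username: configService.get('DB_USER'),
  password: configService.get('DB_PASS'),
  database: configService.get('DB_NAME'),
  entities: [__dirname + '/../../modules/**/*.entity{.ts,.js}'],
  namingStrategy: new SnakeNamingStrategy(),
  synchronize: true,
}),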

Related

connecting to mongodb replicaSet with nestjs and typeorm is not working

Problem
I'm trying to connect to MongoDB with NestJS (^8.2.3) and TypeORM (^0.2.28).
In the test environment, connecting to a standalone MongoDB server works. For your information, the node mongodb library version is ^3.6.2.
Production sample code (NestJS server)
I referred to the TypeORM code to write the MongoDB options.
import { TypeOrmModule } from '@nestjs/typeorm';
import { MongoConnectionOptions } from 'typeorm/driver/mongodb/MongoConnectionOptions';

export const configForOrmModule = TypeOrmModule.forRootAsync({
  imports: [],
  useFactory: async () => {
    const mongodbConfig: MongoConnectionOptions = {
      type: 'mongodb',
      username,
      // for replicaSet (production)
      hostReplicaSet: 'server1.example.com:20723,server2.example.com:20723,server.example.com:20723',
      replicaSet: 'replicaSetName',
      port: Number(port),
      password: encodeURIComponent(password),
      database,
      authSource,
      synchronize: true,
      useUnifiedTopology: true,
      entities: [Something],
    };
    return mongodbConfig;
  },
  inject: [],
});
But in the production environment, when the NestJS server tries to connect to the MongoDB replica set, it gets the server-selection error below over and over again. The interesting thing is that the domain the server tried to connect to was different from the replica set hosts (e.g. another-hostname, not included in server1.example.com:20723,server2.example.com:20723,server.example.com:20723). (+ edited: the other hostname is the actual physical server indicated by the DNS (server.example.com))
01/28/2022, 2:39:16 AM ERROR [TypeOrmModule] Unable to connect to the database. Retrying (3)...
MongoServerSelectionError: getaddrinfo ENOTFOUND <another-hostname>
at Timeout._onTimeout (/home/node/app/node_modules/mongodb/lib/core/sdam/topology.js:430:30)
at listOnTimeout (node:internal/timers:557:17)
at processTimers (node:internal/timers:500:7)
What I've tried (none of these worked):
remove the useUnifiedTopology: true option
downgrade the mongodb library version to 3.5.11 (I've read in the mongodb community that there are some bugs with topology after version 3.6)
use the host option instead of hostReplicaSet
If you need more information, please tell me. Thank you for your help.
It was a Kubernetes DNS issue. The hosts in hostReplicaSet (server1.example.com:20723, ...) resolve to host1 (the physical server name, without example.com), but k8s doesn't know that name, so the connection failed.
There are two options:
update the Kubernetes /etc/hosts setting to add host1 -> host1.example.com
or update the MongoDB hostname host1 -> host1.example.com
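If you run into the same symptom, a quick way to confirm it from inside the pod is to check whether the hostnames the replica set advertises actually resolve. A small sketch using Node's built-in dns module (the hostnames below are the illustrative ones from the question):

import { promises as dns } from 'dns';

// Hostnames the replica set reports back to the driver; replace with the ones
// from your own error message (e.g. the bare host1 name in this case).
const hosts = ['server1.example.com', 'server2.example.com', 'server.example.com'];

async function checkReplicaSetDns(): Promise<void> {
  for (const host of hosts) {
    try {
      const { address } = await dns.lookup(host);
      console.log(`${host} -> ${address}`);
    } catch (err) {
      console.error(`${host} did not resolve:`, (err as Error).message);
    }
  }
}

checkReplicaSetDns();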

Postgresql - Error Connection terminated due to connection timeout

I have created a Google Cloud Function that connects to my PostgreSQL instance in Google Cloud.
I am using the 'pg' node module.
I have created a private IP for this.
I am getting the following error:
Error: Connection terminated due to connection timeout at
Timeout.connectionTimeoutHandle.setTimeout
(/workspace/node_modules/pg/lib/client.js:106:28) at ontimeout
(timers.js:436:11) at tryOnTimeout (timers.js:300:5) at listOnTimeout
(timers.js:263:5) at Timer.processTimers (timers.js:223:10)
when trying to query the database in Google Cloud.
This is the configuration I am using in the Cloud Function:
{
  "host": "",
  "user": "",
  "pw": "",
  "db": "<database_name>",
  "port": "5432",
  "table": "<table_name>",
  "max": 100,
  "idleTimeoutMillis": 30000,
  "connectionTimeoutMillis": 30000
}
Please help me with this
Upgrading the pg npm version resolved the issue:
"pg": "^7.3.0",
to
"pg": "^8.7.1",
If the issue still persists, then check your node version. Upgrade node to >=14.
According to the official documentation:
Connecting from Cloud Functions to Cloud SQL
To connect directly with private IP, you need to:
1. Make sure that the Cloud SQL instance created above has a private IP address. If you need to add one, see the Configuring private IP page for instructions.
2. Create a Serverless VPC Access connector in the same VPC network as your Cloud SQL instance. Unless you're using Shared VPC, a connector must be in the same project and region as the resource that uses it, but the connector can send traffic to resources in different regions.
3. Configure Cloud Functions to use the connector.
4. Connect using your instance's private IP and port 5432.
You can also find the Node.js code to establish the connection to the database:
const Knex = require('knex');

const connectWithTcp = config => {
  // Extract host and port from socket address
  const dbSocketAddr = process.env.DB_HOST.split(':'); // e.g. '127.0.0.1:5432'

  // Establish a connection to the database
  return Knex({
    client: 'pg',
    connection: {
      user: process.env.DB_USER, // e.g. 'my-user'
      password: process.env.DB_PASS, // e.g. 'my-user-password'
      database: process.env.DB_NAME, // e.g. 'my-database'
      host: dbSocketAddr[0], // e.g. '127.0.0.1'
      port: dbSocketAddr[1], // e.g. '5432'
    },
    // ... Specify additional properties here.
    ...config,
  });
};
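A hypothetical usage of that sample, assuming DB_HOST is set in the function's environment to '<private-ip>:5432' and the other DB_* variables are configured (the pool settings and table name are just placeholders):

const knex = connectWithTcp({ pool: { min: 0, max: 5 } });

knex.select('*')
  .from('my_table')
  .then(rows => console.log(rows))
  .catch(err => console.error('query failed:', err));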

Can't Access PostgreSQL on Google Cloud SQL from NestJS project on Google App Engine

This is my first question on Stack Overflow, so please excuse me if my information is lacking.
Issue
I am struggling to connect to PostgreSQL on Cloud SQL from NestJS on Google App Engine.
When I use the application in the local environment it works, but in production on Google App Engine it does not.
Since I've struggled for days, I decided to ask the awesome community here.
My Environment
Node.js: v10.19.0
NestJS: 6.10.5
TypeORM
PostgreSQL: 11.5.1
My app.yaml
runtime: nodejs10
env: standard
default_expiration: "4d 5h"
env_variables:
DATABASE_HOST: < public IP for Cloud SQL instance >
DATABASE_USERNAME: username
DATABASE_PASSWORD: password
DATABASE_NAME: databasename
INSTANCE_CONNECTION_NAME: "PROJECT_ID:REGION:INSTANCE_ID:DATABASE_NAME"
handlers:
- url: /.*
secure: always
redirect_http_response_code: 301
script: auto
resources:
cpu: 1
memory_gb: 0.5
disk_size_gb: 10
Error
[Nest] 18 - 02/27/2020, 8:25:46 AM [TypeOrmModule] Unable to connect to the database. Retrying (3)... +34816ms
2020-02-27 08:25:46 default[20200227t163916] Error: connect ETIMEDOUT 34.84.188.209:5432 at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1106:
Other Operations
GAE Service Account
Also, I added the Cloud SQL Client role to my GAE service account (something like service-PROJECT_ID@gae-api-prod.google.com.iam.gserviceaccount.com).
I also added this to package.json, as written below:
"engines": {
"node": "10.x.x"
},
In the TypeORM options, I added an extra socketPath:
extra: {
  socketPath: `/cloudsql/<INSTANCE_CONNECTION_NAME>/`,
},
I do not understand whether this option should be set or not (I have tried both):
socketPath: `/cloudsql/<INSTANCE_CONNECTION_NAME>/.s.PGSQL.5432`
or
socketPath: `/cloudsql/<INSTANCE_CONNECTION_NAME>`
According to the example provided in GitHub
https://github.com/GoogleCloudPlatform/nodejs-docs-samples/blob/master/appengine/cloudsql_postgresql/app.flexible.yaml
The INSTANCE_CONNECTION_NAME environment variable does not include the DATABASE_NAME as a parameter.
e.g. my-awesome-project:us-central1:my-cloud-sql-instance
Maybe this is causing the name of the instance to not be resolved for the proxy.
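So a sketch of the extra block with the corrected value, assuming INSTANCE_CONNECTION_NAME is changed to "PROJECT_ID:REGION:INSTANCE_ID" (no database name and no trailing slash), would be:

extra: {
  // the driver appends /.s.PGSQL.5432 itself, so only the directory is passed
  socketPath: `/cloudsql/${process.env.INSTANCE_CONNECTION_NAME}`,
},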

Sails.js - Authorisation issues with remote MongoDB on mLab but working fine locally

Recently, I took over a Sails.js application created for our company by a small team of web developers. They provided me with the source and a database dump. Now, my task is to get it up and running on Heroku. While everything is working okay when I run the app locally, with the remote connection there is an error on startup that says:
MongoError: not authorized on heroku_gbntc8sf to execute command { createIndexes: "agendaJobs", indexes: [ { key: { name: 1, priority: -1, lockedAt: 1, nextRunAt: 1, disabled: 1 }, name: "findAndLockNextJobIndex1" }, { key: { name: 1, lockedAt: 1, priority: -1, nextRunAt: 1, disabled: 1 }, name: "findAndLockNextJobIndex2" } ] }
at Function.MongoError.create ([ROOT_DIR]/node_modules/mongodb-core/lib/error.js:31:11)
at [ROOT_DIR]/node_modules/mongodb-core/lib/topologies/server.js:793:66
at Callbacks.emit ([ROOT_DIR]/node_modules/mongodb-core/lib/topologies/server.js:94:3)
at null.messageHandler ([ROOT_DIR]/node_modules/mongodb-core/lib/topologies/server.js:235:23)
at Socket.<anonymous> ([ROOT_DIR]/node_modules/mongodb-core/lib/connection/connection.js:259:22)
at emitOne (events.js:77:13)
at Socket.emit (events.js:169:7)
at readableAddChunk (_stream_readable.js:146:16)
at Socket.Readable.push (_stream_readable.js:110:10)
at TCP.onread (net.js:523:20)
Here's a quick checklist of what I've done already:
checked the build log on Heroku - no errors or warnings;
set up mLab Heroku add-on, exported the database, done some manual checks from the mLab dashboard - everything looks okay;
logged in to the database remotely from the mongo command and a mongo:// URL, ran a few simple queries, and obtained information on the database user privileges;
created an identical user (with the heroku_gbntc8sf username, same password, same role, etc.) in the local database.
Here's what the connection configuration looks like:
// config/connections.js
module.exports.connections = {
  mongodb: {
    adapter: 'sails-mongo',
    user: 'heroku_gbntc8sf',
    password: [HIDDEN],
    host: 'ds159387.mlab.com',
    port: 59387,
    database: 'heroku_gbntc8sf'
  },
  // ...
}

// config/env/development.js
module.exports = {
  models: {
    connection: 'mongodb'
  },
  // ...
}

// config/env/production.js
module.exports = {
  models: {
    connection: 'mongodb'
  },
  // ...
}
At the moment I'm running the server locally, trying to connect to the remote database, to eliminate as many variables as possible. Like I mentioned above, when I set host to '127.0.0.1' and port to 27017, everything works okay. The heroku_gbntc8sf user has basic readWrite permissions in both databases (local and remote). In fact, those two databases are pretty much identical, as far as I know. And yet...
I've read a sizeable chunk of the Sails.js documentation, as well as the documentation on the sails-mongo adapter. I've searched for similar questions, but I couldn't find anything relevant. I've tried many different things, including a couple of different ways to configure the database connection, but that error is always there.
The reason why I'm posting to StackOverflow is that I cannot rely on the support from the original authors of the app at the moment. Also, I'm new to Sails.js, so I might be doing something wrong without even knowing. I was hoping that I could get away with treating the app as a 'black box' (or like a generic Node application), since my job is only to start the app on Heroku.
I've successfully used mLab in a Sails project recently, but I've used the Mongo URL string format, for example...
mongodbServer: {
  adapter: 'sails-mongo',
  url: "mongodb://dandanknight:som3P455w0rd@ds044979.mlab.com:44979/databasename"
}
Not sure if it helps, but can't hurt to try! It's also the only way I've successfully got a replicaSet working in Sails incidentally.
It's confusing, but I read the sails-mongo docs as "URL is the way forward, and passing an object is legacy usage" (here)
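For illustration only, a URL-style config for a replica set would look something like this (the hostnames, credentials, and replica set name are placeholders, not values from the question):

mongodbServer: {
  adapter: 'sails-mongo',
  url: 'mongodb://dbuser:dbpass@host1.example.com:27017,host2.example.com:27017/mydb?replicaSet=rs0'
}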

SailsJS deployment to Heroku, connect to Mongolabs MongoDB

I am right now attempting my first Heroku deployment of a SailsJS API. My app uses SailsJS v0.11 and sails-mongo 0.11.2.
I have updated config/connections.js to include the connection information for the MongoDB database I have hosted for free at MongoLab.
mongodb: {
  adapter: 'sails-mongo',
  url: "mongodb://db-user:password123@ds047812.mongolab.com:47812/testing-db"
}
Also updated config/models.js to point to that adapter.
module.exports.models = {
  connection: 'mongodb',
  migrate: 'safe'
};
This is basically all I have changed from running the code locally. When I deploy to Heroku, the app crashes and I get this error...
/home/zacharyhustles/smallChangeAPI/node_modules/connect-mongo/lib/connect-mongo.js:186
throw err;
^
at Socket.emit (events.js:107:17)
2015-07-08T19:37:00.778316+00:00 app[web.1]:
at Socket.<anonymous> (/app/node_modules/connect-mongo/node_modules/mongodb/lib/mongodb/connection/connection.js:534:10)
Error: Error connecting to database: failed to connect to [localhost:27017]
How do I get rid of this and make sure Sails does not try to connect to the localhost db?
Ok, the problem was with storing sessions.
My solution was to set up a Redis database to store sessions.
In config/sessions.js, make sure everything is commented out except for the method you want for the session store.
Mine looked like this:
adapter: 'redis',
host: 'example.redistogo.com',
port: 1111,
db: '/redistogo',
pass: 'XXXXXYYYYYYXYXYXYYX',
This solved my posted problem, hope this helps another person out.
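For context, a sketch of how those settings sit inside the session config file in a stock Sails app (the values are the placeholders from the answer above; keep the generated secret from your own project):

// config/session.js (a sketch; leave your existing `secret` in place)
module.exports.session = {
  adapter: 'redis',
  host: 'example.redistogo.com',
  port: 1111,
  db: '/redistogo',
  pass: 'XXXXXYYYYYYXYXYXYYX',
};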