Sails.js multiple connections on start - PostgreSQL

I've got an odd problem: on start of my Sails app (which connects to Postgres and is deployed on Heroku), there are multiple connections (around 10) to the database. Since it's a free account, if I then try to launch the app on localhost to test some new code, I get the error "too many connections for a role". Does anyone know why there are so many connections to the database, and can I change it to have only one connection per app?
EDIT:
Error creating a connection to Postgresql: error: too many connections for role "xwoellnkvjcupt"
Error creating a connection to Postgresql: error: too many connections for role "xwoellnkvjcupt"
error: Hook failed to load: orm (error: too many connections for role "xwoellnkvjcupt")
error: Error encountered while loading Sails core!
error: error: too many connections for role "xwoellnkvjcupt"
    at Connection.parseE (C:\Studia\szachman2\node_modules\sails-postgresql\node_modules\pg\lib\connection.js:561:11)
    at Connection.parseMessage (C:\Studia\szachman2\node_modules\sails-postgresql\node_modules\pg\lib\connection.js:390:17)
    at null.<anonymous> (C:\Studia\szachman2\node_modules\sails-postgresql\node_modules\pg\lib\connection.js:98:18)
    at CleartextStream.EventEmitter.emit (events.js:95:17)
    at CleartextStream.<anonymous> (_stream_readable.js:746:14)
    at CleartextStream.EventEmitter.emit (events.js:92:17)
    at emitReadable_ (_stream_readable.js:408:10)
    at _stream_readable.js:401:7
    at process._tickDomainCallback (node.js:459:13)
This is an error I often get when trying to test new code on localhost.

@jantar @sgress454 I just added a troubleshooting message in sails-postgresql to try and make this better. Here's what it says:
-> Maybe your poolSize configuration is set too high? e.g. If your Postgresql database only supports 20 concurrent connections, you should make sure you have your poolSize set as something < 20. The default poolSize is 10.
To override the default poolSize, specify a poolSize property on the relevant Postgresql "connection" config object. If you're using Sails, this is generally located in config/connections.js, or wherever your environment-specific database configuration is set.
-> Do you have multiple Sails instances sharing the same Postgresql database? Each Sails instance may use up to the configured poolSize # of connections. Assuming all of the Sails instances are just copies of one another (a reasonable best practice) we can calculate the actual # of Postgresql connections used (C) by multiplying the configured poolSize (P) by the number of Sails instances (N). If the actual number of connections (C) exceeds the total # of AVAILABLE connections to your Postgresql database (V), then you have problems. If this applies to you, try reducing your poolSize configuration. A reasonable poolSize setting would be V/N.
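For example, here is a minimal sketch of such a poolSize override in config/connections.js; the connection name, credentials, and the 20-connection limit below are illustrative assumptions, not from the original post:
// config/connections.js -- hypothetical connection name and credentials
module.exports.connections = {
  somePostgresqlServer: {
    adapter: 'sails-postgresql',
    host: 'your-db-host',
    user: 'your-db-user',
    password: 'your-db-password',
    database: 'your-db-name',
    // With V = 20 available connections and N = 2 app instances (Heroku dyno
    // plus localhost), C = P * N must stay <= 20, so P = V / N = 10 or lower works.
    poolSize: 5
  }
};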

This is due to Sails' auto-migration feature, which attempts to keep your models and database in sync. It's not intended to be used in production. You can turn auto-migration off for a single model by adding migrate: 'safe' to the model definition:
module.exports = {
  migrate: 'safe',
  attributes: {...}
}
You can turn auto-migration off for all models by adding a model config, usually in your config/locals.js:
module.exports = {
  model: {
    migrate: 'safe'
  },
  environment: 'production',
  ...other local config...
}

A little update for Sails v1: if you want to set a maximum size for the connection pool, your adapter config in config/datastores.js should look like this:
{
  adapter: 'sails-postgresql',
  url: 'yourconnectionurl',
  max: 1 // This is the important part for poolSize, I set 1 because I don't want more than 1 connection ^^
}
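For completeness, a sketch of how that object might sit inside config/datastores.js in a Sails v1 app (the datastore name and URL below are placeholders):
// config/datastores.js (Sails v1) -- placeholder URL
module.exports.datastores = {
  default: {
    adapter: 'sails-postgresql',
    url: 'postgres://user:password@host:5432/dbname',
    max: 1 // limit the pool to a single connection
  }
};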
If you want to know all the options you can set, look here: https://github.com/sailshq/machinepack-postgresql/blob/176413efeab90dc5099dc60718e8b520942ce3be/machines/create-manager.js, at line 162:
// Basic:
'host', 'port', 'database', 'user', 'password', 'ssl',
// Advanced Client Config:
'application_name', 'fallback_application_name',
// General Pool Config:
'max', 'min', 'refreshIdle', 'idleTimeoutMillis',
// Advanced Pool Config:
// These should only be used if you know what you are doing.
// https://github.com/coopernurse/node-pool#documentation
'name', 'create', 'destroy', 'reapIntervalMillis', 'returnToHead',
'priorityRange', 'validate', 'validateAsync', 'log'

Related

Creating a datasource for Postgres schema-based multitenancy and issues with connection pooling

From the typeorm docs:
Generally, you call initialize method of the DataSource instance on application bootstrap, and destroy it after you completely finished working with the database. In practice, if you are building a backend for your site and your backend server always stays running - you never destroy a DataSource.
But to implement Postgres's schema-based multitenancy, I'm scoping connections per request, because each request has to be sent to a different schema. So, in my getConnection helper, I'm doing this:
import { DataSource } from 'typeorm';
import { PostgresConnectionOptions } from 'typeorm/driver/postgres/PostgresConnectionOptions';

async function getTenantConnection(
  tenantName: string,
  connectionOptions?: PostgresConnectionOptions,
) {
  // fall back to the base connection options defined elsewhere
  if (!connectionOptions) {
    connectionOptions = baseConnection;
  }
  const options: PostgresConnectionOptions = {
    ...connectionOptions,
    schema: tenantName,
    entities: [__dirname + '/../**/*.entity.js'],
    synchronize: false,
  };
  const dataSource = new DataSource(options);
  return await dataSource.initialize();
}
So on each request I'm calling getTenantConnection, which sort of initializes the database connection. The previously available getConnection() seems to be deprecated, and now, running stress tests on the app, I'm getting TCP connection issues which I simply cannot debug:
[ExceptionsHandler] connect ETIMEDOUT 20.119.245.111:5432
2023-01-31T10:40:40.581303183Z Error: connect ETIMEDOUT 20.119.245.111:5432
2023-01-31T10:40:40.581370686Z at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1278:16)
2023-01-31T10:40:40.581381886Z at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17)
2023-01-31T10:40:40.589395960Z [Nest] 53 - 01/31/2023, 10:40:40 AM ERROR [ExceptionsHandler] connect ETIMEDOUT 20.119.245.111:5432
I'm just speculating that the database pool has something to do with this. I don't fully understand the code, but the typeorm source doesn't seem to do any pooling in the initialize() method either. I had tried to take reference from this article, which demonstrates schema-based multitenancy in Postgres using typeorm, but the methods used there aren't available anymore, so I had to resort to using .initialize(). Please let me know how I can go about implementing this.
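A rough sketch, under the assumption that a DataSource (and therefore its pool) can be cached per tenant and reused instead of created on every request; the helper name and the extra.max cap here are illustrative, with extra being forwarded to the underlying node-postgres pool:
import { DataSource } from 'typeorm';
import { PostgresConnectionOptions } from 'typeorm/driver/postgres/PostgresConnectionOptions';

// One cached DataSource (and therefore one connection pool) per tenant schema,
// instead of a brand-new pool for every incoming request.
const tenantDataSources = new Map<string, Promise<DataSource>>();

async function getCachedTenantDataSource(
  tenantName: string,
  baseOptions: PostgresConnectionOptions,
): Promise<DataSource> {
  let dataSourcePromise = tenantDataSources.get(tenantName);
  if (!dataSourcePromise) {
    const dataSource = new DataSource({
      ...baseOptions,
      schema: tenantName,
      // Keep each tenant's pool small so (number of tenants * pool size)
      // stays below the server's max_connections.
      extra: { max: 2 },
    });
    dataSourcePromise = dataSource.initialize();
    // Drop the cache entry if initialization fails so the next request can retry.
    dataSourcePromise.catch(() => tenantDataSources.delete(tenantName));
    tenantDataSources.set(tenantName, dataSourcePromise);
  }
  return dataSourcePromise;
}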

ERROR: [dn3]: SSL connection has been closed unexpectedly

I have been working with TimescaleDB's multinode clustering. When I add a data node by running the add_data_node query on the access node, I get an error: "SSL connection has been closed unexpectedly".
Config in the access node
postgresql.conf
listen_addresses = '*'
enable_partitionwise_aggregate = on
jit = off
Config in the data node
postgresql.conf
listen_addresses = '*'
max_prepared_transactions = 150
wal_level = logical
If you know the root cause of the problem, please let me know.
While running the add_data_node query, a connection is made to the data node. If any unexpected error is thrown from the data node service, it surfaces as "SSL connection has been closed unexpectedly".
To check the PostgreSQL error log, we need to enable two settings:
log_connections = on
log_disconnections = on
What I found through the log is that, in the postgresql.conf file, shared_preload_libraries was set to an empty string. It needs to contain the string 'timescaledb'.
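That is, postgresql.conf on the node in question should contain:
shared_preload_libraries = 'timescaledb'   # takes effect only after restarting PostgreSQL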
I would not recommend leaving these log settings enabled in production.
https://www.digitalocean.com/community/questions/how-can-i-investigate-postgres-managed-server-error-ssl-connection-has-been-closed-unexpectedly

PostgreSQL: org.postgresql.util.PSQLException: ERROR: Unsupported startup parameter: search_path

When I try to connect to the Postgres database via JDBC, I get the following error:
org.postgresql.util.PSQLException: ERROR: Unsupported startup parameter: search_path
This is how I create the connection:
val connection = DriverManager.getConnection(
  profile.connection + Option(profile.catalog).getOrElse("") + "?currentSchema=" + Option(profile.schema).getOrElse(""),
  profile.user, profile.password)
I use Scala and a custom version of Postgres.
pgbouncer
In short, pgbouncer (at least my version) does not work with the search_path startup parameter; this discussion led me to that conclusion. There are two ways to fix this problem:
Change the pgbouncer config file by adding:
ignore_startup_parameters = search_path
Make a connection without using the currentSchema parameter in the connection string, and create the connection like this:
val connection =
  DriverManager.getConnection(
    profile.connection + Option(profile.catalog).getOrElse(""),
    profile.user, profile.password)
The schema will then be chosen according to the search_path setting, which is usually something like "$user", public: when connecting, Postgres first tries to use a schema with the same name as the user, and if no such schema exists, it falls back to public.

What's the difference between ORM/query builder library connection pool size and pgbouncer connection pool size?

I am confused about the pgbouncer pool size configuration versus the connection pool size configuration in an ORM (like sequelize.js) or query builder (like knex.js). The architecture is like this:
Application code => pgbouncer => postgresql
pgbouncer.ini:
;; ...
;; Default pool size. 20 is good number when transaction pooling
;; is in use, in session pooling it needs to be the number of
;; max clients you want to handle at any moment
;default_pool_size = 20
;; ...
sequelize connection pool configuration:
const sequelize = new Sequelize(/* ... */, {
  // ...
  pool: {
    max: 5,
    min: 0,
    acquire: 30000,
    idle: 10000
  }
});
knex.js connection pool configuration:
var knex = require('knex')({
  client: 'mysql',
  connection: {
    host : '127.0.0.1',
    user : 'your_database_user',
    password : 'your_database_password',
    database : 'myapp_test'
  },
  pool: { min: 0, max: 7 }
});
What happens if I use the sequelize.js connection pool configuration and the pgbouncer pool size configuration together? Which configuration does the database server use? Should I use only one of them? Thanks.
If you have, for example, 3 application processes running knex or sequelize, then you should set the pgbouncer pool size to be 3 times bigger than what a single knex/sequelize pool uses.
You also need to make sure that the Postgres server has enough connections configured to handle the connections coming from pgbouncer.
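As a rough worked example (all numbers are hypothetical): 3 application processes, each with a knex/sequelize pool of max 7, can open up to 3 * 7 = 21 client connections into pgbouncer, so in session pooling mode something like this would be needed:
;; pgbouncer.ini -- hypothetical sizing for 3 app processes * pool max of 7
default_pool_size = 21
;; and postgresql.conf needs max_connections comfortably above 21,
;; to leave room for pgbouncer's server connections plus admin sessions.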
Though, as @jjanes said, there is no reason to use pgbouncer with knex/sequelize, because they already provide pooling. I suppose pgbouncer is meant to be used with frameworks that don't support pooling, for example a PHP or CGI script that reinitializes on every page load and makes calls to the database.
It rarely makes sense to daisy chain connection pools together. So there is probably no point in using pgbouncer in addition to the built-in ones. The database server doesn't know about your connection poolers except to the extent the pool manager sends its own explicit commands to the database, and it has its own configuration file which it uses (postgresql.conf).

Specify `statement_timeout` in Postgresql with sqlalchemy?

The following statement_timeout option works on some PostgreSQL databases, but on others I get "Unsupported startup parameter: options". Why?
Is this possibly a difference between Postgres 9.4 and 9.6? This works with the former servers and fails with the latter.
from sqlalchemy import create_engine

# As is: Unsupported startup parameter: options
db_engine = create_engine("postgresql://user:pw@host/database",
                          connect_args={"options": "-c statement_timeout=1000"})
with db_engine.connect() as db_connection:
    print("got it")
Specifically, I get:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) ERROR: Unsupported startup parameter: options
You may have been connecting to those databases via PgBouncer.
If so, add ignore_startup_parameters = options to pgbouncer.ini under the [pgbouncer] section.
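In pgbouncer.ini that would look like:
[pgbouncer]
;; ... existing settings ...
ignore_startup_parameters = options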
From https://www.pgbouncer.org/config.html#ignore_startup_parameters:
By default, PgBouncer allows only parameters it can keep track of in startup packets: client_encoding, datestyle, timezone and standard_conforming_strings. All others parameters will raise an error. To allow others parameters, they can be specified here, so that PgBouncer knows that they are handled by the admin and it can ignore them.
Default: empty
References:
https://github.com/pgbouncer/pgbouncer/issues/295
https://github.com/pgbouncer/pgbouncer/issues/496