I have been working on the TimescaleDB multinode clustering concept. When I add a data node by running the add_data_node query on the access node, I get an error: SSL connection has been closed unexpectedly.
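For reference, a typical call looks like this (node name, host, and database here are placeholders, not my real values):
SELECT add_data_node('dn1', host => 'dn1.example.com', database => 'tsdb');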
Config in the access node
postgresql.conf
listen_addresses = '*'
enable_partitionwise_aggregate = on
jit = off
Config in the data node
postgresql.conf
listen_addresses = '*'
max_prepared_transactions = 150
wal_level = logical
If you know the root cause of the problem, please let me know.
While the add_data_node query runs, a connection is made from the access node to the data node. If any unexpected error is raised by the data node service, it surfaces on the access node as SSL connection has been closed unexpectedly.
To see the underlying error in the PostgreSQL log on the data node, we need to enable two configuration settings:
log_connections = on
log_disconnections = on
Through the log, what I found is that in the data node's postgresql.conf file, shared_preload_libraries was set to an empty string. The string 'timescaledb' has to be added to that variable (shared_preload_libraries).
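For example, the data node's postgresql.conf needs the line below (note the exact spelling, timescaledb), followed by a restart of PostgreSQL, since shared_preload_libraries is only read at server start:
shared_preload_libraries = 'timescaledb'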
I would not recommend leaving this connection logging enabled in production.
https://www.digitalocean.com/community/questions/how-can-i-investigate-postgres-managed-server-error-ssl-connection-has-been-closed-unexpectedly
I am trying to achieve synchronous streaming to a Barman server, and I need to add an entry to postgresql.conf for this parameter, which already has an entry. I have tried a few variations, but none work. Any ideas? I also tried '&&', but in vain.
synchronous_standby_names='ANY 1 (*)',barman-wal-archive
2022-06-10 16:50:54.272 BST [11241-43] # app= LOG: syntax error in
file "/var/lib/pgsql/13/data/postgresql.conf" line 22, near token ","
2022-06-10 16:50:54.272 BST [11241-44] # app= LOG: configuration file
"/var/lib/pgsql/13/data/postgresql.conf" contains errors; no changes
were applied
The syntax you are using is not valid, and there is no way to specify both that Barman must be kept synchronous and that any one of the others must be. The best you can do is
synchronous_standby_names = 'FIRST 2 ("barman-wal-archive", standby1, standby2, standby3)'
(You have to double-quote all names that are not standard SQL identifiers, for example if they contain a -.)
Then PostgreSQL will always keep Barman synchronized, as well as the first available standby server. But that won't make transactions fail if Barman is unavailable, which seems to be what you want.
Alternatively, keep just
synchronous_standby_names='ANY 1 (*)'
and set
synchronous_commit = on
or
synchronous_commit = remote_write
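Whichever variant you choose, you can verify which standbys the primary currently treats as synchronous by querying pg_stat_replication:
SELECT application_name, sync_state FROM pg_stat_replication;
sync_state shows sync or quorum for servers that count toward synchronous_standby_names, potential for candidates, and async for the rest.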
The following statement_timeout option works on some PostgreSQL databases; on others, I get Unsupported startup parameter: options. Why?
Is this possibly a difference between Postgres 9.4 and 9.6? This works with the former servers and fails with the latter.
from sqlalchemy import create_engine

# As is: Unsupported startup parameter: options
db_engine = create_engine("postgresql://user:pw@host/database",
                          connect_args={"options": "-c statement_timeout=1000"})
with db_engine.connect() as db_connection:
    print("got it")
Specifically, I get:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) ERROR: Unsupported startup parameter: options
You may have been connecting to those databases via PgBouncer.
If so, add ignore_startup_parameters = options to pgbouncer.ini under the [pgbouncer] section.
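For example, the relevant part of pgbouncer.ini would look like this (all other settings omitted); reload or restart PgBouncer afterwards:
[pgbouncer]
ignore_startup_parameters = options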
From https://www.pgbouncer.org/config.html#ignore_startup_parameters:
By default, PgBouncer allows only parameters it can keep track of in startup packets: client_encoding, datestyle, timezone and standard_conforming_strings. All other parameters will raise an error. To allow other parameters, they can be specified here, so that PgBouncer knows that they are handled by the admin and it can ignore them.
Default: empty
References:
https://github.com/pgbouncer/pgbouncer/issues/295
https://github.com/pgbouncer/pgbouncer/issues/496
When I try to put a Citus cluster configuration value into the postgresql.conf file:
citus.task_executor_type = "task-tracker"
I got the error:
service postgresql restart
Error: Invalid line 637 in /etc/postgresql/9.5/main/postgresql.conf: »citus.task_executor_type = "task-tracker"«
* No PostgreSQL clusters exist; see "man pg_createcluster"
Please tell me how I can set this configuration value by default.
I don't want to run SET citus.task_executor_type TO "task-tracker" for each connection.
I am not sure, but you are using double quotes instead of single quotes, and that can be the problem. In postgresql.conf, string values must be single-quoted. Try:
citus.task_executor_type = 'task-tracker'
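Alternatively, assuming PostgreSQL 9.4 or later with the Citus extension already loaded, you can set the value persistently from a superuser session instead of editing the file by hand; it is written to postgresql.auto.conf:
ALTER SYSTEM SET citus.task_executor_type = 'task-tracker';
SELECT pg_reload_conf();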
I've got an odd problem: on start of my Sails app (which connects to Postgres and is deployed on Heroku), there are multiple connections (around 10) to the database. Since it's a free account, if I then try to launch the app on localhost to test some new code, I get the error "too many connections for a role". Does anyone know why there are so many connections to the database, and can I change this to have only one connection per app?
EDIT:
Error creating a connection to Postgresql: error: too many connections for role
"xwoellnkvjcupt"
Error creating a connection to Postgresql: error: too many connections for role
"xwoellnkvjcupt"
error: Hook failed to load: orm (error: too many connections for role "xwoellnkv
jcupt")
error: Error encountered while loading Sails core!
error: error: too many connections for role "xwoellnkvjcupt"
at Connection.parseE (C:\Studia\szachman2\node_modules\sails-postgresql\node
_modules\pg\lib\connection.js:561:11)
at Connection.parseMessage (C:\Studia\szachman2\node_modules\sails-postgresq
l\node_modules\pg\lib\connection.js:390:17)
at null.<anonymous> (C:\Studia\szachman2\node_modules\sails-postgresql\node_
modules\pg\lib\connection.js:98:18)
at CleartextStream.EventEmitter.emit (events.js:95:17)
at CleartextStream.<anonymous> (_stream_readable.js:746:14)
at CleartextStream.EventEmitter.emit (events.js:92:17)
at emitReadable_ (_stream_readable.js:408:10)
at _stream_readable.js:401:7
at process._tickDomainCallback (node.js:459:13)
This is an error I am getting often when trying to test some new code on localhost.
@jantar @sgress454 I just added a troubleshooting message in sails-postgresql to try and make this better. Here's what it says:
-> Maybe your poolSize configuration is set too high? e.g. If your Postgresql database only supports 20 concurrent connections, you should make sure you have your poolSize set as something < 20. The default poolSize is 10.
To override the default poolSize, specify a poolSize property on the relevant Postgresql "connection" config object. If you're using Sails, this is generally located in config/connections.js, or wherever your environment-specific database configuration is set.
-> Do you have multiple Sails instances sharing the same Postgresql database? Each Sails instance may use up to the configured poolSize # of connections. Assuming all of the Sails instances are just copies of one another (a reasonable best practice) we can calculate the actual # of Postgresql connections used (C) by multiplying the configured poolSize (P) by the number of Sails instances (N). If the actual number of connections (C) exceeds the total # of AVAILABLE connections to your Postgresql database (V), then you have problems. If this applies to you, try reducing your poolSize configuration. A reasonable poolSize setting would be V/N.
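For example, with a Heroku free-tier database that allows 20 connections (V = 20) and two Sails instances sharing it (N = 2, say the deployed app plus a local copy), each instance's poolSize should be at most 20 / 2 = 10. A minimal config/connections.js sketch, with placeholder credentials:
module.exports.connections = {
  somePostgresqlServer: {
    adapter: 'sails-postgresql',
    host: 'your-db-host.example.com',
    user: 'user',
    password: 'password',
    database: 'database',
    poolSize: 5 // keep C = P * N under the server's available connections
  }
};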
This is due to Sails's auto-migration feature, which attempts to keep your models and database synced up. It's not intended to be used in production. You can turn auto-migration off on a single model by adding migrate: 'safe' to the model definition:
module.exports = {
  migrate: 'safe',
  attributes: {...}
}
You can turn auto-migration off for all models by adding a model config, usually in your config/locals.js:
module.exports = {
  model: {
    migrate: 'safe'
  },
  environment: 'production',
  ...other local config...
}
A little update for Sails v1: your adapter in config/datastores.js should look like this if you want to set a maximum size for the connection pool:
{
  adapter: 'sails-postgresql',
  url: 'yourconnectionurl',
  max: 1 // This is the important part for poolSize, I set 1 because I don't want more than 1 connection ^^
}
If you want to know all the options you can set, look here: https://github.com/sailshq/machinepack-postgresql/blob/176413efeab90dc5099dc60718e8b520942ce3be/machines/create-manager.js , at line 162:
// Basic:
'host', 'port', 'database', 'user', 'password', 'ssl',
// Advanced Client Config:
'application_name', 'fallback_application_name',
// General Pool Config:
'max', 'min', 'refreshIdle', 'idleTimeoutMillis',
// Advanced Pool Config:
// These should only be used if you know what you are doing.
// https://github.com/coopernurse/node-pool#documentation
'name', 'create', 'destroy', 'reapIntervalMillis', 'returnToHead',
'priorityRange', 'validate', 'validateAsync', 'log'
I just set up a replica set in Mongo (prod environment). I'm now getting a lot of exceptions like below (clipped).
I went into mongo and ran a serverStatus command on my primary mongo node, and there are only about 300 connections open, so it is hardly under load.
Below are my connection option settings in my server code:
auto_connect_retry = false
connections_per_host = 10
threads_multiplier = 10
max_wait_time = 120000
connect_timeout = 10000
socket_timeout = 0
Do I have something mis-configured?
Sep 9, 2013 8:31:26 PM com.mongodb.DBPortPool gotError
WARNING: emptying DBPortPool to /10.0.8.10:27017 b/c of error
java.net.SocketException: Connection timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:146)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at org.bson.io.Bits.readFully(Bits.java:46)
at org.bson.io.Bits.readFully(Bits.java:33)
at org.bson.io.Bits.readFully(Bits.java:28)
at com.mongodb.Response.<init>(Response.java:40)
at com.mongodb.DBPort.go(DBPort.java:142)
at com.mongodb.DBPort.call(DBPort.java:92)
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:244)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:216)
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:288)
at com.mongodb.DBApiLayer$MyCollection.__find(DBApiLayer.java:273)
at com.mongodb.DBCollection.findOne(DBCollection.java:347)
at com.mongodb.DBCollection.findOne(DBCollection.java:332)
at com.mongodb.casbah.MongoCollectionBase$class.findOneByID(MongoCollection.scala:232)
at com.mongodb.casbah.MongoCollection.findOneByID(MongoCollection.scala:866)
at com.novus.salat.dao.SalatDAO.findOneById(SalatDAO.scala:353)
at com.novus.salat.dao.ModelCompanion$class.findOneById(ModelCompanion.scala:173)
Generally a connection timeout occurs in a replica set for one of the following reasons:
1) All members are not able to communicate with each other
2) A program connecting to the replica set for an update cannot reach the primary, due to overload or reason 1
3) The replicas are not in sync and one is lagging too far behind
4) A leader election is in progress but has not completed for some reason
Please check that your replica set is consistent and all nodes are working by issuing rs.status() on the primary node; also, as suggested earlier, check the primary's logs for more information. A quick check is sketched below.
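Something like this in the mongo shell on the primary prints each member's state and health (fields as returned by rs.status()):
rs.status().members.forEach(function (m) {
    print(m.name + ' ' + m.stateStr + ' health=' + m.health);
});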