"Too many connections for role" error in Heroku (with Ebean) - postgresql

I'm using the configuration below for Ebean, so there should never be more than 20 open connections, which is the limit for the Hobby-basic plan I use on Heroku. Even so, Heroku throws the error FATAL: too many connections for role... from time to time. Any clues?
db.default.partitionCount=1
db.default.maxConnectionsPerPartition=20
db.default.minConnectionsPerPartition=2
db.default.acquireIncrement=1
db.default.acquireRetryAttempts=3
db.default.acquireRetryDelay=30 seconds
db.default.connectionTimeout=30 seconds
db.default.idleMaxAge=5 minutes
db.default.idleConnectionTestPeriod=0
db.default.maxConnectionAge=15 minutes
db.default.initSQL="SELECT 1"
db.default.releaseHelperThreads=0
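One thing worth checking: the pool cap applies per process, so if more than one dyno is attached to the database, or a just-restarted dyno's old connections haven't timed out yet, the server can see more than 20 connections even though each pool is capped at 20. To see what the server itself counts, you can connect (e.g. with heroku pg:psql) and group sessions by role; this is a generic diagnostic, not Ebean-specific, and the state column assumes PostgreSQL 9.2+:
SELECT usename, state, count(*)
FROM pg_stat_activity
GROUP BY usename, state
ORDER BY count(*) DESC;
If the total here exceeds your pool size, the extra connections are coming from outside this pool (another dyno, a one-off console, or leftovers from a restart).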

Related

ERROR: [dn3]: SSL connection has been closed unexpectedly

I have been working with the TimescaleDB multinode clustering concept. When I add a data node by running the add_data_node query on the access node, I get an error: SSL connection has been closed unexpectedly
Config in the access node
postgresql.conf
listen_addresses = '*'
enable_partitionwise_aggregate = on
jit = off
Config in the data node
postgresql.conf
listen_addresses = '*'
max_prepared_transactions = 150
wal_level = logical
If you know the root cause of the problem, let me know.
When the add_data_node query runs, a connection is made from the access node to the data node; if any unexpected error is thrown by the data node service at that point, it surfaces as SSL connection has been closed unexpectedly.
To find the underlying error in the PostgreSQL log, we need to enable two configuration settings:
log_connections = on
log_disconnections = on
Through the log, what I found is that shared_preload_libraries was set to an empty string in the data node's postgresql.conf; it needs to contain the string 'timescaledb'.
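If your setup shows the same symptom, the same change can be made from SQL instead of editing postgresql.conf by hand (this is a standard ALTER SYSTEM call; shared_preload_libraries only takes effect after a PostgreSQL restart):
ALTER SYSTEM SET shared_preload_libraries = 'timescaledb';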
I would not recommend leaving this connection logging enabled in production.
https://www.digitalocean.com/community/questions/how-can-i-investigate-postgres-managed-server-error-ssl-connection-has-been-closed-unexpectedly

postgresql pg_database_size throwing exception on random times

We are using Azure Database for PostgreSQL (the managed service) and create a database for each user when they register with the application (fewer than 25 user databases right now).
For reporting purposes we need the size of each user's database.
To retrieve the database sizes we have a Postgres function which fires the following query:
SELECT pg_database.datname, pg_database_size(pg_database.datname)
FROM pg_database
We execute this function every hour through an Azure Function, but at random times Postgres throws exceptions:
Exception: Npgsql.PostgresException (0x80004005): 58P01: could not read directory "base/16452": No such file or directory at...
The exception remains the same most of the time, just with a different directory or file location.
Sometimes it also throws this exception:
Exception: Npgsql.NpgsqlException (0x80004005): Exception while reading from stream ---> System.IO.IOException: Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. ---> System.Net.Sockets.SocketException
Working on the solution at the MSDN forums here.
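A hedged guess at the 58P01: on a managed service that creates and drops per-user databases, a database can disappear between the moment pg_database is read and the moment pg_database_size() stats its directory, which yields exactly this could not read directory error. One workaround is to size the databases one at a time and skip any that vanish mid-scan; a minimal PL/pgSQL sketch (safe_database_sizes is a hypothetical name, and 58P01 maps to the undefined_file condition):
CREATE OR REPLACE FUNCTION safe_database_sizes()
RETURNS TABLE(datname name, size_bytes bigint)
LANGUAGE plpgsql AS $$
DECLARE
  d record;
BEGIN
  FOR d IN SELECT pg_database.datname FROM pg_database LOOP
    BEGIN
      datname := d.datname;
      size_bytes := pg_database_size(d.datname);
      RETURN NEXT;
    EXCEPTION WHEN undefined_file THEN
      NULL;  -- 58P01: the database directory vanished mid-scan; skip this row
    END;
  END LOOP;
END;
$$;
Call it with SELECT * FROM safe_database_sizes(); in place of the original query. The second (socket timeout) exception looks like a separate transient network failure between the Azure Function and the server.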

error in postgres (connection limit exceeded for non-superusers)

We got an error from postgres one day.
"connection limit exceeded for non-superusers"
In postgresql.conf, max_connections is set to 100.
At that time I checked the activity with the command (select * from pg_stat_activity;)
and the result was only 17 sessions.
We have used this application for almost 10 years and never changed anything.
This is the first time we have received this kind of error.
So I assume that "not closing the connections properly in the program"
is not the cause of this error.
Any tips?
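A couple of generic checks that may narrow this down. This message is typically raised when all non-reserved connection slots are taken, and a short-lived spike can fill them and drain again before you run pg_stat_activity, so the 17 sessions you saw afterwards don't rule a spike out. It is also worth confirming that no per-role or per-database limit is set (these are standard catalog columns):
SHOW max_connections;
SHOW superuser_reserved_connections;
SELECT rolname, rolconnlimit FROM pg_roles WHERE rolconnlimit <> -1;
SELECT datname, datconnlimit FROM pg_database WHERE datconnlimit <> -1;
Enabling log_connections and log_disconnections for a while would show whether something briefly opened a large number of sessions.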

Powercenter SQL1224N error connecting DB2

I'm running a workflow in PowerCenter that is constantly getting an SQL1224N error.
The process executes a query against one table (POLIZA) with 800k rows, retrieves the first 10k rows, and then starts executing against another table with 75M rows. At that moment an idle-thread error appears in DB2, but the PowerCenter process keeps running and retrieves the 75M rows; when that completes (after 20 minutes), the following errors come up, related to the first table:
[IBM][CLI Driver] SQL1224N A database agent could not be started to service a request, or was terminated as a result of a database system shutdown or a force command. SQLSTATE=55032
sqlstate = 40003
[IBM][CLI Driver] SQL1224N A database agent could not be started to service a request, or was terminated as a result of a database system shutdown or a force command. SQLSTATE=55032
sqlstate = 40003
Database driver error...
Function Name : Fetch
SQL Stmt : SELECT POLIZA.BSPOL_BSCODCIA, POLIZA.BSPOL_BSRAMOCO
FROM POLIZA
WHERE
EXA01.POLIZA.BSPOL_IDEMPR='0015' for read only with ur
Native error code = -1224
DB2 Fatal Error].
I have a similar process running against the same two tables and it is working fine; the only difference I can see is that the DB2 user is different.
Any idea how I can fix this?
Regards
The common causes for -1224 are:
Your instance or database has crashed, or
Something/somebody is forcing off your application (FORCE APPLICATION or equivalent)
As for the crash, I think you would know by now. This typically requires a database or instance restart. At any rate, can you please have a look into your DIAGPATH to check for any FODC* directories whose timestamps match the timestamps of the -1224 errors?
As for the FORCE case, you should find some evidence of the -1224 in db2diag.log. Try searching for the decimal -1224, but also for its hex representation (0xFFFFFB38).

"Lost connection to MySQL server during query" in Google Cloud SQL

I am having a weird, recurring but not constant error where I get "2013, 'Lost connection to MySQL server during query'". These are the premises:
a Python app runs for around 15-20 minutes every hour and then stops (scheduled hourly by cron)
the app is on a GCE n1-highcpu-2 instance; the db is on a D1 with a per-package pricing plan and the following mysql flags
max_allowed_packet 1073741824
slow_query_log on
log_output TABLE
log_queries_not_using_indexes on
the database is accessed by this app and this app only, so the usage pattern is consistent: around 20 consecutive minutes per hour and then nothing at all for the other 40 minutes
the first query it runs is
SELECT users.user_id, users.access_token, users.access_token_secret, users.screen_name, metadata.last_id
FROM users
LEFT OUTER JOIN metadata ON users.user_id = metadata.user_id
WHERE users.enabled = 1
the above query joins two tables that are each around 700 rows long and do not have indexes
after this query (which takes 0.2 seconds when it runs without problems) the app proceeds without any issues
Looking at the logs I see that each time this error presents itself the interval between the start of the query and the error is 15 minutes.
I've also enabled the slow query log, and those queries are registered like this:
start_time: 2014-10-27 13:19:04
query_time: 00:00:00
lock_time: 00:00:00
rows_sent: 760
rows_examined: 1514
db: foobar
last_insert_id: 0
insert_id: 0
server_id: 1234567
sql_text: ...
Any ideas?
If your connection is idle for the 15-minute gap, then you are probably seeing GCE disconnect your idle TCP connection, as described at https://cloud.google.com/compute/docs/troubleshooting#communicatewithinternet. Try the workaround that page suggests:
sudo /sbin/sysctl -w net.ipv4.tcp_keepalive_time=60 net.ipv4.tcp_keepalive_intvl=60 net.ipv4.tcp_keepalive_probes=5
(You may need to put this configuration into /etc/sysctl.conf to make it permanent)
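To make the same settings survive a reboot, the equivalent /etc/sysctl.conf entries would be the following (apply them without rebooting via sudo sysctl -p):
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 5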