This question already has answers here:
How to increase the max connections in postgres?
I keep getting "remaining connection slots are reserved for non-replication superuser connections" errors when I run postgres:latest in Docker with Docker Compose.
How can I increase the maximum connections Postgres allows with Docker-Compose?
Note: This is not about Postgres alone, but rather how to pass the values to the official Postgres image on Docker.
You can change the configuration file used by your postgresql image: https://docs.docker.com/samples/library/postgres/#database-configuration
The parameter you want to change is max_connections; you will find more information about it here: How to increase the max connections in postgres?
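With Docker Compose specifically, the official image lets you pass server settings straight on the command line, so one way to do it is to override the container command. A minimal sketch — the service name, the image tag, and the value 200 are only placeholders for your own setup:

services:
  db:
    image: postgres:latest
    command: postgres -c max_connections=200
    environment:
      POSTGRES_PASSWORD: example

After docker-compose up you can confirm the new value from psql with SHOW max_connections;.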
This question already has answers here:
Query to check postgresql database status
I have a Postgres database in the production environment which I don't have access to. All I want to do is check whether the database is up and running. Is there any command, program, or anything else to check this? I just need to verify that it's up, as easily as possible. I know the password, the database name, the host/server name and the database account name.
How do I use all of these parameters to check if the database is up and running? The people who configured the production environment have set it up so that no one outside the production environment can touch it. The production environment is Linux, and I am using Windows on my virtual desktop.
You can check for a connection on the default port on which the Postgres service is running.
Most administrators leave database services on their default ports:
Try checking 5432 or 5433.
If you are on Windows and also looking for a broader solution, you might look at dedicated monitoring software such as NetCrunch.
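If the PostgreSQL client tools are installed on the Windows machine, a quick check from there is another option. A small sketch — the host name, port, account and database name below are placeholders for your actual values, and it assumes the server is reachable from your desktop at all:

pg_isready -h prod-db.example.com -p 5432
psql -h prod-db.example.com -p 5432 -U myaccount -d mydb -c "SELECT 1;"

pg_isready only reports whether the server is accepting connections; the psql call additionally verifies that the account, password and database name you have are valid.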
This question already has answers here:
The infamous java.sql.SQLException: No suitable driver found
Matlab and MySQL no suitable driver found
I am trying to connect to a local PostgreSQL 10 setup with Matlab R2015a, and after following the instructions and connection string layout, I'm at a loss to explain why I'm still getting a "No suitable driver found" error.
datasource = 'toronto';
username = 'postgres';
password = '********';
driver = 'org.postgresql.Driver';
server = 'jdbc:postresql://localhost:5432/';
connection = database(datasource, username, password, driver, server)
I've checked this related SO thread but no dice. Here's some extra information; hopefully someone has come across this before.
PostgreSQL 10.1, build 1800 64-bit
Java Version 8, build 1.8.0_91-b14
PostgreSQL JDBC 4.2 Driver, 42.1.4
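One thing that stands out in the snippet above is that the server string spells the scheme postresql instead of postgresql, which by itself would make DriverManager report "No suitable driver found". A corrected sketch — the jar path is an assumption, so point javaaddpath at wherever your postgresql-42.1.4.jar actually lives:

javaaddpath('C:\drivers\postgresql-42.1.4.jar');   % make the JDBC driver visible to Matlab's JVM (path is an assumption)
datasource = 'toronto';
username = 'postgres';
password = '********';
driver = 'org.postgresql.Driver';
server = 'jdbc:postgresql://localhost:5432/';       % note the corrected scheme
connection = database(datasource, username, password, driver, server)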
This question already has answers here:
Too many open files while ensure index mongo
I'm trying to move a MongoDB database with a little over 100 million documents from a server in AWS to a server in GCP. I tried mongodump, which worked, but mongorestore keeps breaking with the following error:
error running create command: 24: Too many open files
How can this be done?
I don't want to transfer it by creating a script on the AWS server that fetches each document and pushes it to an API endpoint on the GCP server, because that would take too long.
Edit (adding more details)
I have already tried setting ulimit -n to unlimited. It doesn't work, as GCP has a hard-coded limit that cannot be modified.
Looks like you are hitting the ulimit for your user. This is likely a function of some or all of the following:
Your user having the default ulimit (probably 256 or 1024 depending on the OS)
The size of your DB; MongoDB's use of memory-mapped files can result in a large number of open files during the restore process
The way in which you are running mongorestore can increase the concurrency thereby increasing the number of file handles which are open concurrently
You can address the number of open files allowed for your user by invoking ulimit -n <some number> to increase the limit for your current shell. The number you choose cannot exceed the hard limit configured on your host. You can also change the ulimit permanently; more details here. This is the root-cause fix, but it is possible that your ability to change the ulimit is constrained by AWS, so you might want to look at reducing the concurrency of your mongorestore process by tweaking the following settings:
--numParallelCollections int
Default: 4
Number of collections mongorestore should restore in parallel.
--numInsertionWorkersPerCollection int
Default: 1
Specifies the number of insertion workers to run concurrently per collection.
If you have chosen values for these other than 1 then you could reduce the concurrency (and hence the number of concurrently open file handles) by setting them as follows:
--numParallelCollections=1 --numInsertionWorkersPerCollection=1
Naturally, this will increase the run time of the restore process, but it might allow you to sneak under the currently configured ulimit. Although, just to reiterate: the root-cause fix is to increase the ulimit.
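Putting that together, an invocation with the concurrency turned all the way down might look like the sketch below — the dump directory and the host are placeholders, and the ulimit value only takes effect up to whatever hard limit your host allows:

# raise the soft open-file limit for this shell (capped by the hard limit)
ulimit -n 64000
# restore one collection at a time with a single insertion worker
mongorestore --host localhost --numParallelCollections=1 --numInsertionWorkersPerCollection=1 /path/to/dump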
I am trying to connect to my remote PostgreSQL database using pgAdmin III. I am able to connect to the database server via the command line using the psql client, but when I try to connect using pgAdmin III 1.16, I get the following error:
ERROR: ACL arrays must be one dimensional.
I have checked the pg_hba.conf entries. The same entries worked for another database server.
pg_hba is not relevant. ACL arrays are used to store privileges for database objects (database, schema, table, sequence, view, function, and so on).
So the problem is that either:
You have some weird data in one of the ACLs
pgAdmin has a bug
The solution would be to:
Enable logging of all queries in the remote database (for example, log_statement = all, or log_min_duration_statement = 0; see the sketch after this list)
Start pgadmin3, let it connect, and let it error out
Check in the Pg logs what the last query pgAdmin issued was, as the problem is likely with data returned by that query
Analyze the data using a psql connection, and either fix the data in the db or report a bug in pgAdmin
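For the logging step, log_statement is set in postgresql.conf (followed by a reload), and for the analysis step the stored ACLs can be read directly from the system catalogs. A sketch, assuming the usual object types are enough to spot the problem:

-- in postgresql.conf, then reload with: SELECT pg_reload_conf();
log_statement = 'all'

-- from a psql session, look for malformed ACL entries
SELECT datname, datacl FROM pg_database;
SELECT nspname, nspacl FROM pg_namespace;
SELECT relname, relacl FROM pg_class WHERE relacl IS NOT NULL;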
This question already has answers here:
Reducing MongoDB database file size
I have a database in MongoDB called dump. It has currently reached 6GB on my server, so I decided to delete 90% of the data in this database to reduce the disk space it occupies. But after doing that its size is still 6GB, while the true storage size is only 250MB.
I guess this is by design in MongoDB? Is there any convenient way to reduce its size? Thanks in advance.
Try (source):
$ mongo mydb
> db.repairDatabase()
To compress the data files, you can either start up MongoDB with mongod --repair, or connect to the database through the shell and run db.repairDatabase().
There's also a new compact command scheduled for v1.9 that will do in-place compaction.
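For reference, the same thing from the command line, plus the per-collection variant — mydb, mycollection and the dbpath are placeholders, and the repair run assumes the regular mongod is stopped first:

# offline repair of all databases under this dbpath (run while the regular mongod is stopped)
mongod --repair --dbpath /data/db

# or, from the shell, compact a single collection in place
$ mongo mydb
> db.runCommand({ compact: "mycollection" })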