We have a PostgreSQL database with PostGIS running, and today we ran into the issue that too few connections were available. We mostly use QGIS to access the database. We noticed the issue because multiple users got the following error:
FATAL: remaining connection slots are reserved for non-replication superuser connections
When checking the number of connections in pgAdmin, I noticed something I had seen before but never paid much attention to, since it had never caused problems:
QGIS creates multiple connections to PostgreSQL for the same user to the same database.
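For anyone who wants to check this outside pgAdmin, a query against pg_stat_activity shows the open connections grouped by user and client application (a minimal sketch; QGIS normally reports itself via application_name):

    -- Count open connections per user, database and client application.
    SELECT usename, datname, application_name, count(*) AS connections
    FROM pg_stat_activity
    GROUP BY usename, datname, application_name
    ORDER BY connections DESC;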
Now I am wondering why this is the case and whether (and how) I can change that behaviour.
Could this happen for example if a person got access rights to a database through different user groups?
One possible explanation is an issue some users run into: if you add layers to a QGIS project that was created earlier, QGIS may ask for your login credentials multiple times if they have changed since. That suggests to me that different credentials are saved with the project, and that multiple connections might therefore be used. Can anyone confirm or disprove this? Suggestions for a test scenario to check it are also welcome.
Any ideas, hints or solutions are welcome.
By the way: yes, we increased max_connections, but I want to understand why this happens and get to the root of the situation.
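For reference, raising the limit can be done from SQL as well as by editing postgresql.conf; a sketch, where 200 is just an example value (max_connections only takes effect after a server restart):

    ALTER SYSTEM SET max_connections = 200;  -- written to postgresql.auto.conf
    SHOW max_connections;                    -- verify the active value after restart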
Edit: we never got an answer, so instead we tried it with the server and client on the same PC. This morning I tried logging into the client and it would hang on "authenticating". So I shut everything down and went back to my life for a bit. This evening I tried again, and now it says the account or password is incorrect. We tried recreating the account and password: same thing. We made a new account and password: same thing. Any help would be appreciated, both on this new issue and on the original one below (because eventually I want us both to be able to play on our own computers). TIA
My son and I have been following the guide. We did fine until we got to the part that says: "Open the acore_auth database and find the realmlist table. You need to edit the address field according to your needs". We have searched through everything in acore_auth in Heidi and we have not been able to find where the realmlist table is, let alone how to edit it. We are using one dedicated computer for the server, and we will join the server via our laptops. Any help would be greatly appreciated. The guide has been very detailed thus far, thank you. My son is almost done with his AS in Computer Programming, with an emphasis on game design, not networking.
Upon further inspection, we discovered our acore_auth database is empty. All of the acore_* databases are empty. What did we miss? Any ideas?
You have to edit the realmlist table of the acore_auth database and change the address field to the LAN address of the PC where the server is running:
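For example, in HeidiSQL's query tab, something like this (the IP is a placeholder for your server PC's LAN address, and id = 1 assumes the default single realm):

    -- Point the realm at the host PC's LAN IP so other machines can connect.
    UPDATE acore_auth.realmlist
    SET address = '192.168.1.10'
    WHERE id = 1;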
Reading your question, it sounds like you are trying to run AC on Windows, so I recommend reading the installation guide here: https://www.azerothcore.org/wiki/installation#azerothcore-classic-setup
It also covers installation on macOS and Linux if you happen to use them.
In short: to populate your databases for the first time, you need to run the Authserver and Worldserver applications, which will automatically import the base and update files into your databases. Then you go to acore_auth.realmlist and set the address field to the LAN IP (192.168.x.x) of the host PC so other PCs on your network can connect to it.
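A quick way to verify both steps at once is to query the table after the servers have run (a sketch; the exact column list may vary between AzerothCore versions):

    -- The row should exist and address should be the host PC's LAN IP.
    SELECT id, name, address, port FROM acore_auth.realmlist;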
I would like to develop an app that accesses a database built in PostgreSQL. The app performs calculations and requests the required data from the database server.
Users can download the app from a website if they have registered. After starting the app, the user has to log in to be able to use it.
Now the question:
What would be the most sensible solution in this example?
To be honest, I don't want to create a separate role for each user.
My idea is that the app only accesses the database via a general role, for example named "usership". With this role, a user only has well-defined read access. Users should possibly also be able to save their own settings or measured values under their user name in certain tables. Access would then only be possible with the correct user name and password, which would be supplied with each operation on the relevant tables (this effort would not be necessary for read-only access to other tables with general data).
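To make the idea concrete, a hypothetical sketch of such a shared role (appdb, user_settings and the password are placeholder names, not from any real setup):

    -- One shared application role with well-defined read access.
    CREATE ROLE usership LOGIN PASSWORD 'app-secret';
    GRANT CONNECT ON DATABASE appdb TO usership;
    GRANT USAGE ON SCHEMA public TO usership;
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO usership;

    -- Per-user settings keyed by an application-level user name.
    CREATE TABLE user_settings (
        app_user text NOT NULL,
        key      text NOT NULL,
        value    text,
        PRIMARY KEY (app_user, key)
    );
    GRANT SELECT, INSERT, UPDATE ON user_settings TO usership;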
The question is whether there are any limits on how many app instances can communicate with the database at the same time via the same database credentials / the username "usership".
I don't want to have to create a separate DB role for each customer. Somehow that doesn't seem right to me, if only because adding new employees or removing them would mean interventions in the database (CREATE/DROP ROLE). Basically, the app should do nothing more than a website where several users are logged in at the same time; the only difference is that the app does not run in the browser, and everything happens either client-side at the application level or on the database server.
I'm not aware of any limits on sharing of usernames + passwords in postgres. You can have hundreds or thousands of concurrent connections using the same username + password.
There can be issues with many hundreds or thousands of concurrent connections, depending on your database hardware, especially RAM.
While Postgres supports thousands of concurrent connections in theory, in practice I've run into memory issues as the number of open connections increases. If this becomes a problem and a large percentage of your connections are idle at any given moment, you can add a layer of connection pooling with something like PgBouncer, but keep in mind that adds another process to monitor.
In general, however, I wouldn't recommend this approach. You'd be providing direct, essentially anonymous access to your shared database. I expect it would be difficult to secure your database credentials in the client, and with direct access it would be fairly easy to construct SQL queries that take down your database server. This would be difficult to monitor or protect against, since all users would look the same and you'd have no way to revoke access in case of abuse (without changing the password for everyone who has access).
From a security standpoint I'd definitely recommend being able to identify your users, monitor their usage separately and revoke access individually. I don't know of any performance issues with having many thousands of separate postgres users/credentials.
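Per-user roles are also cheap to manage in plain SQL; a hypothetical sketch (alice is a placeholder, and usership is the shared role from the question acting as a group):

    CREATE ROLE alice LOGIN PASSWORD 'alice-secret';
    GRANT usership TO alice;     -- inherit the shared read-only privileges

    -- In case of abuse, lock out just this one user...
    ALTER ROLE alice NOLOGIN;
    -- ...or remove the role entirely once nothing depends on it.
    DROP ROLE alice;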
-- Scalability --
Using a postgres cluster with read replicas and load balancing (e.g. https://aws.amazon.com/premiumsupport/knowledge-center/requests-rds-read-replicas/) you should be able to scale this horizontally fairly easily if the need arises.
I am having a big problem that is quite difficult to search for.
I have a server in Ubuntu, where inside that server I have installed:
GitLab (holds all our projects)
PostgreSQL (a database independent of GitLab, used for a personal project)
Tomcat with a web app (Spring Boot; this uses PostgreSQL)
This server is still for testing; it is used for a few specific things (I mean, its use and access are limited and controlled).
I am having various problems:
Very frequently, almost every day, the postgres user's password on the PostgreSQL server gets "erased", without anyone doing it manually; it just happens on its own. I notice because the application stops responding, and when I then access PostgreSQL I find that the postgres user has no password.
I have looked in many places and can't find anything; I really don't know where else to look. If this has happened to you, or you have any information about it, I would be grateful if you could share it.
------More information added----------
I was looking at the postgres logs from before the authentication failures started, and I see the following at times when no one could have been using the Spring Boot server:
--2020-01-17 00:30:21.286
And also the two log entries that appear just before that moment. Could something be deleting my password?
Thank you.
PostgreSQL does not randomly delete its own passwords, and I really doubt Tomcat or GitLab does either. Indeed, they shouldn't even have access to the server as the postgres user or any other superuser, and so couldn't change the password even if they wanted to.
It seems likely that there is an intruder in your system. After gaining access, they create their own user with their own password. Disabling your normal superuser from logging in is then a common way to try to prevent you from regaining control and kicking them out. Do any users exist that you do not recognize?
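A quick check from psql (this only lists roles; an intruder may of course have cleaned up after themselves):

    -- Look for roles you did not create; superusers are the dangerous ones.
    SELECT rolname, rolsuper, rolcanlogin FROM pg_roles ORDER BY rolname;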
The bit of the log file you posted clearly shows someone trying to guess your password, starting at 2:58. You aren't logging IP addresses (%h), so it doesn't show where they are coming from. It doesn't show that they succeeded, but unless you have log_connections = on, it wouldn't show successes.
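A sketch of how to turn both on without editing postgresql.conf by hand (both settings only need a config reload, not a restart):

    ALTER SYSTEM SET log_connections = on;
    ALTER SYSTEM SET log_line_prefix = '%m [%p] %h %u %d ';  -- %h logs the client IP
    SELECT pg_reload_conf();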
I have a MongoDB client on three EC2 instances and have created a replica set. Last time I had a problem, a space constraint stopped my mongod process and halted the application; then, a couple of days back, some of my tables were gone from the database, so I set up logging on my database just to catch it if anything like that happened again. In a fresh incident this morning, I was unable to log in to my system, and that's when I found out that the whole database was empty. I checked other SO questions like this one, which suggest setting up a TTL, which I haven't done at all.
Now, how do I debug this situation and do a proper root cause analysis? I can't find anything in my debug logs either; the tables just vanished. How do I set up a proper logging mechanism, and how do I ensure that my tables are never deleted again?
Today I got a mail from Amazon saying that I was probably running an unsecured version of MongoDB and that this may have caused the issue. So whoever is facing this issue, please go through the Security Checklist provided by MongoDB. Some of the points in there are absolutely necessary:
1. Enable Access Control and Enforce Authentication
2. Encrypt Communication
3. Limit Network Exposure
These three are the core, and depending on how many people access your database, you can also configure Role-Based Access Control.
These are all things I have done. Before this incident I had not taken security that seriously, but after being hit by it, I made sure I had all the necessary precautions in place.
Hope this helps someone.
In order to secure our database, we create a schema for each new customer. We then create a user for this schema, and when a customer logs in via the web we use their user, which prevents them from gaining access to other areas of the database.
Our issue is with connection pooling as it is a bit inefficient to keep creating/dropping new connections for these users. We would like to have a solution that can work across many hundreds of different database users.
We've looked at PgBouncer, but the issue here is that we would have to add a line to an ini file for each user and restart PgBouncer every time we set up a customer. This is not a great solution.
Is there an alternative solution that works in real time and would mean a customer's connection(s) stay in the pool whilst they are active?
According to the latest release notes, PgBouncer might actually do this, though I haven't tried it:
Pooling mode can be configured both per-database and per-user.
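If that doesn't pan out, PgBouncer's auth_query/auth_user mechanism may also help: instead of maintaining an auth_file entry per user, PgBouncer looks passwords up in the database itself. A common sketch uses a SECURITY DEFINER function (the function name and setup here are illustrative, not from the question):

    -- Runs with its owner's privileges, so the pgbouncer role never
    -- needs direct access to pg_shadow.
    CREATE OR REPLACE FUNCTION pgbouncer_lookup(p_user text)
    RETURNS TABLE (usename name, passwd text)
    LANGUAGE sql SECURITY DEFINER AS $$
        SELECT usename, passwd FROM pg_shadow WHERE usename = p_user;
    $$;

    -- pgbouncer.ini would then reference it roughly like this:
    --   auth_user  = pgbouncer
    --   auth_query = SELECT usename, passwd FROM pgbouncer_lookup($1)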
As for the use case in general: we had this kind of issue a while ago as well. We went with connection pooling using one user/database and multiple schemas. Before running a query we simply issued SET search_path TO schemaName. As for logging, we had a compliance mode where we could log activity per customer and save it in the appropriate schema.
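Roughly like this on a pooled connection (customer_42 and orders are placeholder names):

    SET search_path TO customer_42;
    SELECT * FROM orders;        -- now resolves to customer_42.orders

    -- Reset before handing the connection back to the pool.
    RESET search_path;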