I'm writing a function that kills all operations currently running on the server; however, pg_terminate_backend can only be executed by superusers. I've searched around but have failed to find a concrete solution or lead regarding this.
Is it actually possible to grant a certain user permission to execute it without making that user a superuser?
Furthermore, I've found some information about people using security definers. I've never used those and didn't really find a use case for them with pg_terminate_backend.
Anyone ever had any experience with this or knows whether this is possible?
In recent PostgreSQL versions you can grant the user the pg_signal_backend role.
In an antediluvian version like 8.0, you'd have to write a SECURITY DEFINER function owned by a superuser, REVOKE EXECUTE on the function from PUBLIC, and GRANT it to the user that needs to terminate backends.
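As a sketch of both approaches, assuming a role named app_admin that should be allowed to terminate backends (the role and function names are placeholders):

```sql
-- Modern approach: grant the built-in role.
GRANT pg_signal_backend TO app_admin;

-- Older approach (before the pg_signal_backend role existed):
-- a SECURITY DEFINER wrapper, created and owned by a superuser.
CREATE OR REPLACE FUNCTION terminate_backend(target_pid integer)
RETURNS boolean AS $$
  SELECT pg_terminate_backend(target_pid);
$$ LANGUAGE sql SECURITY DEFINER;

-- Lock the wrapper down so only the intended user can call it.
REVOKE EXECUTE ON FUNCTION terminate_backend(integer) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION terminate_backend(integer) TO app_admin;
```

Note that pg_signal_backend also allows cancelling queries (pg_cancel_backend), not just terminating backends, and it does not allow signalling superuser backends.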
Related
I would like to develop a specific app that can be used to access a database developed in PostgreSQL. The app performs calculations and asks for the required data from the database server.
The user can download the app from a website if they have registered. After starting the app, the user has to log in to be able to use it.
Now the question:
What would be the most sensible solution in this example?
To be honest, I don't want to create a separate role for each user.
My idea is that the app only accesses the database via a general role, for example with the name "usership". With this role, a user only has well-defined read access. It is possible that users should also be able to save their own settings or measured values under their user name in certain tables. Access to those tables would then only be possible with the correct user name and password, which are supplied with each operation (this effort would not be necessary for read-only access to other tables with general data).
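As a sketch of that idea, assuming a single database role named usership and application-level user accounts (all table and column names here are hypothetical), per-user data could be keyed by the application user name rather than by a database role:

```sql
-- Application-level users, unrelated to database roles.
CREATE TABLE app_user (
    user_name     text PRIMARY KEY,
    password_hash text NOT NULL  -- verified by the app, not by PostgreSQL
);

-- Per-user settings, keyed by the application user name.
CREATE TABLE user_setting (
    user_name     text REFERENCES app_user,
    setting_key   text,
    setting_value text,
    PRIMARY KEY (user_name, setting_key)
);

-- The shared role gets read access to general data
-- and write access only where per-user data lives.
GRANT SELECT ON app_user TO usership;
GRANT SELECT, INSERT, UPDATE ON user_setting TO usership;
```

With this layout the database cannot tell users apart; enforcing that a user only touches their own rows is entirely the application's responsibility.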
The question is whether there are any limits to how many apps can communicate with the database at the same time via the same database credentials / username "usership".
I don't want to have to create a separate DB role for each customer. Somehow that doesn't seem right to me, if only because adding or removing employees means interventions in the database (CREATE ROLE / DROP ROLE). Basically, the app should do nothing other than what a website does where several users are logged in at the same time, the only difference being that the app does not run in the browser and everything works either on the client side at the application level or on the database server.
I'm not aware of any limits on sharing of usernames + passwords in postgres. You can have hundreds or thousands of concurrent connections using the same username + password.
There can be issues with many hundreds or thousands of concurrent connections, depending on your database hardware, especially ram.
While Postgres supports thousands of concurrent connections in theory, in practice I've run into memory issues as the number of open connections increases. If this is a problem and a large percentage of your connections are idle at any one moment, you can add a layer of connection pooling with something like pgbouncer, but keep in mind that adds another process to monitor.
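A minimal pgbouncer configuration for this scenario might look like the following sketch (database name, paths, and pool sizes are placeholders to adjust for your setup):

```ini
; pgbouncer.ini -- minimal sketch
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling lets many idle clients share few server connections
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```

Clients then connect to port 6432 instead of 5432; with transaction pooling, a thousand mostly-idle app connections can be served by a few dozen real Postgres backends.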
In general, however, I wouldn't recommend this approach. You'd be providing direct, essentially anonymous access to your shared database. I expect it would be difficult to secure your database credentials in the client, and with direct access it would be fairly easy to construct SQL queries that could take down your database server. This would be difficult to monitor or protect against, since all users would look the same and you'd have no way to revoke access in case of abuse (without changing the password for everyone who has access).
From a security standpoint I'd definitely recommend being able to identify your users, monitor their usage separately and revoke access individually. I don't know of any performance issues with having many thousands of separate postgres users/credentials.
-- Scalability --
Using a postgres cluster with read replicas and load balancing (e.g. https://aws.amazon.com/premiumsupport/knowledge-center/requests-rds-read-replicas/) you should be able to scale this horizontally fairly easily if the need arises.
I have a Redshift database in my company (not in my power to change that) and recently some data just disappeared. I thought about creating some kind of trigger to identify when any delete happens and trace the source, but I learned Redshift doesn't support triggers. Are there any options for monitoring which user deleted from the database, and when?
Ideally you should have a different user or role for each process or client connecting to Redshift, and use grants to narrow down who can do what.
Then you should use GRANT to give DELETE privileges only to the specific users or roles that need them.
Also, there are query-history system tables, such as STL_QUERY, that you can query to see which user issued a delete.
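For example, assuming you have access to the system tables, something like this against STL_QUERY can show who ran DELETE statements recently (note that STL tables only retain a few days of history):

```sql
-- Recent queries containing DELETE, with the user who ran them.
SELECT q.userid,
       u.usename,
       q.starttime,
       q.querytxt
FROM stl_query q
JOIN pg_user u
  ON u.usesysid = q.userid
WHERE q.querytxt ILIKE '%delete%'
ORDER BY q.starttime DESC;
```

The ILIKE filter is a coarse text match, so it can also catch SELECTs that merely mention "delete"; refine the pattern or inspect querytxt manually.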
I hope it will help.
If Db2 uses OS authentication and I delete a Db2 user at the OS level, what will be the impact? Will Db2 still work fine, and will the privileges that I granted to the user still be available after the user is created again?
When asking for help with Db2, please mention your Db2-server platform (z/OS, IBM i, Linux/Unix/Windows), because the answer can be different per platform. There are also special tags for your question that you can use to indicate the Db2 platform (db2-zos, db2-400, db2-luw).
If you remove the operating system user, the impact is that the user can no longer connect to the Db2 database(s). But any GRANTs that were previously created and stored inside the database(s) will remain unchanged (unless something REVOKEs them), even though they will not be used once all pre-existing connections by that removed operating-system user are terminated.
For Db2-Linux/Unix/Windows, if you recreate the user in the operating system the previous GRANTS will reapply only if they are still present inside the database and the user successfully reconnects. This behaviour may be different on other platforms.
If the Db2-server is configured with special plugins for security, or uses LDAP or other external tooling then the answer can also be different.
I would like the ability to protect against the deletion of a cloud SQL instance. This seems like a good step to take to avoid actions from an angry employee or a regretful click.
Google added a deletion protection flag for Cloud SQL in August 2022.
https://cloud.google.com/sql/docs/mysql/deletion-protection
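For example, assuming an instance named my-instance (a placeholder), the flag can be toggled with gcloud, provided your gcloud version supports it:

```shell
# Enable deletion protection on an existing instance.
gcloud sql instances patch my-instance --deletion-protection

# Disable it again before an intentional deletion.
gcloud sql instances patch my-instance --no-deletion-protection
```

With the flag set, an attempted delete of the instance fails until protection is explicitly turned off, which guards against both console misclicks and scripted mistakes.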
I couldn't find anything that literally protects the instance against deletion, but you could use the predefined roles in your project to try to protect your instances from, as you said, angry employees.
For example:
Keeping the role owner to yourself (assuming you are, indeed, the owner of this project).
Depending on the needs of the employees, you can probably assign them the role cloudsql.editor or similar. If this is too much, you can create your own custom roles to narrow down what you need.
As for a regretful click, there is not much you can do. You could regularly create an export and save it in one of your buckets, just in case you need to recreate your instance after a 'regretful' click.
Well, Terraform certainly seems to have added some kind of deletion protection on the GCP SQL instance. When I try to run `terraform destroy`, I get this error:
Error: Error, failed to delete instance because deletion_protection is set to true. Set it to false to proceed with instance deletion
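As a sketch (resource and instance names here are placeholders), this guard is the deletion_protection argument on the google_sql_database_instance resource itself; it defaults to true, so destroying the instance requires setting it to false and applying that change first:

```hcl
resource "google_sql_database_instance" "main" {
  name             = "my-instance"
  database_version = "POSTGRES_14"
  region           = "us-central1"

  # Terraform refuses to destroy the instance while this is true (the default).
  deletion_protection = false

  settings {
    tier = "db-f1-micro"
  }
}
```

Note this is enforced by the Terraform provider at plan/apply time, which is separate from the server-side deletion protection setting that Google added in 2022.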
Perhaps this functionality was added after the OP had reported the issue - which is quite possible given how old this thread is.
A related issue which talks about this.
Somebody told me that if I use the pgcrypto pgp_pub_decrypt() function in my queries, the logs could reveal the secret key used to decrypt my data:
SELECT pgp_pub_decrypt(string_to_decrypt, private_key)
I cannot check that because I don't have access to the logs, but if this is true, from my point of view it should be considered a security issue. Is this true?
Yes, this is true, assuming statement logging is enabled.
It is also probably visible in pg_stat_activity to a superuser who is looking at the right moment.
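For instance, a superuser watching at the right moment would see the full statement, key material included, with a query along these lines (a sketch):

```sql
-- Run as superuser (or a role with pg_read_all_stats)
-- while the decrypting session's statement is executing:
SELECT pid, usename, query
FROM pg_stat_activity
WHERE query ILIKE '%pgp_pub_decrypt%';
```

The query column shows the currently running (or most recent) statement text verbatim, including any literal key passed as an argument.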
Of course, a superuser could also install an extension to secretly log just crypto-related function calls, that's the nature of having superuser access.
If you don't control the server your database is running on and don't trust the people who do, then you shouldn't store sensitive information in it.