Is there a tool that will allow me to schedule PostgreSQL queries to run at regular intervals without being an admin? I'm looking for solutions that would work on a Mac.
I only have write privileges (insert, update, delete) on certain schemas of the database but would like to schedule a query that runs on one of these schemas every day.
pgAgent is the obvious choice but I think I need to be an admin to use/install that.
Use pg_cron
INSERT INTO cron.job (schedule, command, nodename, nodeport, database, username)
VALUES ('0 4 * * *', 'VACUUM', 'worker-node-1', 5432, 'postgres', 'marco');
This requires you to install the extension as an admin, though.
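For reference, newer pg_cron releases also expose helper functions so you do not have to insert into cron.job directly; the schedule below mirrors the one above, and the job id passed to cron.unschedule is just a placeholder:

-- schedule the same nightly VACUUM; returns the new job id
SELECT cron.schedule('0 4 * * *', 'VACUUM');
-- remove a job later by the id that cron.schedule returned
SELECT cron.unschedule(1);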
You can also use crontab -e (cron is available on macOS) to set up a scheduled task as a regular user, even for non-DB tasks.
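A minimal sketch of that approach, assuming passwordless authentication is set up via ~/.pgpass and using placeholder connection details, script path, and log file:

# crontab entry: run daily_update.sql against mydb every day at 04:00
0 4 * * * /usr/local/bin/psql -h db.example.com -p 5432 -U myuser -d mydb -f /Users/me/daily_update.sql >> /Users/me/cron_psql.log 2>&1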
Related
I have a Postgres database defined with the public schema and it is accessed from within a Python application. Before doing a pg_dump, the tables can be accessed without using a qualified table name, for example select * from user works. After doing the pg_dump, select * from user fails with a relation "user" does not exist error, but select * from public.user works. No restore has been performed.
Since this is within an application, I cannot change the access to include the schema. The application uses sqlalchemy and pgbouncer for interacting with the database.
In trying to figure out what's happening, I've discovered that running pg_dump causes the session to change. Before running the command, by querying pg_stat_activity, I can see there are 10 sessions in the pool, one active and nine idle. After running the command, a different session is active and the other nine are idle. Also, the settings in pg_db_role_setting and pg_user look correct for the session that I can see. But, even when those look correct, the query select * from user fails.
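For reference, the kind of session inspection described above looks roughly like this; which settings get checked is an assumption, but the catalog views are standard PostgreSQL:

-- list pooled sessions and their state for the current database
SELECT pid, usename, state, application_name
FROM pg_stat_activity
WHERE datname = current_database();

-- what the current session resolves unqualified names against
SHOW search_path;

-- per-database / per-role overrides that could change search_path
SELECT * FROM pg_db_role_setting;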
Also, just for reference, the code currently does not contain pg_dump and runs fine. As soon as I add the pg_dump, I see the issues mentioned.
Is there anything in pg_dump that could be causing what I'm seeing or is it just a result of going to another session? Since these are some of the first commands being run after running migrations, is there any reason the sessions should have different settings? What else am I missing?
Thanks!
I'm running postgres on GCP SQL service.
I have a main and a read replica.
I've enabled pg_stat_statements on the main node, but I still get messages that I have insufficient privileges for almost every row.
When I tried to enable the extension on the read replica, it gave me the error: cannot execute CREATE EXTENSION in a read-only transaction.
I tried all of those actions with the highest-privilege user I have (a member of cloudsqlsuperuser, basically the same as the default postgres user).
So I have 2 questions:
How do I fix the privileges issue so I can see the statistics in the table?
How do I enable the extension on the read replica?
Thanks!
After running some more tests on Postgres 9.6, I have also seen the <insufficient privilege> messages.
I have run the following query on both postgres 9.6 and 13 and obtained different results:
SELECT userid, usename, query
FROM pg_stat_statements
INNER JOIN pg_catalog.pg_user
ON userid = usesysid;
I noticed in Postgres 9.6 that the queries I cannot see come from the roles/users cloudsqlagent and cloudsqladmin (preconfigured Cloud SQL Postgres roles).
This does not happen with Postgres 13 - or, more precisely, with versions 10 and higher - because from version 10 onward, with the pg_stat_statements extension, SQL statements from all users are visible to users with the cloudsqlsuperuser role. This is the behavior of the product across different versions and it is described in the blue box of this link.
Basically, only in version 9.6 are the SQL statements from all users NOT visible to users with the cloudsqlsuperuser role.
So if I enable it on the master, it should be enabled on the replica as well?
Yes, after enabling the extension on the master you can connect to the replica and verify that pg_stat_statements has been enabled with the following command:
SELECT * FROM pg_extension;
If you would like more uniform behavior across Postgres versions, or if you strongly need the SQL statements from all users to be visible to the cloudsqlsuperuser role, I would recommend filing a public issue tracker entry using the feature request template.
I hope you find this useful.
On the permissions side of things, cloudsqlsuperuser is not a real superuser (but is as close as you'll get in GCP cloudsql). Due to this I've sometimes found that I've needed to explicitly grant it access to objects / roles to be able to access things.
Therefore I'd try doing:
GRANT pg_read_all_stats TO cloudsqlsuperuser;
I'm not too sure how to enable it on the read replica, unfortunately.
However, you might be interested in the recently released insights feature https://cloud.google.com/sql/docs/postgres/insights-overview - I haven't been able to play with this properly yet, but from what I've seen it's pretty nifty.
I need to run multiple user-defined SQL scripts - some using schema modification privileges, others only data modification privileges.
I can do this by executing them using different users (with adequate privileges); however, I need to execute all the scripts in a single transaction.
Is there a way to specify privileges for a single SQL statement on an existing connection?
Thank you!
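For what it's worth, one pattern that matches this description is switching roles inside the transaction with SET LOCAL ROLE, assuming the connecting user is a member of each role it switches to; the role, schema, and table names below are placeholders:

BEGIN;
SET LOCAL ROLE schema_owner;      -- run the next statements with schema-modification privileges
ALTER TABLE app.orders ADD COLUMN note text;
SET LOCAL ROLE data_writer;       -- switch to data-modification privileges only
UPDATE app.orders SET note = '' WHERE note IS NULL;
COMMIT;                           -- SET LOCAL effects end with the transaction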
We have multiple PostgreSQL instances in AWS RDS. We need to maintain an on-premise copy of each database to comply with our disaster recovery policy. I have been successful in using pg_dump and pg_restore to export the database schemas and tables to our on-premise server, but I have been unsuccessful in exporting the roles and tablespaces. I have found that this is only possible by using pg_dumpall, but as that requires superuser access, which is not allowed in RDS, how can I export those aspects of the database to our on-premise server?
My pg_dump command:
pg_dump -h {AWS Endpoint} -U {Master Username} -p 5432 -F c -f C:\AWS_Backups\{filename}.dmp {database name}
My pg_restore command:
pg_restore -h {AWS Endpoint} -p 5432 -U {Master Username} -d {database name} {filename}.dmp
I have found multiple examples of people using pg_dump to export their PostgreSQL databases; however, they do not address the "globals" that pg_dump ignores. Have I misread the documentation? After performing my pg_restore, my logins were not created on the database.
Any help you can provide on getting the FULL database (including globals) to our offsite location would be greatly appreciated.
UPDATE: My patch is now a part of Postgres v10+.
You can read about how this works here.
Earlier, I had also posted a working solution to my GitHub account; back then, you had to compile the binary and use that. With the patch now part of Postgres v10+, any pg_dumpall since that version supports this feature.
You can read some more detailed inner workings here.
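Assuming the feature referred to is the ability to dump globals without reading pg_authid (pg_dumpall has offered --no-role-passwords since v10), a run against RDS might look roughly like this, using the same placeholder connection details as above:

pg_dumpall -h {AWS Endpoint} -U {Master Username} -p 5432 --globals-only --no-role-passwords -f globals.sql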
I haven't been able to find an answer to my question anywhere online. Just in case someone else may be experiencing this problem, I thought I would post a high-level outline of my "solution". I go around my elbow to get to my knee, but this is the option I have come up with:
Create a table (I created two - one for roles, and one for logins) in each PostgreSQL database within AWS. This table (or tables) will need to have all the columns you'll need to dynamically create the SQL to do CREATE, GRANT, REVOKE, etc.
Insert all roles, logins, privileges, and permissions into this table (a sketch of that extraction follows the list below). These are scattered everywhere, but here are the catalogs and views I used:
pg_auth_members (role and login relationships)
pg_roles (role and login permissions ie can login, inherit parent, etc)
information_schema.role_usage_grants (schema privileges)
information_schema.role_table_grants (table privileges)
information_schema.role_routine_grants (function privileges)
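A minimal sketch of that extraction step for role memberships, using a hypothetical staging table named exported_role_memberships:

-- capture role-to-member relationships so they can be re-created on-premise
CREATE TABLE exported_role_memberships AS
SELECT r.rolname      AS role_name,
       m.rolname      AS member_name,
       m.rolcanlogin  AS member_can_login,
       m.rolinherit   AS member_inherits
FROM pg_auth_members am
JOIN pg_roles r ON r.oid = am.roleid
JOIN pg_roles m ON m.oid = am.member;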
To fill in the gaps, there are clever queries on the web page below that use the built-in functions to check for access. You will have to loop through the tables and process a row at a time.
https://vibhorkumar.wordpress.com/2012/07/29/list-user-privileges-in-postgresqlppas-9-1/
Specifically, I used a variation of the database_privs function.
Once all of the data is in those tables, you can execute pg_dump, and it will extract that info from each database to your on-premise location. I did this through a Python script.
On your server, use the data in the tables to dynamically create the SQL statements needed to run the CREATE, GRANT, REVOKE, etc. scripts. Save in a .sql file that you can instruct a Python script to execute against the database and recreate the AWS roles and logins.
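A rough sketch of that generation step, reusing the hypothetical exported_role_memberships table from above; the placeholder password is meant to be replaced from the password manager:

-- one CREATE ROLE per login, with a placeholder password to reset later
SELECT format('CREATE ROLE %I LOGIN PASSWORD %L;', member_name, 'changeme')
FROM (SELECT DISTINCT member_name FROM exported_role_memberships WHERE member_can_login) AS logins;

-- one GRANT per membership row to rebuild the role tree
SELECT format('GRANT %I TO %I;', role_name, member_name)
FROM exported_role_memberships;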
One thing I forgot to mention - because we are unable to access the pg_authid table in AWS, I have found no way to extract the passwords out of AWS. We are going to store these in a password manager, and when I create the CREATE ROLE statements, I'll pass a default to be updated.
I haven't completed the process, but it has taken me several days to track down a viable option to the absence of pg_dumpall's functionality. If anyone sees any flaws in my logic, or has a better solution, I'd love to read about it. Good luck!
I want to reduce the number of requirements to get started with my webapp. At the moment you need to run a "create database, create user, grant all" script before you can start debugging.
I'd like the code to be checked out and run straight away without requiring developers to have to read through lots of documentation and do lots of manual steps.
h2 allows you to specify a connection string and it will create the db if it doesn't already exist.
Is it possible to do that using PostgreSQL?
Or is my only option (to meet the requirements) to configure h2 for dev work and PostgreSQL for production?
A connection in Postgres is always to a particular database, but by default every install will have a postgres DB intended for running maintenance commands. The user will still need to supply some superuser login credentials, but assuming you have those, you can run your "create database, create user, grant all" script automatically when the webapp is first accessed.
For instance, have a generated config file which is ignored in source control; before loading the file, check if it exists; if it doesn't, run the install routine.
You can even load an HTML form allowing the user to provide the superuser credentials, choose a name for the DB, and any other commonly-changed configuration options. If these are all defaulted, the "manual step" is simply to glance that they are correct, and click "OK".
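A minimal sketch of that bootstrap, run with psql against the default postgres maintenance database; the role name, database name, and password are placeholders (\gexec executes the statements the queries generate, so nothing is created if it already exists):

SELECT 'CREATE ROLE myapp LOGIN PASSWORD ''changeme'''
WHERE NOT EXISTS (SELECT FROM pg_roles WHERE rolname = 'myapp') \gexec
SELECT 'CREATE DATABASE myapp OWNER myapp'
WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'myapp') \gexec
GRANT ALL PRIVILEGES ON DATABASE myapp TO myapp;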