I was trying to figure out how to schedule a refresh of a materialized view on Azure Database for PostgreSQL Single Server. One solution is to use the pg_cron extension, but it seems to be available only on Azure Database for PostgreSQL Flexible Server, not on Single Server. I could not find any other option; any suggestion in this regard would be really helpful.
I did not find any Postgres scheduler extension for a database hosted on Azure Single Server, so I created a microservice to run the database functions on a schedule.
I am trying to install the pg_cron extension for Azure PostgreSQL Flexible Server.
According to documentation found here:
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-extensions#postgres-13-extensions
pg_cron is listed as an available extension, but when I try to install it:
create schema cron_pg;
CREATE EXTENSION pg_cron SCHEMA cron_pg;
What I get is:
SQL Error [0A000]: ERROR: extension "pg_cron" is not allow-listed for "azure_pg_admin" users in Azure Database for PostgreSQL
Hint: to see the full allow list of extensions, please run: "show azure.extensions;"
When executing:
show azure.extensions;
pg_cron is missing:
address_standardizer,address_standardizer_data_us,amcheck,bloom,btree_gin,btree_gist,citext,cube,dblink,dict_int,dict_xsyn,earthdistance,fuzzystrmatch,hstore,intagg,intarray,isn,lo,ltree,pageinspect,pg_buffercache,pg_freespacemap,pg_partman,pg_prewarm,pg_stat_statements,pg_trgm,pg_visibility,pgaudit,pgcrypto,pgrowlocks,pglogical,pgstattuple,plpgsql,postgis,postgis_sfcgal,postgis_tiger_geocoder,postgis_topology,postgres_fdw,sslinfo,tablefunc,tsm_system_rows,tsm_system_time,unaccent,uuid-ossp,lo,postgis_raster
What am I doing wrong?
You can tell pg_cron to run jobs in another database by updating the database column in the cron.job table.
For example:
UPDATE cron.job SET database = 'wordpress' WHERE jobname = 'wordpress-job';
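If you are on pg_cron 1.4 or newer, there is also a cron.schedule_in_database function that lets you pick the target database at scheduling time. A sketch (the job name, schedule, and view name below are illustrative):

```sql
-- Assumes pg_cron >= 1.4; on older versions, update cron.job.database instead.
SELECT cron.schedule_in_database(
    'wordpress-job',                        -- job name
    '*/30 * * * *',                         -- every 30 minutes
    $$REFRESH MATERIALIZED VIEW my_view$$,  -- command (illustrative view name)
    'wordpress'                             -- database to run the job in
);
```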
Pretty late, but this issue showed up when I was searching for the same problem, only with the pg_trgm extension. After some looking around I eventually realised you just need to update the database settings.
Go to your database in the Azure Portal, then to Server parameters, and search for the azure.extensions parameter. You can then click on the list and enable/disable the desired extensions (PG_CRON is available). The server will restart on save, and then you will be able to create the extensions in the database.
It seems that the pg_cron extension is already enabled, by default, in the default 'postgres' database.
The reason I was not seeing this is that I am not using the default 'postgres' database; I had created my own DB, which I was connected to.
This actually does not resolve my problem, because I can't execute jobs from pg_cron across databases...
pg_cron jobs all fail with "role of....doesn't provide permission to schedule a job"
I'm working on getting pg_partman and pg_cron set up on RDS, but when my pg_cron jobs run, they return this error:
ERROR: The protected role of rds_super doesn't provide permission to schedule a job.
From the error text, it seems like a simple permissions issue on something in the directory or resources holding pg_cron, but I can't find the source of the problem. And it's possibly something else. A lot of Googling, hunting through sources, and trial-and-error hasn't led me to any answers, and I'm hoping for help.
For background, this is Postgres 13.4 with pg_cron 1.3. These are the latest versions now available on RDS. My goal is to have pg_cron running jobs in various databases in this cluster, but I've reduced the problem example to the cron schema in postgres.
RDS defines a role named rds_superuser that doesn't have a login, which you can then grant to other users. We're using a custom role named rds_super, and have for years.
When you run CREATE EXTENSION pg_cron on RDS, the install goes into the postgres database by default, and it creates a new schema named cron. That's all fine. As a "hello world" version of the problem, here's a simple table and a task to insert the current time every minute into a text field.
DROP TABLE IF EXISTS cron.foo;

CREATE TABLE IF NOT EXISTS cron.foo (
    bar text
);

GRANT INSERT ON TABLE cron.foo TO rds_super;

INSERT INTO cron.foo VALUES (now()::text);
SELECT * FROM cron.foo;

-- Run every minute
SELECT cron.schedule('postgres.populate.foo', '*/1 * * * *',
    $$INSERT INTO cron.foo VALUES (now()::text)$$);
The bare statement INSERT INTO cron.foo VALUES (now()::text) works fine, when connected directly as the rds_super user. But when it's executed through the cron.job defined above, the cron.job_run_details output has the right code, the expected user, but a failure result with this error:
ERROR: The protected role of rds_super doesn't provide permission to schedule a job.
Does this ring a bell for anyone? I've deleted, reinstalled, set permissions explicitly. No improvement.
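For anyone debugging similar failures, the per-run error text can be pulled straight from pg_cron's own run log (cron.job_run_details ships with pg_cron):

```sql
-- Most recent job runs, newest first, including the failure message
SELECT jobid, runid, username, status, return_message, start_time
FROM cron.job_run_details
ORDER BY start_time DESC
LIMIT 10;
```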
Public
This may be off, but I ran into a couple of things where it looked like I needed to grant access to public. I started on PG 9.4 or 9.5, couldn't get my head around securing public, and stripped all of its rights everywhere. Putting some of them back may be needed here?
Permissions checks
Here are the permissions checks that I could think of.
select grantor,
grantee,
table_schema,
table_name,
string_agg (privilege_type, ',' order by privilege_type) as grants
from information_schema.role_table_grants
where table_catalog = 'postgres'
and table_schema = 'cron'
and grantee = 'rds_super'
group by 1,2,3,4
order by 1,2,3,4;
I gave the user all privileges on all of the tables, just to see if that cleared things up. No joy.
grantor grantee table_schema table_name grants
rds_super rds_super cron foo DELETE,INSERT,REFERENCES,SELECT,TRIGGER,TRUNCATE,UPDATE
rds_super rds_super cron job_run_details_plus DELETE,INSERT,REFERENCES,SELECT,TRIGGER,TRUNCATE,UPDATE
rds_superuser rds_super cron job DELETE,INSERT,REFERENCES,SELECT,TRIGGER,TRUNCATE,UPDATE
rds_superuser rds_super cron job_run_details DELETE,INSERT,REFERENCES,SELECT,TRIGGER,TRUNCATE,UPDATE
Nothing is obviously wrong with the schema rights:
select pg_catalog.has_schema_privilege('rds_super', 'cron', 'CREATE') AS create,
pg_catalog.has_schema_privilege('rds_super', 'cron', 'USAGE') AS usage;
create usage
t t
Likewise, nothing pops out when I check function execution rights:
select proname, proargnames
from pg_proc
where has_function_privilege('rds_super',oid,'execute')
and pronamespace::regnamespace::text = 'cron'
order by 1,2
proname proargnames
job_cache_invalidate
schedule {job_name,schedule,command}
schedule {schedule,command}
unschedule {job_id}
unschedule {job_name}
Answer
I do not know if this is the answer, but it seems to have fixed my problem. Spoiler: Log in as the user the pg_cron scheduler background worker runs as.
I burned everything down and restarted, and then found that my jobs simply would not run. No error, no results. I checked the status of the background workers like this:
select application_name,
usename,
backend_type,
query,
state,
wait_event_type,
age(now(),backend_start) as backend_start_age,
age(now(),query_start) as query_start_age,
age(now(),state_change) state_change_age
from pg_stat_activity
where backend_type != 'client backend';
I noticed that the background worker had been running for over a day (it's loaded as a shared library), and seemed to be stuck. I rebooted the server, and redid everything logged in as dbadmin, instead of my custom user. That's the user name that the pg_cron scheduler process is running as, in this case. I don't remember if dbadmin is part of the package with RDS Postgres, or if it's something I added years back. There's nothing in the RDS pg_cron instructions about this, so maybe it's just me. I needed to set up its search_path and permissions a bit to get everything working the way I needed, but that's normal.
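The search_path and permission tweaks mentioned above might look something like this (a sketch only; the dbadmin role name and settings are from my setup):

```sql
-- Illustrative setup for the role the pg_cron scheduler runs as
GRANT USAGE ON SCHEMA cron TO dbadmin;
ALTER ROLE dbadmin IN DATABASE postgres SET search_path = cron, public;
```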
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL_pg_cron.html
At least in my case, the answer is to run the jobs as the same user as the pg_cron background thread. I've posted more details to the end of the original question.
I have pgAgent installed on my Debian OS, along with PostgreSQL 9.4.
I have checked the .pgpass file, as this seems to be the most common cause for a job not to run. It uses the standard hostname:port:database:username:password format:
host:5432:*:postgres:xxxx
for both the local and the remote host. The database I'm trying to set a job for is on a remote host.
I made sure it was enabled. It's just a simple INSERT script that should repeat every 5 minutes.
No errors are being triggered that I can find. Any ideas of what would cause the job not to run at all - even when selecting 'run now'?
Check the postgres DB: pgAgent catalog, pga_jobsteplog
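To check that log, connect to the maintenance database where pgAgent was installed and query its catalog (table and column names as shipped with pgAgent):

```sql
-- Recent pgAgent step results; jslstatus 'f' marks a failed step
SELECT jslid, jslstatus, jslresult, jslstart, jsloutput
FROM pgagent.pga_jobsteplog
ORDER BY jslstart DESC
LIMIT 10;
```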
I don't know about Linux, but I had a similar problem on Windows where the job wouldn't run and didn't raise any notice of the error even after Run now. The only clue I could find was that if I clicked on the job and then on Statistics, I could see it had run a huge number of times, and every time its status was F.
The reason for this failure is that pgAgent couldn't connect to the main PostgreSQL database.
The pgAgent service wasn't running at all (you can see this under Services in Task Manager in Windows).
Forcing the service to run produced a failure, which can be viewed in the Event Viewer in Windows.
To solve this, first try putting the pgpass.txt file location in the environment variables (if it isn't there already). If that doesn't work, what I did was uninstall and delete all folders for Postgres, pgAgent, and pgAdmin, clear out all temp files, remove the registry entries and environment variables they had created, and then reinstall. After that it worked normally :)
I'm trying to setup the pgexercises data in my local machine. When I run: psql -U <username> -f clubdata.sql -d postgres -x I get the error: psql:clubdata.sql:6: ERROR: cannot execute CREATE SCHEMA in a read-only transaction.
Why did it create a read-only database on my local machine? Can I change this?
Normally the most plausible reasons for this kind of error are:
running CREATE statements on a read-only replica (the entire instance is read-only),
<username> has default_transaction_read_only set to ON,
the database has default_transaction_read_only set to ON.
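Each of these can be checked directly; for example:

```sql
-- Is the instance a read-only replica?
SELECT pg_is_in_recovery();

-- Effective session default (combines server, database, and role settings)
SHOW default_transaction_read_only;

-- Any per-database or per-role overrides
SELECT * FROM pg_db_role_setting;
```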
The script mentioned has in its first lines:
CREATE DATABASE exercises;
\c exercises
CREATE SCHEMA cd;
and you report that the error happens with CREATE SCHEMA at line 6, not before.
That means that the CREATE DATABASE does work, when run by <username>.
And it wouldn't work if any of the reasons above was directly applicable.
One possibility that would technically explain this would be that default_transaction_read_only would be ON in the postgresql.conf file, and set to OFF for the database postgres, the one that the invocation of psql connects to, through an ALTER DATABASE statement that supersedes the configuration file.
That would be why CREATE DATABASE works, but then as soon as it connects to a different database with \c, the default_transaction_read_only setting of the session would flip to ON.
But of course that would be a pretty weird and unusual configuration.
Reached out to pgexercises.com and they were able to help me.
I ran these commands (separately):
psql -U <username> -d postgres
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
\q
Then I dropped the database from the terminal (dropdb exercises) and ran the script again: psql -U <username> -f clubdata.sql -d postgres -x -q
I was getting cannot execute CREATE TABLE in a read-only transaction, cannot execute DELETE in a read-only transaction, and others.
They all followed a cannot execute INSERT in a read-only transaction. It was as if the connection had switched itself to read-only in the middle of my batch processing.
Turns out, I was running out of storage!
Write access was disabled when the database could no longer write anything. I am using Postgres on Azure. I don't know if the same effect would happen if I was on a dedicated server.
I had the same issue for a PostgreSQL UPDATE statement:
SQL Error: 0, SQLState: 25006 ERROR: cannot execute UPDATE in a read-only transaction
Verify database access by running the query below; it returns either true or false:
SELECT pg_is_in_recovery()
true -> the database has read-only access
false -> the database has full access
If it returns true, check with the DBA team for full access, and also try a ping from the command prompt to ensure connectivity.
ping <database hostname or dns>
Also verify whether you have primary and standby nodes for the database.
In my case I had a master and a replication node, and the master node became the replication node, which I believe switched it into hot_standby mode. So I was trying to write data to a node that was meant only for reading, hence the "read-only" problem.
You can query the node in question with SELECT pg_is_in_recovery(), and if it returns True then it is "read-only", and I suppose you should switch to using whatever master node you have now.
I got this information from: https://serverfault.com/questions/630753/how-to-change-postgresql-database-from-read-only-to-writable.
So full credit and my thanks goes to Craig Ringer!
DBeaver: in my case, the connection's read-only setting was turned on; turning it off fixed the error.
This doesn't quite answer the original question, but I received the same error and found this page, which ultimately led to a fix.
My issue was trying to run a function with temp tables being created and dropped. The function was created with SECURITY DEFINER privileges, and the user had access locally.
In a different environment, I received the cannot execute DROP TABLE in a read-only transaction error message. This environment was AWS Aurora, and by default, non-admin developers were given read-only privileges. Their server connections were thus set up to use the read-only node of Aurora (-ro- is in the connection url), which must put the connection in the read-only state. Running the same function with the same user against the write node worked.
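A minimal sketch of the failing pattern (the function and table names are made up):

```sql
CREATE FUNCTION build_report() RETURNS void
LANGUAGE plpgsql SECURITY DEFINER AS $$
BEGIN
    -- Both statements fail with SQLSTATE 25006 on a read-only (reader) node
    CREATE TEMP TABLE tmp_report AS SELECT 1 AS n;
    DROP TABLE tmp_report;
END;
$$;
```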
Seems like a good use case for table variables like SQL Server has! Or, at least, AWS should modify their flow to allow temp tables to be created and dropped on read nodes.
This occurred when I was restoring a production database locally; the database was still doing online recovery from the WAL records.
A little unexpected, as I assumed pgBackRest was creating instantly recoverable restores; perhaps not.
91902 postgres 20 0 1445256 14804 13180 D 4.3 0.3 0:28.06 postgres: startup recovering 000000010000001E000000A5
If, like me, you are trying to create a DB on Heroku and are stuck when this message shows up on the Dataclips tab, here is what I did:
Choose Resources (from Overview, Resources, Deploy, Metrics, Activity, Access, Settings).
Choose Settings (out of Overview, Durability, Settings, Dataclips).
Then under Administration -> Database Credentials, choose View Credentials.
Then open a terminal, fill that info in here, and press Enter:
psql --host=***************.amazonaws.com --port=5432 --username=*********pubxl --password --dbname=*******lol
It will then ask for the password; copy-paste it from there, and you can run Postgres commands.
I suddenly started facing this error with Postgres installed on my Windows machine when running an ALTER query from DBeaver. All I did was delete the Postgres connection in DBeaver and create a new one.
If you are using Azure Database for PostgreSQL, your server gets put into read-only mode when the storage used is near total capacity.
The error you get is exactly:
ERROR: cannot execute XXXXXXXXX in a read-only transaction
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-compute-storage
I just had this error. My cause was not granting permission to the SEQUENCE
GRANT ALL ON SEQUENCE word_mash_word_cube_template_description_reference_seq TO ronshome_user;
If you are facing this issue with an RDS instance cluster, check your endpoint and use the writer instance endpoint; then it should work.
The issue can be due to IntelliJ configuration:
Go to the Database view > click Data Source Properties (Shift + Enter) > (select your data source) >
Options tab > under Connection: uncheck Read-only
For me it was Azure PostgreSQL failing over to standby during maintenance in Azure and never failing back to the master when PostgreSQL was in HA mode. You can check this event in Service Health, and also check which zone your current VM is running in. If it's 2 and not 1, then most likely that's the result of the events described above.