We have a PostgreSQL DB on Heroku. A few days ago, without warning, it started logging all data changes as if the log_statement setting were set to mod, which is having a massive impact on performance during our nightly batch updates.
I've checked the settings in the DB via the pg_settings view and via the Heroku CLI, and both very clearly say that the log_statement setting is set to ddl.
We haven't made any changes to the DB or its settings. Does anyone know why this might be happening and how we can reset the log level?
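For reference, a query along these lines against pg_settings shows both the current value and where it comes from (the source column):
SELECT name, setting, source FROM pg_settings WHERE name = 'log_statement';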
I have migrated my data from PostgreSQL 9.3 to 11.4 by dumping and reloading it, which went fine, and I am able to run the new server with the default config.
The 11.4 server works fine, but it fails to restart when I set the following in postgresql.conf:
wal_level = minimal
This same setting works in my 9.3 instance but not in 11.4.
Is there any config setting that could conflict with wal_level = minimal ?
It is no problem to start PostgreSQL v11 with wal_level=minimal unless there is another configuration setting that conflicts with it. For example, archive_mode cannot be on, and max_wal_senders has to be 0.
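For reference, a minimal combination in postgresql.conf that lets v11 start with minimal WAL logging (assuming you need neither archiving nor replication) would look like this:
wal_level = minimal
archive_mode = off        # archive_mode = on conflicts with wal_level = minimal
max_wal_senders = 0       # the default is 10 since v10, which also conflicts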
Look into the PostgreSQL log file for the error message, or start PostgreSQL manually with
pg_ctl start -D datadir
to see the message that will tell you the exact reason why the server failed to start.
When trying to connect to my Amazon PostgreSQL DB, I get the above error. With pgAdmin, I get "error saving properties".
I don't see why connecting to a server would involve any write actions.
There are several reasons why you can get this error:
The PostgreSQL cluster is in recovery (or is a streaming replication standby). You can find out if that is the case by running
SELECT pg_is_in_recovery();
The parameter default_transaction_read_only is set to on. Diagnose with
SHOW default_transaction_read_only;
The current transaction has been started with
START TRANSACTION READ ONLY;
You can find out if that is the case using the undocumented parameter
SHOW transaction_read_only;
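If you want to check all three at once, a single query along these lines will do:
SELECT pg_is_in_recovery() AS in_recovery,
       current_setting('default_transaction_read_only') AS default_read_only,
       current_setting('transaction_read_only') AS session_read_only;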
If you understand all that but still wonder why you are getting this error, even though you are not aware of attempting any data modifications, it means that the application you use to connect is trying to modify something (though pgAdmin shouldn't do that).
In that case, look into the log file to find out what statement causes the error.
This was a bug which has since been fixed; the fix will be available in the next release.
https://redmine.postgresql.org/issues/3973
If you want to try it now, you can use a nightly build and check: https://www.postgresql.org/ftp/pgadmin/pgadmin4/snapshots/2019-02-17/
I have an embedded database where I start an OServer and try to connect to it from the console. I've been doing this successfully for many months, upgrading the database as new versions come out. Now, with 2.2.13, the embedded operations seem to work but I can't connect to the server with the 2.2.13 console.sh. I get the message:
Error: com.orientechnologies.orient.core.exception.OStorageException: Cannot create a connection to remote server address(es): [127.0.0.1:2424]
DB name="master"
The java code running the embedded database gets the following exception:
$ANSI{green {db=db}} Error executing request
com.orientechnologies.orient.core.exception.ODatabaseException: Error on plugin lookup: the server did not start correctly
DB name="db"
at com.orientechnologies.orient.server.OServer.getPlugin(OServer.java:850)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.openDatabase(ONetworkProtocolBinary.java:857)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.handshakeRequest(ONetworkProtocolBinary.java:229)
at com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary.execute(ONetworkProtocolBinary.java:194)
at com.orientechnologies.common.thread.OSoftThread.run(OSoftThread.java:77)
Seems to be looking for the 'cluster' plugin.
Any idea why this doesn't work anymore? It did work in 2.2.12.
Thanks
Curtis
It turns out I had automatic backup turned on but its config file was missing, so the server looked like it started up but actually didn't.
I created the config file and set enabled to false. It still didn't start up, because the plugin sees the false, stops reading the configuration, and throws an exception because the 'delay' parameter isn't set.
I think OrientDB should start up with backups disabled if the config file is missing or the enabled parameter is set to false.
At least the console is working now.
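For anyone hitting the same thing: the file in question is the automatic-backup JSON config in the server's config directory. A minimal sketch of what let the plugin initialize for me looked roughly like the following (field names from memory, so double-check them against the default file shipped with your OrientDB version):
{
  "enabled": false,
  "delay": "4h",
  "firstTime": "23:00:00",
  "targetDirectory": "backup",
  "targetFileName": "${DBNAME}-${DATE:yyyyMMddHHmmss}.zip"
}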
I'm trying to set up the pgexercises data on my local machine. When I run psql -U <username> -f clubdata.sql -d postgres -x I get the error: psql:clubdata.sql:6: ERROR: cannot execute CREATE SCHEMA in a read-only transaction.
Why did it create a read-only database on my local machine? Can I change this?
Normally the most plausible reasons for this kind of error are:
running CREATE statements on a read-only replica (the entire instance is read-only)
<username> has default_transaction_read_only set to ON
the database has default_transaction_read_only set to ON
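From psql, the \drds meta-command lists per-role and per-database settings, which is a quick way to check the last two possibilities:
\drds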
The script mentioned has in its first lines:
CREATE DATABASE exercises;
\c exercises
CREATE SCHEMA cd;
and you report that the error happens with CREATE SCHEMA at line 6, not before.
That means that the CREATE DATABASE does work, when run by <username>.
And it wouldn't work if any of the reasons above was directly applicable.
One possibility that would technically explain this is that default_transaction_read_only is ON in the postgresql.conf file, but set to OFF for the database postgres (the one the psql invocation connects to) through an ALTER DATABASE statement, which overrides the configuration file.
That would explain why CREATE DATABASE works, but as soon as psql connects to a different database with \c, the session's default_transaction_read_only setting flips back to ON.
But of course that would be a pretty weird and unusual configuration.
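If you want to test that hypothesis, a query against pg_db_role_setting (the catalog that stores ALTER DATABASE ... SET overrides) would reveal any per-database value, for example:
SELECT d.datname, s.setconfig
FROM pg_db_role_setting s
JOIN pg_database d ON d.oid = s.setdatabase
WHERE s.setconfig::text LIKE '%default_transaction_read_only%';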
I reached out to pgexercises.com and they were able to help me.
I ran these commands (separately):
psql -U <username> -d postgres
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
\q
Then I dropped the database from the terminal with dropdb exercises and ran the script again with psql -U <username> -f clubdata.sql -d postgres -x -q
I was getting cannot execute CREATE TABLE in a read-only transaction, cannot execute DELETE in a read-only transaction, and others.
They all followed a cannot execute INSERT in a read-only transaction. It was like the connection had switched itself over to read-only in the middle of my batch processing.
Turns out, I was running out of storage!
Write access was disabled when the database could no longer write anything. I am using Postgres on Azure. I don't know if the same effect would happen if I was on a dedicated server.
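If you suspect the same cause, a quick way to see how much space each database is using (to compare against your plan's storage limit) is something like:
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;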
I had the same issue for a PostgreSQL UPDATE statement:
SQL Error: 0, SQLState: 25006 ERROR: cannot execute UPDATE in a read-only transaction
Verify database access by running the query below; it will return either true or false:
SELECT pg_is_in_recovery()
true -> the database has read-only access
false -> the database has full access
If it returns true, check with your DBA team about getting full access, and also try a ping from the command prompt to ensure connectivity:
ping <database hostname or dns>
Also verify whether you have a primary and a standby node for the database.
In my case I had a master and replica nodes, and the master node became a replica, which I believe switched it into hot_standby mode. So I was trying to write data to a node that was meant only for reading, hence the "read-only" problem.
You can query the node in question with SELECT pg_is_in_recovery(), and if it returns True then it is "read-only", and I suppose you should switch to using whatever master node you have now.
I got this information from: https://serverfault.com/questions/630753/how-to-change-postgresql-database-from-read-only-to-writable.
So full credit and my thanks goes to Craig Ringer!
DBeaver: in my case the connection's read-only option was turned on; turning it off in the connection settings fixed the error.
This doesn't quite answer the original question, but I received the same error and found this page, which ultimately led to a fix.
My issue was trying to run a function with temp tables being created and dropped. The function was created with SECURITY DEFINER privileges, and the user had access locally.
In a different environment, I received the cannot execute DROP TABLE in a read-only transaction error message. This environment was AWS Aurora, and by default, non-admin developers were given read-only privileges. Their server connections were thus set up to use the read-only node of Aurora (-ro- is in the connection url), which must put the connection in the read-only state. Running the same function with the same user against the write node worked.
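For illustration, here is a minimal sketch of the kind of function involved (the names are hypothetical, not the actual code): a SECURITY DEFINER function that creates and drops a temp table, which fails with this error against the read-only node but runs fine against the writer:
CREATE FUNCTION report_scratch() RETURNS integer
LANGUAGE plpgsql SECURITY DEFINER AS $$
DECLARE
    n integer;
BEGIN
    -- temp table work: allowed on a read-write connection, but rejected with
    -- "cannot execute CREATE TABLE in a read-only transaction" on the -ro- endpoint
    CREATE TEMP TABLE scratch(id integer);
    INSERT INTO scratch SELECT generate_series(1, 10);
    SELECT count(*) INTO n FROM scratch;
    DROP TABLE scratch;
    RETURN n;
END;
$$;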
Seems like a good use case for table variables like SQL Server has! Or, at least, AWS should modify their flow to allow temp tables to be created and dropped on read nodes.
This occurred when I was restoring a production database locally; the database was still doing online recovery from the WAL records.
A little unexpected, as I assumed pgBackRest created instantly usable restores; perhaps not.
91902 postgres 20 0 1445256 14804 13180 D 4.3 0.3 0:28.06 postgres: startup recovering 000000010000001E000000A5
If, like me, you are trying to create a DB on Heroku and are stuck because this message shows up on the Dataclips tab, here is what I did:
Choose Resources from the app tabs (Overview, Resources, Deploy, Metrics, Activity, Access, Settings).
Choose Settings from the database tabs (Overview, Durability, Settings, Dataclips).
Then under Administration -> Database Credentials, choose View Credentials...
Then open a terminal, fill that info into the command below, and run it:
psql --host=***************.amazonaws.com --port=5432 --username=*********pubxl --password --dbname=*******lol
It will then ask for the password; copy and paste it from the credentials page, and you can run Postgres commands.
I suddenly started facing this error with PostgreSQL installed on my Windows machine while running an ALTER query from DBeaver. All I did was delete the PostgreSQL connection in DBeaver and create a new one.
If you are using Azure Database for PostgreSQL, your server is put into read-only mode when the storage used is near total capacity.
The error you get is exactly:
ERROR: cannot execute XXXXXXXXX in a read-only transaction
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-compute-storage
I just had this error. In my case, the cause was not having granted permission on the SEQUENCE:
GRANT ALL ON SEQUENCE word_mash_word_cube_template_description_reference_seq TO ronshome_user;
If you are facing this issue with an RDS cluster, check your endpoint and use the writer instance endpoint; then it should work.
The issue can be due to IntelliJ config:
Go to the Database view > click Data Source Properties (Shift+Enter) > select your data source >
Options tab > under Connection: uncheck Read-only
For me it was Azure PostgreSQL failing over to the standby during maintenance in Azure and never failing back to the primary while PostgreSQL was in HA mode. You can check this event in Service Health and also check which zone your current VM is running from. If it's 2 and not 1, then most likely that's the result of the events described above.