I have a Postgres database defined with the public schema and it is accessed from within a Python application. Before doing a pg_dump, the tables can be accessed without using a qualified table name, for example select * from user works. After doing the pg_dump, select * from user fails with a relation "user" does not exist error, but select * from public.user works. No restore has been performed.
Since this is within an application, I cannot change the access to include the schema. The application uses sqlalchemy and pgbouncer for interacting with the database.
In trying to figure out what's happening, I've discovered that running pg_dump causes the session to change. Before running the command, querying pg_stat_activity shows 10 sessions in the pool, one active and nine idle. After running the command, a different session is active and the other nine are idle. The settings in pg_db_role_setting and pg_user also look correct for the session I can see, yet select * from user still fails.
Also, just for reference, the code currently does not contain pg_dump and runs fine. As soon as I add the pg_dump, I see the issues mentioned.
Is there anything in pg_dump that could be causing what I'm seeing or is it just a result of going to another session? Since these are some of the first commands being run after running migrations, is there any reason the sessions should have different settings? What else am I missing?
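For reference, here's roughly how I've been checking the session settings (just a diagnostic sketch; the pg_stat_activity filter assumes the app and I connect to the same database):
-- what the pooled sessions are doing
SELECT pid, usename, state, wait_event_type, wait_event
FROM pg_stat_activity
WHERE datname = current_database();
-- what the current session actually resolves unqualified names with
SHOW search_path;
-- any per-database / per-role overrides that should apply at connect time
SELECT setdatabase, setrole, setconfig FROM pg_db_role_setting;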
Thanks!
Related
Original aim: rename a database using ALTER DATABASE via psql.
Problem: rename fails due to other sessions accessing the target database.
・All terminals/applications I am aware of have been closed.
・Querying pg_stat_activity shows that there are 10 processes (= sessions?) accessing the db.
・The username for each session is the same user I have been using for psql and for some local Phoenix and Django apps. The client_addr is also localhost for all of them.
・When I use pg_terminate_backend on any of the pids, another process gets spawned immediately.
・After restarting my PC, 10 processes are again spawned.
Concern: As I can't account for these 10 processes that I can't get rid of, I think I'm misunderstanding how postgres works somewhere.
Question: Why are 10 sessions/processes connected to this particular database, and why can't I terminate them using pg_terminate_backend?
Note: In the Phoenix project I set up recently, I set the pool_size of the Repo config to 10, which makes me think it's related... but I'm pretty sure that project isn't running in any way.
Update - Solved
As a_horse_with_no_name suggested, by doing the following I was able to put a stop to the 10 mystery sessions:
(1) Prevent login of the user responsible for the sessions (identifiable by querying `pg_stat_activity`), by doing `alter user .... with nologin`.
(2) Run pg_terminate_backend on each of the sessions' pids.
After those steps I was able to rename the database.
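For anyone else hitting this, the two steps boil down to something like the following (app_user is just a placeholder for whatever role pg_stat_activity shows):
ALTER USER app_user WITH NOLOGIN;
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE usename = 'app_user';
-- once the rename is done, allow logins again
ALTER USER app_user WITH LOGIN;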
The remaining puzzle is how those sessions got into that state in the first place... from the contents of pg_stat_activity, the wait_event value for each was ClientRead.
From this post, it seems that the application may have been forcibly stopped halfway through a transaction or something, leaving postgres hanging.
I'm running postgres on GCP SQL service.
I have a main and a read replica.
I've enabled pg_stat_statements on the main node, but I still get <insufficient privilege> messages for almost every row.
When I tried to enable the extension on the read replica, it gave me the error: cannot execute CREATE EXTENSION in a read-only transaction.
All of those actions were attempted with the highest-privilege user I have (a member of cloudsqlsuperuser, basically the same as the default postgres user).
So I have 2 questions:
How do I fix the privileges issue so I can see the statistics in the table?
How do I enable extension on the read replica?
Thanks!
After running some more tests on Postgres 9.6, I have also obtained the <insufficient privilege> messages.
I have run the following query on both postgres 9.6 and 13 and obtained different results:
SELECT userid, usename, query
FROM pg_stat_statements
INNER JOIN pg_catalog.pg_user
ON userid = usesysid;
I noticed on Postgres 9.6 that the queries I cannot see come from the roles/users cloudsqlagent and cloudsqladmin (preconfigured Cloud SQL Postgres roles).
This does not happen with Postgres 13, or rather with versions 10 and higher, because on those versions, when the pg_stat_statements extension is in use, SQL statements from all users are visible to users with the cloudsqlsuperuser role. This is the behavior of the product across different versions and it is described in the blue box of this link.
Basically, only on version 9.6 are the SQL statements from all users NOT visible to users with the cloudsqlsuperuser role.
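To confirm which roles the hidden rows belong to, a variation of the query above can group them (just a sketch against the same join):
SELECT usename,
count(*) FILTER (WHERE query = '<insufficient privilege>') AS hidden_rows,
count(*) AS total_rows
FROM pg_stat_statements
INNER JOIN pg_catalog.pg_user ON userid = usesysid
GROUP BY usename;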
So if I enable it on the master, it should be enabled on the replica as well?
Yes, after enabling the extension in the master you can connect to the replica and check with the following command that pg_stat_statements has been enabled:
SELECT * FROM pg_extension;
If you would like more uniform behavior across Postgres versions, or if you strongly need the SQL statements from all users to be visible to the cloudsqlsuperuser role, I would recommend filing a public issue tracker entry using the feature request template.
I hope you find this useful.
On the permissions side of things, cloudsqlsuperuser is not a real superuser (but is as close as you'll get in GCP cloudsql). Due to this I've sometimes found that I've needed to explicitly grant it access to objects / roles to be able to access things.
Therefore I'd try doing:
GRANT pg_read_all_stats TO cloudsqlsuperuser;
I'm not too sure about how to enable on the read replica unfortunately.
However, you might be interested in the recently released insights feature https://cloud.google.com/sql/docs/postgres/insights-overview - I haven't been able to play with this properly yet, but from what I've seen it's pretty nifty.
Is there any way to get all database names out of a Postgres server using JDBC? I can get the current one, but that's not what I am looking for...
I have a JUnit rule which creates a database for each test and drops it after the test, but in some special cases, when the JVM dies, the drop never happens. So I'd also like the rule to check for existing databases and clean up any that are no longer used. What I'm looking for is something like the \l meta-command (but I can't easily ssh to the machine from unit tests...).
Another solution for me would be some kind of database TTL, like some AMQP queues have, but I suppose that's not in Postgres either...
Thanks
Just run:
select datname
from pg_database
through JDBC. It returns all databases on the server you are connected to.
If you know how to get the information you want through a psql meta command (e.g. \l) just run psql with the -E switch - all internal SQL queries for the meta commands are then printed to the console.
\l actually uses a query that is a bit more complicated, but to get only the names, the above is sufficient.
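For reference, the query behind \l (as revealed by psql -E) is roughly along these lines; the exact column list varies a little between psql versions:
SELECT d.datname AS "Name",
pg_catalog.pg_get_userbyid(d.datdba) AS "Owner",
pg_catalog.pg_encoding_to_char(d.encoding) AS "Encoding",
pg_catalog.array_to_string(d.datacl, E'\n') AS "Access privileges"
FROM pg_catalog.pg_database d
ORDER BY 1;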
I'm trying to setup the pgexercises data in my local machine. When I run: psql -U <username> -f clubdata.sql -d postgres -x I get the error: psql:clubdata.sql:6: ERROR: cannot execute CREATE SCHEMA in a read-only transaction.
Why did it create a read-only database on my local machine? Can I change this?
Normally the most plausible reasons for this kind of error are (quick checks for each are sketched below):
trying CREATE statements on a read-only replica (the entire instance is read-only)
<username> has default_transaction_read_only set to ON
the database has default_transaction_read_only set to ON
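A quick way to check each of these from a psql session (just a sketch, not specific to the pgexercises setup):
-- is the whole instance a read-only replica in recovery?
SELECT pg_is_in_recovery();
-- what the current session is using
SHOW default_transaction_read_only;
-- per-database / per-role overrides
SELECT d.datname, r.rolname, s.setconfig
FROM pg_db_role_setting s
LEFT JOIN pg_database d ON d.oid = s.setdatabase
LEFT JOIN pg_roles r ON r.oid = s.setrole;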
The script mentioned has in its first lines:
CREATE DATABASE exercises;
\c exercises
CREATE SCHEMA cd;
and you report that the error happens with CREATE SCHEMA at line 6, not before.
That means that the CREATE DATABASE does work, when run by <username>.
And it wouldn't work if any of the reasons above was directly applicable.
One possibility that would technically explain this is that default_transaction_read_only is set to ON in the postgresql.conf file, and set to OFF for the database postgres (the one the psql invocation connects to) through an ALTER DATABASE statement that supersedes the configuration file.
That would explain why CREATE DATABASE works, but as soon as \c connects to a different database, the session's default_transaction_read_only setting flips back to ON.
But of course that would be a pretty weird and unusual configuration.
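For concreteness, that weird configuration would look something like this (purely illustrative):
-- postgresql.conf would contain: default_transaction_read_only = on
-- while the postgres database got an override at some point:
ALTER DATABASE postgres SET default_transaction_read_only = off;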
Reached out to pgexercises.com and they were able to help me.
I ran these commands (separately):
psql -U <username> -d postgres
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
\q
Then I dropped the database from the terminal with dropdb exercises and ran the script again: psql -U <username> -f clubdata.sql -d postgres -x -q
I was getting cannot execute CREATE TABLE in a read-only transaction, cannot execute DELETE in a read-only transaction, and others.
They all followed a cannot execute INSERT in a read-only transaction. It was like the connection had switched itself over to read-only in the middle of my batch processing.
Turns out, I was running out of storage!
Write access was disabled when the database could no longer write anything. I am using Postgres on Azure. I don't know if the same effect would happen if I was on a dedicated server.
I had the same issue with a Postgres UPDATE statement:
SQL Error: 0, SQLState: 25006 ERROR: cannot execute UPDATE in a read-only transaction
Verify database access by running the query below; it will return either true or false:
SELECT pg_is_in_recovery()
true -> Database has only Read Access
false -> Database has full Access
If it returns true, then check with the DBA team for full access; also try a ping from the command prompt to ensure connectivity:
ping <database hostname or dns>
Also verify whether you have primary and standby nodes for the database.
In my case I had master and replication nodes, and the master node became a replication node, which I believe switched it into hot_standby mode. So I was trying to write data to a node that was meant only for reading, hence the "read-only" problem.
You can query the node in question with SELECT pg_is_in_recovery(), and if it returns True then it is "read-only", and I suppose you should switch to using whatever master node you have now.
I got this information from: https://serverfault.com/questions/630753/how-to-change-postgresql-database-from-read-only-to-writable.
So full credit and my thanks goes to Craig Ringer!
DBeaver: in my case, the connection's Read-only option was turned on; unchecking it in the connection settings fixed the error.
This doesn't quite answer the original question, but I received the same error and found this page, which ultimately led to a fix.
My issue was trying to run a function with temp tables being created and dropped. The function was created with SECURITY DEFINER privileges, and the user had access locally.
In a different environment, I received the cannot execute DROP TABLE in a read-only transaction error message. This environment was AWS Aurora, and by default, non-admin developers were given read-only privileges. Their server connections were thus set up to use the read-only node of Aurora (-ro- is in the connection url), which must put the connection in the read-only state. Running the same function with the same user against the write node worked.
Seems like a good use case for table variables like SQL Server has! Or, at least, AWS should modify their flow to allow temp tables to be created and dropped on read nodes.
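For illustration, a function along these lines (the names are made up) runs fine against the writer endpoint but trips the read-only error on the -ro- endpoint at the temp table steps:
CREATE FUNCTION refresh_report()
RETURNS void
LANGUAGE plpgsql
SECURITY DEFINER
AS $$
BEGIN
    -- temp tables need a write-capable transaction,
    -- so both of these fail on a read-only node
    CREATE TEMP TABLE tmp_report (id int);
    -- ... populate and use tmp_report ...
    DROP TABLE tmp_report;
END;
$$;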
This occurred when I was restoring a production database locally; the database was still doing online recovery from the WAL records.
A little bit unexpected, as I assumed pgBackRest was creating instantly recoverable restores; perhaps not.
91902 postgres 20 0 1445256 14804 13180 D 4.3 0.3 0:28.06 postgres: startup recovering 000000010000001E000000A5
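While it's still replaying, you can watch the recovery from a client session with something like this (sketch, assuming Postgres 10 or later):
SELECT pg_is_in_recovery(),
pg_last_wal_replay_lsn(),
pg_last_xact_replay_timestamp();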
If, like me, you are trying to create a DB on Heroku and are stuck because this message shows up on the Dataclips tab, I did this:
Choose Resources (from Overview, Resources, Deploy, Metrics, Activity, Access, Settings).
Choose Settings (from Overview, Durability, Settings, Dataclips).
Then under Administration > Database Credentials choose View Credentials...
Then open a terminal, fill that info into the command below, and run it:
psql --host=***************.amazonaws.com --port=5432 --username=*********pubxl --password --dbname=*******lol
Then it'll ask for the password; copy-paste it from there and you can run Postgres commands.
I suddenly started facing this error on Postgres installed on my Windows machine, when I was running an ALTER query from DBeaver. All I did was delete the Postgres connection in DBeaver and create a new connection.
If you are using Azure Database for PostgreSQL your server gets into read-only mode when the storage used is near total capacity.
The error you get is exactly:
ERROR: cannot execute XXXXXXXXX in a read-only transaction
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-compute-storage
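A quick way to see how much space each database is actually using (just a sketch; the real ceiling is the storage you provisioned in Azure):
SELECT datname,
pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;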
I just had this error. My cause was not granting permission on the SEQUENCE:
GRANT ALL ON SEQUENCE word_mash_word_cube_template_description_reference_seq TO ronshome_user;
If you are facing this issue with an RDS instance cluster, please check your endpoint and use the writer instance endpoint. Then it should work.
The issue can be due to IntelliJ config:
Go to the Database view > click Data Source Properties (Shift + Enter) > select your data source >
Options tab > under Connection: uncheck Read-only
For me it was Azure PostgreSQL failing over to standby during maintenance in Azure and never failing back to the master when PostgreSQL was in HA mode. You can check this event in Service Health and also check which zone your current VM is running from. If it's 2 and not 1, then most likely that's the result of the events described above.
I am a beginner to PostgreSQL.
I want to connect to another database from the query editor of Postgres - like the USE command of MySQL or MS SQL Server.
I found \c databasename by searching the Internet, but it runs only in psql. When I try it from the PostgreSQL query editor I get a syntax error.
I have to change the database by pgscripting. Does anyone know how to do it?
When you get a connection to PostgreSQL it is always to a particular database. To access a different database, you must get a new connection.
Using \c in psql closes the old connection and acquires a new one, using the specified database and/or credentials. You get a whole new back-end process and everything.
You must specify the database to use on connect; if you want to use psql for your script, you can use "\c name_database"
user_name=# CREATE DATABASE testdatabase;
user_name=# \c testdatabase
At this point you might see the following output
You are now connected to database "testdatabase" as user "user_name".
testdatabase=#
Notice how the prompt changes. Cheers, I've just been hunting for this too; there's too little information on PostgreSQL compared to MySQL and the rest, in my view.
In pgAdmin you can also use
SET search_path TO your_db_name;
The basic problem I faced while migrating from MySQL was that I assumed the term database means the same thing in PostgreSQL, but it does not. So if we switch the database from our application or pgAdmin, the result would not be as expected.
In my case, we have separate schemas (using PostgreSQL terminology here) for each customer, plus a separate admin schema. So in the application, I have to switch between schemas.
For this, we can use the SET search_path command. This switches the current schema to the specified schema name for the current session.
example:
SET search_path = different_schema_name;
This changes the current schema to the specified schema for the session. To change it permanently, we have to make changes in the postgresql.conf file.
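As an alternative to editing postgresql.conf, the setting can also be persisted per database or per role (the names below are placeholders):
-- applies to every new connection to this database
ALTER DATABASE my_database SET search_path = different_schema_name;
-- or only for a particular role
ALTER ROLE my_app_user SET search_path = different_schema_name;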
Use this command when first connecting with psql:
psql <databaseName> <usernamePostgresql>
set search_path = 'schema name here';
While connecting to Postgres, you have to pick a default database to connect to. If you have nothing else, you can use 'postgres' as the default.
You can use DBeaver to connect to Postgres; the UI is good.
PgAdmin 4, GUI Tool: Switching between databases
In the PgAdmin Browser on the left hand side, right click on the database you are willing to switch to.
Select a QueryTool from the drop down menu (or any other option that you need, I will stick with the QueryTool for now).
You will see the QueryTool in the PgAdmin window, and on top you will see the active database and the role name.
Now you can write queries against the chosen database.
You can open multiple QueryTools for multiple databases, and work with them as you do with your graphical text editor.
In order to be sure that you are querying the proper database, issue the following query:
SELECT session_user, current_database();