About read-only transactions in Postgres-XL - postgresql

My Postgres-XL version is 9.2.0.
(1)
After I started the GTM, datanodes, and coordinator, in the postgresql.conf file of the coordinator and of each datanode I uncommented the read-only setting, changing
#default_transaction_read_only = off to default_transaction_read_only = off
(2)
I use the following command to connect to the data node:
psql -h 192.168.20.138 -p 25431 -U postgres
192.168.20.138 is the IP address of the first data node.
25431 is the port set in the first data node's postgresql.conf.
postgres is the system account under which Postgres-XL is installed, and also a superuser of this Postgres-XL cluster.
(3)
I create the database using the following command:
create database "MyTest";
The following error message appeared:
cannot execute CREATE DATABASE "MyTest" in a read-only transaction.
How can I lift this read-only transaction restriction?
Thanks.

You should connect to a coordinator to create the database. If you connect directly to a datanode, all operations are restricted to read-only.
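For example, a minimal sketch, assuming the coordinator listens on port 5432 (the coordinator port here is hypothetical; use the one from your coordinator's postgresql.conf):
psql -h 192.168.20.138 -p 5432 -U postgres
create database "MyTest";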

Related

Repmgr and PostgreSQL data_directory permission problem

I've followed the instructions to set up a replica server of PostgreSQL with repmgr, but I can't start the PostgreSQL service because of a permission problem.
On the standby server, I have this in my /etc/repmgr.conf file:
node_id=2
node_name=aws-replica
conninfo='host=<REDACTED> user=repmgr dbname=repmgr port=5432'
data_directory='/mnt/data/postgres/data'
log_file='/var/log/repmgr.log'
As you can see, I've changed the location of the data directory to /mnt/data/postgres/data and also updated the postgresql.conf file with the same information.
When I try to start the PostgreSQL service, I get this error on journalctl:
FATAL: data directory "/mnt/data/postgres/data" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
The folder in question is owned by a normal user, the same one that runs repmgr. If I set the ownership of the folder to postgres:postgres, then repmgr can't perform any operations because it can't access the folder. I tried joining the postgres group, but the service won't start unless the folder has 700 permissions, so it's pointless to join the group.
So, either way, I can't run PostgreSQL or repmgr. What can I do to make it work?
The problem was that I had created a postgres DB using initdb, which wasn't necessary. I deleted the database and then ran repmgr as the postgres user, and it worked.
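For reference, a minimal sketch of the standby setup without initdb (the primary hostname is a placeholder; repmgr standby clone creates and populates the data directory itself):
# run as the OS user that owns /mnt/data/postgres/data
repmgr -h <primary-host> -U repmgr -d repmgr -f /etc/repmgr.conf standby clone
# then start PostgreSQL and register the standby
repmgr -f /etc/repmgr.conf standby register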

Unable to connect to remote DB in db2

Facing a weird issue in DB2: unable to connect to a remote DB.
It catalogued successfully, but when I try to connect to the DB alias I get an error:
"SQL30061N The database alias or database name "NDTEST " was not
found at the remote node."
OS :- Linux
DB2Level :-
DB21085I This instance or install (instance name, where applicable:
"db2inst1") uses "64" bits and DB2 code release "SQL10055" with level
identifier "0606010E".
Informational tokens are "DB2 v10.5.0.5", "s141128", "IP23633", and Fix Pack
"5".
Product is installed at "/path/to/db2".
But we did not specify "NDTEST " anywhere.
Database alias = QAZWSXED
Database name = NEWDB(changedName)
Node name = BASENNEW
Database release level = 10.00
Comment =
Directory entry type = Remote
Authentication = SERVER_ENCRYPT
Catalog database partition number = -1
Alternate server hostname =
Alternate server port number =
Node name = BASENNEW
Comment =
Directory entry type = LOCAL
Protocol = TCPIP
Hostname = hostname
Service name = portNumber
db2 connect to QAZWSXED
SQL30061N The database alias or database name "NDTEST " was not
found at the remote node. SQLSTATE=08004
The error means exactly what it says - there is no NEWDB database on the BASENNEW node.
The fact that you were able to catalog the database doesn't mean it is actually there. There is no connection attempt during the CATALOG DATABASE command (note that you are not even prompted for a password).
E.g. if I create a local TCP/IP loopback node for my instance:
$ db2 catalog tcpip node loop remote localhost server 61115
I can catalog both an existing database (SAMPLE) and a non-existing one (BADDB) with no issues:
$ db2 catalog database sample as loopsamp at node loop
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is
refreshed.
$ db2 catalog database baddb as loopbad at node loop
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is
refreshed.
I will be able to connect to the first one:
$ db2 connect to loopsamp user kkuduk
Enter current password for kkuduk:
Database Connection Info
Database server = DB2/LINUXX8664 11.5.0.0
SQL authorization ID = KKUDUK
Local database alias = LOOPSAMP
but a connection attempt to the non-existing one will fail with SQL30061N:
db2 connect to loopbad user kkuduk
Enter current password for kkuduk:
SQL30061N The database alias or database name "BADDB " was not
found at the remote node. SQLSTATE=08004
Please verify the database directory on the remote server by running
$ db2 list db directory
and see if you have an entry for your database which has type Indirect
Directory entry type = Indirect
Edit:
I didn't notice your edit that changed the database name. If the error returns a stale database name, then indeed db2 terminate is needed so that a new CLP client application (db2bp) gets created.
E.g. if I uncatalog the incorrect entry and catalog it again, I will get a similar error, as the client will use the cached entry pointing to the incorrect database name:
$ db2 uncatalog db LOOPBAD
DB20000I The UNCATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is
refreshed.
$ db2 catalog database sample as loopbad at node loop
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is
refreshed.
$ db2 connect to loopbad user kkuduk
Enter current password for kkuduk:
SQL30061N The database alias or database name "BADDB " was not
found at the remote node. SQLSTATE=08004
db2 terminate shuts down the Db2 CLP client back end, so the next connection correctly reads the new entry from the catalog:
$ db2 terminate
DB20000I The TERMINATE command completed successfully.
$ db2 connect to loopbad user kkuduk
Enter current password for kkuduk:
Database Connection Information
Database server = DB2/LINUXX8664 11.5.0.0
SQL authorization ID = KKUDUK
Local database alias = LOOPBAD
Found the problem - there was an entry in DCS (Database Connection Services). To check the DCS details:
db2 list dcs directory
The above command showed a DCS entry with the Target Database Name "NDTest ".
Everything works fine after removing/uncataloguing the DCS entry.
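For completeness, a sketch of that cleanup, using the alias from the directory listing above (db2 uncatalog dcs database and db2 terminate are standard CLP commands):
$ db2 uncatalog dcs database QAZWSXED
$ db2 terminate
$ db2 connect to QAZWSXED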

Firebird 3 on macOS, local connection fails with: Can not access lock files directory /tmp/firebird/

I've installed firebird 3.0 from the package provided by firebirdsql.org.
If I try to use a local connection to a database:
isql employee -user SYSDBA
it fails with:
Can not access lock files directory /tmp/firebird/
So I added read/write/execute permissions to /tmp/firebird/:
sudo chmod a+rwx /tmp/firebird/
and executing the command again yields:
Statement failed, SQLSTATE = 08001
I/O error during "open" operation for file "/tmp/firebird/fb_init"
-Error while trying to open file
-Unknown error: -1
This all works if I sudo the calls, but is that really necessary?
What is the correct way to use a local connection to a Firebird database on macOS?
I found issue CORE-3871 in the Firebird issue tracker, which describes the problem and its solution: the user that opens the local connection must be a member of the firebird user group.
So, on macOS, the user is added to the firebird group with the following command:
sudo dseditgroup -o edit -a myusername -t user firebird
If you try to open the sample database employee, shipped with Firebird, it's also necessary to grant the group write access to employee.fdb:
sudo chmod g+w /Library/Frameworks/Firebird.framework/Resources/examples/empbuild/employee.fdb
Now /Library/Frameworks/Firebird.framework/Resources/bin/isql employee -user SYSDBA should work.
I only added -p and the password, and it works just fine.
Your current command uses the Firebird Embedded database engine to connect to the database. To be able to do that, your current OS user needs sufficient access to the database file. For details on how to fix that, see the answer by jonjonas68.
An alternative solution - if you have the Firebird server running - is to connect through the Firebird server process, for example using isql localhost:employee -user sysdba -password <sysdbapassword>. Then the file permissions of the user running the Firebird server process apply. However, in that situation you will need to specify a password when connecting, as passwordless authentication only applies to Firebird Embedded connections.

PostgreSQL PITR

I have a master/standby setup with pgpool and Postgres 9.5. Both servers run on CentOS 7.
I want to set up point-in-time recovery with base backups every Saturday, eliminating the old xlogs.
The server archives the xlogs successfully to an external filesystem.
But when I try to execute the pg_basebackup command, it gives the following error:
pg_basebackup: could not connect to server: FATAL: database "replication" does not exist.
You seem to be missing the explicit pg_hba.conf record for replication connections, because specifying all does not cover them:
host replication postgres 127.0.0.1/0 trust
The value replication specifies that the record matches if a
replication connection is requested (note that replication connections
do not specify any particular database). Otherwise, this is the name
of a specific PostgreSQL database.
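Once the record is added, reload the configuration and retry the backup. A minimal sketch, assuming /mnt/backup/base as the target directory (the path is a placeholder):
$ psql -U postgres -c "SELECT pg_reload_conf();"
$ pg_basebackup -h 127.0.0.1 -U postgres -D /mnt/backup/base -X stream -P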

ERROR: cannot execute CREATE TABLE in a read-only transaction

I'm trying to set up the pgexercises data on my local machine. When I run psql -U <username> -f clubdata.sql -d postgres -x, I get the error: psql:clubdata.sql:6: ERROR: cannot execute CREATE SCHEMA in a read-only transaction.
Why did it create a read-only database on my local machine? Can I change this?
Normally, the most plausible reasons for this kind of error are:
running CREATE statements on a read-only replica (the entire instance is read-only);
<username> having default_transaction_read_only set to ON;
the database having default_transaction_read_only set to ON.
The script mentioned has in its first lines:
CREATE DATABASE exercises;
\c exercises
CREATE SCHEMA cd;
and you report that the error happens with CREATE SCHEMA at line 6, not before.
That means that the CREATE DATABASE does work when run by <username>,
and it wouldn't work if any of the reasons above applied directly.
One possibility that would technically explain this is that default_transaction_read_only is ON in the postgresql.conf file, and set to OFF for the database postgres (the one that the invocation of psql connects to) through an ALTER DATABASE statement that supersedes the configuration file.
That would be why CREATE DATABASE works, but as soon as the script connects to a different database with \c, the session's default_transaction_read_only setting flips to ON.
But of course that would be a pretty weird and unusual configuration.
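To illustrate, here is a hypothetical configuration (an assumption, not taken from the question) that would reproduce the symptom exactly:
# in postgresql.conf:
default_transaction_read_only = on
-- run once as superuser, superseding the file for the postgres database only:
ALTER DATABASE postgres SET default_transaction_read_only = off;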
I reached out to pgexercises.com and they were able to help me.
I ran these commands (separately):
psql -U <username> -d postgres
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
\q
Then I dropped the database from the terminal with dropdb exercises and ran the script again: psql -U <username> -f clubdata.sql -d postgres -x -q
I was getting cannot execute CREATE TABLE in a read-only transaction, cannot execute DELETE in a read-only transaction, and others.
They all followed a cannot execute INSERT in a read-only transaction error. It was as if the connection had switched itself over to read-only in the middle of my batch processing.
Turns out, I was running out of storage!
Write access was disabled when the database could no longer write anything. I am using Postgres on Azure. I don't know if the same thing would happen on a dedicated server.
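If you suspect the same cause, a quick way to see where the space went is the standard pg_database_size function (a sketch, run from any connection that still works):
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;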
I had the same issue with a Postgres UPDATE statement:
SQL Error: 0, SQLState: 25006 ERROR: cannot execute UPDATE in a read-only transaction
Verify database access by running the query below; it returns either true or false:
SELECT pg_is_in_recovery()
true -> the database has read-only access
false -> the database has full access
If it returns true, check with the DBA team for full access; also ping the host from the command prompt to verify connectivity:
ping <database hostname or dns>
Also verify whether you have primary and standby nodes for the database.
In my case I had a master node and replication nodes, and the master node became a replication node, which I believe switched it into hot_standby mode. So I was trying to write data to a node that was meant only for reading, hence the "read-only" problem.
You can query the node in question with SELECT pg_is_in_recovery(); if it returns true then it is "read-only", and I suppose you should switch to using whatever master node you have now.
I got this information from: https://serverfault.com/questions/630753/how-to-change-postgresql-database-from-read-only-to-writable.
So full credit and my thanks go to Craig Ringer!
DBeaver: in my case, the connection's read-only option was enabled (the original answer showed this in a screenshot). Turning it off fixed the error.
This doesn't quite answer the original question, but I received the same error and found this page, which ultimately led to a fix.
My issue was trying to run a function that created and dropped temp tables. The function was created with SECURITY DEFINER privileges, and the user had access locally.
In a different environment, I received the cannot execute DROP TABLE in a read-only transaction error message. This environment was AWS Aurora, and by default, non-admin developers were given read-only privileges. Their server connections were thus set up to use the read-only node of Aurora (-ro- is in the connection url), which must put the connection in the read-only state. Running the same function with the same user against the write node worked.
Seems like a good use case for table variables like SQL Server has! Or, at least, AWS should modify their flow to allow temp tables to be created and dropped on read nodes.
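Before digging further, you can confirm from the session itself whether you landed on a read-only node (standard Postgres commands; the comments describe typical Aurora reader behavior):
SHOW transaction_read_only;   -- 'on' means this session cannot write
SELECT pg_is_in_recovery();   -- returns true on a reader node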
This occurred when I was restoring a production database locally; the database was still doing online recovery from the WAL records.
A little unexpected, as I assumed pgBackRest was creating instantly recoverable restores; perhaps not.
91902 postgres 20 0 1445256 14804 13180 D 4.3 0.3 0:28.06 postgres: startup recovering 000000010000001E000000A5
If, like me, you are trying to create a DB on Heroku and are stuck because this message shows up in the Dataclips tab, this is what I did:
Choose Resources from the top menu (Overview, Resources, Deploy, Metrics, Activity, Access, Settings).
Choose Settings (out of Overview, Durability, Settings, Dataclips).
Then, under Administration -> Database Credentials, choose View Credentials...
Then open a terminal, fill that info into the command below, and run it:
psql --host=***************.amazonaws.com --port=5432 --username=*********pubxl --password --dbname=*******lol
It will then ask for the password; copy-paste it from the credentials page, and you can run Postgres commands.
I suddenly started facing this error with Postgres installed on my Windows machine while running an ALTER query from DBeaver. All I did was delete the Postgres connection in DBeaver and create a new one.
If you are using Azure Database for PostgreSQL, your server is put into read-only mode when the storage used is near total capacity.
The error you get is exactly:
ERROR: cannot execute XXXXXXXXX in a read-only transaction
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-compute-storage
I just had this error. My cause was not granting permission on the SEQUENCE:
GRANT ALL ON SEQUENCE word_mash_word_cube_template_description_reference_seq TO ronshome_user;
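If more than one sequence is affected, a blanket per-schema grant can save time (a sketch; the schema and role names are assumptions carried over from the statement above):
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO ronshome_user;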
If you are facing this issue with an RDS instance cluster, check your endpoint and use the writer instance endpoint; then it should work.
The issue can be due to IntelliJ config:
Go to the Database view > click Data Source Properties (Shift + Enter) > select your data source >
Options tab > under Connection: uncheck Read-only
For me it was Azure PostgreSQL failing over to the standby during maintenance and never failing back to the master while PostgreSQL was in HA mode. You can check this event in Service Health, and also check which zone your current VM is running in. If it's 2 and not 1, then most likely that's the result of the events described above.