Issue with datanodes on postgres-XL cluster - postgresql

Postgres-XL is not working as expected.
I have configured a Postgres-XL cluster as below:
GTM running on node3
GTM_Proxy running on node2 and node1
Coordinators and datanodes running on node2 and node1.
When I try to do any operation while connected to the database (a datanode) directly, I get the error below, which is expected anyway.
postgres=# create table test(eno integer);
ERROR: cannot execute CREATE TABLE in a read-only transaction
But when I log in via the coordinator, I get the error below:
postgres=# \l+
ERROR: Could not begin transaction on data node.
In postgresql.log, I can see the errors below. Any idea what needs to be done?
2016-06-26 20:20:29.786 AEST,"postgres","postgres",3880,"192.168.87.130:45479",576fabb5.f28,1,"SET",2016-06-26 20:17:25 AEST,2/31,0,ERROR,22023,"node ""coord1_3878"" does not exist",,,,,,"SET global_session TO coord1_3878;SET parentPGXCPid TO 3878;",,,"pgxc"
2016-06-26 20:20:47.180 AEST,"postgres","postgres",3895,"192.168.87.131:45802",576fac7d.f37,1,"SELECT",2016-06-26 20:20:45 AEST,3/19,0,LOG,00000,"No nodes altered. Returning",,,,,,"SELECT pgxc_pool_reload();",,,"psql"
2016-06-26 20:21:12.147 AEST,"postgres","postgres",3897,"192.168.87.131:45807",576fac98.f39,1,"SET",2016-06-26 20:21:12 AEST,3/22,0,ERROR,22023,"node ""coord1_3741"" does not exist",,,,,,"SET global_session TO coord1_3741;SET parentPGXCPid TO 3741;",,,"pgxc"
Postgres-XL version - 9.5r1.1
psql (PGXL 9.5r1.1, based on PG 9.5.3 (Postgres-XL 9.5r1.1))
Any ideas?

It seems like you haven't configured pgxc_ctl properly. Just type
prepare config minimal
at the pgxc_ctl command line, which will generate a general pgxc_ctl.conf file that you can modify accordingly.
You can then follow the official Postgres-XL documentation to add nodes from the pgxc_ctl command line, as John H suggested.

I have managed to fix my issue:
1) Used the source from the git repository, XL9_5_STABLE branch (https://git.postgresql.org/gitweb/?p=postgres-xl.git;a=summary). The source tarball they provide at http://www.postgres-xl.org/download/ did not work for me.
2) Used pgxc_ctl as mentioned above. I was getting "Could not obtain a transaction ID from GTM" because, when adding the GTM, I had used localhost instead of the IP:
add gtm master gtm localhost 20001 $dataDirRoot/gtm
instead of the correct
add gtm master gtm 10.222.1.49 20001 $dataDirRoot/gtm
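For reference, here is a minimal sketch of what the add-node session can look like at the pgxc_ctl prompt, reusing the node1/node3 hosts from the question; the ports, data directories and exact argument lists are assumptions based on the 9.5-era documentation, so confirm them with "help add" inside pgxc_ctl before running anything:
prepare config empty
init all
add gtm master gtm node3 20001 $dataDirRoot/gtm
add gtm_proxy gtm_pxy1 node1 20002 $dataDirRoot/gtm_pxy1
add coordinator master coord1 node1 30001 30011 $dataDirRoot/coord_master.1 none none
add datanode master dn1 node1 40001 40011 $dataDirRoot/dn_master.1 none none none
monitor all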

Related

Re-add lost Clickhouse replica in Zookeeper cluster

We previously had three Clickhouse nodes perfectly synced within Zookeeper until one of them was lost.
The Clickhouse node was rebuilt exactly as it was before (with Ansible) and the same CREATE TABLE command was run, which resulted in the following error.
Command:
CREATE TABLE ontime_replica ( ... )
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/ontime_replica', '{replica}', FlightDate, (Year, FlightDate), 8192)
The error is:
Received exception from server:
Code: 253. DB::Exception: Received from localhost:9000, 127.0.0.1. DB::Exception: Replica /clickhouse/tables/01/ontime_replica/replicas/clickhouse1 already exists..
We're currently using Zookeeper version 3.4.10, and I would like to know if there's a way to remove the existing replica within Zookeeper, or simply let Zookeeper know that this is the new version of the existing replica.
Thank you in advance!
My approach to the solution was incorrect. Originally, I thought I needed to remove the replica within Zookeeper. Instead, the following commands within the Clickhouse server solve this problem.
Copy the SQL file from another, working node. The file is in /var/lib/clickhouse/metadata/default
chown clickhouse:clickhouse <database>.sql
chmod 0640 <database>.sql
sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data
service clickhouse-server start
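Putting those steps together, a rough sketch of the sequence on the rebuilt node - the metadata file name (ontime_replica.sql) and the healthy source host (clickhouse2) are assumptions for illustration:
# run on the rebuilt Clickhouse node
scp clickhouse2:/var/lib/clickhouse/metadata/default/ontime_replica.sql /var/lib/clickhouse/metadata/default/
chown clickhouse:clickhouse /var/lib/clickhouse/metadata/default/ontime_replica.sql
chmod 0640 /var/lib/clickhouse/metadata/default/ontime_replica.sql
# signal Clickhouse to restore the replicated tables on the next startup, as described above
sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data
service clickhouse-server start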

Primary and standby server at different timelines in postgres

I am very new to Postgres and, being new, I got stuck at a point and need some help; please pardon me if you find it silly.
I am setting up pgpool HA, and at the Postgres level I have streaming replication between 3 nodes of PostgreSQL 9.5 - 1 master and 2 slaves.
I was trying to configure auto failover, but when I switched back to my original master and restarted the Postgres service, I got the following errors:
slave 1-highest timeline 1 of the primary is behind recovery timeline 11
slave 2-highest timeline 1 of the primary is behind recovery timeline 10
slave 3-highest timeline 1 of the primary is behind recovery timeline 3
I tried deleting the pg_xlog files on the slaves and copying all the files from the master's pg_xlog onto the slaves, and then did an rsync.
I also did a pg_rewind, but it says:
target server needs to use either data checksums or wal_log_hints = on
(I have wal_log_hints = on set in postgresql.conf already)
I've tried doing a pg_basebackup, but since the database server on the slaves is still starting up, it's not able to connect to the server.
Is there any way to bring the master and the slave at a same timeline?
In my case, it happened because (experimentally) I had updated the standby database tables, and when I simulated the master-standby streaming replication again I got the same errors.
So once again I cleaned the whole standby data directory and re-seeded it from the master using a command like:
pg_basebackup -P -R -X stream -c fast -h 10.10.40.105 -U postgres -D standby/
I think something is wrong in your pgpool configuration. Which tool have you been using for management of replication and master-slave control? Is it postmaster or repmgr?
I was trying to configure pgpool with 3 data nodes using a tutorial from http://jensd.be/591/linux/setup-a-redundant-postgresql-database-with-repmgr-and-pgpool and have done it correctly.
Also, you can learn about auto failover there.
(This question is obviously a duplicate of this one, so I'll repeat the answer here as well.)
I'm not sure what exactly you mean by "when I switched back to my original master", but it looks like you are doing the worst possible thing in PostgreSQL streaming replication - introducing a second master.
The most important thing you should know about PostgreSQL replication is that once a failover is performed, you cannot simply "switch back to the original master" - there is now a new master in the cluster, and the existence of two masters will cause damage.
After a slave is promoted to master, the only way for you to re-join the old master is to:
Destroy it (delete the data directory);
Join it as a slave.
If you want it to be master again you'll continue with the following:
Let it run for a while as a slave so that it can sync the data;
Kill the temporary master and fail over to the old master;
Rejoin the temporary master as a slave.
You cannot simply switch master servers! A master can be created ONLY by failover (promoting a slave).
You should also know that whenever you perform a failover (whenever the master changes), all slaves (except the one that is promoted) need to be reconfigured to target the new master.
I suggest reading this tutorial - it'll help.
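A minimal sketch of that "destroy and re-join as a slave" procedure on the old master, for 9.5; the data directory, the replication role and the new master's address below are assumptions for illustration:
# on the old master, after a slave has been promoted elsewhere
pg_ctl -D /var/lib/pgsql/9.5/data stop -m fast
rm -rf /var/lib/pgsql/9.5/data/*
# fresh base backup from the new master; -R writes a recovery.conf pointing at it,
# so the node comes back up as a streaming slave
pg_basebackup -h 10.10.40.106 -U replication -D /var/lib/pgsql/9.5/data -X stream -P -R
pg_ctl -D /var/lib/pgsql/9.5/data start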

About read-only transaction in PostgresXL

My Postgres-XL version is 9.2.0.
(1)
After I started the GTM, the datanodes and the coordinator, in the postgresql.conf file of the coordinator and of each datanode I changed the commented-out line
#default_read_only_transaction = "off" to default_read_only_transaction = "off" (i.e. uncommented it).
(2)
I use the following command to connect to a node:
psql -h 192.168.20.138 -p 25431 -U postgres
192.168.20.138 is the IP of the first data node.
25431 is the port set in the first data node's postgresql.conf.
postgres is the system account Postgres-XL is installed under, and also a superuser of this Postgres-XL cluster.
(3)
I create a database using the following command:
create database "MyTest";
The following error message appears:
cannot execute CREATE DATABASE "MyTest" in a read-only transaction
How can I get rid of this read-only restriction?
Thanks.
You should connect to a coordinator to create a database. If you connect directly to a datanode, all operations are restricted to read-only.
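For example - the coordinator port 5432 below is an assumption, use whatever port your coordinator's postgresql.conf defines rather than the datanode port 25431:
psql -h 192.168.20.138 -p 5432 -U postgres
create database "MyTest";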

ERROR: cannot execute CREATE TABLE in a read-only transaction

I'm trying to set up the pgexercises data on my local machine. When I run: psql -U <username> -f clubdata.sql -d postgres -x I get the error: psql:clubdata.sql:6: ERROR: cannot execute CREATE SCHEMA in a read-only transaction.
Why did it create a read-only database on my local machine? Can I change this?
Normally the most plausible reasons for this kind of error are:
trying CREATE statements on a read-only replica (the entire instance is read-only)
<username> has default_transaction_read_only set to ON
the database has default_transaction_read_only set to ON
The script mentioned has in its first lines:
CREATE DATABASE exercises;
\c exercises
CREATE SCHEMA cd;
and you report that the error happens with CREATE SCHEMA at line 6, not before.
That means that the CREATE DATABASE does work, when run by <username>.
And it wouldn't work if any of the reasons above were directly applicable.
One possibility that would technically explain this is that default_transaction_read_only is ON in the postgresql.conf file, but set to OFF for the database postgres (the one that the invocation of psql connects to) through an ALTER DATABASE statement that supersedes the configuration file.
That would be why CREATE DATABASE works, but as soon as psql connects to a different database with \c, the session's default_transaction_read_only setting flips back to ON.
But of course that would be a pretty weird and unusual configuration.
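A quick sketch of how to check which of the reasons above applies, run from a psql session as <username>:
SELECT pg_is_in_recovery();            -- true means the whole instance is a read-only replica
SHOW default_transaction_read_only;    -- the value in effect for the current session/database
SELECT setdatabase, setrole, setconfig
FROM pg_db_role_setting;               -- per-database / per-role overrides made with ALTER DATABASE / ALTER ROLE ... SET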
Reached out to pgexercises.com and they were able to help me.
I ran these commands (separately):
psql -U <username> -d postgres
begin;
set transaction read write;
alter database exercises set default_transaction_read_only = off;
commit;
\q
Then I dropped the database from the terminal with dropdb exercises and ran the script again: psql -U <username> -f clubdata.sql -d postgres -x -q
I was getting cannot execute CREATE TABLE in a read-only transaction, cannot execute DELETE TABLE in a read-only transaction, and others.
They all followed a cannot execute INSERT in a read-only transaction. It was as if the connection had switched itself over to read-only in the middle of my batch processing.
Turns out, I was running out of storage!
Write access was disabled when the database could no longer write anything. I am using Postgres on Azure. I don't know if the same effect would happen if I was on a dedicated server.
I had the same issue with a Postgres UPDATE statement:
SQL Error: 0, SQLState: 25006 ERROR: cannot execute UPDATE in a read-only transaction
Verify database access by running the query below; it will return either true or false:
SELECT pg_is_in_recovery()
true -> Database has only Read Access
false -> Database has full Access
If it returns true, check with the DBA team for full access, and also try a ping from the command prompt to ensure connectivity:
ping <database hostname or dns>
Also verify whether you have primary and standby nodes for the database.
In my case I had a master and replica nodes, and the master node became a replica node, which I believe switched it into hot_standby mode. So I was trying to write data to a node that was meant only for reading, hence the "read-only" problem.
You can query the node in question with SELECT pg_is_in_recovery(), and if it returns True then it is "read-only", and I suppose you should switch to using whatever master node you have now.
I got this information from: https://serverfault.com/questions/630753/how-to-change-postgresql-database-from-read-only-to-writable.
So full credit and my thanks goes to Craig Ringer!
DBeaver: in my case, the connection's read-only option was turned on; turning it off fixed the error.
This doesn't quite answer the original question, but I received the same error and found this page, which ultimately led to a fix.
My issue was trying to run a function with temp tables being created and dropped. The function was created with SECURITY DEFINER privileges, and the user had access locally.
In a different environment, I received the cannot execute DROP TABLE in a read-only transaction error message. This environment was AWS Aurora, and by default, non-admin developers were given read-only privileges. Their server connections were thus set up to use the read-only node of Aurora (-ro- is in the connection url), which must put the connection in the read-only state. Running the same function with the same user against the write node worked.
Seems like a good use case for table variables like SQL Server has! Or, at least, AWS should modify their flow to allow temp tables to be created and dropped on read nodes.
This occurred when I was restoring a production database locally; the database was still doing online recovery from the WAL records.
A little bit unexpected, as I assumed pgBackRest was creating instantly recoverable restores - perhaps not.
91902 postgres 20 0 1445256 14804 13180 D 4.3 0.3 0:28.06 postgres: startup recovering 000000010000001E000000A5
If, like me, you are trying to create a DB on Heroku and are stuck because this message shows up on the dataclip tab, I did this:
Choose Resources from (Overview, Resources, Deploy, Metrics, Activity, Access, Settings).
Choose Settings out of (Overview, Durability, Settings, Dataclip).
Then in Administration -> Database Credentials choose View Credentials...
Then open a terminal, fill that info in here, and enter:
psql --host=***************.amazonaws.com --port=5432 --username=*********pubxl --password --dbname=*******lol
It will then ask for the password; copy-paste it from there, and you can run Postgres commands.
I suddenly started facing this error with Postgres installed on my Windows machine when I was running an ALTER query from DBeaver. All I did was delete the Postgres connection in DBeaver and create a new connection.
If you are using Azure Database for PostgreSQL, your server goes into read-only mode when the storage used is near total capacity.
The error you get is exactly:
ERROR: cannot execute XXXXXXXXX in a read-only transaction
https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-compute-storage
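Freeing space (or growing the storage) is the real fix; as a hedged sketch, these standard PostgreSQL statements show how to confirm the state and request a writable session, though whether Azure honours the override before storage is freed is something to verify against the page above:
SHOW default_transaction_read_only;                     -- shows 'on' while the server is in read-only mode
SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE;  -- request a read-write session, e.g. to delete data and reclaim space
SET default_transaction_read_only TO off;               -- equivalent per-session override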
I just had this error. My cause was a missing permission grant on the SEQUENCE:
GRANT ALL ON SEQUENCE word_mash_word_cube_template_description_reference_seq TO ronshome_user;
If you are facing this issue with an RDS cluster, check your endpoint and use the writer instance endpoint; it should work then.
The issue can be due to IntelliJ configuration:
Go to the Database view > click on Data Source Properties (Shift + Enter) > select your data source >
Options tab > under Connection: uncheck Read-only
For me it was Azure PostgreSQL failing over to the standby during maintenance in Azure and never failing back to the master when PostgreSQL was in HA mode. You can check this event in Service Health and also check which zone your current VM is running from. If it's 2 and not 1, then most likely that's the result of the events described above.

Getting error while creating node in postgres-xc cluster database

I have tried to create a Postgres-XC cluster database, so I followed their documentation to do that (http://postgres-xc.sourceforge.net/docs/1_1/install-short.html).
After following the documented procedure I'm not able to create a node. I'm getting the following error:
ERROR: syntax error at or near "NODE"
when running the following command:
/usr/local/pgsql/bin/psql -c "CREATE NODE datanode1 WITH (TYPE = 'datanode', PORT = 15432)" postgres
Can anyone help me solve this?
I suspect a_horse_with_no_name is right. postgres-xc and postgresql are different codebases and you can't run CREATE NODE on PostgreSQL and have it work. You must be running Postgres-XC.
To find out, run SELECT version() from psql.
To install Postgres-XC, the first step is to actually install the Postgres-XC software, either from packages or from source (by compiling).
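A quick sketch of that check from psql; exact version strings will differ, but the point is whether Postgres-XC appears in the output at all:
SELECT version();
-- A stock PostgreSQL server reports something like "PostgreSQL 9.3.x on x86_64-...",
-- and CREATE NODE fails there with a syntax error.
-- A Postgres-XC coordinator's version string mentions Postgres-XC, and the
-- statement from the question is accepted:
CREATE NODE datanode1 WITH (TYPE = 'datanode', PORT = 15432);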