pglogical logical plugin missing DML records - postgresql

Created a replication slot with the pglogical output plugin:
SELECT 'init' FROM pg_create_logical_replication_slot('demo_slot', 'pglogical_output');
Created a table with a primary key:
create table pglogical_test(id serial primary key,name text);
Inserted a few rows into the table:
insert into pglogical_test(name) values('hi');
Used the query below to check the replication slot data:
SELECT *
FROM pg_logical_slot_peek_changes('demo_slot', NULL, NULL,
'min_proto_version', '1', 'max_proto_version', '1',
'startup_params_format', '1', 'proto_format', 'json');
The slot is missing the I (insert), U (update), and D (delete) actions; only B (begin) and C (commit) are present. Sample slot data is below:
{"action":"S", "params": {"max_proto_version":"1","min_proto_version":"1","coltypes":"f","pg_version_num":"120009","pg_version":"12.9 (Debian 12.9-1.pgdg110+1)","pg_catversion":"201909212","database_encoding":"UTF8","encoding":"SQL_ASCII","forward_changeset_origins":"t","walsender_pid":"884","pglogical_version":"2.4.1","pglogical_version_num":"20401","binary.internal_basetypes":"f","binary.binary_basetypes":"f","binary.basetypes_major_version":"1200","binary.sizeof_int":"4","binary.sizeof_long":"8","binary.sizeof_datum":"8","binary.maxalign":"8","binary.bigendian":"f","binary.float4_byval":"t","binary.float8_byval":"t","binary.integer_datetimes":"f","binary.binary_pg_version":"1200","no_txinfo":"f"}}
{"action":"B", "has_catalog_changes":"t", "xid":"529", "first_lsn":"0/189E960", "commit_time":"2023-02-01 09:14:20.965952+00"}
{"action":"C", "final_lsn":"0/18C2190", "end_lsn":"0/18C28E8"}
{"action":"B", "has_catalog_changes":"f", "xid":"530", "first_lsn":"0/18C28E8", "commit_time":"2023-02-01 09:14:29.654792+00"}
{"action":"C", "final_lsn":"0/18C2A30", "end_lsn":"0/18C2A60"}
Followed the documentation at https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/internals-doc/OUTPUT.md
Why are the I, U, and D actions missing from pg_logical_slot_peek_changes?
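For comparison, a quick sanity check with the built-in test_decoding plugin (a sketch; debug_slot and the extra insert are made up here) shows whether the inserts reach logical decoding at all:
-- Create a throwaway slot with the standard test_decoding plugin
SELECT 'init' FROM pg_create_logical_replication_slot('debug_slot', 'test_decoding');
-- Generate a change after the slot exists
insert into pglogical_test(name) values('hi again');
-- test_decoding needs no startup parameters
SELECT * FROM pg_logical_slot_peek_changes('debug_slot', NULL, NULL);
If this slot shows the INSERT while demo_slot does not, the rows are being decoded and the difference lies in how pglogical_output is configured.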

Related

How do I use the Class::DBI->sequence() method to fill the 'id' field automatically in Perl?

I'm following the example in Class::DBI.
I created the cd table like this in my MariaDB database:
CREATE TABLE cd (
  cdid INTEGER PRIMARY KEY,
  artist INTEGER, # references 'artist'
  title VARCHAR(255),
  year CHAR(4)
);
The primary key cdid is not set to auto-increment. I want to use a sequence in MariaDB, so I configured the sequence:
mysql> CREATE SEQUENCE cd_seq START WITH 100 INCREMENT BY 10;
Query OK, 0 rows affected (0.01 sec)
mysql> SELECT NEXTVAL(cd_seq);
+-----------------+
| NEXTVAL(cd_seq) |
+-----------------+
| 100 |
+-----------------+
1 row in set (0.00 sec)
And set up the Music::CD class to use it:
Music::CD->columns(Primary => qw/cdid/);
Music::CD->sequence('cd_seq');
Music::CD->columns(Others => qw/artist title year/);
After that, I try these inserts:
# NORMAL INSERT
my $cd = Music::CD->insert({
  cdid   => 4,
  artist => 2,
  title  => 'October',
  year   => 1980,
});
# SEQUENCE INSERT
my $cd = Music::CD->insert({
  artist => 2,
  title  => 'October',
  year   => 1980,
});
The "normal insert" succeed, but the "sequence insert" give me this error:
DBD::mysql::st execute failed: You have an error in your SQL syntax; check the manual that
corresponds to your MariaDB server version for the right syntax to use near ''cd_seq')' at line
1 [for Statement "SELECT NEXTVAL ('cd_seq')
"] at /usr/local/share/perl5/site_perl/DBIx/ContextualFetch.pm line 52.
I think the quotation marks ('') are provoking the error, because when I run the command "SELECT NEXTVAL (cd_seq)" (without quotation marks) in the mysql client, it works (see above). I tried all combinations (', ", `, no quotation), but still...
Any idea?
My versions: perl 5.30.3, 10.5.4-MariaDB
The documentation for sequence() says this:
If you are using a database with AUTO_INCREMENT (e.g. MySQL) then you do not need this, and any call to insert() without a primary key specified will fill this in automagically.
MariaDB is based on MySQL. Therefore you do not need the call to sequence(). Use the AUTO_INCREMENT keyword in your table definition instead.
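For example, a minimal sketch of the same table using AUTO_INCREMENT (column definitions copied from the question):
CREATE TABLE cd (
  cdid INTEGER AUTO_INCREMENT PRIMARY KEY,
  artist INTEGER, # references 'artist'
  title VARCHAR(255),
  year CHAR(4)
);
With this definition, drop the Music::CD->sequence('cd_seq') line, and the "sequence insert" above should work: insert() without a cdid fills the primary key in automatically.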

How to use a transaction with PQexecPrepared (libpq)

I'm new to PostgreSQL and would like to ask how to run a transaction using BEGIN, COMMIT, and PQexecPrepared. In my program, I have to update many tables before COMMIT. I understand that we need to use:
1. PQexec(conn,"BEGIN");
2. execute some queries
3. PQexec(conn,"COMMIT");
I first tried using PQexecParams, and it worked:
PQexec(conn,"BEGIN");
PQexecParams(conn, "INSERT INTO Cars (Id,Name, Price) VALUES ($1,$2,$3)",
3, NULL, parValues, NULL , NULL, 0 );
PQexec(conn,"COMMIT");
However, when I tried using PQexecPrepared, my table Cars wasn't updated after COMMIT (of course it worked in autocommit mode, without BEGIN and COMMIT):
PQexec(conn,"BEGIN");
PQprepare(conn,"teststmt", "INSERT INTO Cars (Id,Name, Price) VALUES ($1,$2,$3)", 3, NULL );
PQexecPrepared(conn, "teststmt", 3, parValues,NULL, NULL,0);
PQexec(conn,"COMMIT");
Do you have any advice in this case?
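For what it's worth, the SQL-level equivalent of this flow can be tested directly in psql (a sketch; the parameter types and sample values here are assumptions, since the question doesn't show them):
BEGIN;
PREPARE teststmt (int, text, numeric) AS
  INSERT INTO Cars (Id, Name, Price) VALUES ($1, $2, $3);
EXECUTE teststmt (1, 'Audi', 52642);  -- made-up sample values
COMMIT;
If this commits but the C version does not, check the PGresult returned by each PQexec, PQprepare, and PQexecPrepared call with PQresultStatus instead of discarding it: a failed PREPARE or INSERT aborts the transaction, and the final COMMIT then rolls back instead of committing.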

SELECT Error Cassandra 'Row' object has no attribute 'values'

I'm trying to set up and run Cassandra 3.10 in my local Docker (https://hub.docker.com/_/cassandra/). Everything goes well until I try to select from one table.
This is the error I get every time I run select whatever from whatever:
'Row' object has no attribute 'values'
The steps that I followed:
I created a new keyspace using the default superuser (cassandra): create keyspace test with replication = {'class':'SimpleStrategy','replication_factor' : 2}; and USE test;
I created a new table: create table usertable (userid int primary key, usergivenname varchar, userfamilyname varchar, userprofession varchar);
Inserted some data: insert into usertable (userid, usergivenname, userfamilyname, userprofession) values (1, 'Oliver', 'Veits', 'Freelancer');
Tried to select: select * from usertable where userid = 1;
I got these steps from https://oliverveits.wordpress.com/2016/12/08/cassandra-hello-world-example/ just to copy and paste some working code (I was going mad with the syntax and typos).
These are the logs of my Docker image:
INFO [Native-Transport-Requests-1] 2017-04-23 19:09:12,543 MigrationManager.java:303 - Create new Keyspace: KeyspaceMetadata{name=test2, params=KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.SimpleStrategy, replication_factor=2}}, tables=[], views=[], functions=[], types=[]}
INFO [Native-Transport-Requests-1] 2017-04-23 19:09:41,415 MigrationManager.java:343 - Create new table: org.apache.cassandra.config.CFMetaData@1b484e82[cfId=6757f460-2858-11e7-9787-6d2c86545d91,ksName=test2,cfName=usertable,flags=[COMPOUND],params=TableParams{comment=, read_repair_chance=0.0, dclocal_read_repair_chance=0.1, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=0, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={min_threshold=4, max_threshold=32}}, compression=org.apache.cassandra.schema.CompressionParams@4bce743f, extensions={}, cdc=false},comparator=comparator(),partitionColumns=[[] | [userfamilyname usergivenname userprofession]],partitionKeyColumns=[userid],clusteringColumns=[],keyValidator=org.apache.cassandra.db.marshal.Int32Type,columnMetadata=[usergivenname, userprofession, userid, userfamilyname],droppedColumns={},triggers=[],indexes=[]]
INFO [MigrationStage:1] 2017-04-23 19:09:41,484 ColumnFamilyStore.java:406 - Initializing test2.usertable
INFO [IndexSummaryManager:1] 2017-04-23 19:13:25,214 IndexSummaryRedistribution.java:75 - Redistributing index summaries
Thanks a lot!
UPDATE
I created another table with a uuid column, like this: "uid uuid primary key". It works while the table is empty, but after one insert I get the same error.
I had the same problem; I was using cqlsh. You just need to reload (quit and reopen cqlsh). It is probably a cqlsh bug that shows up the first time or after creating a schema.
~$ sudo docker run --name ex1 -d cassandra:latest
9c1092938d29ec7f94bee773cc2adc0a23ff09344e32500bfeb1898466610d06
~$ sudo docker exec -ti ex1 /bin/bash
root@9c1092938d29:/# cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.10 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> CREATE SCHEMA simple1
... WITH replication={
... 'class': 'SimpleStrategy',
... 'replication_factor':1};
cqlsh> use simple1;
cqlsh:simple1> create table users(id varchar primary key, first_name varchar, last_name varchar);
cqlsh:simple1> select * from users;
id | first_name | last_name
----+------------+-----------
(0 rows)
cqlsh:simple1> insert into users(id, first_name, last_name) values ('U100001', 'James', 'Jones');
cqlsh:simple1> select * from users;
'Row' object has no attribute 'values'
cqlsh:simple1> quit
root@9c1092938d29:/# cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.10 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> use simple1;
cqlsh:simple1> select * from users;
id | first_name | last_name
---------+------------+-----------
U100001 | James | Jones
(1 rows)

How to query all DBs in Azure ElasticDB Pool?

How do I perform a SELECT on a table across all databases available in an Elastic DB pool? All of them have the same DB schema, and they're created dynamically. I've explored Elastic Database Query but I'm getting lost in the middle.
Reporting across scaled-out cloud databases
It asks me to download a sample console application first, create a shard, and then run the query, which is a bit confusing. Is there any way I can run T-SQL queries from SQL Server Management Studio to query all the databases?
PS: The DBs are not sharded. They're one DB per customer.
Thanks in Advance!
I'm thinking you need to add the databases as external sources so you can do a cross-database query; you will then be able to query the tables as if they were local.
I found a guide that can help you set it up:
https://www.mssqltips.com/sqlservertip/4550/sql-azure-cross-database-querying/
From the guide:
DB1 has a Db1Table table:
CREATE TABLE DB1.dbo.Db1Table (
ID int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
CustomerId INT,
CustomerName NVARCHAR(50));
INSERT INTO DB1.dbo.Db1Table(CustomerId, CustomerName) VALUES
( 1, 'aaaaaaa' ),
( 2, 'bbbbbbb' ),
( 3, 'ccccccc' ),
( 4, 'ddddddd' ),
( 5, 'eeeeeee' );
DB2 has a Db2Table table:
CREATE TABLE DB2.dbo.Db2Table (
ID int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
CustomerId INT,
Country NVARCHAR(50));
INSERT INTO DB2.dbo.Db2Table(CustomerId, Country) VALUES
( 1, 'United States' ),
( 3, 'Greece' ),
( 4, 'France' ),
( 5, 'Germany' ),
( 6, 'Ireland' );
If we want to fetch customers whose country is Greece, we could use the following query:
SELECT
db1.CustomerId,
db1.CustomerName
FROM DB1.dbo.Db1Table db1
INNER JOIN DB2.dbo.Db2Table db2 ON db1.CustomerId = db2.CustomerId
WHERE db2.Country = 'Greece';
but instead of returning customerId 3 we get the following error:
Reference to database and/or server name in 'DB2.dbo.Db2Table' is not supported in this version of SQL Server.
In order to be able to perform a cross database query we need to perform the following steps:
Step 1: Create Master Key
The database master key is a symmetric key used to protect the private keys of certificates and asymmetric keys that are present in the database. More info here.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';
-- Example --
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '3wbASg68un#q'
Step 2: Create Database Scoped Credential “my_credential”
A database credential is not mapped to a server login or database user. The credential is used by the database to access the external location anytime the database is performing an operation that requires access.
CREATE DATABASE SCOPED CREDENTIAL <credential_name>
WITH IDENTITY = '<user>',
SECRET = '<secret>';
-- Example --
CREATE DATABASE SCOPED CREDENTIAL my_credential
WITH IDENTITY = 'dbuser',
SECRET = '9Pfwbg68un#q';
credential_name
Specifies the name of the database scoped credential being created. credential_name cannot start with the number (#) sign. System credentials start with ##.
IDENTITY = 'identity_name'
Specifies the name of the account to be used when connecting outside the server.
SECRET = 'secret'
Specifies the secret required for outgoing authentication.
Step 3: Create External Data Source “my_datasource” of type RDBMS
This instruction creates an external data source for use in Elastic Database queries. For RDBMS, it specifies the logical server name of the remote database in Azure SQL Database.
-- (only on Azure SQL Database v12 or later)
CREATE EXTERNAL DATA SOURCE <data_source_name>
WITH (
TYPE=RDBMS,
LOCATION='<server_name>.database.secure.windows.net',
DATABASE_NAME='<remote_database_name>',
CREDENTIAL = <sql_credential>);
-- Example --
CREATE EXTERNAL DATA SOURCE my_datasource
WITH (
TYPE=RDBMS,
LOCATION='ppolsql.database.secure.windows.net',
DATABASE_NAME='DB2',
CREDENTIAL = my_credential);
data_source_name
Specifies the user-defined name for the data source. The name must be unique within the database in Azure SQL Database.
TYPE = [ HADOOP | SHARD_MAP_MANAGER | RDBMS ]
Use RDBMS with external data sources for cross-database queries with Elastic Database query on Azure SQL Database.
LOCATION =
Specifies the logical server name of the remote database in Azure SQL Database.
DATABASE_NAME = 'remote_database_name'
The name of the remote database (for RDBMS).
CREDENTIAL = credential_name
Specifies a database-scoped credential for authenticating to the external data source.
Step 4: Create External Table “mytable”
This instruction creates an external table for Elastic Database query.
CREATE EXTERNAL TABLE [ database_name . [ schema_name ] . | schema_name. ] table_name
( <column_definition> [ ,...n ] )
WITH (
DATA_SOURCE = <data_source_name>);
-- Example --
CREATE EXTERNAL TABLE [dbo].[Db2Table] (
[ID] int NOT NULL,
[CustomerId] INT,
[Country] NVARCHAR(50)
) WITH ( DATA_SOURCE = my_datasource )
[ database_name . [ schema_name ] . | schema_name. ] table_name
The one- to three-part name of the table to create. For an external table, only the table metadata is stored in SQL, along with basic statistics about the file and/or folder referenced in Hadoop or Azure blob storage. No actual data is moved or stored in SQL Server.
[ ,…n ]
The column definitions, including the data types and number of columns, must match the data in the external files.
DATA_SOURCE = external_data_source_name
Specifies the name of the external data source that contains the location of the external data.
After running the DDL statements, you can access the remote table Db2Table as though it were a local table.
So now, if we want to fetch customers whose country is Greece, the query executes successfully:
SELECT
db1.CustomerId,
db1.CustomerName
FROM DB1.dbo.Db1Table db1
INNER JOIN DB1.dbo.Db2Table db2 ON db1.CustomerId = db2.CustomerId
WHERE db2.Country = 'Greece';
-- Result --
CustomerId | CustomerName
-------------------------
3 ccccccc
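To cover every database in the pool, the same pattern extends: repeat steps 2-4 once per customer database, with each external data source pointing at a different DATABASE_NAME and each external table given its own name, then combine the results. A sketch, assuming a second external table dbo.Db3Table has been created for a hypothetical database DB3:
SELECT 'DB2' AS SourceDb, CustomerId, Country FROM dbo.Db2Table
UNION ALL
SELECT 'DB3' AS SourceDb, CustomerId, Country FROM dbo.Db3Table;
Since the databases are created dynamically, these DDL statements would need to be generated for each new database as well.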

Postgresql deadlock issue

I have received the following deadlock error in pg_log:
2016-01-15 09:52:48.648 EST,"name","name",11694,"ip:40273",56988e35.2dae,1,"UPDATE",2016-01-15 01:14:13 EST,10/3886,49775,ERROR,40P01,
"deadlock detected",
"Process 11694 waits for ShareLock on transaction 49774; blocked by process 11685.
Process 11685 waits for ShareLock on transaction 49775; blocked by process 11694.
Process 11694: update bb_batter_season_stat set a_field=a_field+$1, ab_int=ab_int+$2, abvsl=abvsl+$3, abvsr=abvsr+$4, bbvsl=bbvsl+$5, bbvsr=bbvsr+$6, cs=cs+$7,
cs_field=cs_field+$8, ... where bb_players_id=$53
Process 11685: update bb_batter_season_stat set a_field=a_field+$1, ab_int=ab_int+$2, abvsl=abvsl+$3, abvsr=abvsr+$4, bbvsl=bbvsl+$5, bbvsr=bbvsr+$6, cs=cs+$7,
cs_field=cs_field+$8, ... where bb_players_id=$53","See server log for query details.",,,,
"update bb_batter_season_stat set a_field=a_field+$1, ab_int=ab_int+$2, abvsl=abvsl+$3,
abvsr=abvsr+$4, bbvsl=bbvsl+$5, bbvsr=bbvsr+$6, cs=cs+$7, cs_field=cs_field+$8, ... where bb_players_id=$53",,,""
I can't understand why it is happening: two processes run the same query and a deadlock happens.
The table schema is:
CREATE TABLE bb_batter_season_stat (
id SERIAL NOT NULL ,
bb_players_id INTEGER NOT NULL ,
G SMALLINT ,
ABvsL SMALLINT ,
ABvsR SMALLINT ,
RvsL SMALLINT ,
RvsR SMALLINT ,
HvsL SMALLINT ,
HvsR SMALLINT ,
d2BvsL SMALLINT ,
d2BvsR SMALLINT ,
...
PRIMARY KEY(id) ,
FOREIGN KEY(bb_players_id) REFERENCES bb_players(id) );
CREATE INDEX bb_batter_season_stat_FKIndex1 ON bb_batter_season_stat (bb_players_id);
I suppose that you have at least 2 records in bb_batter_season_stat that have the same bb_players_id.
Transaction one locked the first record for update.
Transaction two locked the second.
Transaction one tried to lock the second record, but was blocked waiting for transaction two.
Transaction two tried to lock the first record, but was blocked waiting for transaction one, creating a deadlock.
To avoid this, you should force locking of the records in primary key order, for example using SELECT ... FOR UPDATE:
with ids as (
    select id
    from bb_batter_season_stat
    where bb_players_id = ?
    order by id
    for update
)
update bb_batter_season_stat
set a_field = a_field + $1, ab_int = ab_int + $2, …
where id in (select id from ids);