Debezium is not working with Redis Streams and PostgreSQL

I'm having an issue running Debezium with Redis and PostgreSQL.
My docker-compose.yml is:
version: "3.3"
services:
  redis-stack:
    image: redis/redis-stack:7.0.6-RC4
    restart: unless-stopped
    ports:
      - 10001:6379
      - 13333:8001
    volumes:
      - ./data/redis-stack/:/data
  db:
    image: postgres
    restart: unless-stopped
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: 1234
  pgadmin:
    image: dpage/pgadmin4
    restart: unless-stopped
    ports:
      - 5555:80
    environment:
      PGADMIN_DEFAULT_PASSWORD: 1234
      PGADMIN_DEFAULT_EMAIL: arkan.m.gerges@gmail.com
  debezium:
    image: debezium/server:2.1.2.Final
    restart: unless-stopped
    ports:
      - 8180:8080
    volumes:
      - ./config/debezium:/debezium/conf
      - ./data/debezium:/debezium/data
    depends_on:
      - redis-stack
      - db
networks:
  app-network:
And in config/debezium/application.properties:
debezium.source.connector.class=io.debezium.connector.postgresql.PostgresConnector
debezium.source.offset.storage=io.debezium.storage.redis.offset.RedisOffsetBackingStore
debezium.source.offset.flush.interval.ms=0
debezium.source.offset.storage.redis.address=redis-stack:6379
debezium.source.schema.history.internal=io.debezium.storage.redis.history.RedisSchemaHistory
debezium.source.schema.history.internal.redis.address=redis-stack:6379
debezium.sink.type=redis
debezium.sink.redis.address=redis-stack:6379
debezium.source.database.hostname=db
debezium.source.database.port=5432
debezium.source.database.user=postgres
debezium.source.database.password=1234
debezium.source.database.dbname=softwaredev_expert
debezium.source.database.server.name=db
debezium.source.schema.whitelist=public
debezium.source.schema.include.list=public
debezium.source.plugin.name=pgoutput
I can access RedisInsight and connect to PostgreSQL, but I'm getting errors running Debezium:
https://pastebin.com/9TGUNvKe

I've solved the issue with the following application.properties.
Here I've used the outbox pattern: for each aggregate type a new stream is created, and all events for that aggregate type go into the same stream.
# Debezium redis sink connector
debezium.sink.type=redis
debezium.sink.redis.address=redis-stack:6379
debezium.source.connector.class=io.debezium.connector.postgresql.PostgresConnector
debezium.source.offset.storage=io.debezium.storage.redis.offset.RedisOffsetBackingStore
debezium.source.offset.flush.interval.ms=0
debezium.source.offset.storage.redis.address=redis-stack:6379
debezium.source.schema.history.internal=io.debezium.storage.redis.history.RedisSchemaHistory
debezium.source.schema.history.internal.redis.address=redis-stack:6379
# Source database connection info
debezium.source.database.hostname=db
debezium.source.database.port=5432
debezium.source.database.user=postgres
debezium.source.database.password=1234
debezium.source.database.dbname=softwaredev_expert
debezium.source.table.include.list=public.outbox_events
debezium.source.plugin.name=pgoutput
debezium.source.topic.prefix=swdevexpert
# Outbox event router
debezium.transforms=outbox
debezium.transforms.outbox.type=io.debezium.transforms.outbox.EventRouter
debezium.transforms.outbox.route.topic.replacement=swdevexpert.events.$1
I filtered to only the outbox_events table; the EventRouter's route.topic.replacement turns each aggregatetype value ($1) into its own stream name.
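For reference, a minimal outbox_events table that matches these properties and the inserts below could look like this (the column types are my assumption, following the usual Debezium outbox layout):
-- hypothetical sketch; adjust types/lengths to your needs
CREATE TABLE outbox_events (
    id            UUID PRIMARY KEY,
    aggregatetype VARCHAR(255) NOT NULL, -- becomes $1 in the routed stream name
    aggregateid   VARCHAR(255) NOT NULL,
    type          VARCHAR(255) NOT NULL,
    payload       JSONB
);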
And my docker-compose is:
version: "3.3"
services:
  redis-stack:
    image: redis/redis-stack:7.0.6-RC4
    restart: unless-stopped
    ports:
      - "10001:6379"
      - "13333:8001"
    volumes:
      - ./data/redis-stack/:/data
  db:
    image: postgres
    restart: unless-stopped
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: 1234
    command:
      - "-c"
      - "config_file=/etc/postgresql/postgresql.conf"
    volumes:
      - ./data/postgresql:/var/lib/postgresql/data
      - ./config/postgresql/postgresql.conf:/etc/postgresql/postgresql.conf
  pgadmin:
    image: dpage/pgadmin4
    restart: unless-stopped
    ports:
      - "5555:80"
    environment:
      PGADMIN_DEFAULT_PASSWORD: 1234
      PGADMIN_DEFAULT_EMAIL: arkan.m.gerges@gmail.com
  debezium:
    image: debezium/server:2.1.2.Final
    restart: unless-stopped
    volumes:
      - ./config/debezium:/debezium/conf
      - ./data/debezium:/debezium/data
    depends_on:
      - redis-stack
      - db
networks:
  app-network:
And in config/postgresql/postgresql.conf:
listen_addresses = '*'
port = 5432
max_connections = 20
shared_buffers = 128MB
temp_buffers = 8MB
work_mem = 4MB
wal_level = logical
max_wal_senders = 3
max_replication_slots = 100
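To confirm the logical-decoding settings took effect, you can check from psql (standard PostgreSQL commands, nothing specific to this setup):
SHOW wal_level;  -- should return 'logical'
SELECT slot_name, plugin, active FROM pg_replication_slots;  -- Debezium's slot shows up here once connected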
Here are some of the SQL inserts. I did not put them all in one statement, to experiment with different timestamps:
insert into outbox_events (id, aggregatetype, aggregateid, type, payload)
values (uuid_generate_v4(), 'order', '111', 'my_type', '{"order_id": "111", "order_type": "car"}');
insert into outbox_events (id, aggregatetype, aggregateid, type, payload)
values (uuid_generate_v4(), 'order2', '222', 'my_type', '{"order_id": "222", "order_type": "house"}');
insert into outbox_events (id, aggregatetype, aggregateid, type, payload)
values (uuid_generate_v4(), 'order1234567890', '333', 'my_type', '{"order_id": "333", "order_type": "computer"}');
insert into outbox_events (id, aggregatetype, aggregateid, type, payload)
values (uuid_generate_v4(), 'order1234567890', '444', 'my_type', '{"order_id": "444", "order_type": "computer"}');
insert into outbox_events (id, aggregatetype, aggregateid, type, payload)
values (uuid_generate_v4(), 'order1234567890_one_two', '555', 'my_type', '{"order_id": "555", "order_type": "computer"}');
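These inserts assume the uuid-ossp extension is available for uuid_generate_v4(); if it is not, enable it first:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";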
And on RedisInsight: [screenshot showing the resulting streams]
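You can also inspect the streams from redis-cli; assuming the names produced by the swdevexpert.events.$1 replacement, something like:
SCAN 0 TYPE stream                     # list the stream keys, e.g. swdevexpert.events.order
XRANGE swdevexpert.events.order - +    # dump the entries routed for the 'order' aggregate type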

Related

Escaping colons in Docker Compose value

I'm trying to pass some parameters to my healthcheck test:
version: '3.8'
services:
  mongodb:
    image: mongo
    container_name: mongodb
    environment:
      - MONGO_INITDB_ROOT_USERNAME=${MONGODB_USER}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGODB_PASS}
    volumes:
      - ./db:/data/db
    networks:
      - proxy
    restart: unless-stopped
    healthcheck:
      test: test $$(echo "rs.initiate({_id: 'rs0', members: [{_id: 1, 'host': 'mongodb:27017'}]}).ok || rs.status().ok" | mongosh -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
      interval: 10s
      start_period: 30s
I'm getting this error:
yaml: mapping values are not allowed in this context
If I remove the colons (:) it works. How can I escape these colons in my test value?
I was able to find the solution. Colons need to be escaped by wrapping them in quotes; I used single quotes around each colon and double quotes elsewhere for clarity:
test: test $$(echo 'rs.initiate({_id':' "rs0", members':' [{_id':' 1, "host"':' "mongodb':'27017"}]}) || rs.status().ok' | mongosh -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
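As an alternative sketch (untested), a YAML block scalar sidesteps the escaping entirely, because block scalars treat colons as plain text:
healthcheck:
  test: >-
    test $$(echo "rs.initiate({_id: 'rs0', members: [{_id: 1, 'host': 'mongodb:27017'}]}).ok
    || rs.status().ok" | mongosh -u $${MONGO_INITDB_ROOT_USERNAME}
    -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
  interval: 10s
  start_period: 30s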

How to convert UTF8 data from PostgreSQL to AL32UTF8 Oracle DB?

I have a task to import some data from a Postgres database to Oracle via dblink.
The connection between Postgres and Oracle works fine, but unfortunately, when I try to read data from the created view (in the Oracle database), I spotted a problem with the encoding of special national characters (Polish).
The source Postgres database has UTF8 encoding, but Oracle has AL32UTF8.
Postgres:
select server_encoding;
  UTF8
Oracle:
select * from v$nls_parameters where parameter like '%CHARACTERSET';
  PARAMETER                VALUE
  NLS_CHARACTERSET         AL32UTF8
  NLS_NCHAR_CHARACTERSET   AL16UTF16
When I use "isql -v" (on the destination machine with the Oracle database) and then run "select * from table;", everything works fine, but when I run the same select from Oracle through the dblink, the encoding is broken.
For example:
from odbc:
isql -v
select * from table;
[ID][Name]
0,Warszawa
1,Kraków
2,Gdańsk
from oracle using dblink:
select * from table@dblink;
[ID][Name]
0,Warszawa
1,KrakĂłw
2,Gdańsk
/etc/odbc.ini:
[ODBC Data Sources]
[Postgres_DB]
Description = Postgres_DB
Driver = /usr/lib64/psqlodbcw.so
DSN = Postgres_DB
Trace = Yes
TraceFile = /tmp/odbc_sql_postgresdb.log
Database = database
Servername = server
UserName = user
Password = secret
Port = 5432
Protocol = 8.4
ReadOnly = Yes
RowVersioning = No
ShowSystemTables = No
ShowOidColumn = No
FakeOidIndex = No
SSLmode = require
Charset = UTF8
$ORACLE_HOME/hs/admin/initPostgres_DB.ora:
HS_FDS_CONNECT_INFO = Postgres_DB
HS_FDS_TRACE_LEVEL=DEBUG
HS_FDS_SHAREABLE_NAME = /usr/lib64/libodbc.so
HS_FDS_SUPPORT_STATISTICS = FALSE
HS_LANGUAGE=AL32UTF8
set ODBCINI=/etc/odbc.ini
I have installed these packages:
postgresql-libs.x86_64 - 8.4.20-8.el6_9
postgresql-odbc.x86_64 - 08.04.0200-1.el6
unixODBC.x86_64 - 2.2.14-14.el6
unixODBC-devel.x86_64 - 2.2.14-14.el6
Please help me, I need to have the correct data in Oracle.
Thank you very much.
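To see where the corruption happens, Oracle's DUMP function shows the character-set label and raw bytes of what actually arrives over the dblink (table/dblink names as in the question; the quoted column name is my assumption about how the gateway exposes it):
-- 1016 = hex dump plus character set name
SELECT "Name", DUMP("Name", 1016) FROM table@dblink WHERE ROWNUM <= 3;
Correct UTF-8 for 'Kraków' contains the byte pair c3 b3 for 'ó'; seeing c3 83 c2 b3 instead would confirm a double-encoding step somewhere in the chain.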

How to configure DATADOG agent .../postgres.d/conf.yaml with multiple custom_queries (SELECT statements)

It works with one SELECT statement but not with two.
custom_queries:
  - metric_prefix: mars
    query: SELECT substring(query, 1, 50) AS query, round(total_time::numeric, 2) AS total_time, calls, round(mean_time::numeric, 2) AS mean, round((100 * total_time / sum(total_time::numeric) OVER ())::numeric, 2) AS percentage_cpu FROM pg_stat_statements ORDER BY total_time DESC LIMIT 25
    columns:
      - name: query
        type: tag
      - name: total_time
        type: gauge
      - name: calls
        type: gauge
      - name: mean
        type: gauge
      - name: percentage_cpu
        type: gauge
    tags:
      - query:cpu
  - metric_prefix: venus
    query: SELECT count(*) AS csum FROM pg_stat_statement
    columns:
      - name: csum
        type: gauge
    tags:
      - query:sum
It passed the checks and the datadog-agent status output is clean, but the Datadog Metrics Explorer does not show the csum metric.
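One thing worth verifying: the second query reads from pg_stat_statement, while the extension's view is named pg_stat_statements. Running the statement directly against the monitored database would show whether the query itself errors out (a hypothetical check, not from the original post):
-- note the trailing "s"; the pg_stat_statements extension must be installed in this database
SELECT count(*) AS csum FROM pg_stat_statements;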

Viewing a postgresql json table in adminer as a (sortable) table

I have a jsonb field in a database table, I'd like to be able to save it as a view in Adminer so that I can quickly search and sort the fields like with a standard database table.
I wonder if the json-column adminer plugin could help, but I can't work out how to use it.
I'm using the adminer docker image which I believe has plugins already built in.
If I have a database like this (somebody kindly put together this fiddle for a previous question)
CREATE TABLE prices( sub_table JSONB );
INSERT INTO prices VALUES
('{ "0": {"Name": "CompX", "Price": 10, "index": 1, "Date": "2020-01-09T00:00:00.000Z"},
"1": {"Name": "CompY", "Price": 20, "index": 1, "Date": "2020-01-09T00:00:00.000Z"},
"2": {"Name": "CompX", "Price": 19, "index": 2, "Date": "2020-01-10T00:00:00.000Z"}
}');
and to view a subset of the table
SELECT j.value
FROM prices p
CROSS JOIN jsonb_each(sub_table) AS j(e)
WHERE (j.value -> 'Name')::text = '"CompX"'
I'd like to see the following table in Adminer
| Date                     | Name  | Price | index |
| ------------------------ | ----- | ----- | ----- |
| 2020-01-09T00:00:00.000Z | CompX | 10    | 1     |
| 2020-01-10T00:00:00.000Z | CompX | 19    | 2     |
as opposed to:
| value                                                                           |
| ------------------------------------------------------------------------------- |
| {"Date": "2020-01-09T00:00:00.000Z", "Name": "CompX", "Price": 10, "index": 1}  |
| {"Date": "2020-01-10T00:00:00.000Z", "Name": "CompX", "Price": 19, "index": 2}  |
EDIT - building on a_horse_with_no_name's answer. The following saves a view with the appropriate columns, which can then be searched/sorted in Adminer in the same way as a standard table.
CREATE VIEW PriceSummary AS
select r.*
from prices p
cross join lateral jsonb_each(p.sub_table) as x(key, value)
cross join lateral jsonb_to_record(x.value) as r("Name" text, "Price" int, index int, "Date" date)
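For example, querying the saved view (names from the EDIT above):
SELECT * FROM PriceSummary WHERE "Name" = 'CompX' ORDER BY "Date";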
There is no automatic conversion available, but you could convert the JSON value to a record, to make the display easier:
select r.*
from prices p
cross join lateral jsonb_each(p.sub_table) as x(key, value)
cross join lateral jsonb_to_record(x.value) as r("Name" text, "Price" int, index int, "Date" date)
where r."Name" = 'CompX'
The json-column adminer plugin will do the job to some extent, in the sense that it will display the JSON values better.
Here is a minimal docker-compose.yml that creates a postgres container and links an Adminer container to it. To have plugins installed in the adminer container, you can use the environment variable ADMINER_PLUGINS as shown below:
version: '3'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
  adminer:
    image: adminer
    restart: always
    environment:
      - ADMINER_PLUGINS=json-column tables-filter tinymce
    ports:
      - 9999:8080
Access the adminer web UI at localhost:9999. Use username: postgres, password: example to connect to the postgres database.
When you edit a table that contains JSON columns, it will be displayed like this: [screenshot of Adminer rendering the JSON column]
How to grab last two lines from ansible (register stdout) initialization of kubernetes cluster

This is the piece of my playbook file for the question:
- name: Initialize the Kubernetes cluster using kubeadm
  command: kubeadm init --config /etc/kubernetes/kubeadminit.yaml
  register: init_output

- name: Copy join command to local file
  local_action: copy content={{ init_output.stdout }} dest="./join-command"
Currently join-command contains the entire stdout (30+ lines of text). What I want to grab is just the last two lines of init_output.stdout instead of the entire output. I've looked into using an index reference (i.e. init_output.stdout[#]), but I don't know that the output will always be the same length, and I don't know how to use indexes to grab more than one line. I'm fairly certain that the last two lines will always be the join command. Any suggestions?
Select the last 2 lines from the list stdout_lines:
- local_action: copy content={{ init_output.stdout_lines[-2:] }} dest="./join-command"
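Note that stdout_lines[-2:] is a list, so the file written above will contain a Python-style list literal. A small variation (a sketch using the same module and variable names) joins the two lines with a newline to write plain text instead:
- local_action:
    module: copy
    # join the last two stdout lines into one newline-separated string
    content: "{{ init_output.stdout_lines[-2:] | join('\n') }}"
    dest: "./join-command"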
It's possible to format the lines in a block. For example:
- local_action:
    module: copy
    content: |
      {{ init_output.stdout_lines[-2] }}
      {{ init_output.stdout_lines[-1] }}
    dest: "./join-command"
To append the lines in a loop, try:
- local_action:
    module: lineinfile
    path: "./join-command"
    line: "{{ item }}"
    insertafter: EOF
    create: true
  loop: "{{ init_output.stdout_lines[-2:] }}"
I encountered this kind of issue and did not want to copy the join command to a local file, so I did a set_fact instead, this way:
- set_fact:
    join_cmd: '{{ init_output.stdout_lines[-2][:-2] }}{{ init_output.stdout_lines[-1] }}'
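(The [:-2] slice strips the trailing space-and-backslash line continuation from the second-to-last line, so the two lines concatenate into a single runnable command; this assumes kubeadm printed the join command wrapped across exactly two lines.)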
I did this...
- name: kubeadm init
  shell: |
    kubeadm init --control-plane-endpoint \
      localhost \
      --control-plane-endpoint kube-api.local >> /tmp/run_kube_init.sh
  when: master == "yes"

- name: Get join from master
  fetch:
    src: "/tmp/run_kube_init.sh"
    dest: "/tmp/run_kube_init.sh"
    flat: yes
  when: ansible_hostname == 'k-master'

- name: Add join file to nodes
  copy:
    src: "/tmp/run_kube_init.sh"
    dest: "/tmp/run_kube_init.sh"
  when: master == "no"

- name: Extract join token for nodes
  shell: tail -n +2 /tmp/run_kube_init.sh | head -n -1 | awk '{print $5}' | tail -n 1
  register: JOIN_TOKEN

- set_fact:
    join_token: "{{ JOIN_TOKEN.stdout }}"

- name: join nodes
  shell: |
    kubeadm join kube-api.local:6443 \
      --token {{ JOIN_TOKEN.stdout }} \
      --discovery-token-unsafe-skip-ca-verification
  when: master == "no"

- name: rm /tmp/run_kube_init.sh
  ansible.builtin.file:
    path: /tmp/run_kube_init.sh
    state: absent