Escaping colons in Docker Compose value - docker-compose

I'm trying to pass some parameters to my healthcheck test:
version: '3.8'
services:
  mongodb:
    image: mongo
    container_name: mongodb
    environment:
      - MONGO_INITDB_ROOT_USERNAME=${MONGODB_USER}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGODB_PASS}
    volumes:
      - ./db:/data/db
    networks:
      - proxy
    restart: unless-stopped
    healthcheck:
      test: test $$(echo "rs.initiate({_id: 'rs0', members: [{_id: 1, 'host': 'mongodb:27017'}]}).ok || rs.status().ok" | mongosh -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
      interval: 10s
      start_period: 30s
I'm getting this error:
yaml: mapping values are not allowed in this context
If I remove the colons (:) it works. How can I escape these colons in my test value?

I was able to find the solution. The colons need to be escaped with quotes, and I switched the inner strings to double quotes for clarity:
test: test $$(echo 'rs.initiate({_id':' "rs0", members':' [{_id':' 1, "host"':' "mongodb':'27017"}]}) || rs.status().ok' | mongosh -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
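Why this works: YAML only treats a colon as a mapping indicator when it is followed by whitespace, so wrapping each colon in quotes keeps the plain scalar free of ": " sequences, while the shell's quote concatenation glues the pieces back into the original mongosh script. A minimal sketch of that concatenation behaviour (the echo below is only an illustration, not part of the compose file):

# the shell strips the quotes and joins the adjacent pieces into one word
echo 'rs.initiate({_id':' "rs0"}).ok'
# prints: rs.initiate({_id: "rs0"}).ok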

Related

Debezium is not working with Redis streams and PostgreSQL

I'm having an issue running Debezium with Redis and PostgreSQL.
My docker-compose is:
version: "3.3"
services:
  redis-stack:
    image: redis/redis-stack:7.0.6-RC4
    restart: unless-stopped
    ports:
      - 10001:6379
      - 13333:8001
    volumes:
      - ./data/redis-stack/:/data
  db:
    image: postgres
    restart: unless-stopped
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: 1234
  pgadmin:
    image: dpage/pgadmin4
    restart: unless-stopped
    ports:
      - 5555:80
    environment:
      PGADMIN_DEFAULT_PASSWORD: 1234
      PGADMIN_DEFAULT_EMAIL: arkan.m.gerges#gmail.com
  debezium:
    image: debezium/server:2.1.2.Final
    restart: unless-stopped
    ports:
      - 8180:8080
    volumes:
      - ./config/debezium:/debezium/conf
      - ./data/debezium:/debezium/data
    depends_on:
      - redis-stack
      - db
networks:
  app-network:
In config/debezium/application.properties:
debezium.source.connector.class=io.debezium.connector.postgresql.PostgresConnector
debezium.source.offset.storage=io.debezium.storage.redis.offset.RedisOffsetBackingStore
debezium.source.offset.flush.interval.ms=0
debezium.source.offset.storage.redis.address=redis-stack:6379
debezium.source.schema.history.internal=io.debezium.storage.redis.history.RedisSchemaHistory
debezium.source.schema.history.internal.redis.address=redis-stack:6379
debezium.sink.type=redis
debezium.sink.redis.address=redis-stack:6379
debezium.source.database.hostname=db
debezium.source.database.port=5432
debezium.source.database.user=postgres
debezium.source.database.password=1234
debezium.source.database.dbname=softwaredev_expert
debezium.source.database.server.name=db
debezium.source.schema.whitelist=public
debezium.source.schema.include.list=public
debezium.source.plugin.name=pgoutput
I can access Redis Insight and PostgreSQL, but I'm getting errors running Debezium:
https://pastebin.com/9TGUNvKe
I've solved the issue with the following application.properties.
Here I've used the outbox pattern: for each aggregate type a new stream is created, and all the events for that aggregate go to the same stream.
# Debezium redis sink connector
debezium.sink.type=redis
debezium.sink.redis.address=redis-stack:6379
debezium.source.connector.class=io.debezium.connector.postgresql.PostgresConnector
debezium.source.offset.storage=io.debezium.storage.redis.offset.RedisOffsetBackingStore
debezium.source.offset.flush.interval.ms=0
debezium.source.offset.storage.redis.address=redis-stack:6379
debezium.source.schema.history.internal=io.debezium.storage.redis.history.RedisSchemaHistory
debezium.source.schema.history.internal.redis.address=redis-stack:6379
# Source database connection info
debezium.source.database.hostname=db
debezium.source.database.port=5432
debezium.source.database.user=postgres
debezium.source.database.password=1234
debezium.source.database.dbname=softwaredev_expert
debezium.source.table.include.list=public.outbox_events
debezium.source.plugin.name=pgoutput
debezium.source.topic.prefix=swdevexpert
# Outbox event router
debezium.transforms=outbox
debezium.transforms.outbox.type=io.debezium.transforms.outbox.EventRouter
debezium.transforms.outbox.route.topic.replacement=swdevexpert.events.$1
I filtered only the outbox_events database table.
And my docker-compose is:
version: "3.3"
services:
  redis-stack:
    image: redis/redis-stack:7.0.6-RC4
    restart: unless-stopped
    ports:
      - "10001:6379"
      - "13333:8001"
    volumes:
      - ./data/redis-stack/:/data
  db:
    image: postgres
    restart: unless-stopped
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: 1234
    command:
      - "-c"
      - "config_file=/etc/postgresql/postgresql.conf"
    volumes:
      - ./data/postgresql:/var/lib/postgresql/data
      - ./config/postgresql/postgresql.conf:/etc/postgresql/postgresql.conf
  pgadmin:
    image: dpage/pgadmin4
    restart: unless-stopped
    ports:
      - "5555:80"
    environment:
      PGADMIN_DEFAULT_PASSWORD: 1234
      PGADMIN_DEFAULT_EMAIL: arkan.m.gerges#gmail.com
  debezium:
    image: debezium/server:2.1.2.Final
    restart: unless-stopped
    volumes:
      - ./config/debezium:/debezium/conf
      - ./data/debezium:/debezium/data
    depends_on:
      - redis-stack
      - db
networks:
  app-network:
And in config/postgresql/postgresql.conf
listen_addresses = '*'
port = 5432
max_connections = 20
shared_buffers = 128MB
temp_buffers = 8MB
work_mem = 4MB
wal_level = logical
max_wal_senders = 3
max_replication_slots = 100
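The wal_level = logical line is the critical one here: the Debezium pgoutput plugin needs logical decoding, and the stock postgres image defaults to wal_level = replica. If in doubt, it can be verified against the running db service, e.g. (a sketch; the service name and credentials are the ones from the compose file above):

docker compose exec db psql -U postgres -c 'SHOW wal_level;'
# expected output:
#  wal_level
# -----------
#  logical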
Some of the SQL inserts (I did not put them into a single statement so I could experiment with different timestamps):
insert into outbox_events (id, aggregatetype, aggregateid, type, payload)
values (uuid_generate_v4(), 'order', '111', 'my_type', '{"order_id": "111", "order_type": "car"}');
insert into outbox_events (id, aggregatetype, aggregateid, type, payload)
values (uuid_generate_v4(), 'order2', '222', 'my_type', '{"order_id": "222", "order_type": "house"}');
insert into outbox_events (id, aggregatetype, aggregateid, type, payload)
values (uuid_generate_v4(), 'order1234567890', '333', 'my_type', '{"order_id": "333", "order_type": "computer"}');
insert into outbox_events (id, aggregatetype, aggregateid, type, payload)
values (uuid_generate_v4(), 'order1234567890', '444', 'my_type', '{"order_id": "444", "order_type": "computer"}');
insert into outbox_events (id, aggregatetype, aggregateid, type, payload)
values (uuid_generate_v4(), 'order1234567890_one_two', '555', 'my_type', '{"order_id": "555", "order_type": "computer"}');
And the new streams show up in Redis Insight.
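They can also be checked from the host with redis-cli (a sketch; 10001 is the host port mapped to the container's 6379, and the key names are an assumption based on the route.topic.replacement configured above):

# list the outbox streams created by the event router
redis-cli -p 10001 --scan --pattern 'swdevexpert.events.*'
# e.g. swdevexpert.events.order, swdevexpert.events.order2, ...

# peek at the events routed for one aggregate type
redis-cli -p 10001 XRANGE swdevexpert.events.order - +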

Extracting a specific value from stderr_lines

This is my Ansible task:
- name: show1
  debug:
    msg: "{{response.stderr_lines}}"
Here is the output:
msg:
- Using endpoint [https://us-central1-aiplatform.googleapis.com/]
- CustomJob [projects/123456/locations/us-central1/customJobs/112233445566] is submitted successfully.
- ''
- Your job is still active. You may view the status of your job with the command
- ''
- ' $ gcloud ai custom-jobs describe projects/123456/locations/us-central1/customJobs/112233445566'
- ''
- or continue streaming the logs with the command
- ''
- ' $ gcloud ai custom-jobs stream-logs projects/123456/locations/us-central1/customJobs/112233445566'
Here I want to extract the custom job ID, which is 112233445566.
I used the select filter like below:
- name: show
  debug:
    msg: "{{train_custom_image_unmanaged_response.stderr_lines | select('search', 'describe') | list }}"
and it gives me this output
msg:
- ' $ gcloud ai custom-jobs describe projects/123456/locations/us-central1/customJobs/112233445566'
But I just want the job ID, as specified above. Any idea about that?
Thanks.
You selected the line you are interested in. From that you now want to isolate the job ID at the end. You can do that using a regular expression like so:
- set_fact:
    line: "{{train_custom_image_unmanaged_response.stderr_lines | select('search', 'describe') | list }}"
- debug:
    msg: "{{ line | regex_search('.*/customJobs/(\\d+)', '\\1') }}"
This will give you the digits at the end of the line, after /customJobs/. See https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#searching-strings-with-regular-expressions

Values to be taken from a mongo script

Please assist; how can we achieve this:
/home/bin/mongoexport --host pr01.prod.com:3046 --authenticationDatabase admin -u m7#pub.com -p 'orc1*&'
--readPreference secPref --db action --collection pubaction --type=csv -f "_id" --sort "{_id:-1}"
--noHeaderLine --limit 1 -o /opt/app/maxvalue_id_rt24.csv
2021-04-01T00:48:22.721-0400 exported 1 record
$ cat /opt/app/maxvalue_id_rt24.csv
ObjectId(60659ac)
$ cat lastvalue_id_rt24.csv
ObjectId(60654fe)
--- we are manually putting the last value into the lastvalue_id_rt24.csv file for the first run of the automated script.
OUR QUERY ---- db.getCollection('pubaction').find({"_id":{"$gt":ObjectId("60654fe") ,"$lte":ObjectId("60659ac")}})
My requirement here is to cut the value inside the parentheses from maxvalue and lastvalue and pass it to the next script as x=60654fe and y=60659ac (see the sketch after this question's commands).
$/home/bin/mongoexport --host pr01.prod.com:3046
--authenticationDatabase admin -u m7#pub.com -p 'orc1*&'
--readPreference secPref --db action
--collection pubaction --type=csv -f "_id,eventId,eventName,timeStamp,recordPublishIndicator"
-q '{"_id":{"$gt":'$minvalue' ,"$lte":'$maxvalue'}}' -o /opt/app/act_id_rt24.csv
So inside the script, -q '{"_id":{"$gt":ObjectId("x"),"$lte":ObjectId("y")}}' --- we will need to hardcode ObjectId and pass in the x and y values.
e.g. from the above output:
-q '{"_id":{"$gt":ObjectId("60654fe") ,"$lte":ObjectId("60659ac")}}'
In the next run, in the script below, we must change the logic to put the old ID value in minvalue and the new ID value in maxvalue:
/home/bin/mongoexport --host pr01.prod.com:3046
--authenticationDatabase admin -u m7#pub.com -p 'orc1*&'
--readPreference secPref --db action
--collection pubaction --type=csv -f "_id" --sort "{_id:-1}"
--noHeaderLine --limit 1 -o /opt/app/maxvalue_id_rt24.csv
if [ $? -eq 0 ]
then
maxvalue=`cat maxvalue_id_rt24.csv`
echo $maxvalue
minvalue=`cat lastvalue_id_rt24.csv`
and after sqlldr:
sqlldr report/fru1p control=act_id.ctl ERRORS=500 log=x`.log direct=Y
We must change the logic below to put the old ID value in minvalue and the new ID value in maxvalue:
cat maxvalue_id_rt24.csv > lastvalue_id_rt24.csv
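For the extraction step itself, a sketch under the assumptions above (file paths, host and credentials are the ones from the question; only the sed expressions and the variable wiring are new, and newer mongoexport versions may require the {"$oid": ...} extended-JSON form instead of ObjectId(...)):

# pull the value out of ObjectId(...) in each csv
maxvalue=$(sed -n 's/.*ObjectId(\([^)]*\)).*/\1/p' /opt/app/maxvalue_id_rt24.csv)
minvalue=$(sed -n 's/.*ObjectId(\([^)]*\)).*/\1/p' /opt/app/lastvalue_id_rt24.csv)

# export everything between the last run's max and the new max
/home/bin/mongoexport --host pr01.prod.com:3046 \
  --authenticationDatabase admin -u m7#pub.com -p 'orc1*&' \
  --readPreference secPref --db action --collection pubaction \
  --type=csv -f "_id,eventId,eventName,timeStamp,recordPublishIndicator" \
  -q '{"_id":{"$gt":ObjectId("'"$minvalue"'"),"$lte":ObjectId("'"$maxvalue"'")}}' \
  -o /opt/app/act_id_rt24.csv

# then, as above, promote the new max to be the next run's last value
cat /opt/app/maxvalue_id_rt24.csv > /opt/app/lastvalue_id_rt24.csv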

Ansible fact is undefined error in conditional

Below is the playbook
---
- name: stop agent process
  shell: "ps -ef | grep -v grep | grep -w {{ MONGODB_AGENT_PROCESS }} | awk '{print $2}'"
  register: running_agent_processes

- name: stop mongod process
  shell: "ps -ef | grep -v grep | grep -w {{ MONGODB_SERVER_PROCESS }} | awk '{print $2}'"
  register: running_mongod_processes

- name: combine processes
  set_fact:
    all_processes: "{{ running_agent_processes.stdout_lines + running_mongod_processes.stdout_lines }}"

- name: Kill all processes
  shell: "kill {{ item }}"
  with_items: "{{ all_processes }}"
  when: ansible_facts[ansible_hostname] != primary

- wait_for:
    path: "/proc/{{ item }}/status"
    state: absent
  with_items: "{{ all_processes }}"
  ignore_errors: yes
  register: killed_processes
  when: ansible_facts[ansible_hostname] != primary

- name: Force kill stuck processes
  shell: "kill -9 {{ item }}"
  with_items: "{{ killed_processes.results | select('failed') | map(attribute='item') | list }}"
  when: ansible_facts[ansible_hostname] != primary
I have stored a fact called "primary" which stores the primary of a mongodb replica set in a previous step in the playbook.
I just want to compare the ansible_facts[ansible_hostname] with my primary fact. If they are not equal, I would like to kill processes.
The error I am getting is below:
fatal: [lpdkubpoc01d.phx.aexp.com]: FAILED! => {"msg": "The
conditional check 'ansible_facts[ansible_hostname] != primary' failed.
The error was: error while evaluating conditional
(ansible_facts[ansible_hostname] != primary): 'ansible_facts' is
undefined\n\nThe error appears to have been in
'/idn/home/sanupin/stop-enterprise-mongodb/tasks/stopAutomationAgent.yml':
line 11, column 3, but may\nbe elsewhere in the file depending on the
exact syntax problem.\n\nThe offending line appears to be:\n\n
all_processes: "{{ running_agent_processes.stdout_lines +
running_mongod_processes.stdout_lines }}"\n- name: Kill all
processes\n ^ here\n"} fatal: [lpdkubpoc01c.phx.aexp.com]: FAILED! =>
{"msg": "The conditional check 'ansible_facts[ansible_hostname] !=
primary' failed. The error was: error while evaluating conditional
(ansible_facts[ansible_hostname] != primary): 'ansible_facts' is
undefined\n\nThe error appears to have been in
'/idn/home/sanupin/stop-enterprise-mongodb/tasks/stopAutomationAgent.yml':
line 11, column 3, but may\nbe elsewhere in the file depending on the
exact syntax problem.\n\nThe offending line appears to be:\n\n
all_processes: "{{ running_agent_processes.stdout_lines +
running_mongod_processes.stdout_lines }}"\n- name: Kill all
processes\n ^ here\n"} fatal: [lpdkubpoc01e.phx.aexp.com]: FAILED! =>
{"msg": "The conditional check 'ansible_facts[ansible_hostname] !=
primary' failed. The error was: error while evaluating conditional
(ansible_facts[ansible_hostname] != primary): 'ansible_facts' is
undefined\n\nThe error appears to have been in
'/idn/home/sanupin/stop-enterprise-mongodb/tasks/stopAutomationAgent.yml':
line 11, column 3, but may\nbe elsewhere in the file depending on the
exact syntax problem.\n\nThe offending line appears to be:\n\n
all_processes: "{{ running_agent_processes.stdout_lines +
running_mongod_processes.stdout_lines }}"\n- name: Kill all
processes\n ^ here\n"}
Can someone help me with comparing an ansible_fact with a set_fact fact?
You can compare using the Ansible fact directly, without writing ansible_facts in front of it. Just use when: ansible_hostname != primary

How to grab the last two lines from Ansible (registered stdout) of a Kubernetes cluster initialization

This is the piece of my playbook file for the question:
- name: Initialize the Kubernetes cluster using kubeadm
  command: kubeadm init --config /etc/kubernetes/kubeadminit.yaml
  register: init_output

- name: Copy join command to local file
  local_action: copy content={{ init_output.stdout }} dest="./join-command"
Currently join-command contains the entire stdout (30+ lines of text) as content. What I want to grab is just the last two lines of init_output.stdout instead of the entire output. I've looked into using an index reference (i.e. init_output.stdout[#]), but I don't know that the output will always be the same length, and I don't know how to use indexes to grab more than one line; I'm fairly certain, though, that the last two lines will always be the join command. Any suggestions?
Select the last 2 lines from the list stdout_lines:
- local_action: copy content={{ init_output.stdout_lines[-2:] }} dest="./join-command"
It's possible to format the lines in a block. For example
- local_action:
    module: copy
    content: |
      {{ init_output.stdout_lines[-2] }}
      {{ init_output.stdout_lines[-1] }}
    dest: "./join-command"
To append the lines in a loop, try:
- local_action:
    module: lineinfile
    path: "./join-command"
    line: "{{ item }}"
    insertafter: EOF
    create: true
  loop: "{{ init_output.stdout_lines[-2:] }}"
I encountered this kind of issue and did not want to copy the join command to a local file, so I did a set_fact instead, this way:
- set_fact:
    # stdout_lines[-2] ends with the join command's line-continuation " \";
    # [:-2] strips it before the final line is appended
    join_cmd: '{{ init_output.stdout_lines[-2][:-2] }}{{ init_output.stdout_lines[-1] }}'
I did this...
- name: kubeadm init
  shell: |
    kubeadm init --control-plane-endpoint \
      localhost \
      --control-plane-endpoint kube-api.local >> /tmp/run_kube_init.sh
  when: master == "yes"

- name: Get join from master
  fetch:
    src: "/tmp/run_kube_init.sh"
    dest: "/tmp/run_kube_init.sh"
    flat: yes
  when: ansible_hostname == 'k-master'

- name: Add join file to nodes
  copy:
    src: "/tmp/run_kube_init.sh"
    dest: "/tmp/run_kube_init.sh"
  when: master == "no"

- name: Extract join token for nodes
  shell: tail -n +2 /tmp/run_kube_init.sh | head -n -1 | awk '{print $5}' | tail -n 1
  register: JOIN_TOKEN

- set_fact:
    join_token: "{{ JOIN_TOKEN.stdout }}"

- name: join nodes
  shell: |
    kubeadm join kube-api.local:6443 \
      --token {{ JOIN_TOKEN.stdout }} \
      --discovery-token-unsafe-skip-ca-verification
  when: master == "no"

- name: rm /tmp/run_kube_init.sh
  ansible.builtin.file:
    path: /tmp/run_kube_init.sh
    state: absent