How to set up a MongoDB replica set using Ansible

I need to set up a MongoDB replica set across 3 instances: one will be the primary and the other two will be secondaries.
Can anyone suggest how I should write the playbook?
I have started mongod on all three servers and set the replica set name:
replication:
  replSetName: "testingrs"

Ansible already provides a module for this: community.mongodb.mongodb_replicaset
When I deployed my MongoDB sharded cluster, the plugin was still at version 1.0 and had many limitations. We also had some problems installing pymongo, so I developed the tasks manually. However, with the current version there should be no need to write the tasks yourself.
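For reference, a minimal sketch of the module-based approach might look like the following (the host names, port, and replica set name are placeholders for illustration; check the community.mongodb collection docs for the exact parameters):

```yaml
# Requires: ansible-galaxy collection install community.mongodb
# and pymongo on the target host.
- name: Ensure the replica set exists
  community.mongodb.mongodb_replicaset:
    login_host: localhost
    login_port: 27017               # assumed port
    replica_set: testingrs
    members:
      - host1.example.com:27017     # hypothetical member names
      - host2.example.com:27017
      - host3.example.com:27017
  run_once: true                    # initiate from one member only
```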
Anyway, my playbook looks like this:
- name: Check if Replicaset is already initialized
  shell:
    cmd: "/usr/bin/mongo --norc --quiet localhost:{{ ports.config }}"
    executable: /bin/bash
    stdin: "rs.status().codeName"
  register: result
  changed_when: false
  check_mode: no
- set_fact:
    rs_initiate: |
      {% set members = [] %}
      {% for host in groups['config'] | sort %}
      {% set m = {'_id': loop.index0} %}
      {% set _ = m.update({'host': host + '.' + domain + ':' + ports.config | string}) %}
      {% set _ = members.append(m) %}
      {% endfor %}
      {% set init = {'_id': replica_set.conf} %}
      {% set _ = init.update({'members': members}) %}
      {{ init }}
    rs: |
      {% set ns = namespace(i = (result.stdout == 'NotYetInitialized')) %}
      {% for host in ansible_play_hosts %}
      {% set ns.i = ns.i and (hostvars[host].result.stdout == 'NotYetInitialized') %}
      {% endfor %}
      {{ {'NotYetInitialized': ns.i} }}
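For illustration, with three hypothetical config servers cfg1, cfg2, cfg3 and port 27019, the rs_initiate fact above renders to a document along these lines, which is then passed to rs.initiate():

```yaml
# Illustrative rendering only; names and port are assumptions.
_id: testingrs          # value of replica_set.conf
members:
  - _id: 0
    host: cfg1.example.com:27019
  - _id: 1
    host: cfg2.example.com:27019
  - _id: 2
    host: cfg3.example.com:27019
```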
- name: Init Replicaset
  shell:
    cmd: "/usr/bin/mongo --norc --quiet localhost:{{ ports.config }}"
    executable: /bin/bash
    stdin: |
      rs.initiate({{ rs_initiate | to_json }})
      rs.status()
      while (! db.isMaster().ismaster) sleep(1000)
  when: rs.NotYetInitialized and inventory_hostname_short == (groups['config'] | sort | first)
One issue I had to deal with was authentication, because when you deploy MongoDB from scratch, no users exist yet. Thus, when you want to run the playbook multiple times, you have to distinguish between running with and without authentication.
My playbook contains these tasks:
- name: Check if authentication is enabled
  shell:
    cmd: "/usr/bin/mongo --norc --quiet localhost:{{ ports.router }}"
    executable: /bin/bash
    stdin: "rs.status().codeName"
  register: result
  ignore_errors: yes
  changed_when: false
  when: inventory_hostname_short == (groups['application'] | sort | first)

- name: Authenticate if needed
  set_fact:
    authenticate: "{{ (result.stdout == 'Unauthorized') | ternary('-u admin -p ' + password[env].admin + ' --authenticationDatabase admin', '') }}"
  when: inventory_hostname_short == (groups['application'] | sort | first)

- name: Create users
  shell:
    cmd: "/usr/bin/mongo {{ authenticate }} --norc --quiet localhost:{{ ports.router }}"
    executable: /bin/bash
    stdin: |
      admin = db.getSiblingDB("admin")
      admin.createUser({ user: "admin", pwd: "{{ password[env].admin }}", roles: ["root"] })
      admin.auth("admin", "{{ password[env].admin }}")
      // create more users if needed
      admin.createUser(...)
  when: inventory_hostname_short == (groups['application'] | sort | first)
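As an alternative to shelling out, the collection's community.mongodb.mongodb_user module is idempotent and handles the fresh-deployment case via MongoDB's localhost exception; a sketch, reusing the admin password variable from above (the port is an assumption):

```yaml
- name: Ensure the admin user exists
  community.mongodb.mongodb_user:
    login_port: 27017            # assumed port
    database: admin
    name: admin
    password: "{{ password[env].admin }}"
    roles: root
  run_once: true
```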


Ansible loop based on fact to Restart Kafka Connector Failed Tasks

I want to restart Kafka Connect tasks that are in a failed state using an Ansible playbook. I have fetched the connector task states using set_fact, and now I want to loop over the collected facts to restart each failed task by connector name and task id.
tasks:
  - name: Gathering Connector Names
    uri:
      url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors"
      user: "{{ username }}"
      password: "{{ password }}"
      method: GET
      force_basic_auth: yes
      status_code: 200
    register: conn_stat

  - name: Checking for Connector status
    uri:
      url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors/{{ abc_conn_name }}/status"
      user: "{{ username }}"
      password: "{{ password }}"
      method: GET
      force_basic_auth: yes
    loop: "{{ conn_name }}"
    loop_control:
      loop_var: abc_conn_name
    vars:
      conn_name: "{{ conn_stat.json }}"
    register: conn_stat_1

  - name: Gathering Failed task ids
    set_fact:
      failed_connector_name_task_id: "{{ conn_stat_1 | json_query('results[].json[].{name: name, id: tasks[?state == `FAILED`].id}') }}"

  - name: Failed connector name with tasks id
    ansible.builtin.debug:
      var: failed_connector_name_task_id
I get the below values from the fact, which I need to feed into a loop:
"failed_connector_name_task_id": [
    {
        "id": [0, 1],
        "name": "test-connector-sample"
    },
    {
        "id": [0, 1],
        "name": "confluent-connect"
    },
    {
        "id": [0, 1, 2],
        "name": "confluent-test-1"
    }
]
The values need to be posted to the restart endpoint:
- name: Restart Connector Failed tasks
  uri:
    url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors/**{{ name of connector }}**/tasks/**{{ task ID }}**/restart"
    user: "{{ username }}"
    password: "{{ password }}"
    method: POST
    force_basic_auth: yes
    status_code: 200
  register: conn_stat
The name of the connector and the task ID are what I want to use in the loop.
So I need to set up a loop over the tasks above. As you can see, connector 'confluent-test-1' has three tasks in a failed state, so it needs to be iterated three times.
This is a typical case where you want to use subelements, either through the aforementioned filter or the lookup. Here is an example using the filter:
- name: Restart Connector Failed tasks
  uri:
    url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors/{{ item.0.name }}/tasks/{{ item.1 }}/restart"
    user: "{{ username }}"
    password: "{{ password }}"
    method: POST
    force_basic_auth: yes
    status_code: 200
  loop: "{{ failed_connector_name_task_id | subelements('id', skip_missing=True) }}"
References worth reading: the Ansible loops documentation and the subelements filter.
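To see what subelements produces, a quick sketch: each loop item is a [record, id] pair, so for 'confluent-test-1' the task runs with item.1 equal to 0, 1, and 2 in turn.

```yaml
- debug:
    msg: "connector={{ item.0.name }} task={{ item.1 }}"
  loop: "{{ failed_connector_name_task_id | subelements('id', skip_missing=True) }}"
```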
You can actually remove your last unnecessary set_fact task with e.g. the following construct:
- name: Restart Connector Failed tasks
  vars:
    failed_connector_name_task_id: "{{ conn_stat_1 | json_query('results[].json[].{name: name, id: tasks[?state == `FAILED`].id}') }}"
  uri:
    url: "{{ scheme }}://{{ server }}:{{ port_no }}/connectors/{{ item.0.name }}/tasks/{{ item.1 }}/restart"
    user: "{{ username }}"
    password: "{{ password }}"
    method: POST
    force_basic_auth: yes
    status_code: 200
  loop: "{{ failed_connector_name_task_id | subelements('id', skip_missing=True) }}"

What syntax to use for parsing {{ from_dttm }} and {{ to_dttm }} Jinja variables as datetime objects in Custom SQL queries?

Question: How to correctly format {{ from_dttm }} and {{ to_dttm }} default Jinja variables so that they are parsed as datetime objects in Apache Superset Custom SQL metrics?
MWE: Say I want to show the time range covered by the data used in my dashboards, i.e. what can be affected by the Time Range filter.
I use the public.birth_names demo dataset for the sake of the example.
So I create a BigNumber chart, with the following custom Metric:
age(
  {% if from_dttm is not none %}
    '{{ from_dttm }}'
  {% else %}
    min(ds)
  {% endif %}
  ,
  {% if to_dttm is not none %}
    '{{ to_dttm }}'
  {% else %}
    max(ds)
  {% endif %}
)
However, if I format the Jinja variables as:
{{ from_dttm }}, I get:
Error: syntax error at or near "{"
LINE 1: SELECT age({{ from_dttm }} , '{{ to_dttm }}') AS "age(
'{{ from_dttm }}', I get
Error: invalid input syntax for type timestamp with time zone: "{{ from_dttm }}"
LINE 1: SELECT age('{{ from_dttm }}'
"{{ from_dttm }}", I get
Error: column "{{ from_dttm }}" does not exist
LINE 1: SELECT age("{{ from_dttm }}" ,
I'm using Superset at 5ae7e5499 (Latest commit on Mar 25, 2022), with PostgreSQL as back-end db engine.
After an offline discussion with @villebro, it turns out that:
'{{ from_dttm }}' is the valid syntax; one can verify this with a simpler example: MIN('{{ from_dttm }}') works, whereas both other syntaxes yield an error.
However, there is still a bug in the current (1.4.*) versions where {{ from_dttm }} is not rendered (which caused AGE() to yield an error in the particular example above). This issue has been raised in #19564, and a fix submitted in #19565.

How to write below condition in Helm chart syntax in a job.yaml file?

I have to write this condition in Helm chart syntax in a job.yaml file, so that imagePullSecrets gets rendered only when the condition is satisfied.
Condition is
when: (network.docker.username | default('', true) | trim != '') and (network.docker.password | default('', true) | trim != '')
The condition should be applied to this block:
imagePullSecrets:
  - name: "{{ $.Values.image.pullSecret }}"
Ideally, the Docker username & password should come from Secrets. Here's sample Helm code using if in a YAML file:
imagePullSecrets:
{{ if and (ne $.Values.network.docker.password "") (ne $.Values.network.docker.username "") }}
  - name: "{{ $.Values.image.pullSecret }}"
{{ end }}
And values.yaml should have:
network:
  docker:
    username: your-uname
    password: your-pwd
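To mirror the original Ansible condition more closely (its default('', true) and trim parts), Helm's built-in default and trim functions can be combined; this is a sketch and has not been rendered against a real chart:

```yaml
imagePullSecrets:
{{- if and (ne (trim (default "" $.Values.network.docker.username)) "") (ne (trim (default "" $.Values.network.docker.password)) "") }}
  - name: "{{ $.Values.image.pullSecret }}"
{{- end }}
```

The default "" guard also keeps the template from failing when the values are omitted entirely.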

How to get postgresql_query results from Ansible

I'm trying to print the output of PostgreSQL query that is run by Ansible. Unfortunately I'm not sure how to get ahold of the return value.
- name: Get specific tables
  postgresql_query:
    db: "{{ database_name }}"
    login_host: "{{ my_host }}"
    login_user: "{{ my_user }}"
    login_password: "{{ my_password }}"
    query: SELECT * FROM pg_tables t WHERE t.tableowner = current_user
Googling just says to use register:, but the postgresql_query module does not accept register as a module parameter:
fatal: [xx.xxx.xx.xx]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (postgresql_query) module: register Supported parameters include: ca_cert, db, login_host, login_password, login_unix_socket, login_user, named_args, path_to_script, port, positional_args, query, session_role, ssl_mode"}
The Ansible docs list return values for this module but there are no examples on how to use them, and everything I search for leads right back to register:.
Sounds like you are very close, but you have register at the wrong indentation. It's a parameter of the task itself, not of the postgresql_query module.
Try:
- name: Get specific tables
  postgresql_query:
    db: "{{ database_name }}"
    login_host: "{{ my_host }}"
    login_user: "{{ my_user }}"
    login_password: "{{ my_password }}"
    query: SELECT * FROM pg_tables t WHERE t.tableowner = current_user
  register: result

- debug:
    var: result
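The module's documented return values can then be addressed directly; for example, assuming postgresql_query's query_result key (a list of row dicts), the table names could be extracted like this:

```yaml
- debug:
    msg: "{{ result.query_result | map(attribute='tablename') | list }}"
```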

Ansible - 'unicode object' has no attribute 'file_input'

I'm working with Ansible 2.2.1.0 on an old project, made by someone else, that contains errors.
I have the following variables in my code:
software_output:
  - { file_input: 'Download_me.zip', file_output: 'download.zip' }
software_version: "0.5,0.6"
And I have this shell module instruction to download on a FTP:
- name: "MySoftware | get package on FTP"
  shell: >
    curl --ftp-ssl -k {{ ' --ssl-allow-beast ' if os == 'aix' else '' }} -# -f -u {{ ftp_user }}:{{ ftp_password }}
    -f "{{ ftp_url | replace('##software_version##', item[1]) }}{{ item[0].file_input }}"
    -o {{ require_inst_dir }}/{{ item[0].file_output }} 2>/dev/null
  with_nested:
    - software_output
    - "{{ software_version.split(',') }}"
  when: software_version is defined
But it doesn't work at all; I get the following error:
'unicode object' has no attribute 'file_input'
It looks like with_nested is not being used as it should be. Did I miss something?
In:
with_nested:
  - software_output
software_output is just the literal string 'software_output'.
To refer to the variable's value, change it to:
with_nested:
  - "{{ software_output }}"
The bare-name syntax was valid a long time ago, but it has long since been deprecated.
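With the quoted variable, with_nested pairs each dict from software_output with each version string, so item[0] is the dict and item[1] the version; a minimal illustration:

```yaml
- debug:
    msg: "{{ item[0].file_input }} for version {{ item[1] }}"
  with_nested:
    - "{{ software_output }}"
    - "{{ software_version.split(',') }}"
```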