Force Ansible to gather facts from group - postgresql

I want to force Ansible to gather facts about the hosts inside a playbook (so I can use that data inside a role) regardless of --limit, but I don't know how.
I have a playbook like this:
- hosts: postgres_access
  tasks:
    - name: Gathering info
      action: setup

- hosts: postgres
  roles:
    - postgres
Inside the 'postgres' role I have a template which iterates over the hosts' default IPs:
{% for host in groups['postgres_access'] %}
host all all {{hostvars[host].ansible_default_ipv4.address}}/32 md5
{% endfor %}
This works like magic, but only if I run my playbook without --limit. If I use --limit it breaks, because some hosts in the host group have no gathered facts.
ansible-playbook -i testing db.yml --limit postgres
failed: [pgtest] (item=pg_hba.conf) => {"failed": true, "item": "pg_hba.conf", "msg": "AnsibleUndefinedVariable: 'dict object' has no attribute 'ansible_default_ipv4'"}
How can I use --limit to reconfigure only the postgres hosts, yet still have the network data from the other hosts (without running all of their other configuration)?

Try this please!
- hosts: postgres
  pre_tasks:
    - setup:
      delegate_to: "{{ item }}"
      with_items: "{{ groups['postgres_access'] }}"
  roles:
    - postgres
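One caveat with this approach, offered as a side note rather than part of the original answer: a delegated setup stores the gathered facts under the current host by default, not under the delegated host, so hostvars[host].ansible_default_ipv4 can still be undefined in the template. Adding delegate_facts: true (available since Ansible 2.0) is one way to assign the facts to the delegated hosts; a minimal sketch:

- hosts: postgres
  pre_tasks:
    - name: Gather facts from every postgres_access host, even under --limit
      setup:
      delegate_to: "{{ item }}"
      # store the facts under the delegated host so hostvars[host] is populated
      delegate_facts: true
      with_items: "{{ groups['postgres_access'] }}"
  roles:
    - postgres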

You can run setup for the hosts in the postgres_access group as a task and save the facts using register:
- name: setup hosts
  action: "setup {{ item }} filter=ansible_default_ipv4"
  with_items: "{{ groups.postgres_access }}"
  register: ip_v4

- name: create template
  template: src=[your template] dest=[your dest file]
Just keep in mind that the template needs to change how you reference each host's IPv4 address. I tried something like this:
{% for item in ip_v4.results %}
host all all {{ item.ansible_facts.ansible_default_ipv4.address }}/32 md5
{% endfor %}
This prints just the IP of each host in the group.

Try this:
- hosts: postgres
  pre_tasks:
    - setup:
      delegate_to: postgres_access
  roles:
    - postgres

Use the same role you have defined, as it is, with ignore_errors: true, so that hosts which do not have gathered facts will not fail the play.
And if you want facts to be gathered for all the hosts in both the postgres_access and postgres groups, add gather_facts: true to get facts for the postgres group; for postgres_access you already have a task written.
- hosts: postgres_access
  tasks:
    - name: Gathering info
      action: setup

- hosts: postgres
  gather_facts: true
  roles:
    - postgres
  ignore_errors: true

Related

Vagrant - Ansible - Install PostgreSQL and Postgis [closed]

I'm trying to install PostgreSQL and Postgis with Ansible on a Vagrant VM.
But I'm running into some issues installing and accessing PostgreSQL (I haven't reached the PostGIS step yet).
My Vagrant VM is an ubuntu/jammy64.
Firstly, I installed PHP on the VM.
Then I try to install PostgreSQL. Below is my psql task for Ansible:
---
- name: Install
  apt:
    update_cache: true
    name:
      - bash
      - openssl
      - libssl-dev
      - libssl-doc
      - postgresql
      - postgresql-contrib
      - libpq-dev
      - python3-psycopg2
    state: present

- name: Check if initialized
  stat:
    path: "{{ postgresql_data_dir }}/pg_hba.conf"
  register: postgres_data

- name: Empty data dir
  file:
    path: "{{ postgresql_data_dir }}"
    state: absent
  when: not postgres_data.stat.exists

- name: Initialize
  shell: "{{ postgresql_bin_path }}/initdb -D {{ postgresql_data_dir }}"
  become: true
  become_user: postgres
  when: not postgres_data.stat.exists

- name: Start and enable service
  service:
    name: postgresql
    state: started
    enabled: true

- name: Update pg_ident.conf - allow user to auth with postgres
  lineinfile:
    dest: "/etc/postgresql/{{ postgresql_version }}/main/pg_ident.conf"
    insertafter: "# MAPNAME SYSTEM-USERNAME PG-USERNAME"
    line: "user_{{ user }} {{ user }} postgres"

- name: Update pg_hba.conf - disable peer for postgres user
  lineinfile:
    dest: "/etc/postgresql/{{ postgresql_version }}/main/pg_hba.conf"
    regexp: "local all postgres peer"
    line: "#local all postgres peer"

- name: Update pg_hba.conf - trust all connection
  lineinfile:
    dest: "/etc/postgresql/{{ postgresql_version }}/main/pg_hba.conf"
    regexp: "local all all peer"
    line: "local all all trust"

- name: Restart
  service:
    name: postgresql
    state: restarted
    enabled: true

- name: "Create database {{ postgresql_db }}"
  become: true
  become_user: "{{ postgresql_user }}"
  postgresql_db:
    name: "{{ postgresql_db }}"
    state: present

- name: "Create user {{ user }}"
  become: yes
  become_user: "{{ postgresql_user }}"
  postgresql_user:
    name: "{{ user }}"
    password: "{{ user }}"
    state: present

- name: "Grant user {{ user }}"
  become: yes
  become_user: "{{ postgresql_user }}"
  postgresql_privs:
    type: database
    database: "{{ postgresql_db }}"
    roles: "{{ user }}"
    grant_option: no
    privs: all
  notify: psql restart
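(A side note on that last task, not from the original post: notify: psql restart only fires if a handler with exactly that name is defined where the role can see it, e.g. in the role's handlers/main.yml; a minimal sketch of such a handler:)

# handlers/main.yml (hypothetical location) - the name must match the notify above
- name: psql restart
  service:
    name: postgresql
    state: restarted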
My vars:
---
postgresql_version: 14
postgresql_bin_path: "/usr/lib/postgresql/{{ postgresql_version }}/bin"
postgresql_data_dir: "/var/lib/postgresql/{{ postgresql_version }}/main"
postgresql_host: localhost
postgresql_port: 5432
postgresql_db: "db_{{ user }}"
postgresql_user: "{{ user }}"
postgresql_password: "{{ user }}"
ansible_ssh_pipelining: true
But when I run the Ansible playbook, I get the following output:
TASK [include_role : psql] *****************************************************
TASK [psql : Install] **********************************************************
ok: [192.168.50.50]
TASK [psql : Check if initialized] *********************************************
ok: [192.168.50.50]
TASK [psql : Empty data dir] ***************************************************
skipping: [192.168.50.50]
TASK [psql : Initialize] *******************************************************
skipping: [192.168.50.50]
TASK [psql : Start and enable service] *****************************************
ok: [192.168.50.50]
TASK [psql : Create database db_ojirai] ****************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Is the server running locally and accepting connections on that socket?
fatal: [192.168.50.50]: FAILED! => {"changed": false, "msg": "unable to connect to database: connection to server on socket \"/var/run/postgresql/.s.PGSQL.5432\" failed: Connection refused\n\tIs the server running locally and accepting connections on that socket?\n"}
PLAY RECAP *********************************************************************
192.168.50.50 : ok=14 changed=0 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0
Can you explain to me where my mistake is, please? Is my PostgreSQL installation wrong?
Thanks for your feedback!
Edit:
I tried the solution suggested by β.εηοιτ.βε but the message persists. I tried the following processes:
vagrant destroy > export vars (suggested in the post) > vagrant up > ansible deploy
export vars (suggested in the post) > vagrant reload > ansible deploy
export vars (suggested in the post) > vagrant destroy > vagrant up > ansible deploy
vagrant destroy > vagrant up > export vars (suggested in the post) > ansible deploy
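For reference, and not part of the original question: the "Connection refused" on /var/run/postgresql/.s.PGSQL.5432 means nothing is listening on that socket. On Ubuntu, a quick hedged way to check whether the Debian-packaged cluster is actually running is pg_lsclusters (shipped with postgresql-common by the apt packages), e.g. as a debugging task:

- name: Check cluster status (Debian/Ubuntu packaging)
  command: pg_lsclusters
  register: clusters
  changed_when: false

- name: Show cluster status
  debug:
    var: clusters.stdout_lines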

How to use Ansible to label the nodes of K8S

At present, I can only get the hostname of each node by going to the corresponding node and storing it in a local file, then sending that file to the master node and registering it as a variable in order to set the corresponding node label in k8s. With more nodes this configuration gets difficult. Is there any good way to achieve this?
  gather_facts: yes
  become: yes
  tags: gets
  tasks:
    - name: install the latest version of rsync
      yum:
        name: rsync
        state: latest

    - name: Ansible gets the current hostname
      debug: var=hostvars[inventory_hostname]['ansible_hostname']
      register: result

    - local_action:
        module: copy
        content: "{{ result }}"
        dest: /tmp/Current_hostname.yml

    - debug: msg="{{ item }}"
      with_lines: cat /tmp/Current_hostname.yml | tr -s ' ' | cut -d '"' -f6 | sort -o /tmp/Modified_hostname

- hosts: "{{ hosts | default('k8s-master') }}"
  gather_facts: no
  become: yes
  tags: gets
  tasks:
    - name: install the latest version of rsync
      yum:
        name: rsync
        state: latest

    - name: Transfer file from ServerA to ServerB
      synchronize:
        src: /tmp/Modified_hostname
        dest: /tmp/Modified_hostname
        mode: pull
      delegate_to: "{{ item }}"
      with_items: "{{ groups['mysql'][0] }}"

- hosts: "{{ hosts | default('k8s-master') }}"
  gather_facts: no
  tags: gets
  become: yes
  vars:
    k8s_mariadb: "{{ lookup('file', '/tmp/Modified_hostname') }}"
  tasks:
    - name: Gets the node host information
      debug: msg="the value of is {{ k8s_mariadb }}"

    - name: Tag MySQL fixed node
      shell: kubectl label node {{ k8s_mariadb }} harbor-master1=mariadb
      ignore_errors: yes
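As a point of comparison rather than something the post itself proposes: once facts have been gathered for the other nodes, their hostnames are already available through hostvars, so the label step can usually read them directly instead of copying files between hosts. A rough sketch, assuming the mysql group from the question and that facts were gathered for it in an earlier play:

- hosts: "{{ hosts | default('k8s-master') }}"
  gather_facts: no
  become: yes
  tasks:
    - name: Tag MySQL fixed node using the gathered hostname
      shell: "kubectl label node {{ hostvars[groups['mysql'][0]].ansible_hostname }} harbor-master1=mariadb"
      ignore_errors: yes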

Having issue with delegate_to in playbook calling a role to only run on one host in a list

I have a playbook and only want to run this play on the first master node. I tried moving the list into the role, but that did not seem to work. Thanks for your help!
## master node only changes
- name: Deploy change kubernetes Master
  remote_user: tyboard
  become: true
  roles:
    - role: gd.kubernetes.master.role
      files_location: ../files
  delegate_to: "{{ groups['masters'][0] }}"
ERROR! 'delegate_to' is not a valid attribute for a Play
The error appears to be in '/mnt/win/kubernetes.playbook/deploy-kubernetes.yml': line 11, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
## master node only changes
- name: Deploy change kubernetes Master
  ^ here
In one playbook, create a new group with this host in the first play and use it in the second play. For example,
shell> cat playbook.yml
- name: Create group with masters.0
  hosts: localhost
  gather_facts: false
  tasks:
    - add_host:
        name: "{{ groups.masters.0 }}"
        groups: k8s_master_0

- name: Deploy change kubernetes Master
  hosts: k8s_master_0
  remote_user: tyboard
  become: true
  roles:
    - role: gd.kubernetes.master.role
      files_location: ../files
(not tested)
Fix the role name.
If files_location is a variable which shall be used in the role's scope, put it into vars. For example:
roles:
  - role: gd.kubernetes.master.role
    vars:
      files_location: ../files

How to identify the hosts in my playbook from a variable file?

In my hosts file, I have about 10 different groups, each with devices in it. Each customer deployment should go to a specific region, and I want to specify that in a customer config file.
In my playbook, I tried to use a variable for hosts, and my plan was to specify the hosts group in the config file.
master_playbook.yml
hosts: "{{ target_region }}"
vars:
custom_config_file: "./app_deployment/customer_config_files/xx_app_prod.yml"
xx_app_prod.yml
customer: test1
env: prod
app_port: 25073
target_region: dev
Error message I get:
ERROR! The field 'hosts' has an invalid value, which includes an undefined variable. The error was: 'target_region' is undefined
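For what it's worth, and separate from the answer below: the target_region variable has to exist before the play is loaded, and a variable that only lives in a file whose path is stored in another variable does not. One common approach is to pass it, or the whole customer config file, as extra vars on the command line; a hedged sketch reusing the names from the question:

ansible-playbook master_playbook.yml -e "target_region=dev"
# or load the whole customer config file as extra vars, which also defines target_region:
ansible-playbook master_playbook.yml -e "@./app_deployment/customer_config_files/xx_app_prod.yml"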
To determine which groups a host (that is not the running host) is in, you have to use a little helper:
Create a script:
#!/usr/bin/env ansible-playbook
# call like: ./showgroups -i develop -l jessie.fritz.box
- hosts: all
  gather_facts: no
  tasks:
    - name: show the groups the host(s) are in
      debug:
        msg: "{{ group_names }}"
After that you can run a playbook like:
- name: "get group memberships of host"
shell: "{{ role_path }}/files/scripts/show_groups -i {{ fullinventorypath }} -l {{ hostname }}"
register: groups
- name: "create empty list of group memberships"
set_fact:
memberships: []
- name: "fill list"
set_fact:
memberships: "{{ memberships + item }}"
with_items: groups.stdout.lines
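A simpler alternative worth mentioning alongside this answer: for any host that is in the same inventory, its group membership is already exposed as the group_names magic variable through hostvars, with no helper script needed. A sketch, assuming a hypothetical target_host variable holding that host's inventory name:

- name: get group memberships of another inventory host
  set_fact:
    memberships: "{{ hostvars[target_host].group_names }}"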

How to make this Ansible chkconfig task idempotent?

I have an Ansible task like this in my playbook, to be run against a CentOS server:
- name: Enable services for automatic start
  action: command /sbin/chkconfig {{ item }} on
  with_items:
    - nginx
    - postgresql
This task reports a change every time I run it. How do I make this task pass the idempotency test?
The best option is to use enabled=yes with the service module:
- name: Enable services for automatic start
  service:
    name: "{{ item }}"
    enabled: yes
  with_items:
    - nginx
    - postgresql
Hope that helps you.