I'm trying to add /usr/pgsql-10/bin to $PATH, since I want everybody who uses the machine to be able to run the psql command.
Tried to follow this example:
- name: add {{extra_path}} to path
  lineinfile:
    dest: /etc/environment
    state: present
    backrefs: yes
    regexp: 'PATH=(["]*)((?!.*?{{extra_path}}).*?)(["]*)$'
    line: "PATH=\1\2:{{extra_path}}\3"
First of all, I don't quite understand how exactly I should modify this.
Should I replace just the extra_path, or the whole {{extra_path}}, with my path (/usr/pgsql-10/bin)?
I tried it both ways and I get different errors.
To make matters worse, my /etc/environment doesn't even contain PATH.
Declare the additional path only:
vars:
  extra_path: /usr/pgsql-10/bin
The tasks below are based on the idea from Response to updating PATH with ansible - system wide
If the file is on the controller, test the local file:
- name: 'Add {{ extra_path }} if PATH does not exist'
  lineinfile:
    path: /etc/environment
    line: 'PATH="{{ extra_path }}"'
    insertafter: EOF
  when: lookup('file', '/etc/environment') is not search('^\s*PATH\s*=')
- name: 'Add {{ extra_path }} to PATH'
  lineinfile:
    path: /etc/environment
    regexp: 'PATH=(["])((?!.*?{{ extra_path }}).*?)(["])$'
    line: 'PATH=\1\2:{{ extra_path }}\3'
    backrefs: yes
If the files are on the remote hosts, fetch them first. To keep the play idempotent, don't report changes on fetching. Fit the destination to your needs:
- name: 'Fetch /etc/environment to {{ playbook_dir }}/environments'
  fetch:
    src: /etc/environment
    dest: "{{ playbook_dir }}/environments"
  changed_when: false
- name: 'Add {{ extra_path }} if PATH does not exist'
  lineinfile:
    path: /etc/environment
    line: 'PATH="{{ extra_path }}"'
    insertafter: EOF
  when: lookup('file', path) is not search('^\s*PATH\s*=')
  vars:
    path: "{{ path_items|path_join }}"
    path_items:
      - "{{ playbook_dir }}"
      - environments
      - "{{ inventory_hostname }}"
      - etc/environment
- name: 'Add {{ extra_path }} to PATH'
  lineinfile:
    path: /etc/environment
    regexp: 'PATH=(["])((?!.*?{{ extra_path }}).*?)(["])$'
    line: 'PATH=\1\2:{{ extra_path }}\3'
    backrefs: yes
See Python regex.
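For illustration (a made-up example; it assumes /etc/environment already contains a quoted PATH line), the backrefs task rewrites the line like this:

before:
PATH="/usr/local/bin:/usr/bin"
after running the task with extra_path=/usr/pgsql-10/bin:
PATH="/usr/local/bin:/usr/bin:/usr/pgsql-10/bin"

The negative lookahead (?!.*?{{ extra_path }}) keeps the task idempotent: once the path is already present, the regexp no longer matches and the line is left unchanged.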
Related
At present, I can only get the hostname of a node by going to the corresponding node and storing it in a file locally, then sending the stored file to the master node and reading it into a registered variable so I can set the corresponding node label for k8s. With more nodes, this configuration becomes difficult. Is there a good way to achieve it?
gather_facts: yes
become: yes
tags: gets
tasks:
  - name: install the latest version of rsync
    yum:
      name: rsync
      state: latest
  - name: Ansible gets the current hostname
    debug: var=hostvars[inventory_hostname]['ansible_hostname']
    register: result
  - local_action:
      module: copy
      content: "{{ result }}"
      dest: /tmp/Current_hostname.yml
  - debug: msg="{{ item }}"
    with_lines: cat /tmp/Current_hostname.yml |tr -s ' '|cut -d '"' -f6 |sort -o /tmp/Modified_hostname
- hosts: "{{ hosts | default('k8s-master') }}"
  gather_facts: no
  become: yes
  tags: gets
  tasks:
    - name: install the latest version of rsync
      yum:
        name: rsync
        state: latest
    - name: Transfer file from ServerA to ServerB
      synchronize:
        src: /tmp/Modified_hostname
        dest: /tmp/Modified_hostname
        mode: pull
      delegate_to: "{{ item }}"
      with_items: "{{ groups['mysql'][0] }}"
- hosts: "{{ hosts | default('k8s-master') }}"
  gather_facts: no
  tags: gets
  become: yes
  vars:
    k8s_mariadb: "{{ lookup('file', '/tmp/Modified_hostname') }}"
  tasks:
    - name: Gets the node host information
      debug: msg="the value of is {{ k8s_mariadb }}"
    - name: Tag MySQL fixed node
      shell: kubectl label node {{ k8s_mariadb }} harbor-master1=mariadb
      ignore_errors: yes
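For what it's worth, the round trip through local files can usually be avoided by reading the other host's facts from hostvars directly on the master. A minimal sketch, not the original approach: it assumes facts have been gathered for the host in the mysql group earlier in the run, and reuses the group, label and default host pattern from the plays above.

- hosts: "{{ hosts | default('k8s-master') }}"
  gather_facts: no
  become: yes
  tasks:
    - name: Tag MySQL fixed node using the mysql host's gathered hostname
      shell: kubectl label node {{ hostvars[groups['mysql'][0]]['ansible_hostname'] }} harbor-master1=mariadb
      ignore_errors: yes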
In my hosts file, I have about 10 different groups, each with devices in it. Each customer deployment should go to a specific region, and I want to specify that in a customer config file.
In my playbook, I tried to use a variable for hosts, and my plan was to specify the hosts group in the config file.
master_playbook.yml
- hosts: "{{ target_region }}"
  vars:
    custom_config_file: "./app_deployment/customer_config_files/xx_app_prod.yml"
xx_app_prod.yml
customer: test1
env: prod
app_port: 25073
target_region: dev
Error message I get:
ERROR! The field 'hosts' has an invalid value, which includes an undefined variable. The error was: 'target_region' is undefined
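The pattern in hosts is templated when the play is parsed, so the variable has to be known at that point; a vars file loaded later by a task (for example with include_vars) comes too late. One way around this (a sketch, not necessarily the intended workflow) is to pass the customer file, or just the group, as extra vars on the command line:

ansible-playbook master_playbook.yml -e "@./app_deployment/customer_config_files/xx_app_prod.yml"
ansible-playbook master_playbook.yml -e "target_region=dev"

Extra vars are available before the host list is built, so hosts: "{{ target_region }}" then resolves to the dev group.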
To determine which groups a host (other than the running host) is in, you have to use a little helper:
Create a script:
#!/usr/bin/env ansible-playbook
# call like: ./showgroups -i develop -l jessie.fritz.box
- hosts: all
  gather_facts: no
  tasks:
    - name: show the groups the host(s) are in
      debug:
        msg: "{{ group_names }}"
After that you can run a playbook like:
- name: "get group memberships of host"
  shell: "{{ role_path }}/files/scripts/show_groups -i {{ fullinventorypath }} -l {{ hostname }}"
  # registered under a name that does not shadow the built-in 'groups' variable
  register: group_output
- name: "create empty list of group memberships"
  set_fact:
    memberships: []
- name: "fill list"
  set_fact:
    memberships: "{{ memberships + [item] }}"
  with_items: "{{ group_output.stdout_lines }}"
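A quick way to verify the collected list is an extra debug task (purely illustrative):

- name: "show collected group memberships"
  debug:
    var: memberships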
I'm trying to install mongodb on my server and enable authentication, but I'm stuck on adding a user for auth. When I execute the playbook, it fails on the Add user task with this output:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: pymongo.errors.OperationFailure: there are no users authenticated
fatal: [***]: FAILED! => {"changed": false, "msg": "unable to connect to database: there are no users authenticated"}
How can I fix it?
playbook.yml
- name: Install mongodb
  apt:
    name: mongodb-org
    update_cache: yes
    state: present
- name: Set config
  template:
    src: templates/mongodb.yml
    dest: /etc/mongod.conf
  notify: restart mongodb
- name: Install pymongo
  pip:
    name: pymongo
    state: present
- name: Add user
  mongodb_user:
    database: "{{ mongodb_name }}"
    name: "{{ mongodb_user }}"
    password: "{{ mongodb_password }}"
    login_host: "{{ mongodb_bind_ip }}"
    login_port: "{{ mongodb_port }}"
    state: present
mongodb.yml
net:
  port: {{ mongodb_port }}
  bindIp: {{ mongodb_bind_ip }}
  unixDomainSocket:
    enabled: false
security:
  authorization: enabled
If you don't have an admin user in the database, you need to start mongod with security.authorization disabled, add the admin user, and then restart mongod with security.authorization enabled: https://docs.mongodb.com/manual/tutorial/enable-authentication/#procedure
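A minimal sketch of that bootstrap step (the admin_login/admin_password variables are the ones used below; re-rendering the config to toggle security.authorization is an assumption, not part of the original playbook):

# at this point /etc/mongod.conf has been rendered with authorization disabled
- name: Add admin user while authorization is still disabled
  mongodb_user:
    database: admin
    name: "{{ admin_login }}"
    password: "{{ admin_password }}"
    roles: root
    login_host: "{{ mongodb_bind_ip }}"
    login_port: "{{ mongodb_port }}"
    state: present
- name: Re-render the config with authorization enabled
  template:
    src: templates/mongodb.yml
    dest: /etc/mongod.conf
  notify: restart mongodb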
After that you can add more users using admin's credentials:
- name: Add user
  mongodb_user:
    database: "{{ mongodb_name }}"
    name: "{{ mongodb_user }}"
    password: "{{ mongodb_password }}"
    login_host: "{{ mongodb_bind_ip }}"
    login_port: "{{ mongodb_port }}"
    login_user: "{{ admin_login }}"
    login_password: "{{ admin_password }}"
    state: present
https://docs.ansible.com/ansible/2.4/mongodb_user_module.html
I want to execute PostgreSQL SQL scripts that are fetched from my git repository.
I want to first fetch all the scripts from the git repository and then execute the scripts from a particular directory, which I take as an input to the Ansible script. This is my complete playbook:
- hosts: "{{ HOST }}"
  become: true
  become_user: "admin"
  become_method: su
  gather_facts: true
  vars:
    app_path: "{{ app_path }}"
    app_dir: "{{ app_dir }}"
  tasks:
    - stat:
        path: "{{ app_path }}/{{ app_dir }}"
      register: exist
    - name: create "{{ app_path }}/{{ app_dir }}" directory
      file: dest="{{ app_path }}/{{ app_dir }}" state=directory
      when: exist.stat.exists != True
    - git:
        repo: http://git-url/postgres-demo.git
        dest: "{{ app_path }}/{{ app_dir }}"
        version: "{{ GIT_TAG }}"
        refspec: '+refs/heads/{{ GIT_TAG }}:refs/remotes/origin/{{ GIT_TAG }}'
        update: yes
        force: true
      register: cloned
    - name: Execute postgres dump files
      shell: "/usr/bin/psql -qAtw -f {{ item }}"
      with_fileglob:
        - "{{ app_path }}/{{ app_dir }}/{{ scripts_dir }}/*"
      register: postgres_sql
      become: true
      become_user: "postgres"
      become_method: su
The script above executes successfully, but the postgres step throws the following warning:
[WARNING]: Unable to find '/home/admin/postgres-dev/test' in expected paths.
When I checked my PostgreSQL database, I couldn't find the tables that I want to create using these scripts.
All lookups in Ansible are executed on the local Ansible host (the controller).
So:
with_fileglob:
  - "{{ app_path }}/{{ app_dir }}/{{ scripts_dir }}/*"
uses the fileglob lookup, which searches for files on localhost!
You should refactor this code to use the find module first to find all required scripts on the remote host, register its output, and then loop over the registered variable to execute the scripts, as in the sketch below.
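A minimal sketch of that refactor (the register name sql_files is an assumption; the paths and the psql invocation are taken from the question):

- name: Find SQL scripts on the remote host
  find:
    paths: "{{ app_path }}/{{ app_dir }}/{{ scripts_dir }}"
  register: sql_files
- name: Execute postgres dump files
  shell: "/usr/bin/psql -qAtw -f {{ item.path }}"
  with_items: "{{ sql_files.files }}"
  become: true
  become_user: "postgres"
  become_method: su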
I have a yml file for variables which goes like this.
newHosts:
  - hostIP: 192.168.1.22
    filename: file1
  - hostIP: 192.168.1.23
    filename: file2
I am using add_host: {{ item.hostIP }} with with_items: {{ newHosts }}.
I want to copy the respective file to the respective host with something like {{ item.filename }}, but it copies all files to each host. How can I copy only the corresponding file to each node?
You can use conditionals that are applied at each iteration of the loop, for example:
- hosts: all
  tasks:
    - name: copy file to appropriate server
      copy: src={{ item.filename }} dest=/var/foo/{{ item.filename }}
      with_items: "{{ newHosts }}"
      when: item.hostIP == ansible_ssh_host
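For the add_host part mentioned in the question, the same list can be looped over in the same way (a sketch; the group name dynamic_hosts is an assumption):

- name: add hosts from the newHosts list
  add_host:
    name: "{{ item.hostIP }}"
    groups: dynamic_hosts
  with_items: "{{ newHosts }}"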