Setting environment variables in Ansible permanently - deployment

I am using Ansible to add permanent environment variables to the Ubuntu .bashrc.
I have these settings defined in a prod_vars file:
enviornment_variables:
  PRODUCTION:
    MONGO_IP: 0.0.0.0
    MONGO_PORT: 27017
    ELASTIC_IP: localhost
    ELASTIC_PORT: 9200
How can I export these using a task? I know about the lineinfile module, but I do not want to repeat a task for every variable:
- name: set env in the bashrc files
  lineinfile: dest=/home/user/.bashrc line='export MONGO_IP=enviornment_variables[PRODUCTION][MONGO_IP]'
Also, the task above gives a syntax error.

Instead of using the lineinfile module, use the blockinfile module.
Something like this should work:
- name: Adding to environment variables for user
  blockinfile:
    path: /home/user/.bashrc
    insertafter: EOF
    block: |
      export {{ item.key }}={{ item.value }}
    marker: "# {mark} {{ item.key }}"
  with_dict: "{{ enviornment_variables['PRODUCTION'] }}"
P.S.: The spelling error in "environment" literally took me 20+ minutes to identify!


How to add a callback-plugin to AWX docker

I installed the AWX Docker setup from here - https://github.com/ansible/awx. I am trying to add a callback plugin for a specific project as described here - https://docs.ansible.com/ansible-tower/latest/html/administration/tipsandtricks.html#using-callback-plugins-with-tower. I add these lines to Template -> EXTRA VARIABLES:
---
bin_ansible_callbacks: true
callback_plugins: /callback_plugins
stdout_callback: selective
It does not work.
I also added the directory /var/lib/awx/projects/test/callback_plugins/ to SETTINGS -> JOBS -> ANSIBLE CALLBACK PLUGINS - that doesn't work either.
Please tell me how to do this correctly, so that a custom plugin is picked up and starts working.
I was hitting the same problem; after some debugging I opened an issue on the AWX project: https://github.com/ansible/awx/issues/4149
In the meantime I applied a workaround that consists of creating a symlink for each callback plugin you want to use in the callback_plugins folder of your roles project.
For example, if you are using the ara project:
- name: Research for callbacks in virtualenv libs
  find:
    path: '{{ ansible_playbook_python|dirname|dirname }}/{{ item }}'
    file_type: file
    depth: 1
    patterns: '*.py'
    excludes: '__init__*'
  register: _internal__callbacks
  with_items:
    - lib/python3.6/site-packages/ara/plugins/callbacks

# TODO: prevent existing callbacks from being overwritten
- name: Create symlinks from virtualenv lib directory to local callback_plugins/
  file:
    src: '{{ item }}'
    dest: '{{ playbook_dir }}/callback_plugins/{{ item|basename }}'
    state: link
  with_items: "{{ _internal__callbacks.results|map(attribute='files')|flatten|map(attribute='path')|list }}"
It seems you have to use callbacks_enabled instead of callback_plugins. Put this example configuration in the /var/lib/awx/ansible.cfg file:
[defaults]
callback_whitelist = profile_tasks
Works on AWX 17.x.
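Note that callback_whitelist is the older name for the same setting; on ansible-core 2.11 and later the equivalent configuration (a sketch, assuming a current ansible-core) would be:

[defaults]
# callback_whitelist was renamed to callbacks_enabled in ansible-core 2.11
callbacks_enabled = profile_tasks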

Ansible: add items to single string conditionally

I'm currently working on an Ansible playbook for a PostgreSQL installation.
I'm trying to insert multiple values into shared_preload_libraries. The idea is to be able to insert them conditionally, and different roles must be able to append to this line without overwriting it.
Current solution:
- name: Check whether postgresql.conf contains "pg_stat_statements"
  command: grep "pg_stat_statements" "{{ pg_data }}/postgresql.conf"
  register: check_shared_libs
  check_mode: no
  ignore_errors: yes
  changed_when: no

- name: Modify postgresql.conf add pg_stat_statements shared library.
  replace:
    backup: yes
    dest: "{{ pg_data }}/postgresql.conf"
    regexp: "^#?(shared_preload_libraries = '.*)(')"
    replace: '\1,pg_stat_statements\2'
  when: check_shared_libs.rc != 0
  notify: restart postgres
which gives me something like this (the leading comma appears because the quoted list starts out empty): shared_preload_libraries = ',pg_stat_statements,powa,pg_stat_kcache,pg_qualstats,pg_wait_sampling'
Works fine, but I wonder if there's any better way to do it.
One way to avoid the command task is shown below. It comprises two tasks, because a remote file can't be read into a variable with a lookup plugin (lookups run on the control node).
So, in step 1 we read the whole file into a variable, and in step 2 we check whether the variable contains pg_stat_statements.
Code:
---
- name: test play
  hosts: localhost
  connection: local
  gather_facts: false
  become: yes
  vars:
    pg_stat_statements_exists: false
  tasks:
    - name: read file
      slurp:
        src: /tmp/postgresql.conf
      register: postgresql_file

    - name: check pg_stat_statements exists
      set_fact:
        pg_stat_statements_exists: true
      when: postgresql_file['content'] | b64decode is search('pg_stat_statements')

    - name: print variable that will control the replace task that follows
      debug:
        var: pg_stat_statements_exists
and adding your replace task with the modified when condition:
- name: Modify postgresql.conf add pg_stat_statements shared library.
  replace:
    backup: yes
    dest: "{{ pg_data }}/postgresql.conf"
    regexp: "^#?(shared_preload_libraries = '.*)(')"
    replace: '\1,pg_stat_statements\2'
  when: pg_stat_statements_exists
  notify: restart postgres
Hope it helps.
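A possible single-task alternative: the replace module uses Python regular expressions, so a negative lookahead can make the edit idempotent without any separate existence check. A sketch, untested, assuming the shared_preload_libraries line already exists in postgresql.conf:

- name: Append pg_stat_statements to shared_preload_libraries (idempotent)
  replace:
    backup: yes
    dest: "{{ pg_data }}/postgresql.conf"
    # (?!.*pg_stat_statements) makes the pattern fail when the value is already present
    regexp: "^#?(shared_preload_libraries = '(?!.*pg_stat_statements)[^']*)(')"
    replace: '\1,pg_stat_statements\2'
  notify: restart postgres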

Ansible - need to output all the hosts in the playbook into a configuration file

I'll try to make this brief... I'm setting up Ansible to write a PostgreSQL pg_hba.conf file, and what I want to do is permit any DB server to replicate to any other DB server, so that I don't have to reconfigure in the event of a failure. I want Ansible to insert a line for each host listed in the group "db". These entries must be CIDR types. So far I've only succeeded in getting each system to show its own CIDR in the file. I've looked extensively with no joy, but here's what I'm trying to use:
- name: Update the pg_hba.conf file
  lineinfile:
    path: '{{ pg_data }}/{{ pg_cluster_name }}/pg_hba.conf'
    regexp: 'hostssl replication'
    insertafter: 'hostssl replication'
    line: "hostssl replication rplctn_usr {{ hostvars[ '{{ item }}' ]['ansible_default_ipv4']['address'] }}/32 md5"
  with_items: groups['db']
  tags:
    - "pg_hba.conf"
Nothing I've done gets the {{ item }} variable to expand properly. Anyone?
Firstly, you need to reference the var to iterate through with braces:
with_items: "{{ groups['db'] }}"
Second, item is the var representing the value of each iteration. Inside {{ }} you can reference any vars directly without extra braces:
{{ hostvars[item]['ansible_default_ipv4']['address'] }}
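Putting both fixes together, the corrected task might look like this (a sketch, untested; the regexp is dropped, because lineinfile with regexp keeps only a single matching line, so each loop iteration would overwrite the previous host's entry):

- name: Update the pg_hba.conf file
  lineinfile:
    path: '{{ pg_data }}/{{ pg_cluster_name }}/pg_hba.conf'
    insertafter: 'hostssl replication'
    line: "hostssl replication rplctn_usr {{ hostvars[item]['ansible_default_ipv4']['address'] }}/32 md5"
  with_items: "{{ groups['db'] }}"
  tags:
    - "pg_hba.conf"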

Install all packages from a folder with Ansible 2.0

I have a folder where I place unmaintained Python packages, so I install them from zip files rather than from their repositories.
I am using Ansible 2.0, so the find module seems to be the way to do it.
So far I have been doing the following:
- name: Install unmaintained dependencies
  pip:
    name: "{{ my_project_app }}/requirements/{{ item }}"
    virtualenv: "{{ my_project_venv }}"
  with_items:
    - django-hijack-2.0.0.zip
    - django-image-cropping-django-19.zip
    - pisa-3.0.33.zip
Now I'm playing with the find module:
- name: Loading unmaintained dependencies
  find:
    paths: "{{ my_project_app }}/requirements"
    patterns: "*.zip"
  register: files_found

- debug:
    var: files_found
If I run the playbook above, I get the following output
ok: [192.168.1.212] => {
    "files_found": {
        "changed": false,
        "examined": 3,
        "files": [
            {
                ...
                "path": "/data/my_project/requirements/pisa-3.0.33.zip",
                ...
            },
            ...
        ],
        "matched": 3,
        "msg": ""
    }
}
I guess there must be a way to put everything together, but this is where I'm stuck.
I still don't get what you are trying to achieve here. Are you using the find module just because the pip module doesn't install packages from zip files?
For your find workaround, you can create a task that iterates over the results of the find task, using with_items: files_found.files and {{ item.path }} wherever you need the path:
- name: Install unmaintained dependencies
  pip:
    name: "{{ item.path }}"
    virtualenv: "{{ my_project_venv }}"
  with_items: "{{ files_found.files }}"
Also, instead of using find, you can try to make a loop using with_fileglob (note that fileglob is a lookup, so it matches files on the control machine, not on the remote host):

- pip:
    name: "{{ item }}"
    virtualenv: "{{ my_project_venv }}"
  with_fileglob:
    - "{{ my_project_app }}/requirements/*.zip"
Note, I didn't have time to test either of these solutions, or to ask more about what you were trying to achieve, but I hope they help with your problem.

Ansible command from inside virtualenv?

This seems like it should be really simple:
tasks:
  - name: install python packages
    pip: name=${item} virtualenv=~/buildbot-env
    with_items: [ buildbot ]

  - name: create buildbot master
    command: buildbot create-master ~/buildbot creates=~/buildbot/buildbot.tac
However, the command will not succeed unless the virtualenv's activate script is sourced first, and there doesn't seem to be a provision for that in the Ansible command module.
I've experimented with sourcing the activate script in .profile, .bashrc, .bash_login, etc., with no luck. Alternatively, there's the shell module, but it feels like kind of an awkward hack:
- name: create buildbot master
  shell: source ~/buildbot-env/bin/activate && \
         buildbot create-master ~/buildbot \
         creates=~/buildbot/buildbot.tac executable=/bin/bash
Is there a better way?
The better way is to use the full path to the installed script - it will run in its virtualenv automatically:
tasks:
  - name: install python packages
    pip: name={{ item }} virtualenv={{ venv }}
    with_items: [ buildbot ]

  - name: create buildbot master
    command: "{{ venv }}/bin/buildbot create-master ~/buildbot
              creates=~/buildbot/buildbot.tac"
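Another option is to leave the command bare and prepend the virtualenv's bin directory to PATH via the task's environment keyword. A sketch, assuming facts have been gathered so that ansible_env.PATH is defined:

- name: create buildbot master
  command: buildbot create-master ~/buildbot creates=~/buildbot/buildbot.tac
  environment:
    # the virtualenv's bin comes first, so its buildbot is found before any system copy
    PATH: "{{ venv }}/bin:{{ ansible_env.PATH }}"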
This is a genericized version of the wrapper method.
venv_exec.j2:
#!/bin/bash
# activate the virtualenv, then run whatever command was passed as arguments
source {{ venv }}/bin/activate
"$@"
And then the playbook:
tasks:
- pip: name={{ item }} virtualenv={{ venv }}
with_items:
- buildbot
- template: src=venv_exec.j2 dest={{ venv }}/exec mode=755
- command: "{{ venv }}/exec buildbot create-master {{ buildbot_master }}"
Here's a way to enable the virtualenv for an entire play; this example builds the virtualenv in one play, then starts using it the next.
Not sure how clean it is, but it works. I'm just building a bit on what mikepurvis mentioned here.
---
# Build virtualenv
- hosts: all
  vars:
    PROJECT_HOME: "/tmp/my_test_home"
    ansible_python_interpreter: "/usr/local/bin/python"
  tasks:
    - name: "Create virtualenv"
      shell: virtualenv "{{ PROJECT_HOME }}/venv"
             creates="{{ PROJECT_HOME }}/venv/bin/activate"

    - name: "Copy virtualenv wrapper file"
      synchronize: src=pyvenv
                   dest="{{ PROJECT_HOME }}/venv/bin/pyvenv"

# Use virtualenv
- hosts: all
  vars:
    PROJECT_HOME: "/tmp/my_test_home"
    ansible_python_interpreter: "/tmp/my_test_home/venv/bin/pyvenv"
  tasks:
    - name: "Guard code, so we are more certain we are in a virtualenv"
      shell: echo $VIRTUAL_ENV
      register: command_result
      failed_when: command_result.stdout == ""
pyvenv wrapper file:
#!/bin/bash
# activate the venv that lives next to this script, then run python with the passed arguments
source "$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/activate"
python "$@"
Just run the virtualenv's pip in a shell:
shell: ~/buildbot-env/bin/pip install ${item}
Works like a charm. I have no idea what the pip module does with virtualenvs, but it seems pretty useless.
As I commented above, I create a script, say it is called buildbot.sh:
source ~/buildbot-env/bin/activate
buildbot create-master [and more stuff]
Then run it on the remote with a task like this:
- name: Create buildbot master
  script: buildbot.sh
To me this still seems unnecessary, but it may be cleaner than running it in a shell command. Your playbook looks cleaner at the cost of not seeing immediately what the script does.
At least some modules do seem to support virtualenvs, as both django_manage and rax_clb already have a built-in virtualenv parameter. It may not be such a big step for Ansible to include a command-in-virtualenv sort of module.