Copy file to ansible host with custom variables substituted

I'm working on an ansible-playbook which should help to generate build agents for a continuous delivery pipeline. Among other issues, I'll need to install an oracle client on such an agent. I want to do something like
- name: "Provide response file"
copy: src=/custom.rsp dest=/opt/oracle
Within the custom.rsp file I've got some variables to be substituted. Normally, one could do it with a separate shell command like this:
- name: "Substitute Vars"
shell: "sed 's|<PARAMETER>|<VALUE>|g' -i /opt/oracle/custom.rsp"
I don't like it, though. There should be a more convenient way to do this. Can anybody give me a hint?

You want to be using a template rather than copying a static file.
Also, when using the copy or template modules, the dest parameter is a full path AND filename, not just a path. So if you want to end up with a copy of custom.rsp in the directory /opt/oracle then you need to do this:
- name: "Provide response file"
template: src=/custom.rsp dest=/opt/oracle/custom.rsp
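For completeness, a rough sketch of how the pieces fit together. The parameter names (oracle_home, db_sid) and their values are made up for illustration and are not taken from a real Oracle response file. The source file itself becomes a Jinja2 template, so any {{ variable }} placeholder in it is filled in from the play's variables at render time:

# /custom.rsp (now a Jinja2 template; hypothetical parameters)
ORACLE_HOME={{ oracle_home }}
DB_SID={{ db_sid }}

and a task that renders it, with the values defined as task vars:

- name: "Provide response file"
  vars:
    oracle_home: /opt/oracle/product/19c
    db_sid: ORCL
  template:
    src: /custom.rsp
    dest: /opt/oracle/custom.rsp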

I'm going to extend Bruce's answer with an example:
This is part of my inventory.yaml:
kafka_stage:
  children:
    kafka_with_zookeeper_stage:
    kafka_only_stage:
  vars:
    zookeeper_hosts: "kafka-stage01:2181,kafka-stage02:2181,kafka-stage03:2181"
kafka_with_zookeeper_stage:
  hosts:
    kafka-stage01:
      broker_id: 0
    kafka-stage02:
      broker_id: 1
  vars:
    services:
      kafka:
      zookeeper:
This is part of a configuration file:
# The id of the broker. This must be set to a unique integer for each broker.
broker.id={{ broker_id }}
# {{ zookeeper_hosts }}
advertised.listeners=PLAINTEXT://{{ ansible_host }}:9092
# {{ services }}
This task in a playbook:
- name: Copy to Host
  ansible.builtin.template:
    src: my_configfile.properties
    dest: /tmp/hejsan.properties
Gave me this on the remote host kafka-stage02:
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
# kafka-stage01:2181,kafka-stage02:2181,kafka-stage03:2181
advertised.listeners=PLAINTEXT://kafka-stage02:9092
# {'kafka': None, 'zookeeper': None}
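If the raw Python-dict rendering of services is not what you want, you can reshape it inside the template with Jinja2 filters. A small sketch; the filter choice here is just an illustration:

# in my_configfile.properties, render only the service names, comma separated
# {{ services.keys() | join(',') }}
# which would come out as: kafka,zookeeper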

Related

Ansible IP Address from Host to Variable and use in Playbook - Save Switch Config with Ansible

I want to save the startup and running config with Ansible. My script works, but only on one host. I need to change the name of the saved file, otherwise it will overwrite the old configuration.
---
- hosts: switches
  gather_facts: yes
  vars:
    ansible_network_os: icx
    ansible_become: True
    ansible_become_method: enable
  tasks:
    - name: Backup Config Files
      icx_command:
        commands:
          - copy startup-config tftp 192.168.10.5 Ansible-startup-config.cfg
          - copy running-config tftp 192.168.10.5 Ansible-running-config.cfg
Now I want to have the ip address, time and date in the name so that they will not be overwritten when I start the script again.
I'm thinking of something like this as a filename:
192.168.9.13-2021-04-08-18:25-startup-config.cfg
or a consecutive number.
How can I do this?
The rough idea is below.
I'm not 100% sure facts gathered on ios populate ansible_default_ipv4. You will have to check that point and find the correct variable if needed. Note also that you may have to run the icx_facts module explicitly to get all the info from your device.
You can run the following ad-hoc commands to browse all available facts and choose the correct one if needed:
ansible -i your_inventory some_host -m setup
ansible -i your_inventory some_host -m icx_facts
You can debug the ansible_date_time variable to see if another key suits your needs better for creating a timestamp; below I reconstructed exactly the pattern from your question.
Of course, you need to gather facts for these vars to be available (gather_facts: true on your play meets that requirement).
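For example, a quick throwaway task to inspect everything that ansible_date_time contains:

- name: Show available date/time facts
  debug:
    var: ansible_date_time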
And here is the sample task (untested, I don't have an ios device available for that)
- name: Backup Config Files
  vars:
    ip: "{{ ansible_default_ipv4.address }}"
    stamp: "{{ ansible_date_time.date }}-{{ ansible_date_time.hour }}:{{ ansible_date_time.minute }}"
    prefix: "{{ ip }}-{{ stamp }}"
  icx_command:
    commands:
      - "copy startup-config tftp 192.168.10.5 {{ prefix }}-startup-config.cfg"
      - "copy running-config tftp 192.168.10.5 {{ prefix }}-running-config.cfg"

How to set a variable from another yaml file in azure-pipeline.yml

I have an environment.yml shown as follows. I would like to read out the content of the name variable (core-force) and set it as the value of a global variable in my azure-pipeline.yml file. How can I do it?
name: core-force
channels:
  - conda-forge
dependencies:
  - click
  - Sphinx
  - sphinx_rtd_theme
  - numpy
  - pylint
  - azure-cosmos
  - python=3
  - flask
  - pytest
  - shapely
In my azure-pipeline.yml file I would like to have something like:
variables:
  tag: # the value of name from environment.yml, i.e. 'core-force'
Please check this example:
File: vars.yml
variables:
  favoriteVeggie: 'brussels sprouts'
File: azure-pipelines.yml
variables:
- template: vars.yml  # Template reference
steps:
- script: echo My favorite vegetable is ${{ variables.favoriteVeggie }}.
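Note that once a template reference appears under variables, the whole variables block has to use the sequence form shown above (entries like - template: ... or - name: ... / value: ...) rather than the plain key: value mapping.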
Please note that variables are simple strings; if you want to use a list, you may need to do some workaround in PowerShell at the place where you want to use a value from that list.
If you don't want to use the template functionality shown above, you need to do the following (a rough sketch follows this list):
- create a separate job/stage
- define a step there to read the environment.yml file and set variables using the REST API or Azure CLI
- create another job/stage and move your current build definition there
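As an illustration of that approach, here is a rough sketch that uses the task.setvariable logging command rather than the REST API; the job and step names are made up, and it assumes environment.yml sits at the repository root:

jobs:
- job: read_env
  steps:
  - bash: |
      # pull 'core-force' out of the 'name:' line of environment.yml
      TAG=$(grep '^name:' environment.yml | awk '{print $2}')
      echo "##vso[task.setvariable variable=tag;isOutput=true]$TAG"
    name: setTag
- job: build
  dependsOn: read_env
  variables:
    tag: $[ dependencies.read_env.outputs['setTag.tag'] ]
  steps:
  - script: echo The tag is $(tag)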
I found this topic on developer community where you can read:
Yaml variables have always been string: string mappings. The doc appears to be currently correct, though we may have had a bug when last you visited.
We are preparing to release a feature in the near future to allow you to pass more complex structures. Stay tuned!
But I don't have more info about this.
Global variables should be stored in a separate template file. Ideally this file would live in a separate repo that other repos can refer to.

How can I get awx to use azure_rm.yml instead of the ini version?

I need to use the "plugin/azure_rm.yml" version of azure_rm instead of the older/deprecated "script/azure_rm.ini" to gather dynamic inventory in Azure, in particular because we need VMs from scalesets to be included in inventory. How can I do this?
You can use an inventory script, which (again) calls ansible-inventory, and have it create the config file via a heredoc:
#!/usr/bin/env bash
cat > azure_rm.yml <<HEREDOC
---
plugin: azure_rm
include_vmss_resource_groups:
  - '*'
hostvar_expressions:
  ansible_host: private_ipv4_addresses | first
plain_host_names: true
keyed_groups:
  # places each host in a group named 'tag_(tag name)_(tag value)' for each tag on a VM.
  - prefix: tag
    key: tags
  # places each host in a group named 'azure_loc_(location name)', depending on the VM's location
  - prefix: azure_loc
    key: location
  # group by platform (to copy prefix from ec2.py), eg: platform_windows
  - prefix: platform
    key: os_disk.operating_system_type
HEREDOC
ansible-inventory -i azure_rm.yml --list
rm azure_rm.yml
So it's literally having ansible-inventory call ansible-inventory, but with a different argument. Note that in order to get the inventory for the correct subscription, you have to create a copy of the credential used, with the desired subscription id; it doesn't appear that you can override AZURE_SUBSCRIPTION_ID via the yml environment param.

Ansible passing variables between host contexts

As the title says, I'd like to be able to pass a variable that is registered under one host group to another, but I'm not sure how to do that and I couldn't find anything relevant in the variables documentation http://docs.ansible.com/ansible/playbooks_variables.html
This is a simplified example of what I am trying to do. I have a playbook that calls many different groups and checks where a symlink points. I'd like to be able to report all of the symlink targets to the console at the end of the play.
The problem is the registered value is only valid under the host group that it was defined in. Is there a proper way of exporting these variables?
---
- hosts: max_logger
  tasks:
    - shell: ls -la /home/ubuntu/apps/max-logger/active | awk -F':' '{print $NF}'
      register: max_logger_old_active

- hosts: max_data
  tasks:
    - shell: ls -la /home/ubuntu/apps/max-data/active | awk -F':' '{print $NF}'
      register: max_data_old_active

- hosts: "localhost"
  tasks:
    - debug: >
        msg="The old max_logger build is {{ max_logger_old_active.stdout }}
        The old max_data build is {{ max_data_old_active.stdout }}"
You don't need to pass anything here (you just need to access it). Registered variables are stored as host facts and kept in memory for the duration of the whole playbook run, so you can access them from all subsequent plays.
This can be achieved using the magic variable hostvars.
However, you need to refer to a host name, which doesn't necessarily match the host group name (e.g. max_logger) that you posted in the question:
- hosts: "localhost"
tasks:
- debug: >
msg="The old max_logger build is {{ hostvars['max_logger_host'].max_logger_old_active.stdout }}
The old max_data build is {{ hostvars['max_data_host'].max_data_old_active.stdout }}"
You can also write hostvars['max_data_host']['max_data_old_active']['stdout'].
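If you'd rather not hardcode an inventory hostname, you can look one up from the group via the groups magic variable. A small sketch, assuming each group contains at least one host:

- hosts: "localhost"
  tasks:
    - debug:
        msg: "The old max_logger build is {{ hostvars[groups['max_logger'][0]].max_logger_old_active.stdout }}"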

How to dynamically map a service name to an ENV var

Example:
my-server:
  image: my-server:latest
  ports:
    - 1234:1234
proxy:
  image: lb:latest
  environment:
    - BACKEND=${VAR}??? # must resolve to 'my-server'
The server name can be changed to any name, but the proxy has an entry-point script where the variable will be substituted into the BACKEND config.
You can use a .env file to define your variable. This file will be placed in the same directory as your docker-compose.yml file.
When you run docker-compose, it will read this value and use it. Using your example, your .env file would look something like this:
VAR=my-server
and, the line:
- BACKEND=${VAR}??? # must resolve to 'my-server'
would become just:
- BACKEND=${VAR}
or
BACKEND: ${VAR}
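Putting it together, a minimal sketch; the image names come from the question, and docker-compose config just prints the file with the substitution applied so you can check the result without starting anything:

# .env
VAR=my-server

# docker-compose.yml
services:
  my-server:
    image: my-server:latest
    ports:
      - 1234:1234
  proxy:
    image: lb:latest
    environment:
      - BACKEND=${VAR}

# verify the substitution:
# docker-compose config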