Ansible - default to everything when no arguments are specified - deployment

I have a fairly large playbook that is capable of updating up to 10 services on a given host.
Let's say I have the services a, b, c, and d, and I'd like to be able to selectively update services by passing command line arguments, but default to updating everything when no arguments are passed. How can I do this in Ansible without dropping into arbitrary scripting?
Right now I have a when check on each service and define whether each service is true at playbook invocation. Given that I may have as many as 10 services, I can't write boolean logic to accommodate every possibility.
I was hoping there is a builtin like $# in bash that lists all arguments, so I could do a check along the lines of when: $#.length == 0
ansible-playbook deploy.yml -e "a=true b=true d=true"
when: a == "true"
when: b == "true"
when: c == "true"
when: d == "true"

I would suggest using tags. Let's say we have two services, for example nginx and fpm. Tag the tasks for nginx with nginx and those for fpm with fpm. Below is an example of task-level tagging; let's say the file is named play.yml:
- name: tasks for nginx
  service: name=nginx state=reloaded
  tags:
    - nginx

- name: tasks for php-fpm
  service: name=php-fpm state=reloaded
  tags:
    - fpm
Executing ansible-playbook play.yml will run both tasks by default. But if I change the command to
ansible-playbook play.yml --tags "nginx"
then only the task with the nginx tag is executed. Tags can also be applied at the play level or role level.
Play-level tagging would look like this:
- hosts: all
  remote_user: user
  tasks:
    - include: play1.yml
      tags:
        - play1
    - include: play2.yml
      tags:
        - play2
In this case, all tasks inside play1.yml inherit the tag play1, and the same goes for play2. When running ansible-playbook with the tag play1, only the tasks inside play1.yml are executed. If we don't specify any tag, all tasks from both play1 and play2 are executed.
Note: A task is not limited to just one tag.
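For completeness, a role-level sketch might look like this (the role name webserver is just an assumption for illustration); every task in the role then inherits the tag:
- hosts: all
  remote_user: user
  roles:
    - role: webserver
      tags:
        - nginx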

If you have a single play that you want to loop over the services, define that list in group_vars/all or somewhere else that makes sense:
services:
  - first
  - second
  - third
  - fourth
Then the tasks in your start_services.yml playbook can look like this:
- name: Ensure passed variables are in services list
  fail:
    msg: "{{ item }} not in services list"
  when: item not in services
  with_items: "{{ varlist | default(services) }}"

- name: Start services
  service:
    name: "{{ item }}"
    state: started
  with_items: "{{ varlist | default(services) }}"
Pass in varlist as a JSON array:
$ ansible-playbook start_services.yml --extra-vars='{"varlist":["first","third"]}'
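For reference, running the playbook with no extra vars leaves varlist undefined, so default(services) falls back to the full list and every service gets started:
$ ansible-playbook start_services.yml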


Having issue with delegate_to in playbook calling a role to only run on one host in a list

I have a playbook and only want to run this play on the first master node. I tried moving the list into the role, but that did not seem to work. Thanks for your help!
## master node only changes
- name: Deploy change kubernetes Master
  remote_user: tyboard
  become: true
  roles:
    - role: gd.kubernetes.master.role
      files_location: ../files
  delegate_to: "{{ groups['masters'][0] }}"
ERROR! 'delegate_to' is not a valid attribute for a Play
The error appears to be in '/mnt/win/kubernetes.playbook/deploy-kubernetes.yml': line 11, column 3, but may be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:

## master node only changes
- name: Deploy change kubernetes Master
  ^ here
In one playbook, create a new group with this host in the first play and use it in the second play. For example,
shell> cat playbook.yml
- name: Create group with masters.0
  hosts: localhost
  gather_facts: false
  tasks:
    - add_host:
        name: "{{ groups.masters.0 }}"
        groups: k8s_master_0

- name: Deploy change kubernetes Master
  hosts: k8s_master_0
  remote_user: tyboard
  become: true
  roles:
    - role: gd.kubernetes.master.role
      files_location: ../files
(not tested)
Fix the role name
If files_location is a variable that shall be used in the role's scope, put it into vars. For example:
roles:
  - role: gd.kubernetes.master.role
    vars:
      files_location: ../files
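As an alternative sketch (untested, and assuming the inventory group is really named masters, as in groups['masters'][0]), Ansible host patterns also accept positional subscripts, so a single play can target the first master directly without an add_host step:
- name: Deploy change kubernetes Master
  hosts: masters[0]
  remote_user: tyboard
  become: true
  roles:
    - role: gd.kubernetes.master.role
      vars:
        files_location: ../files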

How to skip a step for Argo workflow

I'm trying out Argo Workflows and would like to understand how to freeze a step. Let's say I have a 3-step workflow and the workflow failed at step 2. I'd like to resubmit the workflow from step 2 using the artifact from the successful step 1. How can I achieve this? I couldn't find guidance anywhere in the documentation.
I think you should consider using Conditions and Artifact passing in your steps.
Conditionals provide a way to affect the control flow of a workflow at runtime, depending on parameters. In this example the 'print-hello' template may or may not be executed depending on the input parameter 'should-print'. When submitted with
$ argo submit examples/conditionals.yaml
the step will be skipped, since 'should-print' will evaluate false. When submitted with
$ argo submit examples/conditionals.yaml -p should-print=true
the step will be executed, since 'should-print' will evaluate true.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: conditional-
spec:
  entrypoint: conditional-example
  arguments:
    parameters:
      - name: should-print
        value: "false"
  templates:
    - name: conditional-example
      inputs:
        parameters:
          - name: should-print
      steps:
        - - name: print-hello
            template: whalesay
            when: "{{inputs.parameters.should-print}} == true"
    - name: whalesay
      container:
        image: docker/whalesay:latest
        command: [sh, -c]
        args: ["cowsay hello"]
If you use conditions on each step, you will be able to start from whichever step you like by setting the appropriate condition.
Also have a look at the article Argo: Workflow Engine for Kubernetes, where the author explains the use of conditions with the coinflip example.
You can see many examples on their GitHub page.
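Applied to the resubmission case, a rough sketch (the run-step-1/run-step-2 parameter names and the reuse of whalesay are assumptions for illustration, not part of the official examples) would gate each step with its own parameter, so a rerun can skip the steps that already succeeded:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: resumable-
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: run-step-1
        value: "true"
      - name: run-step-2
        value: "true"
  templates:
    - name: main
      steps:
        - - name: step-1
            template: whalesay
            when: "{{workflow.parameters.run-step-1}} == true"
        - - name: step-2
            template: whalesay
            when: "{{workflow.parameters.run-step-2}} == true"
    - name: whalesay
      container:
        image: docker/whalesay:latest
        command: [sh, -c]
        args: ["cowsay hello"]
Resubmitting with argo submit resumable.yaml -p run-step-1=false would then skip the first step while still running the second.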

How to include a dictionary into a k8s_raw data field

I'm writing an Ansible playbook to insert a list of secrets into Kubernetes.
I'm using the k8s_raw syntax and I want to import this list from a group_vars file.
I can't find the right syntax to import the list of secrets into my data field.
playbook.yml
- hosts: localhost
  tasks:
    - name: Create a Secret object
      k8s_raw:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          data:
            "{{ secrets }}"
            SKRT: "c2trcnIK"
          metadata:
            name: "test"
            namespace: "namespace-test"
          type: Opaqueroot
  vars_files:
    - "varfile.yml"
varfile.yml
secrets:
  TAMAGOTCHI_CODE: "MTIzNAo="
  FRIDGE_PIN: "MTIzNAo="
First, what does it actually say when you attempt the above? It would help to have the result of your attempts.
Just guessing, but try moving vars_files to before the place where you use the variables. Also, be sure that your indentation is exactly right when you do.
- hosts: localhost
  vars_files:
    - /varfile.yml
  tasks:
    - name: Create a Secret object
      k8s_raw:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          data:
            "{{ secrets }}"
          metadata:
            name: "test"
            namespace: "namespace-test"
          type: Opaqueroot
Reference
side note: I would debug this immediately, without attempting the task. Remove your main task and, after trying to use vars_files, print the secrets variable directly with the debug module. This lets you fine-tune the syntax and keep fiddling with it until you get it right, without having to run and wait for the more complex play that follows. Reference.
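A minimal sketch of that debug-first check (reusing the same varfile.yml) could be:
- hosts: localhost
  vars_files:
    - varfile.yml
  tasks:
    - name: Print the imported dictionary to verify it loaded
      debug:
        var: secrets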
To import this list from a group_vars file
Put localhost into a group, for example a group called test:
> cat hosts
test:
  hosts:
    localhost:
Put the varfile.yml into the group_vars/test directory
$ tree group_vars
group_vars/
├── test
│   └── varfile.yml
Then running the playbook below
$ cat test.yml
- hosts: test
  tasks:
    - debug:
        var: secrets.TAMAGOTCHI_CODE

$ ansible-playbook -i hosts test.yml
gives:
PLAY [test] ***********************************
TASK [debug] **********************************
ok: [localhost] => {
"secrets.TAMAGOTCHI_CODE": "MTIzNAo="
}
PLAY RECAP *************************************
localhost: ok=1 changed=0 unreachable=0 failed=0
The problem was the SKRT: "c2trcnIK" field just under the "{{ secrets }}" line. I deleted it and now it works! Thank you all.
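For the record, if you ever do need an extra literal key alongside the imported dictionary, one option (a sketch using Jinja2's combine filter) is to merge the two instead of nesting both under data:
          data: "{{ secrets | combine({'SKRT': 'c2trcnIK'}) }}"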

Ansible use current hosts for task as a variable

I have the following code that connects to the logger service haproxies and drains the first logger VM.
Then a separate play connects to the logger list of hosts, where the first host has been drained, and does a service reload.
- name: Haproxy Warmup
  hosts: role_max_logger_lb
  tasks:
    - name: analytics-backend 8300 range
      haproxy: 'state=disabled host=maxlog-rwva1-{{ env }}-1.example.com backend=analytics-backend socket=/var/run/admin.sock'
      become: true
      when: warmup is defined and buildnum is defined
    - name: logger-backend 8200
      haproxy: 'state=disabled host=maxlog-rwva1-prod-1.example.com:8200 backend=logger-backend socket=/var/run/admin.sock'
      become: true
      when: warmup is defined and buildnum is defined

- name: Warmup Deploy
  hosts: "role_max_logger"
  serial: 1
  tasks:
    - shell: pm2 gracefulReload max-logger
      when: warmup is defined and buildnum is defined
    - pause: prompt="First host has been deployed to. Please verify the logs before continuing. Ctrl-c to exit, Enter to continue deployment."
      when: warmup is defined and buildnum is defined
This code is pretty bad and doesn't work when I try to expand it to do a rolling restart for several services with several haproxies. I'd need to somehow drain 33% of all the app VMs from the haproxy backend, then connect to a different host list and do the 33% reboot process there, then resume with the 34-66% slice of the draining list and the 34-66% slice of the reboot list, and so on.
- name: 33% at a time drain
  hosts: "role_max_logger_lb"
  serial: "33%"
  tasks:
    - name: analytics-backend 8300 range
      haproxy: 'state=disabled host=maxlog-rwva1-prod-1.example.com backend=analytics-backend socket=/var/run/admin.sock'
      become: true
      when: warmup is defined and buildnum is defined
    - name: logger-backend 8200
      haproxy: 'state=disabled host=maxlog-rwva1-prod-1.example.com:8200 backend=logger-backend socket=/var/run/admin.sock'
      become: true
      when: buildnum is defined and service is defined

- name: 33% at a time deploy
  hosts: "role_max_logger"
  serial: "33%"
  tasks:
    - shell: pm2 gracefulReload {{ service }}
      when: buildnum is defined and service is defined
    - pause: prompt="One third of machines in the pool have been deployed to. Enter to continue"
I could do this much more easily in Chef: just query the Chef server for all nodes registered in a given role and do all my logic in real Ruby. If it matters, the host lists I'm calling here are actually ripped from my Chef server and fed in as JSON.
I don't know the proper Ansible way of doing this without dropping into arbitrary scripting to do all the dirty work.
I was thinking maybe I could do something super hacky like this inside of a shell command in Ansible under the deploy, which might work if there is a way of pulling the host currently being processed out of the host list, like an Ansible equivalent of node['fqdn'] in Chef:
ssh maxlog-lb-rwva1-food-1.example.com 'echo "disable server logger-backend/maxlog-rwva1-food-1.example.com:8200" | socat stdio /run/admin.sock'
Or maybe there is a way I can wrap my entire thing in a serial 33% and include sub-plays that do things. Sort of like this, but again I don't know how to properly pass around a thirded list of my app servers within the sub-plays
- name: Deployer
  hosts: role_max_logger
  serial: "33%"

- include: drain.yml
- include: reboot.yml
Basically I don't know what I'm doing; I can think of a bunch of ways to try this, but they all seem terrible and overly obtuse. If I were to go down these hacky roads, I would probably be better off just writing a big shell script or actual Ruby to do this.
The official Ansible documentation I've read has overly simplified examples that don't really map to my situation.
Particularly this one, where the load balancer is on the same host as the app server:
- hosts: webservers
  serial: 5
  tasks:
    - name: take out of load balancer pool
      command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
      delegate_to: 127.0.0.1
http://docs.ansible.com/ansible/playbooks_delegation.html
I guess my questions are:
Is there an Ansible equivalent of Chef's node['fqdn'] to use the host currently being processed as a variable?
Am I just completely off the rails for how I'm trying to do this?
Is there an Ansible equivalent of Chef's node['fqdn'] to use the host currently being processed as a variable?
ansible_hostname or ansible_fqdn (both taken from the actual machine settings), or inventory_hostname (defined in the inventory file), depending on which you want to use.
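For example, a quick sketch that prints these for each host being processed (group name taken from the question):
- hosts: role_max_logger
  tasks:
    - name: Show which host this iteration is operating on
      debug:
        msg: "inventory_hostname={{ inventory_hostname }} fqdn={{ ansible_fqdn }}"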
As you correctly noted, you need to use delegation for this task.
Here is some pseudocode for you to start with:
- name: 33% at a time deploy
  hosts: role_max_logger
  serial: "33%"
  tasks:
    - name: take out of lb
      shell: take_out_host.sh --name={{ inventory_hostname }}
      delegate_to: "{{ item }}"
      with_items: "{{ groups['role_max_logger_lb'] }}"
    - name: reload backend
      shell: reload_service.sh
    - name: add back to lb
      shell: add_host.sh --name={{ inventory_hostname }}
      delegate_to: "{{ item }}"
      with_items: "{{ groups['role_max_logger_lb'] }}"
I assume that group role_max_logger defines servers with backend services to be reloaded and group role_max_logger_lb defines servers with load balancers.
This play takes all hosts from role_max_logger and splits them into 33% batches; then, for each host in the batch, it executes take_out_host.sh on each of the load balancers, passing the current backend hostname as a parameter; after all hosts from the current batch are disabled on the load balancers, the backend services are reloaded; after that, the hosts are added back to the LB as in the first task. This operation is then repeated for every batch.

How to make this Ansible chkconfig task idempotent?

I have an Ansible task like this in my playbook, to be run against a CentOS server:
- name: Enable services for automatic start
  action: command /sbin/chkconfig {{ item }} on
  with_items:
    - nginx
    - postgresql
This task shows as changed every time I run it. How do I make it pass the idempotency test?
The best option is to use enabled: yes with the service module:
- name: Enable services for automatic start
  service:
    name: "{{ item }}"
    enabled: yes
  with_items:
    - nginx
    - postgresql
Hope that helps.
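As a further optional tweak, if you also want the services running immediately rather than only enabled at boot, adding state: started keeps the task idempotent as well:
- name: Enable and start services
  service:
    name: "{{ item }}"
    enabled: yes
    state: started
  with_items:
    - nginx
    - postgresql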