Run an Ansible task only if a tag is NOT specified

Say I want to run a task only when a specific tag is NOT in the list of tags supplied on the command line, even if other tags are specified. Of the three attempts below, only the last one works as I expect in all situations:
- hosts: all
  tasks:
    - debug:
        msg: "not TAG (won't work if other tags specified)"
      tags: not TAG
    - debug:
        msg: "always, but not if TAG specified (doesn't work; always runs)"
      tags: always,not TAG
    - debug:
        msg: 'ALWAYS, but not if TAG in ansible_run_tags'
      when: "'TAG' not in ansible_run_tags"
      tags: always
Try it with different CLI options and you'll hopefully see why I find this a bit perplexing:
ansible-playbook tags-test.yml -l HOST
ansible-playbook tags-test.yml -l HOST -t TAG
ansible-playbook tags-test.yml -l HOST -t OTHERTAG
Questions: (a) is that expected behavior? and (b) is there a better way or some logic I'm missing?
I'm surprised I had to dig into the (undocumented, AFAICT) variable ansible_run_tags.
Amendment: It was suggested that I post my actual use case. I'm using Ansible to drive system updates on Debian-family systems. At the end I want to notify if a reboot is required, unless the tag reboot was supplied, in which case the host should reboot (and Ansible should wait for it to come back up). Here is the relevant snippet:
- name: check and perhaps reboot
  block:
    - name: Check if a reboot is required
      stat:
        path: /var/run/reboot-required
        get_md5: no
      register: reboot
      tags: always,reboot
    - name: Alert if a reboot is required
      fail:
        msg: "NOTE: a reboot is required to finish updates."
      when:
        - ('reboot' not in ansible_run_tags)
        - reboot.stat.exists
      tags: always
    - name: Reboot the server
      reboot:
        msg: rebooting after Ansible applied system updates
      when: reboot.stat.exists or ('force-reboot' in ansible_run_tags)
      tags: never,reboot,force-reboot
I think my original question(s) still have merit, but I'm also willing to accept alternative methods of accomplishing this same functionality.

For completeness, and since only @paul-sweeney has offered any alternative solution, I'll answer my own question with my current best solution and let people pick / up-vote their favorite:
---
- name: run only if 'TAG' not specified
  debug:
    msg: 'ALWAYS, but not if TAG in ansible_run_tags'
  when: "'TAG' not in ansible_run_tags"
  tags: always

I know it's an old(ish) question, but I had a similar requirement.
It's probably something best implemented another way ... but ... sometimes it can be useful.
I'd achieve it by setting a fact if the tag IS specified, then outputting the message only if the fact is not set, something like:
---
- name: "test task runs only if tag missing"
  hosts: all
  tasks:
    - name: "suppress message if tag given"
      set_fact: suppress_message=yes
      tags: reboot,never
    - name: "message"
      debug:
        msg: "You didn't say 'reboot'"
      when: suppress_message is not defined

I think we have states for controlling (e.g. started, restarted, stopped), states for installing (present, absent), and components (webserver, db, ...).
Ansible lacks a good separation of those three dimensions, and mixing all three in a single tag system leads to confusion.
For example, if you have a 'webserver' tag and a 'db' tag, you may want to restart the DB but not the webserver using a 'restart' tag.
But that won't work if the restart tasks of the DB and the webserver sit in the same tasks file with the same 'restart' tag, because the 'restart' tag will then restart both the DB and the webserver.
So you will probably have to separate the webserver and DB tasks into two files and apply the tag at the level of the include.
Using tags means you have a tree of options, not a matrix of options.
I like the tag concept, but the fact that tags cannot be used in conditional expressions makes them less appealing.
What I recommend is to declare the tags in a role but map them into variables as a first task, so that the 'restart' and 'db' tags become boolean variables in the role and you can use when: instead of tags: (see the sketch below).
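A minimal sketch of that tag-to-variable mapping, assuming tags named 'restart' and 'db' (the variable names and the postgresql service are illustrative, not from the original answer):
- name: Map requested tags to boolean variables
  set_fact:
    want_restart: "{{ 'restart' in ansible_run_tags }}"
    want_db: "{{ 'db' in ansible_run_tags }}"
  tags: always

# Later tasks combine the dimensions with when: instead of tags:
- name: Restart the DB only when both 'restart' and 'db' were requested
  service:
    name: postgresql   # assumed service name, for illustration only
    state: restarted
  when: want_restart | bool and want_db | bool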

ansible-playbook has a --skip-tags option. The example from the docs is:
ansible-playbook example.yml --skip-tags "packages"
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html
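Applied to the playbook from the question, that would look something like:
ansible-playbook tags-test.yml -l HOST --skip-tags TAG
This runs everything except the tasks tagged TAG, rather than making the play itself aware of which tags were requested.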

How to use conditions in OpenAPI Generator's Mustache templates?

I'm using the OAS3 generator for Java as a Maven plugin to generate POJOs, controllers, delegates, etc. for my APIs, with the Mustache templates from the openapi-generator repository: https://github.com/OpenAPITools/openapi-generator/blob/master/modules/openapi-generator/src/main/resources/JavaSpring/apiController.mustache
I'm trying to edit this template so that the @Controller annotation is generated only if a condition is met. I've searched for multiple solutions for this and one of them was using vendorExtensions.
I've made the following contract with x-generateController vendorExtension:
openapi: 3.0.0
info:
  title: User API
  description: API for user changes
  contact:
    name: xxx
    url: xxx
    email: xxx
  license:
    name: xxx
    url: xxx
  version: 1.0.0
tags:
  - name: user
    x-generateController: True
paths:
  /users:
    ...
And then in the Mustache template file I have put the following:
{{#vendorExtensions.x-generateController}}
@Controller("{{classname}}")
{{/vendorExtensions.x-generateController}}
The generator works just fine without this condition, but it seems it doesn't take x-generateController into account. In fact, if I try to just place it as a comment like this:
// {{vendorExtensions.x-generateController}}
I get only "// " and an empty space. I've also tried putting it at the endpoint level instead of the info level, and the problem is the same.
Is there anything more that I should've done in the configuration? Is there any alternative for a condition in the Mustache template?

How do I modify existing jobs to switch owner?

I installed Rundeck v3.3.5 (on CentOS 7 via RPM) to replace an old Rundeck instance that was decommissioned. I did the export/import of projects (which worked brilliantly) while connected to the new server as the default admin user. The imported jobs run properly on the correct schedule. I subsequently configured the new server to use LDAP authentication and configured ACLs for users/roles. That also works properly.
However, I see an error like this in the service.log:
ERROR services.NotificationService - Error sending notification email to foo@bar.com for Execution 9358 Error executing tag <g:render>: could not initialize proxy [rundeck.Workflow#9468] - no Session
My thought is to switch job owners from admin to a user that exists in LDAP. I mean, I would like to switch job owners regardless, but I'm also hoping it addresses the error.
Is there a way in the web interface or using rd that I can bulk-modify jobs to switch the owner?
It turns out that the error in the log was caused by notification settings in an included job. I didn't realize that notifications were configured on the parameterized shared job definition, but they were; removing the notification settings stopped the error from being added to /var/log/rundeck/service.log.
To illustrate the problem, here are chunks of YAML I've edited to show just the important parts. Here's the common job:
- description: Do the actual work with arguments passed
  group: jobs/common
  id: a618ceb6-f966-49cf-96c5-03a0c2efb9d8
  name: do_the_work
  notification:
    onstart:
      email:
        attachType: file
        recipients: ops@company.com
        subject: Actual work being started
  notifyAvgDurationThreshold: null
  options:
  - enforced: true
    name: do_the_job
    required: true
    values:
    - yes
    - no
    valuesListDelimiter: ','
  - enforced: true
    name: fail_a_lot
    required: true
    values:
    - yes
    - no
    valuesListDelimiter: ','
  scheduleEnabled: false
  sequence:
    commands:
    - description: The actual work
      script: |-
        #!/bin/bash
        echo ${RD_OPTION_DO_THE_JOB} ${RD_OPTION_FAIL_A_LOT}
    keepgoing: false
    strategy: node-first
  timeout: '60'
  uuid: a618ceb6-f966-49cf-96c5-03a0c2efb9d8
And here's the job that calls it (the one that is scheduled and causes an error to show up in the log when it runs):
- description: Do the job
  group: jobs/individual
  name: do_the_job
  ...
  notification:
    onfailure:
      email:
        recipients: ops@company.com
        subject: '[Rundeck] Failure of ${job.name}'
  notifyAvgDurationThreshold: null
  ...
  sequence:
    commands:
    - description: Call the job that does the work
      jobref:
        args: -do_the_job yes -fail_a_lot no
        group: jobs/common
        name: do_the_work
If I remove the notification settings from the common job, the error in the log goes away. I'm not sure whether sending notifications from an included job is simply not supported. It would be useful to me if it were supported, so I could keep notification settings in a single location. However, I can understand why it presents a problem for the scheduler/executor.
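For reference, the fix was simply to drop the notification block from the common job, something like this (a sketch; all other keys stay as in the original export):
- description: Do the actual work with arguments passed
  group: jobs/common
  id: a618ceb6-f966-49cf-96c5-03a0c2efb9d8
  name: do_the_work
  # notification: block removed; the calling job keeps its own onfailure notification
  ...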

Unable to view the contents of a file on Ansible managed nodes

I'm trying to view the contents of a file on the managed nodes and the control node. The syntax works fine for localhost (172.17.254.200) but not for the remote hosts. Below is the task I have written using the lookup / query plugin; can you please suggest a fix?
---
- name: Report Test
  hosts: all
  roles:
    - patching
  tasks:
    - name: Display the Pre and Post check Differences
      debug:
        msg: "{{ query('file', '/tmp/check/{{ inventory_hostname }}_Comparison') }}"
Below is the output
TASK [patching : Display the Pre and Post check Differences] ***********************************************************************************************************
ok: [172.17.254.200] =>
  msg:
  - |-
    free_m - YES
    sysctl_all - YES
    uptime - YES
[WARNING]: Unable to find '/tmp/check/172.17.254.207_Comparison' in expected paths (use -vvvvv to see paths)
fatal: [172.17.254.207]: FAILED! =>
  msg: 'An unhandled exception occurred while running the lookup plugin ''file''. Error was a <class ''ansible.errors.AnsibleError''>, original message: could not locate file in lookup: /tmp/check/172.17.254.207_Comparison'
[WARNING]: Unable to find '/tmp/check/172.17.254.208_Comparison' in expected paths (use -vvvvv to see paths)
fatal: [172.17.254.208]: FAILED! =>
  msg: 'An unhandled exception occurred while running the lookup plugin ''file''. Error was a <class ''ansible.errors.AnsibleError''>, original message: could not locate file in lookup: /tmp/check/172.17.254.208_Comparison'
Lookups are executed on the Ansible controller (as pointed out by @Vladimir Botka). If you just want to view the contents of a file on the remote hosts, you can cat the file through Ansible and debug the stdout_lines.
- command: "cat /tmp/check/{{ inventory_hostname }}_Comparison"
  register: file_cat
  changed_when: false

- debug:
    var: file_cat.stdout_lines
lookup and query "execute and are evaluated on the Ansible control machine."
Use slurp. Quoting:
This module returns an ‘in memory’ base64 encoded version of the file, take into account that this will require at least twice the RAM as the original file size.
For larger files use fetch. Quoting:
It is used for fetching files from remote machines and storing them locally in a file tree, organized by hostname.
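For example, a minimal slurp-based sketch reusing the path from the question (the register variable name is illustrative):
- name: Read the comparison file from the managed node
  slurp:
    src: "/tmp/check/{{ inventory_hostname }}_Comparison"
  register: comparison_file   # illustrative name

- name: Display the Pre and Post check Differences
  debug:
    msg: "{{ comparison_file.content | b64decode }}"
Note that the decoded content comes back as a single string; split it on newlines if you want line-by-line output.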

Visualize Jobber tasks on ELK (via Filebeat)

A Jobber Docker container (running periodic tasks) writes to stdout, which is captured by Filebeat (with the Docker containers autodiscovery flag on) and then sent to Logstash (within an ELK stack) or to Elasticsearch directly.
In Kibana, the document then looks like this:
@timestamp Jan 20, 2020 @ 20:15:07.752
...
agent.type filebeat
container.image.name jobber_jobber
...
message        {
  "job": {
    "command": "curl http://my.service/run",
    "name": "myperiodictask",
    "status": "Good",
    "time": "0 */5 * * * *"
  },
  "startTime": 1579540500,
  "stdout": "{\"startDate\":\"2020-01-20T16:35:00.000Z\",\"endDate\":\"2020-01-20T17:00:00.000Z\",\"zipped\":true,\"size\":3397}",
  "succeeded": true,
  "user": "jobberuser",
  "version": "1.4"
}
...
Note: the 'message' field above is a plain string containing a JSON object; it is pretty-printed here for readability.
My goal is to be able to query Elasticsearch on the message fields, so that I can, for instance, filter by Jobber task.
How can I make that happen?
I know Filebeat uses plugins and the container tags to apply this or that filter: are there any for Jobber? If not, how can I do this?
Even better would be to be able to exploit the fields of the Jobber task result (under the 'stdout' field)! Could you please direct me to ways to implement that?
Filebeat provides processors to handle such tasks.
Below is a configuration that handles the needs "decode the JSON in the 'message' field" and "decode the JSON in the 'stdout' field within it" (both using the decode_json_fields processor), plus other Jobber-related needs.
Note that the example filters the events going through Filebeat by a 'custom-tag' label given to the Docker container hosting the Jobber process. The docker.container.labels.custom-tag: jobber condition should be replaced according to your use case.
filebeat.yml:
processors:
  # === Jobber events processing ===
  - if:
      equals:
        docker.container.labels.custom-tag: jobber
    then:
      # Drop Jobber events which are not job results
      - drop_event:
          when:
            not:
              regexp:
                message: "{.*"
      # Json-decode event's message part
      - decode_json_fields:
          when:
            regexp:
              message: "{.*"
          fields: ["message"]
          target: "jobbertask"
      # Json-decode message's stdout part
      - decode_json_fields:
          when:
            has_fields: ["jobbertask.stdout"]
          fields: ["jobbertask.stdout"]
          target: "jobbertask.result"
      # Drop event's decoded fields
      - drop_fields:
          fields: ["message"]
      - drop_fields:
          when:
            has_fields: ["jobbertask.stdout"]
          fields: ["jobbertask.stdout"]
The decoded fields are placed under the "jobbertask" field. This avoids index-mapping collisions on the root fields. Feel free to replace "jobbertask" with any other field name, taking care to avoid mapping collisions.
In my case, this works whether Filebeat addresses the events to Logstash or to Elasticsearch directly.
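With these processors, the indexed event should end up with fields roughly like the following (a sketch derived from the sample message above; the exact layout depends on your mapping):
jobbertask:
  job:
    command: curl http://my.service/run
    name: myperiodictask
    status: Good
    time: '0 */5 * * * *'
  startTime: 1579540500
  succeeded: true
  user: jobberuser
  version: '1.4'
  result:            # decoded from the former jobbertask.stdout
    startDate: '2020-01-20T16:35:00.000Z'
    endDate: '2020-01-20T17:00:00.000Z'
    zipped: true
    size: 3397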

How to deploy Symfony2 - my dev env works but not prod

I have read the cookbook on deploying my Symfony2 app to a production environment. It works great in dev mode, but in prod mode I first couldn't sign in (it said bad credentials, even though I signed in with those very credentials in dev mode), and later, after an extra run of clearing and warming up the prod cache, I just get an HTTP 500 from my prod route.
I had a look in the config files and wonder if this has anything to do with it:
config_dev.yml:
imports:
  - { resource: config.yml }
framework:
  router: { resource: "%kernel.root_dir%/config/routing_dev.yml" }
  profiler: { only_exceptions: false }
web_profiler:
  toolbar: true
  intercept_redirects: false
monolog:
  handlers:
    main:
      type: stream
      path: %kernel.logs_dir%/%kernel.environment%.log
      level: debug
    firephp:
      type: firephp
      level: info
assetic:
  use_controller: true
config_prod.yml:
imports:
  - { resource: config.yml }
#doctrine:
#  orm:
#    metadata_cache_driver: apc
#    result_cache_driver: apc
#    query_cache_driver: apc
monolog:
  handlers:
    main:
      type: fingers_crossed
      action_level: error
      handler: nested
    nested:
      type: stream
      path: %kernel.logs_dir%/%kernel.environment%.log
      level: debug
I also noticed that there is a routing_dev.yml but no routing_prod; however, the prod environment works great on my localhost, so...?
In your production environment, when you run the app/console cache:warmup command, make sure you run it like this: app/console cache:warmup --env=prod --no-debug
Also remember that the command will warm up the cache as the current user, so all files will be owned by that user and not by the web server user (e.g. www-data). That is probably why you get a 500 server error. After you warm up the cache, run this: chown -R www-data.www-data app/cache/prod (be sure to replace www-data with your web server user).
Make sure your parameters.ini file has all the proper configs in place, since it's common for this file not to be checked in to whatever code repository you might be using. Or (and I've even done this) it's possible to simply forget to copy parameters from dev into the prod parameters.ini file.
You'll also need to look in app/logs/prod.log to see what happens when you attempt to log in.
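Putting those steps together, a prod deployment run might look roughly like this (the www-data user and paths are assumptions; adjust to your setup):
app/console cache:clear --env=prod --no-debug
app/console cache:warmup --env=prod --no-debug
chown -R www-data.www-data app/cache/prod   # replace www-data with your web server user
tail -f app/logs/prod.log                   # watch what happens when you try to log in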