Is there a way to compare the current time (using ansible_date_time) with another date (the AWS EC2 launch time), so I can tell whether it happened in the last hour?
I have the following var ec2_launch_time: 2018-01-04T15:57:52+00:00
I know that I can compare days with the following method (from here and here):
"{{ (ansible_date_time.iso8601[:19] | to_datetime('%Y-%m-%dT%H:%M:%S') - ec2_launch_time[:19] | to_datetime('%Y-%m-%dT%H:%M:%S')).days }}"
If I get 0, I know that it happened on the same day, but I'm looking for a way to get true or false depending on whether the difference between the dates is within one hour (on the same day).
Is there a filter for this, or an elegant way to write it, or can it only be done with shell (like this)?
By subtracting two dates, you get a timedelta Python object.
You can use the total_seconds() method to get the exact difference in seconds. Add some basic arithmetic to get the number of full hours:
---
- hosts: localhost
  gather_facts: no
  connection: local
  vars:
    date_later: 2018-01-05 08:30:00
    date_earlier: 2018-01-05 06:50:00
  tasks:
    - debug:
        msg: "{{ ( (date_later - date_earlier).total_seconds() / 3600 ) | int }}"
returns:
ok: [localhost] => {
"msg": "1"
}
Use the above expression in the condition you want.
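Applied to the original question, a minimal sketch (assuming ec2_launch_time is a string in the format shown above and that both times are UTC) that runs a task only if the launch happened within the last hour:

- hosts: localhost
  connection: local
  gather_facts: true          # provides ansible_date_time
  vars:
    ec2_launch_time: "2018-01-04T15:57:52+00:00"
  tasks:
    - name: Runs only if the instance was launched within the last hour
      debug:
        msg: "Launched less than an hour ago"
      when: >
        ((ansible_date_time.iso8601[:19] | to_datetime('%Y-%m-%dT%H:%M:%S')
          - ec2_launch_time[:19] | to_datetime('%Y-%m-%dT%H:%M:%S')).total_seconds()) < 3600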
I am trying to quickly find all folders named in a yyyymmdd_hhmmss format between two dates and times. These dates and times are variables set on user input.
E.g., all folders between
20221231_120000
20230101_235920
It is not a requirement for me that all the dates/times being looked for are valid.
Note that the 'age' of the folders does not match their names.
I have looked at regex but it seems like a complex solution for variable dates/times.
I have looked at Ansible find module patterns, but they are incredibly slow because the find runs once for every sequential number, taking about 1 second per checked number.
For example:
- name: Find folders matching dates and times
  vars:
    startdate: "20230209"
    enddate: "20230209"
    starttime: "120000"
    endtime: "130000"
  ansible.builtin.find:
    paths:
      - "/folderstocheck/"
    file_type: directory
    patterns: "{{ item[0:8] }}_{{ item[8:-1] }}"
  with_sequence: start={{ startdate + starttime }} end={{ enddate + endtime }}
  register: found_files
Takes approximately 167 minutes to run
Regarding
Note that the 'age' of the folders does not match their names.
I recommend aligning the folders' access and modification times with their names, so that simple OS functions or Ansible modules like stat can be used. That would make any processing a lot easier.
How to do that? I have a somewhat similar use case, Change creation time of files (RPM) from download time to build time, which shows the idea and how one could achieve it.
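As a rough sketch of that idea (my own illustration, not part of the referenced answer), the directory names could be fed into the file module's modification_time parameter, whose default format is %Y%m%d%H%M.%S; it assumes the find result registered as result in the example playbook further below, and that every directory name is a valid 'yyyymmdd_hhmmss' timestamp:

    # sketch only: sets each directory's mtime from its own name,
    # e.g. '20221231_120000' becomes '202212311200.00'
    - name: Align directory mtime with the directory name
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        modification_time: "{{ (item | basename | replace('_', ''))[:12] }}.{{ (item | basename | replace('_', ''))[12:] }}"
      loop: "{{ result.files | map(attribute='path') | list }}"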
Given some test directories as input
:~/test$ tree 202*
20221231_110000
20221231_120000
20221231_130000
20221232_000000
20230000_000000
20230101_000000
20230101_010000
20230101_020000
20230101_030000
20230101_120000
20230101_130000
a minimal example playbook
---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    FROM: "20221231_120000"
    TO: "20230101_120000"
  tasks:

    - name: Get an unordered list of directories with pattern 'yyyymmdd_hhmmss'
      find:
        path: "/home/{{ ansible_user }}/test/"
        file_type: directory
        use_regex: true
        patterns: "^[1-2]{1}[0-9]{7}_[0-9]{6}"  # could be made more specific
      register: result

    - name: Order list
      set_fact:
        dir_list: "{{ result.files | map(attribute='path') | map('basename') | community.general.version_sort }}"

    - name: Show directories between
      debug:
        msg: "{{ item }}"
      when: item is version(FROM, '>=') and item is version(TO, '<=')  # means between
      loop: "{{ dir_list }}"
will result in an output of
TASK [Get an unordered list of directories with pattern 'yyyymmdd_hhmmss'] ******
ok: [localhost]
TASK [Order list] **************************
ok: [localhost]
TASK [Show directories between] ************
ok: [localhost] => (item=20221231_120000) =>
msg: '20221231_120000'
ok: [localhost] => (item=20221231_130000) =>
msg: '20221231_130000'
ok: [localhost] => (item=20221232_000000) =>
msg: '20221232_000000'
ok: [localhost] => (item=20230000_000000) =>
msg: '20230000_000000'
ok: [localhost] => (item=20230101_000000) =>
msg: '20230101_000000'
ok: [localhost] => (item=20230101_010000) =>
msg: '20230101_010000'
ok: [localhost] => (item=20230101_020000) =>
msg: '20230101_020000'
ok: [localhost] => (item=20230101_030000) =>
msg: '20230101_030000'
ok: [localhost] => (item=20230101_120000) =>
msg: '20230101_120000'
Some measurements
Get an unordered list of directories with pattern 'yyyymmdd_hhmmss' -- 0.50s
Show directories between --------------------------------------------- 0.24s
Order list ----------------------------------------------------------- 0.09s
According to the initial description, there is no timezone or daylight saving time involved. So this works because the given pattern is just a kind of incrementing number, even if a human may interpret it as a date. It could even be simplified if more information about the hour were provided: if the time is always 1200, that insignificant part could be dropped, leaving one with a simple integer number. The same would be true for the delimiter _.
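For illustration, a sketch of that simplification (my own, assuming the dir_list, FROM and TO from the example above): drop the delimiter and compare the names as plain integers.

    - name: Show directories between (treating names as plain integers)
      debug:
        msg: "{{ item }}"
      when:
        - (item | replace('_', '') | int) >= (FROM | replace('_', '') | int)
        - (item | replace('_', '') | int) <= (TO | replace('_', '') | int)
      loop: "{{ dir_list }}"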
Regarding
... they are incredibly slow, because it runs the find command for every sequential number ... with_sequence ...
that is not necessary and looks to me like a case of How do I optimize performance of Ansible playbook with regards to SSH connections?
Looping over commands and providing one parameter per run results in a lot of overhead and multiple SSH connections. Providing the list directly to the command might be possible and would increase performance while decreasing runtime and resource consumption.
Further processing can be done just afterwards.
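For example, a minimal sketch (again assuming dir_list, FROM and TO from the example above) that collects the in-range directories into a variable for later tasks instead of only printing them:

    - name: Collect directories between FROM and TO
      set_fact:
        dirs_between: "{{ dir_list | select('version', FROM, '>=') | select('version', TO, '<=') | list }}"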
I am trying to set up a cron schedule on an Azure DevOps pipeline, but I am getting this error message. I looked at the documentation, but I am not sure what is not in line with the doc. Could someone let me know what is wrong with my cron syntax? Thank you.
Error while validating cron input. Improperly formed cron syntax: '0 21 * * 1-7'.
Here is the entire yml file.
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

trigger:
- master

schedules:
- cron: "0 21 * * 1-7"
  displayName: "pipeline cron test"
  branches:
    include:
    - master
  always: true

pool:
  vmImage: ubuntu-latest

steps:
- script: echo Hello, world!
  displayName: Run a one-line script, changed

- script: |
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml
    echo more info
  displayName: 'Run a multi-line script'
Here's the relevant part of the doc.
Building Cron Syntax
Each cron syntax consists of 5 values separated by a space character:

mm HH DD MM DW
 \  \  \  \  \__ Days of week
  \  \  \  \____ Months
   \  \  \______ Days
    \  \________ Hours
     \__________ Minutes
We can use the following table to understand the syntax:

Syntax  Meaning            Accepted Values
mm      Minutes            0 through 59
HH      Hours              0 through 23
DD      Days               1 through 31
MM      Months             1 through 12, full English names, first three letters of English names
DW      Days of the week   0 through 6 (starting with Sunday), full English names, first three letters of English names
Values can be provided in the following formats:

Format           Example        Description
Wildcard         *              Matches all values for this field
Single value     5              Specifies a single value for this field
Comma delimited  3,5,6          Specifies multiple values for this field. Multiple formats can be combined, like 1,3-6
Ranges           1-3            The inclusive range of values for this field
Intervals        */4 or 1-5/2   Intervals to match for this field, such as every 4th value or the range 1-5 with a step interval of 2
Your expression '0 21 * * 1-7' is rejected because 7 is not an accepted day-of-week value. You should specify your cron syntax like the following: "0 21 * * *", or "0 21 * * 0-6" if you want to list all days of the week explicitly.
Days of week: 0 through 6 (starting with Sunday), full English names,
first three letters of English names
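For example, the schedules section of the pipeline above with a corrected expression (the rest of the file unchanged):

schedules:
- cron: "0 21 * * 0-6"   # every day at 21:00 UTC; '1-7' fails because days of week run 0-6
  displayName: "pipeline cron test"
  branches:
    include:
    - master
  always: true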
As the title says, I'd like to be able to pass a variable that is registered under one host group to another, but I'm not sure how to do that, and I couldn't find anything relevant in the variable documentation http://docs.ansible.com/ansible/playbooks_variables.html
This is a simplified example of what I am trying to do. I have a playbook that calls many different groups and checks where a symlink points. I'd like to be able to report all of the symlink targets to the console at the end of the play.
The problem is that the registered value is only valid under the host group it was defined in. Is there a proper way of exporting these variables?
---
- hosts: max_logger
  tasks:
    - shell: ls -la /home/ubuntu/apps/max-logger/active | awk -F':' '{print $NF}'
      register: max_logger_old_active

- hosts: max_data
  tasks:
    - shell: ls -la /home/ubuntu/apps/max-data/active | awk -F':' '{print $NF}'
      register: max_data_old_active

- hosts: "localhost"
  tasks:
    - debug: >
        msg="The old max_logger build is {{ max_logger_old_active.stdout }}
        The old max_data build is {{ max_data_old_active.stdout }}"
You don't need to pass anything here; you just need to access the values. Registered variables are stored as host facts and stay in memory for the duration of the playbook run, so you can access them from all subsequent plays.
This can be achieved using the magic variable hostvars.
You do, however, need to refer to a host name, which doesn't necessarily match the host group name (e.g. max_logger) that you posted in the question:
- hosts: "localhost"
  tasks:
    - debug: >
        msg="The old max_logger build is {{ hostvars['max_logger_host'].max_logger_old_active.stdout }}
        The old max_data build is {{ hostvars['max_data_host'].max_data_old_active.stdout }}"
You can also write hostvars['max_data_host']['max_data_old_active']['stdout'].
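If you prefer not to hard-code host names, a sketch (my own variation, assuming each group contains the single host that ran the task, or that its first host is the one you want) is to look the host up through the groups magic variable:

- hosts: localhost
  tasks:
    - debug: >
        msg="The old max_logger build is {{ hostvars[groups['max_logger'][0]].max_logger_old_active.stdout }}
        The old max_data build is {{ hostvars[groups['max_data'][0]].max_data_old_active.stdout }}"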
Is it possible to do the same thing in SaltStack, but with built-in functionality (without the PowerShell workaround)?
installation:
  cmd.run:
    - name: ./installation_script

wait for installation:
  cmd.run:
    - name: powershell -command "Start-Sleep 10"
    - unless: powershell -command "Test-Path #('/path/to/file/to/appear')"
Unfortunately, there is not a better way to do this in the current version of Salt. But retry logic was added to states in the next release, Nitrogen.
The way I would do this in that release is:
installation:
  cmd.run:
    - name: ./installation_script

wait for installation:
  cmd.run:
    - name: Test-Path #('/path/to/file/to/appear')
    - retry:
      - attempts: 15
      - interval: 10
      - until: True
    - shell: powershell
And this will continue to run Test-Path until it exits with a 0 exit code (or whatever the equivalent is in PowerShell).
https://docs.saltstack.com/en/develop/ref/states/requisites.html#retrying-states
Daniel
NB: While using retry, pay attention to the indentation: it has to be 4 spaces from the retry key to form a dictionary for Salt. Otherwise it will default to 2 attempts with a 30s interval. (2017.7.0)
wait_for_file:
  file.exists:
    - name: /path/to/file
    - retry:
        attempts: 15
        interval: 30
Since date in BusyBox is not as powerful as GNU date, I am having trouble calculating the date of last Saturday.
last_sat=`date +"%Y-%m-%d" -d "last saturday"`
only works fine with GNU date.
I've found something like this to calculate from the epoch:
busybox date -D '%s' -d "$(( `busybox date +%s`+3*60 ))"
but my BusyBox (v1.1.0) doesn't recognize the -D argument.
Any suggestions?
For the last Saturday before today, under busybox 1.16:
date -d "UTC 1970-01-01 $(date +"%s - 86400 - %w * 86400"|xargs expr) secs"
How it works: take the current date in seconds, subtract one day, subtract one day times the number of the current weekday, then convert those seconds back to a date.
EDIT: after hacking together a build of 1.1, this works:
date -d "1970.01.01-00:00:$(date +"%s - 86400 - %w * 86400"|xargs expr)"
This working version is based on code-reading:
} else if (t = *tm_time, sscanf(t_string, "%d.%d.%d-%d:%d:%d", &t.tm_year,
                                &t.tm_mon, &t.tm_mday,
                                &t.tm_hour, &t.tm_min,
                                &t.tm_sec) == 6) {
    t.tm_year -= 1900; /* Adjust years */
    t.tm_mon -= 1;     /* Adjust dates from 1-12 to 0-11 */
BusyBox's date command has been the topic of some discussion over the years. Apparently it doesn't always work as documented, and it doesn't always work the same as previous versions.
On a BB system I administer running BusyBox v1.01, I'm able to use the -d option with dates in the format MMDDhhmmYYYY.ss, and in no other format that I've tried. Luckily, output formats work as expected, presumably because date is using a proper strftime() according to comments in the source.
Here's my forward-and-reverse example:
[~] # busybox date '+%m%d%H%M%Y.%S'
090500152016.41
[~] # busybox date -d 090500152016.41
Mon Sep 5 00:15:41 EDT 2016
So .. what can we do with this? It seems that we can't do an arbitrary adjustment of seconds, as it only reads the first two digits:
[~] # busybox date -d 123119001969.65 '+%s'
65
[~] # busybox date -d 123119001969.100 '+%s'
10
Well, it turns out you can load the date fields with "invalid" numbers.
[~] # busybox date 090100002016
Thu Sep 1 00:00:00 EDT 2016
[~] # busybox date 093400002016
Wed Oct 4 00:00:00 EDT 2016
[~] # busybox date 09-200002016
Mon Aug 29 00:00:00 EDT 2016
So let's adjust the "day" field using something based on %w.
today=$(busybox date '+%m%d%H%M%Y')
last_sat=$(busybox date -d "${today:0:2}$( printf '%02d' $(( 10#${today:2:2} - 1 - $(busybox date '+%w') )) )${today:4}" '+%F')
This simply subtracts numbers in the second field (the 3rd and 4th characters of the date string). It obviously requires that your shell either be bash or understand bash-style math notation ($((...))). Math-wise, it should work as long as "last Saturday" is within the same month, and it MAY work (I haven't tested it) with rollovers to the previous month (per the last test above).
Rather than jumping through these burning hoops, I recommend you just install a GNU date binary, and don't use busybox for this one binary. :-P
Good luck!