Sorting ansible output - sed

Below is my ansible task output.
TASK [debug] ******************************************************************************************************************************************
ok: [server01] => {
    "my_updates.stdout_lines": [
        "",
        "",
        "Title : Definition Update for Windows Defender Antivirus - KB2267602 (Definition 1.297.486.0)",
        "",
        "",
        ""
    ]
}
ok: [server02] => {
    "my_updates.stdout_lines": [
        "",
        "",
        "Title : 2020-08 Cumulative Update for Windows Server 2016 for x64-based Systems",
        "",
        "",
        ""
    ]
}
I only want the entries:
Definition Update for Windows Defender Antivirus - KB2267602 (Definition 1.297.486.0)
2020-08 Cumulative Update for Windows Server 2016 for x64-based Systems
So I tried the method below:
- name: Fetch Update List
  shell: echo {{ my_updates.stdout_lines }} | tr -s ' ' | sed 's/[][]//g' | sed 's/u,//g' | sed 's/u //g' | sed 's/ u//g' | sed 's/),/)/g'
  delegate_to: 127.0.0.1
  register: my_sec_result
  when: ansible_os_family == "Windows" and my_updates.stdout_lines | length | int > 0

- debug:
    var: my_sec_result.stdout_lines
but this doesn't help; it gives me the output below:
TASK [debug] ******************************************************************************************************************************************
ok: [server01] => {
    "my_sec_result.stdout_lines": [
        "Title : Definition Update for Windows Defender Antivirus - KB2267602 (Definition 1.297.486.0) "
    ]
}
ok: [server02] => {
    "my_sec_result.stdout_lines": [
        "Title : 2020-08 Cumulative Update for Windows Server 2016 for x64-based Systems "
    ]
}
How do I get only these entries? At present only one entry is found on each server, but there can be multiple entries.
Definition Update for Windows Defender Antivirus - KB2267602 (Definition 1.297.486.0)
2020-08 Cumulative Update for Windows Server 2016 for x64-based Systems

In your initial list, reject lines which are empty strings, then remove the leading "Title : " with the regex_replace filter. The following task does it all in one:
- name: Display updates
  debug:
    msg: "{{ my_updates.stdout_lines | reject('eq', '') | map('regex_replace', 'Title : (.*)', '\\g<1>') | list }}"
Update: as pointed out by @Vladimir, in this case you can replace reject with select for a cleaner template string (select() with no test keeps only truthy elements, so empty strings are dropped):
- name: Display updates
  debug:
    msg: "{{ my_updates.stdout_lines | select() | map('regex_replace', 'Title : (.*)', '\\g<1>') | list }}"
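With either task, the debug output drops the empty lines and the "Title : " prefix; with the sample data above it should look something like this:
TASK [Display updates] ****************************************************************************************
ok: [server01] => {
    "msg": [
        "Definition Update for Windows Defender Antivirus - KB2267602 (Definition 1.297.486.0)"
    ]
}
ok: [server02] => {
    "msg": [
        "2020-08 Cumulative Update for Windows Server 2016 for x64-based Systems"
    ]
}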

Related

In RunDeck, a manually triggered error in a bash script file adds multiple rows under the Activity tab

In RunDeck, in a bash script step, I am triggering a Linux error to cause a job execution failure:
CODE=$(echo "$RESULT" | head -n1 | cut -c2)
if [ "$CODE" != '0' ]; then
    exit 1
fi
The strange thing is that I am getting four (failed) records instead of one under the Activity tab.
Next I tried adding sleep 20 right above exit 1, and then it logged only one record. It seems some processes are still running in RunDeck after the exit 1 command, and they keep adding rows to the activity list. Any idea how to wait until everything is finished, so that only one row is added under the Activity tab?
EDIT
Here is the full definition:
#!/bin/bash
set -e

URL="https://my_url"
MY_COMMAND="command_name"
RAW_DATA="[null,\"${MY_COMMAND}\",{}]"
echo "Raw Data ${RAW_DATA}"

RESULT=$(curl "$URL/${MY_COMMAND}" \
    -H 'Accept: application/json' \
    -H 'Content-Type: application/json' \
    -H 'User-Agent: my_user_agent' \
    --data-raw "$RAW_DATA")
echo "$RESULT"

CODE=$(echo "$RESULT" | head -n1 | cut -c2)
if [ "$CODE" != '0' ]; then
    exit 1
fi
EDIT 2 - Added Job Definition (YAML)
- defaultTab: output
  description: JOB_DESCRIPTION
  executionEnabled: true
  id: 125cb755-3eaa-49f8-a8b8-dc1004238a44
  loglevel: INFO
  loglimit: 10MB
  loglimitAction: truncate
  loglimitStatus: failed
  name: JOB_NAME
  nodeFilterEditable: false
  notification:
    onfailure:
      email:
        attachLog: true
        attachLogInFile: true
        recipients: someone@mail.com
  notifyAvgDurationThreshold: null
  plugins:
    ExecutionLifecycle: {}
  retry: '3'
  schedule:
    dayofmonth:
      day: '*'
    month: '*'
    time:
      hour: '04'
      minute: '05'
      seconds: '0'
    year: '*'
  scheduleEnabled: true
  schedules: []
  sequence:
    commands:
    - script: "#!/bin/bash\n\n"
    keepgoing: false
    strategy: node-first
  timeout: 1h
  uuid: 125cb755-3eaa-49f8-a8b8-dc1004238a44
Your job is configured to retry 3 times in case of error (the original failed execution + 3 retries = 4 failed executions). You can change this behavior by editing your job > "Other" tab > "Retry" textbox.
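Equivalently, in the job definition YAML above, this is the retry key; only one line needs to change. A sketch, assuming you want no retries at all:
  retry: '0'  # was '3'; the original failure plus 3 retries produced the 4 activity records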

Extracting specific value from stderr_lines

This is my Ansible task:
- name: show1
  debug:
    msg: "{{ response.stderr_lines }}"
Here is the output
msg:
- Using endpoint [https://us-central1-aiplatform.googleapis.com/]
- CustomJob [projects/123456/locations/us-central1/customJobs/112233445566] is submitted successfully.
- ''
- Your job is still active. You may view the status of your job with the command
- ''
- ' $ gcloud ai custom-jobs describe projects/123456/locations/us-central1/customJobs/112233445566'
- ''
- or continue streaming the logs with the command
- ''
- ' $ gcloud ai custom-jobs stream-logs projects/123456/locations/us-central1/customJobs/112233445566'
Here I want to extract the custom job ID, which is 112233445566.
I used the select filter like below:
- name: show
  debug:
    msg: "{{ train_custom_image_unmanaged_response.stderr_lines | select('search', 'describe') | list }}"
and it gives me this output
msg:
- ' $ gcloud ai custom-jobs describe projects/123456/locations/us-central1/customJobs/112233445566'
But I just want the job ID, as specified above. Any idea about that?
Thanks.
You selected the line you are interested in. From that, you now want to isolate the job ID number at the end. You can do that using a regular expression, like so:
- set_fact:
    line: "{{ train_custom_image_unmanaged_response.stderr_lines | select('search', 'describe') | list }}"

- debug:
    msg: "{{ line | regex_search('.*/customJobs/(\\d+)', '\\1') }}"
This will give you the digits at the end of the line, after /customJobs/. See https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#searching-strings-with-regular-expressions
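Note that when regex_search is given a capture-group argument such as '\\1', it returns a list of the captured groups, so the result above is a one-element list. If you want the ID as a plain string, a sketch:
- debug:
    msg: "{{ line | regex_search('.*/customJobs/(\\d+)', '\\1') | first }}"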

Ansible fact is undefined error in conditional

Below is the playbook
---
- name: stop agent process
  shell: "ps -ef | grep -v grep | grep -w {{ MONGODB_AGENT_PROCESS }} | awk '{print $2}'"
  register: running_agent_processes

- name: stop mongod process
  shell: "ps -ef | grep -v grep | grep -w {{ MONGODB_SERVER_PROCESS }} | awk '{print $2}'"
  register: running_mongod_processes

- name: combine processes
  set_fact:
    all_processes: "{{ running_agent_processes.stdout_lines + running_mongod_processes.stdout_lines }}"

- name: Kill all processes
  shell: "kill {{ item }}"
  with_items: "{{ all_processes }}"
  when: ansible_facts[ansible_hostname] != primary

- wait_for:
    path: "/proc/{{ item }}/status"
    state: absent
  with_items: "{{ all_processes }}"
  ignore_errors: yes
  register: killed_processes
  when: ansible_facts[ansible_hostname] != primary

- name: Force kill stuck processes
  shell: "kill -9 {{ item }}"
  with_items: "{{ killed_processes.results | select('failed') | map(attribute='item') | list }}"
  when: ansible_facts[ansible_hostname] != primary
I have stored a fact called "primary", which holds the primary of a MongoDB replica set, in a previous step of the playbook.
I just want to compare ansible_facts[ansible_hostname] with my primary fact. If they are not equal, I would like to kill the processes.
The error I am getting is below:
fatal: [lpdkubpoc01d.phx.aexp.com]: FAILED! => {"msg": "The conditional check 'ansible_facts[ansible_hostname] != primary' failed. The error was: error while evaluating conditional (ansible_facts[ansible_hostname] != primary): 'ansible_facts' is undefined\n\nThe error appears to have been in '/idn/home/sanupin/stop-enterprise-mongodb/tasks/stopAutomationAgent.yml': line 11, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n    all_processes: \"{{ running_agent_processes.stdout_lines + running_mongod_processes.stdout_lines }}\"\n- name: Kill all processes\n  ^ here\n"}
The same error is reported for lpdkubpoc01c.phx.aexp.com and lpdkubpoc01e.phx.aexp.com.
Can someone help me with comparing an ansible_fact with a set_fact fact?
You can use Ansible facts directly, without the ansible_facts prefix: the hostname is available as the variable ansible_hostname. (ansible_facts[ansible_hostname] tries to look up a fact named after your hostname's value, which doesn't exist.) Just use: when: ansible_hostname != primary
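For example, the kill task becomes (only the when line changes):
- name: Kill all processes
  shell: "kill {{ item }}"
  with_items: "{{ all_processes }}"
  when: ansible_hostname != primary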

How to combine multiple dict variables into an array based on regex of dict variable names

I have multiple dict variables in my inventories whose names start with 'my_var_'. I would like to combine these into an array of dicts named 'my_var'.
In my playbook, I'm using set_fact to create the 'my_var' variable by attempting to pull the matching variables from hostvars['localhost'] with a select filter and a match regex, but join only works on strings.
variables.yml
my_var_1:
  element1: value11
  element2: value12
my_var_2:
  element1: value21
  element2: value22
playbook.yml
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - set_fact:
        my_var: "{{ hostvars['localhost'] | select('match', '^my_var_*') | join(', ') }}"
    - debug:
        msg: "{{ my_var }}"
Is it possible to join these dict variables into an array like this?
my_var:
  - element1: value11
    element2: value12
  - element1: value21
    element2: value22
or possibly even
my_var:
  - name: 1
    element1: value11
    element2: value12
  - name: 2
    element1: value21
    element2: value22
You're very close, but as you point out, the join method on a string is for joining strings. You want to append lists, which you accomplish with the + operator.
There are also a few other issues:
The expression:
hostvars['localhost'] | select('match', '^my_var_*')
Will produce a list that looks like:
[
  "my_var_1",
  "my_var_2"
]
...which isn't what you want. You want the values of these variables, not the key names. We can use the dict2items filter and the selectattr filter to generate the data we want:
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: set facts on localhost
      set_fact:
        my_var_1:
          element1: value11
          element2: value12
        my_var_2:
          element1: value21
          element2: value22

- hosts: localhost
  gather_facts: false
  tasks:
    - name: merge vars into my_var
      set_fact:
        my_var: "{{ hostvars['localhost'] | dict2items | selectattr('key', 'match', '^my_var_') | map(attribute='value') | list }}"
    - name: show content of my_var
      debug:
        var: my_var
This will produce the following output:
TASK [show content of my_var] ************************************************************************************
ok: [localhost] => {
    "my_var": [
        {
            "element1": "value11",
            "element2": "value12"
        },
        {
            "element1": "value21",
            "element2": "value22"
        }
    ]
}
If you get rid of the map(attribute='value') filter, you get:
TASK [show content of my_var] *****************************************************************************************
ok: [localhost] => {
    "my_var": [
        {
            "key": "my_var_1",
            "value": {
                "element1": "value11",
                "element2": "value12"
            }
        },
        {
            "key": "my_var_2",
            "value": {
                "element1": "value21",
                "element2": "value22"
            }
        }
    ]
}
This isn't exactly what you asked for as the second option, but it does include both the key name and the values.
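If you do want something close to your second format, with a name key inside each dict, one approach is to accumulate the list in a loop and merge the key in with the combine filter. A sketch (the name values here are just the variable-name suffixes):
- name: merge vars into my_var with a name key
  set_fact:
    my_var: "{{ my_var | default([]) + [item.value | combine({'name': item.key | regex_replace('^my_var_', '')})] }}"
  loop: "{{ hostvars['localhost'] | dict2items | selectattr('key', 'match', '^my_var_') | list }}"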
Additional notes:
In the above, I've used a separate play running set_fact to set the values of these variables, because this solution will only work if the variables are host vars (aka "facts") rather than global variables. You don't show in your question how you're setting these variables so I don't know if this will all work as written.
In a regular expression, * means "the preceding character zero or more times", so the expression ^my_var_* would match my_var, my_var_1, my_var______________, my_varfoo, and so forth. You can simply write ^my_var_ to select the variable names in which you're interested (this will select anything that begins with the text my_var_).

jsonb with psycopg2 RealDictCursor

I have a PostgreSQL 9.4 database (aka the MongoDB killer ;-) ) and this simple schema:
CREATE TABLE test (id SERIAL, name text, misc jsonb);
Now I populate this; a SELECT will show something like:
 id | name  | misc
----+-------+-------------------------------
  1 | user1 | { "age" : 23, "size" : "M" }
  2 | user2 | { "age" : 30, "size" : "XL" }
Now, if I run a query with psycopg2,
cur.execute("SELECT * FROM test;")
rows = list(cur)
I'll end up with
[ { 'id' : 1, 'name' : 'user1', 'misc' : '{ "age" : 23, "size" : "M" }' },
  { 'id' : 2, 'name' : 'user2', 'misc' : '{ "age" : 30, "size" : "XL" }' } ]
What's wrong, you would tell me? Well, misc is of type str. I would expect it to be recognized as JSON and converted to a Python dict.
The psycopg2 docs (the psycopg2/extras page) state that "Reading from the database, json values will be automatically converted to Python objects."
With RealDictCursor it seems that this is not the case.
It means that I cannot access rows[0]['misc']['age'], as would be convenient...
OK, I could do it manually with
for r in rows:
    r['misc'] = json.loads(r['misc'])
but I'd rather avoid that if there's a nicer solution...
PS: someone with 1500+ rep could create the postgresql9.4 tag ;-)
The current psycopg2 version (2.5.3) doesn't know the OID of the jsonb type. To support it, it's enough to call:
import psycopg2.extras
psycopg2.extras.register_json(oid=3802, array_oid=3807, globally=True)
once in your project.
You can find further information in this ML message.
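After registering the type, jsonb values come back as Python dicts, even with RealDictCursor. A minimal sketch (the connection string is hypothetical):
import psycopg2
import psycopg2.extras

# Teach psycopg2 the jsonb OIDs so values are parsed into Python objects
psycopg2.extras.register_json(oid=3802, array_oid=3807, globally=True)

conn = psycopg2.connect("dbname=test")  # hypothetical DSN
cur = conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor)
cur.execute("SELECT * FROM test;")
rows = list(cur)
print(rows[0]['misc']['age'])  # misc is now a dict; prints 23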
Works out of the box with psycopg2 2.7.1 (no need to json.loads; dictionaries are what come out of queries):
sudo apt-get install libpq-dev
sudo pip install psycopg2 --upgrade
python -c "import psycopg2 ; print psycopg2.__version__"
2.7.1 (dt dec pq3 ext lo64)