Print the outcome of a step in a GitHub Actions job

I'm trying to upload an artifact that logs the result of an mvn build; the code will explain it better:
jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      ...
      - name: mvn-build
        continue-on-error: true
        run: |
          mvn package ...
          # This doesn't work: when mvn fails, the step is terminated with a nonzero exit code
          # before the lines below get a chance to run
          STATUS=$?
          if [ $STATUS -eq 0 ]; then
            echo 1 > runs/log.txt
          else
            echo 0 > runs/log.txt
          fi
      # This part does create the file (upload-artifact@v1), but with empty content
      - name: print-result
        env:
          OUTCOME: ${{ steps.mvn-build.outcome }}
        run: |
          echo "$OUTCOME" > runs/log.txt

The step terminates because a command at the top level of the run script exits with a nonzero code (the default shell runs with -e). Just don't run that command at the top level, use it as the if condition instead, and you'll be fine:
jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      ...
      - name: mvn-build
        continue-on-error: true
        run: |
          if mvn package ... ; then
            echo 1 > runs/log.txt
          else
            echo 0 > runs/log.txt
          fi
      # This part does create the file (upload-artifact@v1), but with empty content
      - name: print-result
        env:
          OUTCOME: ${{ steps.mvn-build.outcome }}
        run: |
          echo "$OUTCOME" > runs/log.txt
More information on this bash behavior here: https://unix.stackexchange.com/a/22728/178425
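For illustration, here is a minimal standalone sketch of the bash behaviour the fix relies on (false stands in for a failing mvn package; the default shell for run: steps behaves like set -e):

#!/usr/bin/env bash
set -e           # mimic the -e behaviour of the default Actions shell

mkdir -p runs    # the workflow above assumes runs/ already exists

# At the top level, a failing command would abort the script right here:
#   false
#   echo "never reached"

# Used as an if condition, the same failing command does not trigger -e,
# so the script keeps going and records the result:
if false; then
  echo 1 > runs/log.txt
else
  echo 0 > runs/log.txt
fi

echo "still running, log contains: $(cat runs/log.txt)"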

Related

Stages not following conditions in Travis

My pipeline in Travis CI is executing the update stage when I'm just pushing code to the repository.
The expected behaviour when I push code to my repository is for the pipeline to run as follows:
check - stage1
but it's being executed like this:
check - stage1 - update
Also, when I force the pipeline (with T_UPDATE=true) to run just the update stage, the execution is the same.
Any idea if I'm defining something wrong in the stages?
This is my stage configuration in Travis:
stages:
  - name: check
  - name: stage1
    if: branch !~ /^master$/ env(T_TEST) !~ /^(?i)(true|1).*/ AND env(T_UPDATE)= !~ /^(?i)(true|1).*/ AND env(T_STAGE3) !~ /^(?i)(true|1).*/
  - name: stage2
    if: branch =~ /^master$/ OR env(T_TEST) !~ /^(?i)(true|1).*/ AND env(T_UPDATE) !~ /^(?i)(true|1).*/
  - name: test
    if: env(T_TEST) =~ /^(?i)(true|1).*/
  - name: update
    if: env(T_TEST) =~ /^(?i)(true|1).*/) OR env(T_UPDATE) =~ /^(?i)(true|1).*/
  - name: stage3
    if: env(T_STAGE3) =~ /^(?i)(true|1).*/
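One hedged observation rather than a verified fix: Travis conditions need an explicit AND/OR between every clause, and the stage1 condition above has no operator between its first two clauses, plus a stray = after env(T_UPDATE) (the update condition also carries a stray closing parenthesis). A cleaned-up sketch of the stage1 condition, assuming the intent is "not master and none of the three flags set", would be:

  - name: stage1
    if: branch !~ /^master$/ AND env(T_TEST) !~ /^(?i)(true|1).*/ AND env(T_UPDATE) !~ /^(?i)(true|1).*/ AND env(T_STAGE3) !~ /^(?i)(true|1).*/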

Change ports of containers for airflow in docker-compose.yaml

I'm currently using the example "docker-compose.yaml" file found on this GitHub. I want to change the default port of each container (redis, webserver, postgres, flower). To do so, I've created a .env file containing the ports, which are then loaded inside the .yaml.
Here is my new port configuration (.env):
AIRFLOW_REDIS_PORT = 8904
AIRFLOW_WEBSERVER_PORT = 8905
AIRFLOW_POSTGRES_PORT = 8906
AIRFLOW_FLOWER_PORT = 8907
I have also edited the "docker-compose.yaml" file to change those ports (you can find my modified file below). The problem is that the following containers: flower, scheduler and worker do not seem to be able to connect to redis (it works fine without touching the port numbers).
Here are the logs of the problem:
BACKEND=redis
DB_HOST=redis
DB_PORT=8904
....................
ERROR! Maximum number of retries (20) reached.
Last check result:
$ run_nc 'redis' '8904'
(UNKNOWN) [172.27.0.2] 8904 (?) : Connection refused
sent 0, rcvd 0
Here is the docker-compose.yaml:
---
version: '3'
x-airflow-common:
&airflow-common
# In order to add custom dependencies or upgrade provider packages you can use your extended image.
# Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
# and uncomment the "build" line below, Then run `docker-compose build` to build the images.
image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:latest-python3.8}
# build: .
environment:
&airflow-common-env
AIRFLOW__CORE__EXECUTOR: CeleryExecutor
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres-airflow/airflow
AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres-airflow/airflow
AIRFLOW__CELERY__BROKER_URL: redis://:@redis:${AIRFLOW_REDIS_PORT}/0
# AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
AIRFLOW__CELERY__FLOWER_PORT: ${AIRFLOW_FLOWER_PORT}
AIRFLOW__CORE__FERNET_KEY: ''
AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
AIRFLOW__API__AUTH_BACKENDS: 'airflow.api.auth.backend.basic_auth'
_PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
volumes:
- ./dags:/opt/airflow/dags
- ./logs:/opt/airflow/logs
- ./plugins:/opt/airflow/plugins
user: "${AIRFLOW_UID:-50000}:0"
depends_on:
&airflow-common-depends-on
redis:
condition: service_healthy
postgres-airflow:
condition: service_healthy
services:
postgres-airflow:
container_name: postgres-airflow-container
image: postgres:13
environment:
POSTGRES_USER: airflow
POSTGRES_PASSWORD: airflow
POSTGRES_DB: airflow
volumes:
- postgres-db-volume:/var/lib/postgresql/data
healthcheck:
test: ["CMD", "pg_isready", "-U", "airflow"]
interval: 5s
retries: 5
restart: always
redis:
container_name: redis-airflow-container
image: redis:6.2.6
expose:
# - 6379
- ${AIRFLOW_REDIS_PORT}
# - ${AIRFLOW_REDIS_PORT}
# environment:
# REDIS_HOST: redis
# REDIS_PORT: ${AIRFLOW_REDIS_PORT}
ports:
- ${AIRFLOW_REDIS_PORT}:6379
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 30s
retries: 50
restart: always
airflow-webserver:
<<: *airflow-common
container_name: webserver-airflow-container
command: webserver
ports:
- ${AIRFLOW_WEBSERVER_PORT}:8080
healthcheck:
test: ["CMD", "curl", "--fail", "http://localhost:${AIRFLOW_WEBSERVER_PORT}/health"]
interval: 10s
timeout: 10s
retries: 5
restart: always
depends_on:
<<: *airflow-common-depends-on
airflow-init:
condition: service_completed_successfully
airflow-scheduler:
<<: *airflow-common
container_name: scheduler-airflow-container
command: scheduler
healthcheck:
test: ["CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"']
interval: 10s
timeout: 10s
retries: 5
restart: always
depends_on:
<<: *airflow-common-depends-on
airflow-init:
condition: service_completed_successfully
airflow-worker:
<<: *airflow-common
container_name: worker-airflow-container
command: celery worker
healthcheck:
test:
- "CMD-SHELL"
- 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
interval: 10s
timeout: 10s
retries: 5
environment:
<<: *airflow-common-env
# Required to handle warm shutdown of the celery workers properly
# See https://airflow.apache.org/docs/docker-stack/entrypoint.html#signal-propagation
DUMB_INIT_SETSID: "0"
restart: always
depends_on:
<<: *airflow-common-depends-on
airflow-init:
condition: service_completed_successfully
airflow-triggerer:
<<: *airflow-common
container_name: triggerer-airflow-container
command: triggerer
healthcheck:
test: ["CMD-SHELL", 'airflow jobs check --job-type TriggererJob --hostname "$${HOSTNAME}"']
interval: 10s
timeout: 10s
retries: 5
restart: always
depends_on:
<<: *airflow-common-depends-on
airflow-init:
condition: service_completed_successfully
airflow-init:
<<: *airflow-common
container_name: init-airflow-container
entrypoint: /bin/bash
# yamllint disable rule:line-length
command:
- -c
- |
function ver() {
printf "%04d%04d%04d%04d" $${1//./ }
}
airflow_version=$$(gosu airflow airflow version)
airflow_version_comparable=$$(ver $${airflow_version})
min_airflow_version=2.2.0
min_airflow_version_comparable=$$(ver $${min_airflow_version})
if (( airflow_version_comparable < min_airflow_version_comparable )); then
echo
echo -e "\033[1;31mERROR!!!: Too old Airflow version $${airflow_version}!\e[0m"
echo "The minimum Airflow version supported: $${min_airflow_version}. Only use this or higher!"
echo
exit 1
fi
if [[ -z "${AIRFLOW_UID}" ]]; then
echo
echo -e "\033[1;33mWARNING!!!: AIRFLOW_UID not set!\e[0m"
echo "If you are on Linux, you SHOULD follow the instructions below to set "
echo "AIRFLOW_UID environment variable, otherwise files will be owned by root."
echo "For other operating systems you can get rid of the warning with manually created .env file:"
echo " See: https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html#setting-the-right-airflow-user"
echo
fi
one_meg=1048576
mem_available=$$(($$(getconf _PHYS_PAGES) * $$(getconf PAGE_SIZE) / one_meg))
cpus_available=$$(grep -cE 'cpu[0-9]+' /proc/stat)
disk_available=$$(df / | tail -1 | awk '{print $$4}')
warning_resources="false"
if (( mem_available < 4000 )) ; then
echo
echo -e "\033[1;33mWARNING!!!: Not enough memory available for Docker.\e[0m"
echo "At least 4GB of memory required. You have $$(numfmt --to iec $$((mem_available * one_meg)))"
echo
warning_resources="true"
fi
if (( cpus_available < 2 )); then
echo
echo -e "\033[1;33mWARNING!!!: Not enough CPUS available for Docker.\e[0m"
echo "At least 2 CPUs recommended. You have $${cpus_available}"
echo
warning_resources="true"
fi
if (( disk_available < one_meg * 10 )); then
echo
echo -e "\033[1;33mWARNING!!!: Not enough Disk space available for Docker.\e[0m"
echo "At least 10 GBs recommended. You have $$(numfmt --to iec $$((disk_available * 1024 )))"
echo
warning_resources="true"
fi
if [[ $${warning_resources} == "true" ]]; then
echo
echo -e "\033[1;33mWARNING!!!: You have not enough resources to run Airflow (see above)!\e[0m"
echo "Please follow the instructions to increase amount of resources available:"
echo " https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html#before-you-begin"
echo
fi
mkdir -p /sources/logs /sources/dags /sources/plugins
chown -R "${AIRFLOW_UID}:0" /sources/{logs,dags,plugins}
exec /entrypoint airflow version
# yamllint enable rule:line-length
environment:
<<: *airflow-common-env
_AIRFLOW_DB_UPGRADE: 'true'
_AIRFLOW_WWW_USER_CREATE: 'true'
_AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
_AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
user: "0:0"
volumes:
- .:/sources
airflow-cli:
<<: *airflow-common
profiles:
- debug
environment:
<<: *airflow-common-env
CONNECTION_CHECK_MAX_COUNT: "0"
# Workaround for entrypoint issue. See: https://github.com/apache/airflow/issues/16252
command:
- bash
- -c
- airflow
flower:
<<: *airflow-common
container_name: flower-airflow-container
command: celery flower
ports:
- ${AIRFLOW_FLOWER_PORT}:5555
healthcheck:
test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
interval: 10s
timeout: 10s
retries: 5
restart: always
depends_on:
<<: *airflow-common-depends-on
airflow-init:
condition: service_completed_successfully
volumes:
postgres-db-volume:
Can someone help me correctly change the port number of each service to make it work, please?
Thanks a lot
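One general Compose note, hedged rather than verified against this exact file: containers on the same Compose network reach each other on the container port; the host:container mapping under ports only changes how the host reaches the service. With - ${AIRFLOW_REDIS_PORT}:6379, redis still listens on 6379 inside the network, while the broker URL redis://:@redis:${AIRFLOW_REDIS_PORT}/0 points at 8904, where nothing listens, hence the connection refused. A minimal sketch that keeps the custom port on the host side only:

# in the x-airflow-common environment: keep the in-network port
AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0

# in the redis service: publish 6379 on the custom host port
redis:
  image: redis:6.2.6
  ports:
    - ${AIRFLOW_REDIS_PORT}:6379   # host 8904 -> container 6379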

Airflow KubernetesExecutor and minikube: Scheduler can't connect to Minikube

I have a Minikube cluster running and I deploy Airflow via docker-compose this way:
---
version: '3'
x-airflow-common:
&airflow-common
# In order to add custom dependencies or upgrade provider packages you can use your extended image.
# Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
# and uncomment the "build" line below, Then run `docker-compose build` to build the images.
image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.1.3}
# build: .
environment:
&airflow-common-env
AIRFLOW__CORE__EXECUTOR: KubernetesExecutor
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
AIRFLOW__CORE__FERNET_KEY: ''
AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
# AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
AIRFLOW__API__AUTH_BACKEND: 'airflow.api.auth.backend.basic_auth'
_PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
volumes:
- ~/.kube:/home/airflow/.kube
- ./dags/:/opt/airflow/dags
- ./logs:/opt/airflow/logs
- ./plugins:/opt/airflow/plugins
user: "${AIRFLOW_UID:-50000}:${AIRFLOW_GID:-0}"
depends_on:
redis:
condition: service_healthy
postgres:
condition: service_healthy
services:
postgres:
image: postgres:13
environment:
POSTGRES_USER: airflow
POSTGRES_PASSWORD: airflow
POSTGRES_DB: airflow
volumes:
- postgres-db-volume:/var/lib/postgresql/data
healthcheck:
test: ["CMD", "pg_isready", "-U", "airflow"]
interval: 5s
retries: 5
restart: always
redis:
image: redis:latest
ports:
- 6379:6379
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 30s
retries: 50
restart: always
airflow-webserver:
<<: *airflow-common
command: webserver
ports:
- 8080:8080
healthcheck:
test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
interval: 10s
timeout: 10s
retries: 5
restart: always
airflow-scheduler:
<<: *airflow-common
command: scheduler
healthcheck:
test: ["CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"']
interval: 10s
timeout: 10s
retries: 5
restart: always
airflow-init:
<<: *airflow-common
entrypoint: /bin/bash
command:
- -c
- |
function ver() {
printf "%04d%04d%04d%04d" $${1//./ }
}
airflow_version=$$(gosu airflow airflow version)
airflow_version_comparable=$$(ver $${airflow_version})
min_airflow_version=2.1.0
min_airlfow_version_comparable=$$(ver $${min_airflow_version})
if (( airflow_version_comparable < min_airlfow_version_comparable )); then
echo -e "\033[1;31mERROR!!!: Too old Airflow version $${airflow_version}!\e[0m"
echo "The minimum Airflow version supported: $${min_airflow_version}. Only use this or higher!"
exit 1
fi
if [[ -z "${AIRFLOW_UID}" ]]; then
echo -e "\033[1;31mERROR!!!: AIRFLOW_UID not set!\e[0m"
echo "Please follow these instructions to set AIRFLOW_UID and AIRFLOW_GID environment variables:
https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html#initializing-environment"
exit 1
fi
one_meg=1048576
mem_available=$$(($$(getconf _PHYS_PAGES) * $$(getconf PAGE_SIZE) / one_meg))
cpus_available=$$(grep -cE 'cpu[0-9]+' /proc/stat)
disk_available=$$(df / | tail -1 | awk '{print $$4}')
warning_resources="false"
if (( mem_available < 4000 )) ; then
echo -e "\033[1;33mWARNING!!!: Not enough memory available for Docker.\e[0m"
echo "At least 4GB of memory required. You have $$(numfmt --to iec $$((mem_available * one_meg)))"
warning_resources="true"
fi
if (( cpus_available < 2 )); then
echo -e "\033[1;33mWARNING!!!: Not enough CPUS available for Docker.\e[0m"
echo "At least 2 CPUs recommended. You have $${cpus_available}"
warning_resources="true"
fi
if (( disk_available < one_meg * 10 )); then
echo -e "\033[1;33mWARNING!!!: Not enough Disk space available for Docker.\e[0m"
echo "At least 10 GBs recommended. You have $$(numfmt --to iec $$((disk_available * 1024 )))"
warning_resources="true"
fi
if [[ $${warning_resources} == "true" ]]; then
echo
echo -e "\033[1;33mWARNING!!!: You have not enough resources to run Airflow (see above)!\e[0m"
echo "Please follow the instructions to increase amount of resources available:"
echo " https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html#before-you-begin"
fi
mkdir -p /sources/logs /sources/dags /sources/plugins
chown -R "${AIRFLOW_UID}:${AIRFLOW_GID}" /sources/{logs,dags,plugins}
exec /entrypoint airflow version
environment:
<<: *airflow-common-env
_AIRFLOW_DB_UPGRADE: 'true'
_AIRFLOW_WWW_USER_CREATE: 'true'
_AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
_AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
user: "0:${AIRFLOW_GID:-0}"
volumes:
- .:/sources
volumes:
postgres-db-volume:
But the connection between Airflow and Kubernetes seems to fail (removing the AIRFLOW__CORE__EXECUTOR environment variable lets the containers start):
airflow-scheduler_1 | Traceback (most recent call last):
airflow-scheduler_1 | File "/home/airflow/.local/bin/airflow", line 8, in <module>
airflow-scheduler_1 | sys.exit(main())
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/__main__.py", line 40, in main
airflow-scheduler_1 | args.func(args)
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
airflow-scheduler_1 | return func(*args, **kwargs)
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/cli.py", line 91, in wrapper
airflow-scheduler_1 | return f(*args, **kwargs)
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/scheduler_command.py", line 70, in scheduler
airflow-scheduler_1 | job.run()
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/base_job.py", line 245, in run
airflow-scheduler_1 | self._execute()
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 686, in _execute
airflow-scheduler_1 | self.executor.start()
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/kubernetes_executor.py", line 485, in start
airflow-scheduler_1 | self.kube_client = get_kube_client()
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/kubernetes/kube_client.py", line 145, in get_kube_client
airflow-scheduler_1 | client_conf = _get_kube_config(in_cluster, cluster_context, config_file)
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/kubernetes/kube_client.py", line 40, in _get_kube_config
airflow-scheduler_1 | config.load_incluster_config()
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/config/incluster_config.py", line 93, in load_incluster_config
airflow-scheduler_1 | InClusterConfigLoader(token_filename=SERVICE_TOKEN_FILENAME,
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/config/incluster_config.py", line 45, in load_and_set
airflow-scheduler_1 | self._load_config()
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/config/incluster_config.py", line 51, in _load_config
airflow-scheduler_1 | raise ConfigException("Service host/port is not set.")
airflow-scheduler_1 | kubernetes.config.config_exception.ConfigException: Service host/port is not set.
My idea is that the kube config file is not correctly found by the Airflow scheduler. I mounted the volume ~/.kube:/home/airflow/.kube but can't find a way to make it work.
Using Docker Compose to run the KubernetesExecutor seems like a bad idea.
Why would you want to do that?
It makes a lot more sense to use the official Helm chart: it's easier to manage and configure, you can easily deploy it to your minikube, and it will work out of the box with the KubernetesExecutor.
https://airflow.apache.org/docs/helm-chart/stable/index.html
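For completeness, a minimal sketch of the Helm route from the linked docs (the release name and namespace are illustrative choices; the last flag assumes the chart's executor value):

helm repo add apache-airflow https://airflow.apache.org
helm repo update
helm upgrade --install airflow apache-airflow/airflow \
  --namespace airflow --create-namespace \
  --set executor=KubernetesExecutor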

Multi-element output from step in Github Actions

I want to create a step in the job which will output multiple file names that can then be iterated over in another step. Here is my test workflow:
name: test-workflow
on:
  push:
    branches: [ master ]
jobs:
  test-job:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout this repo
        uses: actions/checkout@v2
        with:
          fetch-depth: 2
      - name: Test1
        id: test1
        run: |
          for f in $(ls $GITHUB_WORKSPACE/.github/workflows); do
            echo "file: $f"
            echo "::set-output name=f::$f"
          done
      - name: Test2
        run: |
          for file in "${{ steps.test1.outputs.f }}"; do
            echo "$file detected"
          done
However, given that $GITHUB_WORKSPACE/.github/workflows really contains multiple files (all committed to the repo), step Test2 prints out only the last file name listed by ls in step Test1.
How can I set the output f from step Test1 to multiple values?
In your case you overwrite the output on every loop iteration. Please try to pass all the file names as one space-separated output instead:
name: test-workflow
on:
  push:
    branches: [ master ]
  workflow_dispatch:
jobs:
  test-job:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout this repo
        uses: actions/checkout@v2
        with:
          fetch-depth: 2
      - name: Test1
        id: test1
        run: |
          h=""
          for g in $(ls $GITHUB_WORKSPACE/.github/workflows); do
            echo "file: $g"
            h="${h} $g"
          done
          echo "::set-output name=h::$h"
      - name: Test2
        run: |
          for file in ${{ steps.test1.outputs.h }}; do
            echo "$file.. detected"
          done
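Side note, a sketch only: newer runners have deprecated the ::set-output command in favour of appending to the $GITHUB_OUTPUT file, so the same Test1 step could be written as:

      - name: Test1
        id: test1
        run: |
          h=""
          for g in $(ls $GITHUB_WORKSPACE/.github/workflows); do
            h="${h} $g"
          done
          echo "h=${h}" >> "$GITHUB_OUTPUT"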

Ansible: Iterate through captured command output

I am trying to convert an existing Perl script into an Ansible role. I am facing trouble iterating over captured command output.
Here is the Perl Script:
# Description: This script will adjust the oom score of all the important system processes to a negative value so that OOM killer does not touch these processes ############
chomp(my $OS = `uname`);
if($OS eq "Linux")
{
    my @file = `ps -ef|egrep 'sssd|wdmd|portreserve|autofs|automount|ypbind|rpcbind|rpc.statd|rpc.mountd|rpc.idampd|ntpd|lmgrd|Xvnc|vncconfig|irqblance|rpc.rquotad|metric|nscd|crond|snpslmd|getpwname.pl|mysqld|rsyslogd|xinetd|sendmail|lsf|tigervnc|tightvnc|cfadm' |egrep -ve 'ps|egrep' |awk '{print \$8,\$2}'`;
    chomp(@file);
    foreach my $element (@file)
    {
        chomp($element);
        (my $process, my $pid) = (split(/\s/,$element))[0,1];
        print "($process)($pid)\n";
        system("echo -17 > /proc/$pid/oom_adj");
        system("cat /proc/$pid/oom_adj");
    }
}
else
{
    print "The host is a $OS system, so no action taken\n";
}
Here is what I have tried so far in Ansible:
---
- name: Capture uname output
  shell: "uname"
  register: os_type

- name: Adjust OOM to negative so that OOM killer does not kill below processes
  shell: 'ps -ef|egrep "sssd|wdmd|portreserve|autofs|automount|ypbind|rpcbind|rpc.statd|rpc.mountd|rpc.idampd|ntpd|lmgrd|Xvnc|vncconfig|irqblance|rpc.rquotad|metric|nscd|crond|snpslmd|getpwname.pl|mysqld|rsyslogd|xinetd|sendmail|lsf|tigervnc|tightvnc|cfadm" |egrep -ve "ps|egrep" |awk "{print \$8,\$2}"'
  register: oom
  when: os_type.stdout == 'Linux'

- debug: var=oom.stdout_lines
Now, I want to iterate over that variable and implement this part in Ansible:
foreach my $element (@file)
{
    chomp($element);
    (my $process, my $pid) = (split(/\s/,$element))[0,1];
    print "($process)($pid)\n";
    system("echo -17 > /proc/$pid/oom_adj");
    system("cat /proc/$pid/oom_adj");
}
Please help.
The below worked for me:
- hosts: temp
  gather_facts: yes
  remote_user: root
  tasks:
    - name: Adjust OOM to negative so that OOM killer does not kill below processes
      shell: 'ps -ef|egrep "sssd|wdmd|portreserve|autofs|automount|ypbind|rpcbind|rpc.statd|rpc.mountd|rpc.idampd|ntpd|lmgrd|Xvnc|vncconfig|irqblance|rpc.rquotad|metric|nscd|crond|snpslmd|getpwname.pl|mysqld|rsyslogd|xinetd|sendmail|lsf|tigervnc|tightvnc|cfadm" |egrep -ve "ps|egrep" |awk "{print \$2}"'
      register: oom
      when: ansible_system == 'Linux'
    - debug: var=oom.stdout
    - name: update the pid
      raw: echo -17 > /proc/{{ item }}/oom_adj
      loop: "{{ oom.stdout_lines }}"
I was able to figure this out. Below is the solution that worked for me. Thanks to everyone who tried to help me out. Appreciate it :)
---
- name: Capture uname output
  shell: "uname"
  register: os_type

- name: Gather important processes
  shell: 'ps -ef|egrep "sssd|wdmd|portreserve|autofs|automount|ypbind|rpcbind|rpc.statd|rpc.mountd|rpc.idampd|ntpd|lmgrd|Xvnc|vncconfig|irqblance|rpc.rquotad|metric|nscd|crond|snpslmd|getpwname.pl|mysqld|rsyslogd|xinetd|sendmail|lsf|tigervnc|tightvnc|cfadm" |egrep -ve "ps|egrep" |awk "{print \$8,\$2}"'
  register: oom
  when: os_type.stdout == 'Linux'

- name: Adjust OOM to negative so that OOM killer does not kill important processes
  shell: "echo -17 >> /proc/{{ item.split()[1] }}/oom_adj"
  loop: "{{ oom.stdout_lines }}"
  register: echo

- set_fact:
    stdout_lines: []

- set_fact:
    stdout_lines: "{{ stdout_lines + item.stdout_lines }}"
  with_items: "{{ echo.results }}"

- debug:
    msg: "This is a stdout line: {{ item }}"
  with_items: "{{ stdout_lines }}"
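For completeness, the original Perl script also printed each (process)(pid) pair; a minimal equivalent debug task, sketched against the oom variable registered above, could be:

- debug:
    msg: "({{ item.split()[0] }})({{ item.split()[1] }})"
  loop: "{{ oom.stdout_lines }}"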