Multi-element output from a step in GitHub Actions

I want to create a step in a job that outputs multiple file names, which can then be iterated over in another step. Here is my test workflow:
name: test-workflow
on:
  push:
    branches: [ master ]
jobs:
  test-job:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout this repo
        uses: actions/checkout@v2
        with:
          fetch-depth: 2
      - name: Test1
        id: test1
        run: |
          for f in $(ls $GITHUB_WORKSPACE/.github/workflows); do
            echo "file: $f"
            echo "::set-output name=f::$f"
          done
      - name: Test2
        run: |
          for file in "${{ steps.test1.outputs.f }}"; do
            echo "$file detected"
          done
However, even though $GITHUB_WORKSPACE/.github/workflows really does contain multiple files (all committed to the repo), the Test2 step prints only the last file name listed by ls in the Test1 step.
How can I set the output f of the Test1 step to multiple values?

In your case you overwrite the output on every loop iteration, so only the last value is kept. Try collecting all the file names into a single space-separated string and passing that as the output:
name: test-workflow
on:
  push:
    branches: [ master ]
  workflow_dispatch:
jobs:
  test-job:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout this repo
        uses: actions/checkout@v2
        with:
          fetch-depth: 2
      - name: Test1
        id: test1
        run: |
          h=""
          for g in $(ls $GITHUB_WORKSPACE/.github/workflows); do
            echo "file: $g"
            h="${h} $g"
          done
          echo "::set-output name=h::$h"
      - name: Test2
        run: |
          for file in ${{ steps.test1.outputs.h }}; do
            echo "$file.. detected"
          done
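Note that ::set-output has since been deprecated by GitHub; on current runners the same idea is expressed by appending to the file referenced by $GITHUB_OUTPUT. A minimal sketch of the Test1 step using that mechanism (same loop, same space-separated string):
      - name: Test1
        id: test1
        run: |
          h=""
          for g in $(ls $GITHUB_WORKSPACE/.github/workflows); do
            h="${h} $g"
          done
          # write the output via the GITHUB_OUTPUT file instead of ::set-output
          echo "h=$h" >> "$GITHUB_OUTPUT"
The Test2 step then reads ${{ steps.test1.outputs.h }} exactly as before.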

Related

Incrementing variables in a YAML pipeline

I am trying to increment a variable's value by 1. Below is my code:
variables:
  a: 0
steps:
- script: |
    if [ $a == 0 ]; then
      echo $a
      $a=$a+1
    fi
    echo $a
but it's not incrementing.
I tried many increment formats; below is the list I tried:
((a+1))
a=$((a+=1))
let "a=a+1"
a=$((a + 1))
$[counter('$(a),1)]
a: $[counter(1)]
None of the above formats increments my variable.
Check the documentation: Define variables.
In this case, your YAML may look like this:
variables:
  a: 0
steps:
- script: |
    a=$(a)           # copy the pipeline variable into a shell variable
    if [ $a == 0 ]; then
      echo $a
      a=$((a+1))     # shell arithmetic; $a=$a+1 is not a valid assignment
    fi
    echo $a
If you want to use the updated value in the next script task, you also have to add
echo "##vso[task.setvariable variable=a]$a"
Check Set variables in scripts
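Putting both pieces together, a minimal sketch of a pipeline where the incremented value is visible to a later step might look like this (the displayName values are just illustrative):
variables:
  a: 0
steps:
- script: |
    a=$(a)
    a=$((a+1))
    # persist the new value so later steps see it via $(a)
    echo "##vso[task.setvariable variable=a]$a"
  displayName: Increment a
- script: |
    # $(a) is expanded by the agent before this script runs
    echo "a is now $(a)"
  displayName: Read the updated value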

Stages not following conditions in Travis

My pipeline in Travis CI executes the update stage when I'm just pushing code to the repository.
The expected behaviour when I'm pushing code to my repository is to execute the pipeline like follows:
check - stage1
but it's being executed like this:
check - stage1 - update
Also, when I force the pipeline (with T_UPDATE=true) to run just the update stage, the execution is the same.
Any idea if I'm defining something wrong in stages?
This is my stages definition in Travis:
stages:
  - name: check
  - name: stage1
    if: branch !~ /^master$/ env(T_TEST) !~ /^(?i)(true|1).*/ AND env(T_UPDATE)= !~ /^(?i)(true|1).*/ AND env(T_STAGE3) !~ /^(?i)(true|1).*/
  - name: stage2
    if: branch =~ /^master$/ OR env(T_TEST) !~ /^(?i)(true|1).*/ AND env(T_UPDATE) !~ /^(?i)(true|1).*/
  - name: test
    if: env(T_TEST) =~ /^(?i)(true|1).*/
  - name: update
    if: env(T_TEST) =~ /^(?i)(true|1).*/) OR env(T_UPDATE) =~ /^(?i)(true|1).*/
  - name: stage3
    if: env(T_STAGE3) =~ /^(?i)(true|1).*/
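For what it's worth, the stage1 condition is missing an AND between its first two terms and has a stray = after env(T_UPDATE), and the update condition has an unmatched ) after the first regex. A cleaned-up sketch of those two entries, assuming that is the intended logic:
  - name: stage1
    if: branch !~ /^master$/ AND env(T_TEST) !~ /^(?i)(true|1).*/ AND env(T_UPDATE) !~ /^(?i)(true|1).*/ AND env(T_STAGE3) !~ /^(?i)(true|1).*/
  - name: update
    if: env(T_TEST) =~ /^(?i)(true|1).*/ OR env(T_UPDATE) =~ /^(?i)(true|1).*/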

If the last 20 commit numbers are not in the current branch, the build (Bitbucket pipeline) should fail

If the last 20 commits from master are not in the current branch, the build should fail.
Here is my sh script:
#!/bin/sh
masterCommit=$(git rev-list -n 20 master)
currentBranch=$(git rev-parse --abbrev-ref HEAD)
currentCommit=$(git rev-list -n 100 "${currentBranch}")
echo "Current branch: ${currentBranch}"
echo "Current commit: ${currentCommit}"
echo "Master commit: ${masterCommit}"
for i in $masterCommit; do
  for j in $currentCommit; do
    if [ "$i" != "$j" ]; then
      echo "BUILD FAIL: $i"
      exit 0
    fi
  done
done
but it didn't work. How can I do it?
I need to collect the commit numbers into a list and fail the build if any of them is not in the current branch.
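A sketch of one way to express that in sh, assuming the requirement is "every one of the last 20 master commits must be reachable from the current branch" (git merge-base --is-ancestor exits non-zero when a commit is not an ancestor):
#!/bin/sh
for i in $(git rev-list -n 20 master); do
  # --is-ancestor exits 0 only if $i is reachable from HEAD
  if ! git merge-base --is-ancestor "$i" HEAD; then
    echo "BUILD FAIL: commit $i from master is not in the current branch"
    exit 1   # a non-zero exit code is what makes the Bitbucket pipeline step fail
  fi
done
echo "All of the last 20 master commits are present in the current branch"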

Print the outcome of a step in a GitHub Actions job

I'm trying to upload an artifact that logs the result of an mvn build. The code explains it better:
jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      ...
      - name: mvn-build
        continue-on-error: true
        run: |
          mvn package ...
          # This doesn't work: when mvn fails, the step is terminated with an exit code > 0
          STATUS=$?
          if [ $STATUS -eq 0 ]; then
            echo 1 > runs/log.txt
          else
            echo 0 > runs/log.txt
          fi
      # This part does create the file (upload-artifact@v1) but with empty content
      - name: print-result
        env:
          OUTCOME: ${{ steps.mvn-build.outcome }}
        run: |
          echo "$OUTCOME" > runs/log.txt
The step terminates early because a command at the top level of the script exits with a nonzero code (run scripts are executed with bash -e), so the lines after mvn never run. Don't run that command at top level, use it as the if condition instead, and you'll be fine:
jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      ...
      - name: mvn-build
        id: mvn-build            # an id is needed for steps.mvn-build.outcome below
        continue-on-error: true
        run: |
          if mvn package ... ; then
            echo 1 > runs/log.txt
          else
            echo 0 > runs/log.txt
          fi
      - name: print-result
        env:
          OUTCOME: ${{ steps.mvn-build.outcome }}
        run: |
          echo "$OUTCOME" > runs/log.txt
More information on this bash behavior: https://unix.stackexchange.com/a/22728/178425
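To actually ship the log as an artifact, the print-result step can be followed by an upload step. A minimal sketch (the mkdir and the artifact name build-log are assumptions, since that part is elided in the original):
      - name: print-result
        env:
          OUTCOME: ${{ steps.mvn-build.outcome }}
        run: |
          mkdir -p runs                  # make sure the directory exists
          echo "$OUTCOME" > runs/log.txt
      - name: upload-log
        uses: actions/upload-artifact@v1
        with:
          name: build-log                # assumed artifact name
          path: runs/log.txt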

Ansible: Iterate through captured command output

I am trying to convert an existing Perl script to an Ansible role. I'm having trouble iterating over captured command output.
Here is the Perl script:
# Description: This script will adjust the oom score of all the important system processes to a negative value so that OOM killer does not touch these processes ############
chomp(my $OS = `uname`);
if($OS eq "Linux")
{
    my @file = `ps -ef|egrep 'sssd|wdmd|portreserve|autofs|automount|ypbind|rpcbind|rpc.statd|rpc.mountd|rpc.idampd|ntpd|lmgrd|Xvnc|vncconfig|irqblance|rpc.rquotad|metric|nscd|crond|snpslmd|getpwname.pl|mysqld|rsyslogd|xinetd|sendmail|lsf|tigervnc|tightvnc|cfadm' |egrep -ve 'ps|egrep' |awk '{print \$8,\$2}'`;
    chomp(@file);
    foreach my $element (@file)
    {
        chomp($element);
        (my $process, my $pid) = (split(/\s/,$element))[0,1];
        print "($process)($pid)\n";
        system("echo -17 > /proc/$pid/oom_adj");
        system("cat /proc/$pid/oom_adj");
    }
}
else
{
    print "The host is a $OS system, so no action taken\n";
}
Here is what I have tried so far in Ansible:
---
- name: Capture uname output
  shell: "uname"
  register: os_type
- name: Adjust OOM to negative so that the OOM killer does not kill the processes below
  shell: 'ps -ef|egrep "sssd|wdmd|portreserve|autofs|automount|ypbind|rpcbind|rpc.statd|rpc.mountd|rpc.idampd|ntpd|lmgrd|Xvnc|vncconfig|irqblance|rpc.rquotad|metric|nscd|crond|snpslmd|getpwname.pl|mysqld|rsyslogd|xinetd|sendmail|lsf|tigervnc|tightvnc|cfadm" |egrep -ve "ps|egrep" |awk "{print \$8,\$2}"'
  register: oom
  when: os_type.stdout == 'Linux'
- debug: var=oom.stdout_lines
Now, I want to iterate over that registered variable and implement this part of the Perl script in Ansible:
foreach my $element (@file)
{
    chomp($element);
    (my $process, my $pid) = (split(/\s/,$element))[0,1];
    print "($process)($pid)\n";
    system("echo -17 > /proc/$pid/oom_adj");
    system("cat /proc/$pid/oom_adj");
}
Please help.
The below worked for me:
- hosts: temp
  gather_facts: yes
  remote_user: root
  tasks:
    - name: Adjust OOM to negative so that the OOM killer does not kill the processes below
      shell: 'ps -ef|egrep "sssd|wdmd|portreserve|autofs|automount|ypbind|rpcbind|rpc.statd|rpc.mountd|rpc.idampd|ntpd|lmgrd|Xvnc|vncconfig|irqblance|rpc.rquotad|metric|nscd|crond|snpslmd|getpwname.pl|mysqld|rsyslogd|xinetd|sendmail|lsf|tigervnc|tightvnc|cfadm" |egrep -ve "ps|egrep" |awk "{print \$2}"'
      register: oom
      when: ansible_system == 'Linux'
    - debug: var=oom.stdout
    - name: update the pid
      raw: echo -17 > /proc/{{ item }}/oom_adj
      loop: "{{ oom.stdout_lines }}"
I was able to figure this out. Below is the solution that worked for me. Thanks to everyone who tried to help me out. Appreciate it :)
---
- name: Capture uname output
  shell: "uname"
  register: os_type
- name: Gather important processes
  shell: 'ps -ef|egrep "sssd|wdmd|portreserve|autofs|automount|ypbind|rpcbind|rpc.statd|rpc.mountd|rpc.idampd|ntpd|lmgrd|Xvnc|vncconfig|irqblance|rpc.rquotad|metric|nscd|crond|snpslmd|getpwname.pl|mysqld|rsyslogd|xinetd|sendmail|lsf|tigervnc|tightvnc|cfadm" |egrep -ve "ps|egrep" |awk "{print \$8,\$2}"'
  register: oom
  when: os_type.stdout == 'Linux'
- name: Adjust OOM to negative so that the OOM killer does not kill important processes
  shell: "echo -17 >> /proc/{{ item.split()[1] }}/oom_adj"
  loop: "{{ oom.stdout_lines }}"
  register: echo
- set_fact:
    stdout_lines: []
- set_fact:
    stdout_lines: "{{ stdout_lines + item.stdout_lines }}"
  with_items: "{{ echo.results }}"
- debug:
    msg: "This is a stdout line: {{ item }}"
  with_items: "{{ stdout_lines }}"
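One caveat: echo -17 >> ... sends everything to the oom_adj file, so each registered result's stdout_lines is empty and the final debug task has nothing to print. A small tweak to that task, assuming the goal is to mirror the Perl script's cat readback of /proc/<pid>/oom_adj:
- name: Adjust OOM to negative and read the value back
  shell: "echo -17 > /proc/{{ item.split()[1] }}/oom_adj && cat /proc/{{ item.split()[1] }}/oom_adj"
  loop: "{{ oom.stdout_lines }}"
  register: echo
With that change each result's stdout_lines contains the new value, and the existing set_fact/debug tasks print it.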