GitHub Actions: run two processes one after the other

I have two GitHub Actions steps which should run one after the other.
The first, install 1, installs and runs a server (e.g. the server listens on port 3000). This works, but install 1 never finishes (the server is up and never receives a "stop" signal, which is fine). However, I need to proceed to the next step, install 2, only once the server is up. How should I solve this?
In short: when you run some process and need to run another one once the first is ready.
Please see this repo and the action.
- name: install 1
  shell: bash
  run: |
    make install
    make run
- name: install 2
  shell: bash
  run: |
    kubectl apply -f ./config/samples/test.yaml
I'm using kubebuilder to generate the project, including the Makefile:
https://github.com/kubernetes-sigs/kubebuilder

The two steps install 1 and install 2 are already executed one after the other because of the implicit if: ${{ success() }}.
Your problem is that the server is not completely up yet. There are several possibilities to solve this:
Wait a few seconds with the sleep command:
- name: install 2
  shell: bash
  run: |
    sleep 10 &&
    kubectl apply -f ./config/samples/test.yaml
Wait for the port to open, e.g. with the tool wait-port.
Wait for the port to open with native Linux tools such as netcat or netstat.
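A minimal sketch of the netcat approach (assuming the server listens on port 3000, as in the question):
- name: install 2
  shell: bash
  run: |
    # Poll until the port accepts connections (up to ~60 s), then deploy.
    for i in $(seq 1 60); do
      nc -z localhost 3000 && break
      sleep 1
    done
    kubectl apply -f ./config/samples/test.yaml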
Alternatively, you can create an exit code yourself and use it in the next step, as described in this post:
- name: install 1
  id: install1
  shell: bash
  run: |
    make install
    make run
    echo ::set-output name=exit_code::$?
- name: install 2
  if: steps.install1.outputs.exit_code == 0
  shell: bash
  run: |
    kubectl apply -f ./config/samples/test.yaml
EDIT: I think I have found your problem. By executing make run your server runs permanently and blocks further processing of the action. You could, for example, run make run in the background with make run &. I also think you won't need two separate jobs then. For more details during the build you can add the debug option.
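A minimal sketch of that idea (the readiness wait is illustrative and assumes port 3000; it is not part of the original answer):
- name: install 1
  shell: bash
  run: |
    make install
    # Run the server in the background so this step can finish.
    make run &
    # Wait until the server answers on its port before moving on.
    while ! nc -z localhost 3000; do sleep 1; done
- name: install 2
  shell: bash
  run: |
    kubectl apply -f ./config/samples/test.yaml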

Use the needs keyword. You would also want to separate these into different jobs.
jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - name: install 1
        shell: bash
        run: |
          make install
          make run
  job2:
    runs-on: ubuntu-latest
    needs: job1
    steps:
      - name: install 2
        shell: bash
        run: |
          kubectl apply -f ./config/samples/test.yaml

Related

How to start .exe file with Github self-hosted runner

I'm new to GitHub Actions and currently trying to make a CD pipeline for my web application, which I run on a Windows server as an executable.
In order to do so I created a self-hosted GitHub runner with a YAML file that performs the following steps on push:
Checkout repo
steps:
  - name: Checkout
    run: |
      cd C:\Users\Administrator\source\repos\FtbMultiTask
      git pull origin master
Kill running process
- name: Stop and build with dotnet
  run: |
    taskkill /fi "IMAGENAME eq Refresh.exe"
Publish to Folder with specific publish profile
- name: dotnet publish
  run: |
    taskkill /fi "IMAGENAME eq Refresh.exe"
    dotnet publish C:\Users\Administrator\source\repos\FtbMultiTask\Refresh\FtbMultiTask.csproj -c Release --output "C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA" -p:PublishProfile=FolderProfile
Start .exe file to actually start the app
- name: Start
  run: |
    start C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA\Refresh.exe
It's the 4th step where things go wrong. The exe does start (or seems to) but then immediately closes.
I tried different options of the start command and even put a start Cmd.exe in front of it to see if it behaves in the same manner with Cmd.exe, and the answer is yes: it closes immediately.
I also tried waiting like this:
- name: Start
  run: |
    start Cmd.exe /wait
    start C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA\Refresh.exe
and like this:
- name: Start
  run: |
    cd C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA
    start Refresh.exe
I'm unable to fix it or even pin down the problem. GitHub produces the following output:
Run cd C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA
  cd C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA
  start Refresh.exe
  shell: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.EXE -command ". '{0}'"
  env:
    DOTNET_ROOT: C:\Program Files\dotnet
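One detail visible in that log: the step runs under PowerShell, where start resolves to the Start-Process alias. A hedged sketch of making the launch explicit, using the paths from the question (an illustration only, not a confirmed fix for the immediate exit):
- name: Start
  run: |
    Start-Process -FilePath "C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA\Refresh.exe" -WorkingDirectory "C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA"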

GitHub Action: failed with "lost connection"

We are trying to build our GCP instance template using GitHub Actions.
We build our Java archives and transfer them to the GCP instance from the GitHub Ubuntu runner.
We set up an SSH key to access the GCP instance from the Ubuntu machine using:
ssh-keygen -t rsa -f ~/.ssh/temp -C root -q -N "" && chmod 400 ~/.ssh/temp && chmod 400 ~/.ssh/temp.pub && echo root:`cat ~/.ssh/temp.pub` > ~/.ssh/temp-formated.pub && chmod 700 /home/runner/.ssh/temp-formated.pub
We get error response when we try to run the following command
scp -o StrictHostKeyChecking=no -i /home/runner/.ssh/temp ./code-web/target/code.war root@<ip>:/opt/code.war
The script worked fine until 5 Dec 2022 and started giving errors from 6 Dec 2022.
We used to face occasional failures, but those went away when we re-ran the build.
build.yml
# This is a basic workflow to help you get started with Actions
name: build-web
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the develop branch
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
    inputs:
      short_sha:
        description: 'Git sha on which build will be created'
        required: true
        default: ''
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
        with:
          ref: ${{ github.event.inputs.short_sha }}
      # Build using mvn
      - name: Set up JDK 8
        uses: actions/setup-java@v2
        with:
          java-version: '8'
          distribution: 'adopt'
          cache: 'maven'
      - name: Build with Maven
        run: mvn --batch-mode --update-snapshots verify
      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@v0
        with:
          service_account_key: ${{ secrets.GCP_SA_KEY }}
          export_default_credentials: true
      - name: Gcloud Version
        run: gcloud --version
      - name: Run build script
        run: python ./.github/workflows/build.py ${{ github.event.inputs.short_sha }}
The following is the error log.
We have tried multiple builds for other builds in the same repository; those failed too.
We have confirmed that the secret is still active.
The build itself is successful, hence the file "code.war" exists.
Any idea how to figure out the root cause, or is anyone facing a similar issue?
###Running: ssh-keygen -t rsa -f ~/.ssh/temp -C root -q -N "" && chmod 400 ~/.ssh/temp && chmod 400 ~/.ssh/temp.pub && echo root:`cat ~/.ssh/temp.pub` > ~/.ssh/temp-formated.pub && chmod 700 /home/runner/.ssh/temp-formated.pub
###Exit Code: 0
###RESPONSE:(b'', b'')
####################################
#########Transfer public key to instance############
###Running: cd ~/ && pwd
###Exit Code: 0
###RESPONSE:(b'/home/runner\n', b'')
###Running: gcloud compute instances add-metadata dummy-temp-web --project=projectname --zone=us-east1-b --metadata-from-file ssh-keys=/home/runner/.ssh/temp-formated.pub
###Exit Code: 0
###RESPONSE:(b'', b'Updated [https://www.googleapis.com/compute/v1/projects/projectname/zones/us-east1-b/instances/dummy-temp-web].\n')
####################################
#Give time for key to propogate
#########copy to remote############
###Running: scp -o StrictHostKeyChecking=no -i /home/runner/.ssh/temp ./code-web/target/code.war root@<ip>:/opt/code.war
###Exit Code: 1
###RESPONSE:(b'', b"Warning: Permanently added '<ip>' (ECDSA) to the list of known hosts.\r\nPermission denied, please try again.\r\nPermission denied, please try again.\r\nroot#<ip>: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\nlost connection\n")
Traceback (most recent call last):
File "/home/runner/work/code/code/./.github/workflows/gcloudBuild.py", line 100, in <module>
execute(f'***copyBuldFileToRemoteCMD***', False)
File "/home/runner/work/code/code/./.github/workflows/gcloudBuild.py", line [30](https://github.com/company/code/actions/runs/3628509641/jobs/6119611343#step:7:31), in execute
raise Exception(f'Sorry, bad exit code***process.returncode***')
Exception: Sorry, bad exit code1
I also had a similar issue when I was using ubuntu-latest as the job runner in the YAML file.
Instead of ubuntu-latest I used ubuntu-20.04, and the issue was resolved for me.
You can try this in your YAML file:
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-20.04
It is working for me.

Self-Hosted Github Runner: start a background server process in a job and let it run after the job ends

I am trying to do the following:
On a self-hosted runner, run a server process and invoke it using curl. This process monitors something during the execution of the next "another job".
Run "another job" (not on the self-hosted runner).
On the self-hosted runner, call curl again to collect statistics.
I have the following jobs in my Github Actions workflow:
start-process: # THIS JOB IS SUPPOSED TO START A SERVER IN BACKGROUND
  name: Start
  needs: start-runner # previous job starts the runner
  runs-on: ${{ needs.start-runner.outputs.label }} # run the job on the newly created runner
  steps:
    - uses: actions/checkout@v2
    - name: npm install
      working-directory: ./monitor
      run: npm install
    - name: npm start
      run: nohup npm start & # this starts the server in background
      working-directory: ./monitor
    - run: curl http://localhost:8080/start
    - run: ps aux
anotherjob:
  # perform another job...
and according to ps aux I have my server process there:
root 4746 4.8 1.2 721308 48396 ? Sl 11:20 0:00 npm
root 4757 85.8 4.9 736308 196788 ? Sl 11:20 0:04 node /actions-runner/_work/<myrepo>/<myrepo>/monitor/node_modules/.bin/ts-node src/main.ts
root 4773 0.0 0.0 124052 2924 ? S 11:20 0:00 /usr/bin/bash -e /actions-runner/_work/_temp/51a508d8-9c2c-4723-9691-3252c8d53d88.sh
But in the Actions logs I then have, under "Complete Job":
Cleaning up orphan processes
Terminate orphan process: pid (4731) (npm)
Terminate orphan process: pid (4742) (node)
So when I have another job
statistic:
  name: Output Statistics
  needs:
    - start-runner
    - start-process
    - anotherjob
  runs-on: ${{ needs.start-runner.outputs.label }}
  steps:
    - run: ps aux
    - run: curl http://localhost:8080/statistics
and this fails: ps aux no longer shows the process, and curl cannot connect to the address.
Question: how, within the first job, can I launch a process that stays on the runner after the job ends?
It turns out that in order to "protect" the process from cleanup, it can be run as
run: RUNNER_TRACKING_ID="" && (nohup npm start&).
This suggestion was found in this thread on GitHub.
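In context, the npm start step from the first job might then look like this (a sketch based on that suggestion, reusing the step layout from the question):
- name: npm start
  working-directory: ./monitor
  run: |
    # An empty RUNNER_TRACKING_ID keeps the runner from terminating this
    # process as an orphan when the job completes.
    RUNNER_TRACKING_ID="" && (nohup npm start &)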

Updating GitHub issues from GitHub Actions

I was trying to make a GitHub action using some simple scripts (which I already use locally) that I would like to run inside a docker container.
A new issue should trigger the event to update the said issue with its content based on some processing. An example of this might be:
Say I have a list of labels defined in my script and it checks the issue's title and adds a label to the issue.
I'm still reading the GitHub Actions documentation so I may not be completely informed, but the issue I seem to have is that on my local machine these scripts use the gh CLI for such tasks (e.g. adding labels). So I was wondering whether I need to have gh installed in that docker container, or is there a better way to update the issue? I'm very much willing to rewrite these scripts from scratch using GitHub's event payloads as long as I don't have to write TypeScript.
I've looked around the documentation and couldn't find anything that talked about updating issues. Also couldn't find a similar question being asked here; it may be that I've missed something so if that is the case direct me to relevant material and I would very much appreciate it.
An option could be (as you said) to install GH in that docker container, and then run GH commands.
Example using a container:
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: docker://myrepoandimagewithghinstalled
    steps:
      - name: Github CLI Authentication
        run: gh auth login --hostname <your hostname>
      - name: Github CLI commands execution samples
        run: |
          gh command1
          gh command2
          gh command3
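A side note that is not from the original answer: when running inside GitHub Actions, gh can also authenticate non-interactively by reading a token from the environment, so the interactive login step can often be replaced by an env entry on the step, for example:
- name: Github CLI commands execution samples
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    gh command1
    gh command2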
Another option could be to install GH directly on the OS (for example ubuntu-latest), authenticate, and then use the run option to execute gh commands.
Example installing GH on the OS:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Install Github CLI
        run: |
          sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-key C99B11DEB97541F0
          sudo apt-add-repository https://cli.github.com/packages
          sudo apt update
          sudo apt install gh
      - name: Github CLI Authentication
        run: gh auth login --hostname <your hostname>
      - name: Github CLI commands execution samples
        run: |
          gh command1
          gh command2
          gh command3
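For the concrete use case in the question (labelling the issue that triggered the workflow), a hedged sketch of what the gh invocation could look like; the trigger, permissions block and label name are assumptions, not from the original answer:
on:
  issues:
    types: [opened]
jobs:
  label:
    runs-on: ubuntu-latest
    permissions:
      issues: write
    steps:
      - name: Add a label to the triggering issue
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # Label the issue that triggered this run (label name is illustrative).
          gh issue edit ${{ github.event.issue.number }} --repo ${{ github.repository }} --add-label "needs-triage"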
Finally, you could also create a script that consumes the GitHub API to update an issue, and execute that script using the run option.
Example executing a Python script in your workflow:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: checkout repo content
        uses: actions/checkout@v2 # checkout the repository content to github runner.
      - name: setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.8 # install the python version needed
      - name: execute py script # run run.py to get the latest data
        run: |
          python run.py
        env:
          key: ${{ secrets.key }} # if run.py requires passwords etc., set them as secrets
      - name: export index
        .... # use corresponding script or actions to help export.
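If the script goes straight to the REST API instead of gh, the call for adding labels to an issue is roughly the following (a curl sketch; repository, issue number and label are placeholders):
# Add labels to an issue via the GitHub REST API (values are placeholders).
curl -X POST \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/OWNER/REPO/issues/ISSUE_NUMBER/labels \
  -d '{"labels":["needs-triage"]}'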

Avoid GitHub action buffering

I have a GitHub repo that contains some Linux utilities.
To run tests I use GitHub Actions and a simple custom workflow that runs on a remote Linux server using a self-hosted runner:
name: CMake
on: [push]
env:
  BUILD_TYPE: Release
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2
      - name: Build
        working-directory: ${{runner.workspace}}/Utils
        shell: bash
        run: cmake --build . --config $BUILD_TYPE
      - name: Run test
        working-directory: ${{runner.workspace}}/Utils
        shell: bash
        run: ./testing/test.sh
The workflow is very simple: I compile the source using cmake and then run a script from the same repo to test the build. The test script is a bash script and looks as follows:
#!/bin/bash
export LD_LIBRARY_PATH=/path/to/bin
cd /path/to/bin
echo -e "Starting the tests"
./run_server &
./run_tests
if [ $? -eq 0 ]; then
    echo "successful"
else
    echo "failed"
    exit 1
fi
Here the script starts up a compiled application from my code (run_server) and then runs the testing utility, which communicates with it and prints a result.
I use C printf() inside run_tests to print the output strings. If I run this on a local machine I get output like the following:
Starting the tests
test1 executed Ok
test2 executed Ok
test3 executed Ok
successful
Each test takes about 1 second, so the application prints a line like testX executed Ok roughly once per second.
But if it runs using GitHub Actions the output looks a bit different (copied from the GitHub Actions console):
./testing/test.sh
shell: /bin/bash --noprofile --norc -e -o pipefail {0}
env:
BUILD_TYPE: Release
Starting the tests
successful
And even then, the output from the bash script was printed only after the script had finished.
So I have two problems:
no printf() output from the test application
the script output (printed using echo) comes only after the script has finished
I expected the same behavior from the GitHub Action as I get on a local machine, i.e. lines printed roughly once per second, as they happen.
It looks like the GitHub Action buffers all the output until a step has finished and drops the printf output from the application that runs inside the bash script.
Is there a way to get all the output in real time while a step executes?
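Not from the original thread, but the symptoms are consistent with C stdio switching from line buffering to full buffering when stdout is not a terminal. A common workaround is to force line buffering with stdbuf inside the test script, roughly like this:
# Force line-buffered stdout so printf() lines appear as they happen
# (stdbuf is part of GNU coreutils; the binaries are those from the script above).
stdbuf -oL ./run_server &
stdbuf -oL ./run_tests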