I have a GitHub repo that contains some Linux utilities.
To run the tests I use GitHub Actions with a simple custom workflow that runs on a remote Linux server via a self-hosted runner:
name: CMake
on: [push]
env:
  BUILD_TYPE: Release
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2
      - name: Build
        working-directory: ${{runner.workspace}}/Utils
        shell: bash
        run: cmake --build . --config $BUILD_TYPE
      - name: Run test
        working-directory: ${{runner.workspace}}/Utils
        shell: bash
        run: ./testing/test.sh
The workflow is very simple: I compile the source using cmake and then run a script from the same repo to test the build. The test script is a bash script and looks as follows:
#!/bin/bash
export LD_LIBRARY_PATH=/path/to/bin
cd /path/to/bin
echo -e "Starting the tests"
./run_server &
./run_tests
if [ $? -eq 0 ]; then
  echo "successful"
else
  echo "failed"
  exit 1
fi
Here the script starts up a compiled application from my code (run_server) and then runs the testing utility that communicates with it and prints a result.
I use C printf() inside run_tests to print the output strings. If I run this on a local machine I get output like the following:
Starting the tests
test1 executed Ok
test2 executed Ok
test3 executed Ok
successful
Each test takes about 1 sec., so the application prints a line like testX executed Ok roughly once per second.
But if it runs using GitHub Actions the output looks a bit different (copied from the GitHub Actions console):
./testing/test.sh
shell: /bin/bash --noprofile --norc -e -o pipefail {0}
env:
BUILD_TYPE: Release
Starting the tests
successful
And even in this case the output from the bash script was printed only after the script had finished.
So I have 2 problems:
no printf() output from the test application
the script output (printed using echo) comes only after the script has finished
I expected the same behavior from the GitHub action as I get on a local machine, i.e. lines printed roughly once per second, appearing immediately.
It looks like the GitHub action buffers all the output until a step has finished and ignores everything printed by the application that runs inside the bash script.
Is there a way to get all the output in real time while a step executes?
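If the cause is indeed stdout buffering (printf() output is fully buffered when stdout is not a terminal, so nothing is flushed until run_tests exits), one sketch of a workaround in test.sh would be to force line-buffered output, assuming stdbuf from GNU coreutils is available on the runner:

./run_server &
# stdbuf -oL asks for line-buffered stdout, so each printf() line is flushed
# as soon as it is printed instead of when run_tests exits
stdbuf -oL ./run_tests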
Related
I'm new to GitHub Actions and am currently trying to build a CD pipeline for my web application, which I run on a Windows server as an executable.
To do so, I created a self-hosted GitHub runner with a YAML file that performs the following steps on push:
Checkout repo
steps:
  - name: Checkout
    run: |
      cd C:\Users\Administrator\source\repos\FtbMultiTask
      git pull origin master
Kill running process
- name: Stop and build with dotnet
  run: |
    taskkill /fi "IMAGENAME eq Refresh.exe"
Publish to Folder with specific publish profile
- name: dotnet publish
  run: |
    taskkill /fi "IMAGENAME eq Refresh.exe"
    dotnet publish C:\Users\Administrator\source\repos\FtbMultiTask\Refresh\FtbMultiTask.csproj -c Release --output "C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA" -p:PublishProfile=FolderProfile
Start .exe file to actually start the app
- name: Start
  run: |
    start C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA\Refresh.exe
It's the 4th step where things go wrong. The exe does start (or at least it seems to) but then immediately closes.
I tried different options of the start command and even put start Cmd.exe in front of it to see if Cmd.exe behaves in the same manner, and the answer is yes: it is closed immediately.
I also tried waiting like this:
- name: Start
  run: |
    start Cmd.exe /wait
    start C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA\Refresh.exe
and like this:
- name: Start
  run: |
    cd C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA
    start Refresh.exe
I'm unable to either fix it or even pinpoint the problem. GitHub produces the following output:
Run cd C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA
  cd C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA
  start Refresh.exe
  shell: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.EXE -command ". '{0}'"
  env:
    DOTNET_ROOT: C:\Program Files\dotnet
I'm trying to set up and run a GitHub action in a nested folder of the repo.
I thought I could use working-directory, but if I write this:
jobs:
  test-working-directory:
    runs-on: ubuntu-latest
    name: Test
    defaults:
      run:
        working-directory: my_folder
    steps:
      - run: echo test
I get an error:
Run echo test
echo test
shell: /usr/bin/bash -e {0}
Error: An error occurred trying to start process '/usr/bin/bash' with working directory '/home/runner/work/my_repo/my_repo/my_folder'. No such file or directory
I notice my_repo appears twice in the path of the error.
Here is the run on my repo, where:
my_repo = novade_flutter_packages
my_folder = packages
What am I missing?
You didn't check out the repository in the second job.
Each job runs on a different instance, so you have to check it out separately for each one of them.
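A sketch of what the workflow from the question might look like with a checkout step added before the run step (keeping the same defaults block; checking out the repository is what creates my_folder inside the workspace, so the working directory exists):

jobs:
  test-working-directory:
    runs-on: ubuntu-latest
    name: Test
    defaults:
      run:
        working-directory: my_folder
    steps:
      # Check out the repository first so my_folder exists on the runner
      - uses: actions/checkout@v2
      - run: echo test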
I am trying to create a bug tracker that allows me to record the error messages of the Python script I run. Here is my YAML file at the moment:
name: Bug Tracker
# Controls when the workflow will run
on:
  # Triggers the workflow on push request events
  push:
    branches: [ main ]
  # Allows you to run this workflow manually from the Actions tab (for testing)
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  build:
    # Self Hosted Runner
    runs-on: windows-latest
    # Steps for tracker to get activated
    steps:
      # Checks-out your repository under BugTracker so the job can find it
      - uses: actions/checkout@v2
      - name: setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      # Runs main script to look for
      - name: Run File and collect bug
        id: response
        run: |
          echo Running File...
          python script.py
          echo "${{steps.response.outputs.result}}"
Every time I run the workflow I can't save the error code. By saving the error code I mean, for example: if the Python script produces "Process completed with exit code 1." then I can save that to a txt file. I've seen cases where the output could be saved if it runs successfully. I've thought about catching the error inside the Python script, but I don't want to add the same code to every file if I don't have to. Any thoughts? I greatly appreciate any help or suggestions.
Update: I have been able to successfully use code in Python to save to the txt file. However, I'm still looking for a way to do this in GitHub Actions if anyone has any suggestions.
You could:
redirect the output to a log file while capturing the exit code
set an output with the exit code value, like:
echo ::set-output name=status::$status
in another step, commit the log file
in a final step, check that the exit code is success (0), otherwise exit the script with this exit code
Using ubuntu-latest, it would be like this:
name: Bug Tracker
on: [push, workflow_dispatch]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Run File and collect logs
        id: run
        run: |
          echo Running File...
          status=$(python script.py > log.txt 2>&1; echo $?)
          cat log.txt
          echo ::set-output name=status::$status
      - name: Commit log
        run: |
          git config --global user.name 'GitHub Action'
          git config --global user.email 'action@github.com'
          git add -A
          git checkout master
          git diff-index --quiet HEAD || git commit -am "deploy workflow logs"
          git push
      - name: Check run status
        if: steps.run.outputs.status != '0'
        run: exit "${{ steps.run.outputs.status }}"
On Windows, I think you would need to update this part:
status=$(python script.py > log.txt 2>&1; echo $?)
cat log.txt
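A possible PowerShell equivalent (a sketch only, assuming the default powershell shell on a Windows runner; $LASTEXITCODE holds the exit code of the last native command):

# redirect stdout and stderr of the script to the log file
python script.py > log.txt 2>&1
$status = $LASTEXITCODE
Get-Content log.txt
echo "::set-output name=status::$status"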
I am trying to do the following:
On a self-hosted runner, run a server process and invoke it using curl. This process monitors something during the execution of the next job ("another job").
Run "another job" (not on the self-hosted runner).
On the self-hosted runner, call curl again to collect statistics.
I have the following jobs in my Github Actions workflow:
start-process: # THIS JOB IS SUPPOSED TO START A SERVER IN BACKGROUND
  name: Start
  needs: start-runner # previous job starts the runner
  runs-on: ${{ needs.start-runner.outputs.label }} # run the job on the newly created runner
  steps:
    - uses: actions/checkout@v2
    - name: npm install
      working-directory: ./monitor
      run: npm install
    - name: npm start
      run: nohup npm start & # this starts the server in background
      working-directory: ./monitor
    - run: curl http://localhost:8080/start
    - run: ps aux
anotherjob:
  # perform another job...
and according to ps aux I have my server process there:
root 4746 4.8 1.2 721308 48396 ? Sl 11:20 0:00 npm
root 4757 85.8 4.9 736308 196788 ? Sl 11:20 0:04 node /actions-runner/_work/<myrepo>/<myrepo>/monitor/node_modules/.bin/ts-node src/main.ts
root 4773 0.0 0.0 124052 2924 ? S 11:20 0:00 /usr/bin/bash -e /actions-runner/_work/_temp/51a508d8-9c2c-4723-9691-3252c8d53d88.sh
But in the Actions logs I then have, under "Complete job":
Cleaning up orphan processes
Terminate orphan process: pid (4731) (npm)
Terminate orphan process: pid (4742) (node)
So when I have another job
statistic:
  name: Output Statistics
  needs:
    - start-runner
    - start-process
    - anotherjob
  runs-on: ${{ needs.start-runner.outputs.label }}
  steps:
    - run: ps aux
    - run: curl http://localhost:8080/statistics
and this fails: ps aux no longer shows the process, and curl cannot connect to the address.
Question: how, within the first job, can I launch a process that stays alive on the runner after the job ends?
It turns out that in order to "protect" the process from cleanup, it can be run as
run: RUNNER_TRACKING_ID="" && (nohup npm start &)
This suggestion was found in this thread on GitHub.
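Applied to the npm start step from the workflow above, that would look roughly like this (a sketch; clearing RUNNER_TRACKING_ID is what keeps the runner's orphan-process cleanup from terminating the server):

- name: npm start
  working-directory: ./monitor
  # empty RUNNER_TRACKING_ID tells the runner not to track (and later kill) this process
  run: RUNNER_TRACKING_ID="" && (nohup npm start &)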
I've got two GitHub Actions steps which should run one after the other.
The first, install 1, installs and runs a server (e.g. the server is running on port 3000). This works, but install 1 never finishes (the server is up and you don't get a "stop" signal, which is ok), and I need to proceed to the next step, install 2, only when the server is up. How should I solve this issue?
In short: when you run some process and you need to run another one after a while.
Please see this repo and the action.
- name: install 1
  shell: bash
  run: |
    make install
    make run
- name: install 2
  shell: bash
  run: |
    kubectl apply -f ./config/samples/test.yaml
I'm using kubebuilder to generate the project, including the Makefile:
https://github.com/kubernetes-sigs/kubebuilder
The two steps install 1 and install 2 are already executed one after the other because of the implicit if: ${{ success() }}.
Your problem is that the server is not completely up yet. There are several possibilities to solve this problem:
Wait for a few seconds with the sleep command:
- name: install 2
  shell: bash
  run: |
    sleep 10 &&
    kubectl apply -f ./config/samples/test.yaml
Wait for the port to open, e.g. with the tool wait-port
Wait for the port to open with native Linux tools such as netcat or netstat (see the sketch below)
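A minimal sketch of the netcat variant, assuming nc is available on the runner and the server listens on port 3000 as described in the question:

- name: wait for server
  shell: bash
  run: |
    # poll port 3000 until the server accepts connections, give up after ~30 s
    for i in {1..30}; do
      nc -z localhost 3000 && exit 0
      sleep 1
    done
    echo "server did not come up in time"
    exit 1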
You can alternatively create an exit code yourself, which you can use in the next step from this post:
- name: install 1
  id: install1
  shell: bash
  run: |
    make install
    make run
    echo ::set-output name=exit_code::$?
- name: install 2
  if: steps.install1.outputs.exit_code == 0
  shell: bash
  run: |
    kubectl apply -f ./config/samples/test.yaml
EDIT: I think I have found your problem. Because you execute make run, your server runs permanently and blocks further processing of the action. You could, for example, run make run in the background with make run & (see the sketch below). I think you won't need the two jobs then either. For more details during the build you can add the debug option.
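A sketch of that suggestion, keeping the rest of the step as in the question:

- name: install 1
  shell: bash
  run: |
    make install
    # start the server in the background so the step can finish
    make run &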
Use the needs keyword. You would also want to separate these into different jobs.
jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - name: install 1
        shell: bash
        run: |
          make install
          make run
  job2:
    runs-on: ubuntu-latest
    needs: job1
    steps:
      - name: install 2
        shell: bash
        run: |
          kubectl apply -f ./config/samples/test.yaml