I'm new to GitHub Actions and am currently trying to build a CD pipeline for my web application, which runs on a Windows server as an executable.
To do so, I created a self-hosted GitHub runner with a YAML file that performs the following steps on push:
Checkout repo:

steps:
  - name: Checkout
    run: |
      cd C:\Users\Administrator\source\repos\FtbMultiTask
      git pull origin master

Kill the running process:

  - name: Stop and build with dotnet
    run: |
      taskkill /fi "IMAGENAME eq Refresh.exe"

Publish to a folder with a specific publish profile:

  - name: dotnet publish
    run: |
      taskkill /fi "IMAGENAME eq Refresh.exe"
      dotnet publish C:\Users\Administrator\source\repos\FtbMultiTask\Refresh\FtbMultiTask.csproj -c Release --output "C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA" -p:PublishProfile=FolderProfile

Start the .exe file to actually start the app:

  - name: Start
    run: |
      start C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA\Refresh.exe
It's the 4th step where things go wrong. The exe does start (or so it seems) but then immediately closes.
I tried different options of the start command, and even put a start Cmd.exe in front of it to see whether Cmd.exe behaves in the same manner; the answer is yes, it closes immediately.
I also tried waiting like this:
- name: Start
  run: |
    start Cmd.exe /wait
    start C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA\Refresh.exe
and like this:
- name: Start
  run: |
    cd C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA
    start Refresh.exe
I'm unable to fix it or even pinpoint the problem. GitHub produces the following output:
Run cd C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA
  cd C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA
  start Refresh.exe
  shell: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.EXE -command ". '{0}'"
  env:
    DOTNET_ROOT: C:\Program Files\dotnet
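For context on the failing step: since the step's shell is PowerShell (see the log above), `start` resolves to the `Start-Process` alias rather than cmd's builtin. A more explicit launch of the same exe might look like the following sketch (same paths as above; this is an assumption about intent, not the original step):

```yaml
- name: Start
  run: |
    # Start-Process launches the exe as a separate process with an explicit
    # working directory, instead of a child of the step's console.
    Start-Process -FilePath "C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA\Refresh.exe" -WorkingDirectory "C:\Users\Administrator\Desktop\PublishProjects\FTB_MPA"
```

Note that the runner may still terminate processes spawned by a step once the job completes, so the exe can disappear even if the launch itself succeeds.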
I am relatively new to GitHub workflows and testing. I am working in a private GitHub repository with a dozen colleagues. We want to avoid using services like CircleCI for the time being and see how much we can do with just the integrated GitHub Actions, since we are unsure about the kind of access a third-party service would get to the repo.
Currently, we have two workflows (each one tests the same code for a separate Python environment) that get triggered on push or pull request in the master branch.
The steps of the workflow are as follows (the full workflow yml file is given at the bottom):
- Install Anaconda
- Create the conda environment (installing dependencies)
- Patch libraries
- Build a 3rd-party library
- Run Python unit tests
It would be amazing to know immediately which part of the code failed for a given pull request. Right now, every aspect of the codebase is tested by a single Python file, run_tests.py. I was thinking of splitting up this file and creating a workflow per aspect I want to test separately, but then I would have to create a whole new environment, patch the libraries, and build the 3rd-party library every time I want to run a single test. These tests already take quite some time.
My question is now: is there any way to avoid doing that? Is there a way to build everything on the Linux server and re-use it, so that it doesn't need to be rebuilt for every test? Is there a way to display a badge per Python test that fails/succeeds, so that we can give more information than just "everything passed" or "everything failed"? Is such a thing better suited to a service like CircleCI (other recommendations are also welcome)?
Here is the full yml file for the workflow for the Python 3 environment. The Python 2 one is identical except for the Anaconda environment steps.
name: (Python 3) install and test

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash -l {0}
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      # Install Anaconda3 and update conda package manager
      - name: Install Anaconda3
        run: |
          wget https://repo.anaconda.com/archive/Anaconda3-2020.11-Linux-x86_64.sh --quiet
          bash Anaconda3-2020.11-Linux-x86_64.sh -b -p ~/conda3-env-py3
          source ~/conda3-env-py3/bin/activate
          conda info
      # Updating the root environment. Install dependencies (YAML)
      # NOTE: The environment file (yaml) is in the 'etc' folder
      - name: Install ISF dependencies
        run: |
          source ~/conda3-env-py3/bin/activate
          conda-env create --name isf-py3 --file etc/env-py3.yml --quiet
          source activate env-py3
          conda list
      # Patch Dask library
      - name: Patch dask library
        run: |
          echo "Patching dask library."
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          cd installer
          python patch_dask_linux64.py
          conda list
      # Install pandas-msgpack
      - name: Install pandas-msgpack
        run: |
          echo "Installing pandas-msgpack"
          git clone https://github.com/abast/pandas-msgpack.git
          # Applying patch to pandas-msgpack (generating files using newer Cython)
          git -C pandas-msgpack apply ../installer/pandas_msgpack.patch
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          cd pandas-msgpack; python setup.py install
          pip list --format=freeze | grep pandas
      # Compile neuron mechanisms
      - name: Compile neuron mechanisms
        run: |
          echo "Compiling neuron mechanisms"
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          pushd .
          cd mechanisms/channels_py3; nrnivmodl
          popd
          cd mechanisms/netcon_py3; nrnivmodl
      # Run tests
      - name: Testing
        run: |
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          export PYTHONPATH="$(pwd)"
          dask-scheduler --port=38786 --dashboard-address=38787 &
          dask-worker localhost:38786 --nthreads 1 --nprocs 4 --memory-limit=100e15 &
          python run_tests.py
Many thanks in advance
Tried:
Building everything in a single github workflow, testing everything in the same workflow.
Expected:
Gaining information on specific steps that failed or worked. Displaying this information as a badge on the readme page.
Actual result:
Only the overall success status can be displayed as badge. Only the success status of "running all tests" is available.
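As an aside on avoiding the rebuild on every run: one common approach (a sketch, not something the workflow above already does) is to cache the built conda environment with the official actions/cache action, keyed on the dependency file, so the install steps can be skipped when nothing changed:

```yaml
# Hypothetical caching step: restores ~/conda3-env-py3 when etc/env-py3.yml
# is unchanged, so the environment-creation steps can be skipped on a cache hit.
- uses: actions/cache@v2
  id: cache-conda
  with:
    path: ~/conda3-env-py3
    key: conda-env-${{ hashFiles('etc/env-py3.yml') }}
# Later steps can check steps.cache-conda.outputs.cache-hit to skip the rebuild.
```
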
I am trying to create a bug tracker that allows me to record the error messages of the python script I run. Here is my YAML file at the moment:
name: Bug Tracker

# Controls when the workflow will run
on:
  # Triggers the workflow on push request events
  push:
    branches: [ main ]
  # Allows you to run this workflow manually from the Actions tab (for testing)
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  build:
    # Self Hosted Runner
    runs-on: windows-latest
    # Steps for tracker to get activated
    steps:
      # Checks-out your repository under BugTracker so the job can find it
      - uses: actions/checkout@v2
      - name: setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      # Runs main script to look for
      - name: Run File and collect bug
        id: response
        run: |
          echo Running File...
          python script.py
          echo "${{steps.response.outputs.result}}"
Every time I run the workflow, I can't save the error code. By saving the error code I mean, for example: if the Python script produces "Process completed with exit code 1.", then I can save that to a txt file. I've seen cases where I could save it if it runs successfully. I've thought about catching the error inside the Python script, but I don't want to have to add the same code to every file if I don't have to. Any thoughts? I greatly appreciate any help or suggestions.
Update: I have been able to successfully save to the txt file using code in Python. However, I'm still looking for a way to do this in GitHub, if anyone has suggestions.
You could:
- redirect the output to a log file while capturing the exit code
- set an output with the exit code value, like: echo ::set-output name=status::$status
- in another step, commit the log file
- in a final step, check that the exit code is success (0), otherwise exit the script with this exit code
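(Aside, and an assumption about your runner version: on current runners the `::set-output` workflow command is deprecated in favor of appending to the file named by `$GITHUB_OUTPUT`. The equivalent of the set-output line would be:)

```shell
# Outside of Actions, fall back to a scratch file so this sketch runs standalone.
GITHUB_OUTPUT="${GITHUB_OUTPUT:-$(mktemp)}"

status=0  # stand-in for the exit code captured from the script
# Equivalent of: echo ::set-output name=status::$status
echo "status=$status" >> "$GITHUB_OUTPUT"
```
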
Using ubuntu-latest, it would be like this:
name: Bug Tracker
on: [push, workflow_dispatch]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Run File and collect logs
        id: run
        run: |
          echo Running File...
          status=$(python script.py > log.txt 2>&1; echo $?)
          cat log.txt
          echo ::set-output name=status::$status
      - name: Commit log
        run: |
          git config --global user.name 'GitHub Action'
          git config --global user.email 'action@github.com'
          git add -A
          git checkout master
          git diff-index --quiet HEAD || git commit -am "deploy workflow logs"
          git push
      - name: Check run status
        if: steps.run.outputs.status != '0'
        run: exit "${{ steps.run.outputs.status }}"
On Windows, I think you would need to update this part:
status=$(python script.py > log.txt 2>&1; echo $?)
cat log.txt
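A PowerShell equivalent for a Windows runner might look like the following (an untested sketch; `$LASTEXITCODE` holds the exit code of the last native command, and `*>` redirects all output streams):

```yaml
- name: Run File and collect logs
  id: run
  shell: powershell
  run: |
    # Redirect all streams to the log file, then capture the exit code.
    python script.py *> log.txt
    $status = $LASTEXITCODE
    Get-Content log.txt
    echo "::set-output name=status::$status"
```
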
I am trying to do the following:
In a self-hosted runner, run a server process and invoke it using curl. This process monitors something during the execution of the next job ("another job").
Run "another job" (not on the self-hosted runner).
In the self-hosted runner, call curl again to collect statistics.
I have the following jobs in my GitHub Actions workflow:
start-process: # THIS JOB IS SUPPOSED TO START A SERVER IN BACKGROUND
  name: Start
  needs: start-runner # previous job starts the runner
  runs-on: ${{ needs.start-runner.outputs.label }} # run the job on the newly created runner
  steps:
    - uses: actions/checkout@v2
    - name: npm install
      working-directory: ./monitor
      run: npm install
    - name: npm start
      run: nohup npm start & # this starts the server in background
      working-directory: ./monitor
    - run: curl http://localhost:8080/start
    - run: ps aux
anotherjob:
  # perform another job...
and according to ps aux my server process is there:
root 4746 4.8 1.2 721308 48396 ? Sl 11:20 0:00 npm
root 4757 85.8 4.9 736308 196788 ? Sl 11:20 0:04 node /actions-runner/_work/<myrepo>/<myrepo>/monitor/node_modules/.bin/ts-node src/main.ts
root 4773 0.0 0.0 124052 2924 ? S 11:20 0:00 /usr/bin/bash -e /actions-runner/_work/_temp/51a508d8-9c2c-4723-9691-3252c8d53d88.sh
But in the Actions logs I then have, under "Complete job":
Cleaning up orphan processes
Terminate orphan process: pid (4731) (npm)
Terminate orphan process: pid (4742) (node)
So when I have another job,
statistic:
  name: Output Statistics
  needs:
    - start-runner
    - start-process
    - anotherjob
  runs-on: ${{ needs.start-runner.outputs.label }}
  steps:
    - run: ps aux
    - run: curl http://localhost:8080/statistics
it fails: ps aux no longer shows the process, and curl cannot connect to the address.
Question: how within the first job can I launch a process that stays on the runner after the job ends?
It turns out that, in order to "protect" the process from cleanup, it can be run as:
run: RUNNER_TRACKING_ID="" && (nohup npm start &)
This suggestion was found in this thread on GitHub.
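Applied to the workflow above, the step would look something like this (a sketch based on the line above; clearing RUNNER_TRACKING_ID is what opts the process out of the runner's "Cleaning up orphan processes" pass):

```yaml
- name: npm start
  working-directory: ./monitor
  # With an empty RUNNER_TRACKING_ID, the runner does not track this process,
  # so it survives the "Terminate orphan process" cleanup when the job ends.
  run: RUNNER_TRACKING_ID="" && (nohup npm start &)
```
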
I have two GitHub Actions steps which should run one after the other.
The first, install 1, installs and runs a server (e.g. the server is running on port 3000). This works, but install 1 never finishes (the server is up and doesn't receive a "stop" signal, which is OK). However, I need to proceed to the next step, install 2, only when the server is up. How should I solve this issue?
In short: when you run some process, how do you run another one after a while?
Please see this repo and the action.
- name: install 1
  shell: bash
  run: |
    make install
    make run
- name: install 2
  shell: bash
  run: |
    kubectl apply -f ./config/samples/test.yaml
I'm using kubebuilder to generate the project, including the Makefile...
https://github.com/kubernetes-sigs/kubebuilder
The two steps install 1 and install 2 are already executed one after the other by the implicit if: ${{ success() }}.
Your problem is that the server is not completely up yet. There are several possibilities to solve this problem:
Wait for a few seconds with the sleep command:
- name: install 2
  shell: bash
  run: |
    sleep 10 &&
    kubectl apply -f ./config/samples/test.yaml
Wait for the port to open, e.g. with the tool wait-port
Wait for the port to open with native Linux tools such as netcat or netstat
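Such a wait loop can also be written without netcat, as in this sketch (a hypothetical helper; it uses bash's /dev/tcp pseudo-device to probe the port):

```shell
# Poll until a TCP port accepts connections, or give up after a timeout.
# Usage: wait_for_port HOST PORT TIMEOUT_SECONDS
wait_for_port() {
  local host=$1 port=$2 timeout=$3 elapsed=0
  # The subshell's exec attempts a TCP connection via bash's /dev/tcp device.
  until (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out waiting for $host:$port" >&2
      return 1
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "$host:$port is up"
}
```

The install 2 step could then run something like `wait_for_port localhost 3000 30 && kubectl apply -f ./config/samples/test.yaml`.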
You can alternatively create an exit code yourself, which you can use in the next step from this post:
- name: install 1
  id: install1
  shell: bash
  run: |
    make install
    make run
    echo ::set-output name=exit_code::$?
- name: install 2
  if: steps.install1.outputs.exit_code == 0
  shell: bash
  run: |
    kubectl apply -f ./config/samples/test.yaml
EDIT: I think I have found your problem. By executing make run, your server runs permanently and blocks further processing of the action. You could, for example, run make run in the background with make run &. I also think you won't need two separate jobs then. For more details during the build you can add the debug option.
Use the needs keyword. You would also want to separate these into different jobs.
jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - name: install 1
        shell: bash
        run: |
          make install
          make run
  job2:
    runs-on: ubuntu-latest
    needs: job1
    steps:
      - name: install 2
        shell: bash
        run: |
          kubectl apply -f ./config/samples/test.yaml
I have a GitHub repo that contains some Linux utilities.
To run tests, I use GitHub Actions and a simple custom workflow that runs on a remote Linux server using a self-hosted runner:
name: CMake
on: [push]
env:
  BUILD_TYPE: Release
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2
      - name: Build
        working-directory: ${{runner.workspace}}/Utils
        shell: bash
        run: cmake --build . --config $BUILD_TYPE
      - name: Run test
        working-directory: ${{runner.workspace}}/Utils
        shell: bash
        run: ./testing/test.sh
The workflow is very simple: I compile the source using CMake and then run a script from the same repo to test the build. The test script is a bash script and looks as follows:
#!/bin/bash

export LD_LIBRARY_PATH=/path/to/bin
cd /path/to/bin
echo -e "Starting the tests"
./run_server &
./run_tests

if [ $? -eq 0 ]; then
    echo "successful"
else
    echo "failed"
    exit 1
fi
Here the script starts up a compiled application from my code (run_server) and then runs the testing utility, which communicates with it and prints a result.
I actually use C's printf() inside run_tests to print the output strings. If I run this on a local machine, I get output like the following:
Starting the tests
test1 executed Ok
test2 executed Ok
test3 executed Ok
successful
Each test takes about 1 second, so the application prints a line like testX executed Ok about once per second.
But if it runs under GitHub Actions, the output looks a bit different (copied from the GitHub Actions console):
./testing/test.sh
  shell: /bin/bash --noprofile --norc -e -o pipefail {0}
  env:
    BUILD_TYPE: Release
Starting the tests
successful
And even in this case, the output from the bash script is printed only after the script has finished.
So I have 2 problems:
- no printf() output from the test application
- the script output (printed using echo) comes only after the script has finished
I expected the same behavior from the GitHub Action as I get on a local machine, i.e. lines printed immediately, about one per second.
It looks like GitHub Actions buffers all the output until a step has executed, and ignores the prints from the application run inside the bash script.
Is there a way to get all the output in real time while a step executes?
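For what it's worth, a frequent cause of the first symptom is stdio buffering: when stdout is a pipe (as under a runner) rather than a terminal, C's printf() output is block-buffered and only flushed at exit. A hedged sketch of one workaround, using GNU coreutils' stdbuf to force line buffering (with a printf stand-in for run_tests, which is not reproduced here):

```shell
# stdbuf -oL makes the command's stdout line-buffered, so each line is
# flushed as soon as it is printed, even when stdout is a pipe.
# The sh -c command is a stand-in for "./run_tests" printing one line per test:
stdbuf -oL sh -c 'printf "test1 executed Ok\n"; printf "test2 executed Ok\n"'
```

In the test script this would become `stdbuf -oL ./run_tests`; alternatively, the C code itself could call `setbuf(stdout, NULL)` or `fflush(stdout)` after each line.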