We are trying to build our GCP instance template using GitHub Actions, where we build our Java archives and transfer them to the GCP instance from a GitHub-hosted Ubuntu runner.
We have set up an SSH key to access the GCP instance from the Ubuntu machine using:
ssh-keygen -t rsa -f ~/.ssh/temp -C root -q -N "" && chmod 400 ~/.ssh/temp && chmod 400 ~/.ssh/temp.pub && echo root:`cat ~/.ssh/temp.pub` > ~/.ssh/temp-formated.pub && chmod 700 /home/runner/.ssh/temp-formated.pub
We get an error when we try to run the following command:
scp -o StrictHostKeyChecking=no -i /home/runner/.ssh/temp ./code-web/target/code.war root@<ip>:/opt/code.war
The script worked fine until 5th Dec 2022 and started giving errors from 6th Dec 2022. We used to face occasional failures, but the same build worked fine when we re-ran it.
build.yml
# This is a basic workflow to help you get started with Actions
name: build-web

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the develop branch
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
    inputs:
      short_sha:
        description: 'Git sha on which build will be created'
        required: true
        default: ''

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
        with:
          ref: ${{ github.event.inputs.short_sha }}
      # Build using mvn
      - name: Set up JDK 8
        uses: actions/setup-java@v2
        with:
          java-version: '8'
          distribution: 'adopt'
          cache: 'maven'
      - name: Build with Maven
        run: mvn --batch-mode --update-snapshots verify
      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@v0
        with:
          service_account_key: ${{ secrets.GCP_SA_KEY }}
          export_default_credentials: true
      - name: Gcloud Version
        run: gcloud --version
      - name: Run build script
        run: python ./.github/workflows/build.py ${{ github.event.inputs.short_sha }}
We have tried builds for other jobs in the same repository; those failed too.
We have confirmed that the secret is still active.
The build itself is also successful, hence the file "code.war" exists.
Any idea how to figure out the root cause, or is anyone facing a similar issue?
Following is the error log:
###Running: ssh-keygen -t rsa -f ~/.ssh/temp -C root -q -N "" && chmod 400 ~/.ssh/temp && chmod 400 ~/.ssh/temp.pub && echo root:`cat ~/.ssh/temp.pub` > ~/.ssh/temp-formated.pub && chmod 700 /home/runner/.ssh/temp-formated.pub
###Exit Code: 0
###RESPONSE:(b'', b'')
####################################
#########Transfer public key to instance############
###Running: cd ~/ && pwd
###Exit Code: 0
###RESPONSE:(b'/home/runner\n', b'')
###Running: gcloud compute instances add-metadata dummy-temp-web --project=projectname --zone=us-east1-b --metadata-from-file ssh-keys=/home/runner/.ssh/temp-formated.pub
###Exit Code: 0
###RESPONSE:(b'', b'Updated [https://www.googleapis.com/compute/v1/projects/projectname/zones/us-east1-b/instances/dummy-temp-web].\n')
####################################
#Give time for key to propogate
#########copy to remote############
###Running: scp -o StrictHostKeyChecking=no -i /home/runner/.ssh/temp ./code-web/target/code.war root@<ip>:/opt/code.war
###Exit Code: 1
###RESPONSE:(b'', b"Warning: Permanently added '<ip>' (ECDSA) to the list of known hosts.\r\nPermission denied, please try again.\r\nPermission denied, please try again.\r\nroot@<ip>: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\nlost connection\n")
Traceback (most recent call last):
  File "/home/runner/work/code/code/./.github/workflows/gcloudBuild.py", line 100, in <module>
    execute(f'***copyBuldFileToRemoteCMD***', False)
  File "/home/runner/work/code/code/./.github/workflows/gcloudBuild.py", line 30, in execute
    raise Exception(f'Sorry, bad exit code***process.returncode***')
Exception: Sorry, bad exit code1
I had a similar issue when I was using ubuntu-latest as the job runner in the yml file.
When I used ubuntu-20.04 instead of ubuntu-latest, the issue was resolved for me.
You can try this in your yml file:
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-20.04
It is working for me.
I couldn't make this workflow work; I'm always receiving the error "every step must define a `uses` or `run` key", but looking at my script I don't see any issues. Can someone please help me fix this? It does not seem to be a "-" (dash) problem, as it usually is.
# This is a basic workflow to help you get started with Actions
name: Build Digital Ocean Image

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches:
      - '*'
      - '*/*'
      - '**'
      - '!master'
  pull_request:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  packages:
    runs-on: ubuntu-latest
    steps:
      - name: Install packages
        run: |
          apt-get install curl -y
          curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
          sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
          sudo apt-get update && sudo apt-get install packer
          apt-get install ansible -y
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    # runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - name: Checkout
        uses: actions/checkout@v3
      # Runs a single command using the runners shell
      - name: Packer init
        env:
          DIGITALOCEAN_TOKEN=${{ secrets.DIGITALOCEAN_TOKEN }}
        run: packer init
        working-directory: /home/runner/work/packer-do-custom-images/demo
      # Runs a set of commands using the runners shell
      - name: Packer build
        run: packer build .
        working-directory: /home/runner/work/packer-do-custom-images/demo
The runs-on directive is missing (it's commented out!). To fix the issue, just add/uncomment it after the job name, like:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
I would suggest using the actionlint tool to discover errors like this; for example:
> actionlint .github/workflows/test.yaml
.github/workflows/test.yaml:35:3: "runs-on" section is missing in job "build" [syntax-check]
   |
35 |   build:
   |   ^~~~~~
UPDATE:
As a side note, the env variable also has an issue: you should use : instead of =, like:
      - name: Packer init
        env:
          DIGITALOCEAN_TOKEN: ${{ secrets.DIGITALOCEAN_TOKEN }}
I'm trying to set up and run a GitHub action in a nested folder of the repo.
I thought I could use working-directory, but if I write this:
jobs:
  test-working-directory:
    runs-on: ubuntu-latest
    name: Test
    defaults:
      run:
        working-directory: my_folder
    steps:
      - run: echo test
I get an error:
Run echo test
echo test
shell: /usr/bin/bash -e {0}
Error: An error occurred trying to start process '/usr/bin/bash' with working directory '/home/runner/work/my_repo/my_repo/my_folder'. No such file or directory
I notice my_repo appears twice in the path of the error.
Here is the run on my repo, where:
my_repo = novade_flutter_packages
my_folder = packages
What am I missing?
You didn't check out the repository in the second job.
Each job runs on a different instance, so you have to check the repository out separately in each one of them.
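For the workflow in the question, that means adding a checkout step before the run step. A minimal sketch, keeping the names from the question:

jobs:
  test-working-directory:
    runs-on: ubuntu-latest
    name: Test
    defaults:
      run:
        working-directory: my_folder
    steps:
      # Check out the repository so my_folder exists on the runner's filesystem
      - uses: actions/checkout@v3
      - run: echo test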
I integrate STM32CubeProgrammer with GitHub Actions for deployment.
Although the command exits with an error, the GitHub Actions workflow does not fail, so the result is reported as success.
Because the command prints "Error" to the console log, I just want to grep for this and make the workflow fail. How can I do that?
I am using a self-hosted runner (Windows 10).
jobs:
  build: ...

  deployment:
    # The type of runner that the job will run on
    runs-on: [ self-hosted, deploy ]
    needs: build
    environment: production
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Runs a set of commands using the runners shell
      - name: deploy
        run: |
          STM32_Programmer_CLI.exe -c port=SWD -w ${{ secrets.ELF_FILE }} ${{ secrets.START_ADDR }}
      - name: start
        run: |
          STM32_Programmer_CLI.exe -c port=SWD -c port=SWD -s ${{ secrets.START_ADDR }}
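One way to approach this (a sketch, assuming PowerShell is the default shell on the Windows self-hosted runner; untested) is to capture the CLI output, print it, and exit non-zero when it contains "Error":

      - name: deploy
        run: |
          # Capture stdout and stderr from the programmer CLI
          $output = STM32_Programmer_CLI.exe -c port=SWD -w ${{ secrets.ELF_FILE }} ${{ secrets.START_ADDR }} 2>&1
          Write-Output $output
          # Fail the step if any output line contains "Error"
          if ($output -match "Error") { exit 1 }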
I have written a GitHub action to retrieve the changed SQL files and lint those changed files using sqlfluff.
Here is my GitHub action code:
name: files_lint
on:
  - pull_request
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v2
      - name: Install Python
        uses: "actions/setup-python@v2"
        with:
          python-version: "3.7"
      - name: install sqlfluff
        run: "pip install sqlfluff"
      - name: Get changed .sql files
        id: linting
        run: some code to get the changed files
      - name: Linting files started
        id: sql_linting
        if: steps.linting.outputs.lintees != ''
        shell: bash -l {0}
        run: ${{ steps.linting.outputs.lintees }} > sqlfluff fix --force
But when I run ${{ steps.linting.outputs.lintees }} > sqlfluff fix --force on the changed SQL files in the above GitHub action, I get an error:
/home/runner/work/_temp/a41i1c89a4.sh: line 1: test.sql: Permission denied
Error: Process completed with exit code 126.
You can't redirect files like this:
run: ${{ steps.linting.outputs.lintees }} > sqlfluff fix --force
This attempts to run the step output as a command and redirect its output - but I'd guess it's a list of files rather than a command?
You should pass them as parameters instead (assuming it's a list of files):
run: sqlfluff fix --force ${{ steps.linting.outputs.lintees }}
Also, I presume you're going to do something with the result afterwards? If not, the fixed files will not do anything. If you just want to check the files, sqlfluff lint would be better than sqlfluff fix (it also catches more issues, as sqlfluff fix only looks at rules it can fix).
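Putting that together, the last step of the workflow in the question might look like this (a sketch, reusing the step and output names from above):

      - name: Linting files started
        id: sql_linting
        if: steps.linting.outputs.lintees != ''
        shell: bash -l {0}
        # Lint (rather than fix) the changed files collected by the previous step
        run: sqlfluff lint ${{ steps.linting.outputs.lintees }}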
For all developers who created a shell script (.sh) locally on Windows or in Windows Subsystem for Linux (WSL), or cloned the git repository without knowing on which file system the shell script was created: make sure that the shell script is executable on Linux!
Linux
chmod +x script.sh
Windows
git update-index --chmod=+x script.sh
Finally, don't forget to push your changes.
git add script.sh
git commit -m'Making script.sh executable'
git push
I am trying to create a bug tracker that allows me to record the error messages of the Python script I run. Here is my YAML file at the moment:
name: Bug Tracker

# Controls when the workflow will run
on:
  # Triggers the workflow on push request events
  push:
    branches: [ main ]
  # Allows you to run this workflow manually from the Actions tab (for testing)
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  build:
    # Self Hosted Runner
    runs-on: windows-latest
    # Steps for tracker to get activated
    steps:
      # Checks-out your repository under BugTracker so the job can find it
      - uses: actions/checkout@v2
      - name: setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      # Runs main script to look for
      - name: Run File and collect bug
        id: response
        run: |
          echo Running File...
          python script.py
          echo "${{steps.response.outputs.result}}"
Every time I run the workflow I can't save the error code. By "save the error code" I mean, for example: if the Python script produces "Process completed with exit code 1.", then I can save that to a txt file. I've seen cases where the output could be saved if the script runs successfully. I've thought about catching the error inside the Python script, but I don't want to add the same code to every file if I don't have to. Any thoughts? I greatly appreciate any help or suggestions.
Update: I have been able to successfully save to the txt file using code in Python. However, I'm still looking for a way to do this in GitHub, if anyone has any suggestions.
You could:
- redirect the output to a log file while capturing the exit code
- set an output with the exit code value, like:
  echo ::set-output name=status::$status
- in another step, commit the log file
- in a final step, check that the exit code is success (0), otherwise exit the script with this exit code
Using ubuntu-latest, it would be like this:
name: Bug Tracker
on: [push, workflow_dispatch]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Run File and collect logs
        id: run
        run: |
          echo Running File...
          status=$(python script.py > log.txt 2>&1; echo $?)
          cat log.txt
          echo ::set-output name=status::$status
      - name: Commit log
        run: |
          git config --global user.name 'GitHub Action'
          git config --global user.email 'action@github.com'
          git add -A
          git checkout master
          git diff-index --quiet HEAD || git commit -am "deploy workflow logs"
          git push
      - name: Check run status
        if: steps.run.outputs.status != '0'
        run: exit "${{ steps.run.outputs.status }}"
On Windows, I think you would need to update this part:
status=$(python script.py > log.txt 2>&1; echo $?)
cat log.txt
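For example, with PowerShell (the default shell on Windows runners), the equivalent step might look like this (a sketch, untested):

      - name: Run File and collect logs
        id: run
        run: |
          # Redirect all output streams to log.txt; the native exit code lands in $LASTEXITCODE
          python script.py *> log.txt
          $status = $LASTEXITCODE
          Get-Content log.txt
          echo "::set-output name=status::$status"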