GitHub Actions: How to create a multiline env variable for several jobs

I want to add several lines to a code file, and do this in various jobs. So I created one job which creates a text file and uploads it.
create_file:
  runs-on: ubuntu-latest
  steps:
    - shell: bash
      run: |
        cat << EOF > data.txt
        A = "..."
        B = "..."
        C = "..."
        ...
        EOF
    - name: Create data file
      uses: actions/upload-artifact@1
      with:
        name: configuration
        path: data.txt
In the next job I download the file and want to append these lines to a code file.
test_file:
  runs-on: ubuntu-latest
  needs: [create_file]
  steps:
    - name: Download file
      uses: actions/download-artifact@v1
      with:
        name: configuration
        path: configuration/data.txt
    - shell: bash
      run: |
        cat configuration/data.txt >> main.py
        python main.py
My problem is that the second job runs too fast and looks for the data.txt file before it has been uploaded, and I'm not sure how to handle appending the content. Running echo "..." >> main.py for each line would be very annoying.
Update:
Now I get the following error messages for the job:
Download action repository 'actions/upload-artifact@1'
##[warning]Failed to download action 'https://api.github.com/repos/actions/upload-artifact/tarball/1'. Error Response status code does not indicate success: 404 (Not Found).
...
##[error]Response status code does not indicate success: 404 (Not Found).

The error was a silly one: the line actions/upload-artifact@1 should be actions/upload-artifact@v1.
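For reference, here is a corrected sketch of the two jobs, with both actions pinned to v1. This is only an illustration based on the snippets above: the checkout step is an assumption (main.py is presumed to live in the repository), and with download-artifact@v1 the artifact is placed in a directory named after the artifact, so the file ends up at configuration/data.txt.

jobs:
  create_file:
    runs-on: ubuntu-latest
    steps:
      - shell: bash
        run: |
          cat << EOF > data.txt
          A = "..."
          B = "..."
          C = "..."
          EOF
      - name: Upload data file
        uses: actions/upload-artifact@v1   # v1, not 1
        with:
          name: configuration
          path: data.txt
  test_file:
    runs-on: ubuntu-latest
    needs: [create_file]                   # waits until create_file has finished uploading
    steps:
      - uses: actions/checkout@v2          # assumption: main.py comes from the repository
      - name: Download data file
        uses: actions/download-artifact@v1
        with:
          name: configuration              # v1 downloads into ./configuration/
      - shell: bash
        run: |
          cat configuration/data.txt >> main.py   # append all lines at once, no per-line echo needed
          python main.py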

Related

GitHub Action: failed with "lost connection"

We are trying to build our GCP instance template using GitHub Actions, where we build our Java archives and transfer them to the GCP instance from the GitHub Ubuntu runner.
We have set up an SSH key to access the GCP instance from the Ubuntu machine using:
ssh-keygen -t rsa -f ~/.ssh/temp -C root -q -N "" && chmod 400 ~/.ssh/temp && chmod 400 ~/.ssh/temp.pub && echo root:`cat ~/.ssh/temp.pub` > ~/.ssh/temp-formated.pub && chmod 700 /home/runner/.ssh/temp-formated.pub
We get an error response when we try to run the following command:
scp -o StrictHostKeyChecking=no -i /home/runner/.ssh/temp ./code-web/target/code.war root@<ip>:/opt/code.war
The script worked fine until 5 Dec 2022 and started giving errors from 6 Dec 2022.
We used to face some failures before, but the same build worked fine when we re-ran it.
build.yml
# This is a basic workflow to help you get started with Actions
name: build-web
# Controls when the workflow will run
on:
# Triggers the workflow on push or pull request events but only for the develop branch
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
inputs:
short_sha:
description: 'Git sha on which build will be created'
required: true
default: ''
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "build"
build:
# The type of runner that the job will run on
runs-on: ubuntu-latest
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout#v2
with:
ref: ${{ github.event.inputs.short_sha }}
# Build using mvn
- name: Set up JDK 8
uses: actions/setup-java#v2
with:
java-version: '8'
distribution: 'adopt'
cache: 'maven'
- name: Build with Maven
run: mvn --batch-mode --update-snapshots verify
- name: Set up Cloud SDK
uses: google-github-actions/setup-gcloud#v0
with:
service_account_key: ${{ secrets.GCP_SA_KEY }}
export_default_credentials: true
- name: Gcloud Version
run: gcloud --version
- name: Run build script
run: python ./.github/workflows/build.py ${{ github.event.inputs.short_sha }}
Following is the error log.
We have tried multiple builds for other builds in the same repository; those failed too.
We have confirmed that the secret is still active, and the Maven build is successful, hence the file "code.war" exists.
Any idea how to figure out the root cause, or is anyone else facing a similar issue?
###Running: ssh-keygen -t rsa -f ~/.ssh/temp -C root -q -N "" && chmod 400 ~/.ssh/temp && chmod 400 ~/.ssh/temp.pub && echo root:`cat ~/.ssh/temp.pub` > ~/.ssh/temp-formated.pub && chmod 700 /home/runner/.ssh/temp-formated.pub
###Exit Code: 0
###RESPONSE:(b'', b'')
####################################
#########Transfer public key to instance############
###Running: cd ~/ && pwd
###Exit Code: 0
###RESPONSE:(b'/home/runner\n', b'')
###Running: gcloud compute instances add-metadata dummy-temp-web --project=projectname --zone=us-east1-b --metadata-from-file ssh-keys=/home/runner/.ssh/temp-formated.pub
###Exit Code: 0
###RESPONSE:(b'', b'Updated [https://www.googleapis.com/compute/v1/projects/projectname/zones/us-east1-b/instances/dummy-temp-web].\n')
####################################
#Give time for key to propogate
#########copy to remote############
###Running: scp -o StrictHostKeyChecking=no -i /home/runner/.ssh/temp ./code-web/target/code.war root@<ip>:/opt/code.war
###Exit Code: 1
###RESPONSE:(b'', b"Warning: Permanently added '<ip>' (ECDSA) to the list of known hosts.\r\nPermission denied, please try again.\r\nPermission denied, please try again.\r\nroot@<ip>: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\nlost connection\n")
Traceback (most recent call last):
  File "/home/runner/work/code/code/./.github/workflows/gcloudBuild.py", line 100, in <module>
    execute(f'{copyBuldFileToRemoteCMD}', False)
  File "/home/runner/work/code/code/./.github/workflows/gcloudBuild.py", line 30, in execute
    raise Exception(f'Sorry, bad exit code{process.returncode}')
Exception: Sorry, bad exit code1
I also had a similar issue when I was using ubuntu-latest as the job runner in the yml file.
When I used ubuntu-20.04 instead of ubuntu-latest, the issue was resolved for me.
You can try this in your yml file:
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-20.04
It is working for me.

How to use working-directory in GitHub Actions?

I'm trying to set up and run a GitHub Action in a nested folder of the repo.
I thought I could use working-directory, but if I write this:
jobs:
  test-working-directory:
    runs-on: ubuntu-latest
    name: Test
    defaults:
      run:
        working-directory: my_folder
    steps:
      - run: echo test
I get an error:
Run echo test
echo test
shell: /usr/bin/bash -e {0}
Error: An error occurred trying to start process '/usr/bin/bash' with working directory '/home/runner/work/my_repo/my_repo/my_folder'. No such file or directory
I notice my_repo appears twice in the path of the error.
Here is the run on my repo, where:
my_repo = novade_flutter_packages
my_folder = packages
What am I missing?
You didn't check out the repository in that job.
Each job runs on a fresh runner instance, so you have to check the repository out separately in every job that needs it.
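A minimal sketch of the fix, assuming my_folder is a directory committed to the repository:

jobs:
  test-working-directory:
    runs-on: ubuntu-latest
    name: Test
    defaults:
      run:
        working-directory: my_folder
    steps:
      - uses: actions/checkout@v3   # creates my_folder on the runner; defaults.run does not affect 'uses' steps
      - run: echo test              # now runs inside my_folder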

Is there a way to log error responses from GitHub Actions?

I am trying to create a bug tracker that allows me to record the error messages of the python script I run. Here is my YAML file at the moment:
name: Bug Tracker
# Controls when the workflow will run
on:
  # Triggers the workflow on push request events
  push:
    branches: [ main ]
  # Allows you to run this workflow manually from the Actions tab (for testing)
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  build:
    # Self Hosted Runner
    runs-on: windows-latest
    # Steps for tracker to get activated
    steps:
      # Checks-out your repository under BugTracker so the job can find it
      - uses: actions/checkout@v2
      - name: setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      # Runs main script to look for
      - name: Run File and collect bug
        id: response
        run: |
          echo Running File...
          python script.py
          echo "${{steps.response.outputs.result}}"
Every time I run the workflow, I can't save the error code. By saving the error code, I mean, for example, that if the Python script produces "Process completed with exit code 1." then I can save that to a txt file. I've seen cases where I could save it if the script runs successfully. I've thought about catching the error inside the Python script, but I don't want to have to add the same code to every file if I don't have to. Any thoughts? Greatly appreciate any help or suggestions.
Update: I have been able to successfully use code in Python to save to the txt file. However, I'm still looking to do this in GitHub if anyone has any suggestions.
You could:
- redirect the output to a log file while capturing the exit code
- set an output with the exit code value, like:
  echo ::set-output name=status::$status
- in another step, commit the log file
- in a final step, check that the exit code is success (0), otherwise exit the script with this exit code
Using ubuntu-latest, it would be like this:
name: Bug Tracker
on: [push, workflow_dispatch]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Run File and collect logs
        id: run
        run: |
          echo Running File...
          status=$(python script.py > log.txt 2>&1; echo $?)
          cat log.txt
          echo ::set-output name=status::$status
      - name: Commit log
        run: |
          git config --global user.name 'GitHub Action'
          git config --global user.email 'action@github.com'
          git add -A
          git checkout master
          git diff-index --quiet HEAD || git commit -am "deploy workflow logs"
          git push
      - name: Check run status
        if: steps.run.outputs.status != '0'
        run: exit "${{ steps.run.outputs.status }}"
On windows, I think you would need to update this part:
status=$(python script.py > log.txt 2>&1; echo $?)
cat log.txt
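On a windows-latest runner, a rough PowerShell equivalent might look like this. It is only a sketch, assuming the default pwsh shell: $LASTEXITCODE holds the exit code of the last native command, and the ::set-output line works from any shell since it is just written to stdout.

- name: Run File and collect logs
  id: run
  shell: pwsh
  run: |
    Write-Host "Running File..."
    python script.py > log.txt 2>&1       # redirect stdout and stderr into the log
    $status = $LASTEXITCODE               # exit code of the python process
    Get-Content log.txt
    echo "::set-output name=status::$status"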

How to save GitHub secrets content to a file in a GitHub Action

I was trying to do the following:
echo ${{secrets.key}} > myfile
But unfortunately, this doesn't work: myfile is empty afterwards when I check it.
How do I save the content of a GitHub secret into a file?
You can first pass the secret to an environment variable, then echo it to the file. Remember to quote the env var to prevent a line feed from breaking the command.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: print secrets
        run: |
          echo "$MY_SECRET" >> config.ini
          cat config.ini
        shell: bash
        env:
          MY_SECRET: ${{secrets.KEY}}
Check that the secrets.key actually exists before doing the above.
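If you want the job to fail fast when the secret is missing or empty, a small guard step along these lines can help (a sketch; KEY is just the secret name used above):

- name: check secret is set
  shell: bash
  env:
    MY_SECRET: ${{ secrets.KEY }}
  run: |
    # fail early if the secret resolves to an empty string
    if [ -z "$MY_SECRET" ]; then
      echo "secrets.KEY is empty or not defined" >&2
      exit 1
    fi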

Using output from a previous job in a new one in a GitHub Action

For (mainly) pedagogical reasons, I'm trying to run this workflow in GitHub actions:
name: "We 🎔 Perl"
on:
issues:
types: [opened, edited, milestoned]
jobs:
seasonal_greetings:
runs-on: windows-latest
steps:
- name: Maybe greet
id: maybe-greet
env:
HEY: "Hey you!"
GREETING: "Merry Xmas to you too!"
BODY: ${{ github.event.issue.body }}
run: |
$output=(perl -e 'print ($ENV{BODY} =~ /Merry/)?$ENV{GREETING}:$ENV{HEY};')
Write-Output "::set-output name=GREET::$output"
produce_comment:
name: Respond to issue
runs-on: ubuntu-latest
steps:
- name: Dump job context
env:
JOB_CONTEXT: ${{ jobs.maybe-greet.steps.id }}
run: echo "$JOB_CONTEXT"
I need two different jobs, since they use different contexts (operating systems), but I need to get the output of a step in the first job into the second job. I have tried several combinations of the jobs context as found here, but there does not seem to be any way to do that. Apparently, jobs is just the name of a YAML variable that does not really have a context, and the job context contains just the success or failure. Any idea?
Check the "GitHub Actions: New workflow features" from April 2020, which could help in your case (to reference step outputs from previous jobs)
Job outputs
You can specify a set of outputs that you want to pass to subsequent jobs and then access those values from your needs context.
See documentation:
jobs.<job_id>.outputs
A map of outputs for a job.
Job outputs are available to all downstream jobs that depend on this job.
For more information on defining job dependencies, see jobs.<job_id>.needs.
Job outputs are strings, and job outputs containing expressions are evaluated on the runner at the end of each job. Outputs containing secrets are redacted on the runner and not sent to GitHub Actions.
To use job outputs in a dependent job, you can use the needs context.
For more information, see "Context and expression syntax for GitHub Actions."
Example
jobs:
  job1:
    runs-on: ubuntu-latest
    # Map a step output to a job output
    outputs:
      output1: ${{ steps.step1.outputs.test }}
      output2: ${{ steps.step2.outputs.test }}
    steps:
      - id: step1
        run: echo "test=hello" >> $GITHUB_OUTPUT
      - id: step2
        run: echo "test=world" >> $GITHUB_OUTPUT
  job2:
    runs-on: ubuntu-latest
    needs: job1
    steps:
      - run: echo ${{needs.job1.outputs.output1}} ${{needs.job1.outputs.output2}}
Note the use of $GITHUB_OUTPUT instead of the older ::set-output, which is now (Oct. 2022) deprecated:
To avoid untrusted logged data to use set-state and set-output workflow commands without the intention of the workflow author we have introduced a new set of environment files to manage state and output.
Jesse Adelman adds in the comments:
This seems to not work well for anything beyond a static string.
How, for example, would I take the multiline text output of a step (say, I'm running pytest or similar) and use that output in another job?
Either write the multi-line text to a file (jschmitter's comment), or base64-encode the output and then decode it in the next job (Nate Karasch's comment); a sketch of the latter follows.
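A minimal sketch of the base64 approach (the job, step, and file names here are illustrative only; base64 -w0 keeps the encoded value on a single line so it survives as a job output):

jobs:
  test:
    runs-on: ubuntu-latest
    outputs:
      report_b64: ${{ steps.pytest.outputs.report_b64 }}
    steps:
      - id: pytest
        run: |
          # capture multiline output, then encode it to a single line
          pytest > report.txt 2>&1 || true
          echo "report_b64=$(base64 -w0 report.txt)" >> $GITHUB_OUTPUT
  report:
    runs-on: ubuntu-latest
    needs: test
    steps:
      - run: |
          # decode the single-line output back into multiline text
          echo "${{ needs.test.outputs.report_b64 }}" | base64 -d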
Update: It's now possible to set job outputs that can be used to transfer string values to downstream jobs. See this answer.
What follows is the original answer. These techniques might still be useful for some use cases.
Write the data to a file and use actions/upload-artifact and actions/download-artifact. A bit awkward, but it works.
Create a repository dispatch event and send the data to a second workflow. I prefer this method personally, but the downside is that it needs a repo-scoped PAT.
Here is an example of how the second way could work. It uses the repository-dispatch action.
name: "We 🎔 Perl"
on:
issues:
types: [opened, edited, milestoned]
jobs:
seasonal_greetings:
runs-on: windows-latest
steps:
- name: Maybe greet
id: maybe-greet
env:
HEY: "Hey you!"
GREETING: "Merry Xmas to you too!"
BODY: ${{ github.event.issue.body }}
run: |
$output=(perl -e 'print ($ENV{BODY} =~ /Merry/)?$ENV{GREETING}:$ENV{HEY};')
Write-Output "::set-output name=GREET::$output"
- name: Repository Dispatch
uses: peter-evans/repository-dispatch#v1
with:
token: ${{ secrets.REPO_ACCESS_TOKEN }}
event-type: my-event
client-payload: '{"greet": "${{ steps.maybe-greet.outputs.GREET }}"}'
This triggers a repository dispatch workflow in the same repository.
name: Repository Dispatch
on:
  repository_dispatch:
    types: [my-event]
jobs:
  myEvent:
    runs-on: ubuntu-latest
    steps:
      - run: echo ${{ github.event.client_payload.greet }}
In my case I wanted to pass an entire build/artifact, not just a string:
name: Build something on Ubuntu then use it on MacOS
on:
  workflow_dispatch:
  # Allows for manual build trigger
jobs:
  buildUbuntuProject:
    name: Builds the project on Ubuntu (Put your stuff here)
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: some/compile-action@v99
      - uses: actions/upload-artifact@v2
        # Upload the artifact so the MacOS runner can do something with it
        with:
          name: CompiledProject
          path: pathToCompiledProject
  doSomethingOnMacOS:
    name: Runs the program on MacOS or something
    runs-on: macos-latest
    needs: buildUbuntuProject # Needed so the job waits for the Ubuntu job to finish
    steps:
      - uses: actions/download-artifact@master
        with:
          name: CompiledProject
          path: somewhereToPutItOnMacOSRunner
      - run: ls somewhereToPutItOnMacOSRunner # See the artifact on the MacOS runner
It is possible to capture the entire output (and return code) of a command within a run step, which I've written up here to hopefully save someone else the headache. Fair warning, it requires a lot of shell trickery and a multiline run to ensure everything happens within a single shell instance.
In my case, I needed to invoke a script and capture the entirety of its stdout for use in a later step, as well as preserve its outcome for error checking:
# capture stdout from script
SCRIPT_OUTPUT=$(./do-something.sh)
# capture exit code as well
SCRIPT_RC=$?
# FYI, this would get stdout AND stderr
SCRIPT_ALL_OUTPUT=$(./do-something.sh 2>&1)
Since GitHub's job outputs only seem to be able to capture a single line of text, I also had to escape any newlines in the output:
echo "::set-output name=stdout::${SCRIPT_OUTPUT//$'\n'/\\n}"
Additionally, I needed to ultimately return the script's exit code to correctly indicate whether it failed. The whole shebang ends up looking like this:
- name: A run step with stdout as a captured output
  id: myscript
  run: |
    # run in subshell, capturing stdout to var
    SCRIPT_OUTPUT=$(./do-something.sh)
    # capture exit code too
    SCRIPT_RC=$?
    # print a single line output for github
    echo "::set-output name=stdout::${SCRIPT_OUTPUT//$'\n'/\\n}"
    # exit with the script status
    exit $SCRIPT_RC
  continue-on-error: true
- name: Add above outcome and output as an issue comment
  uses: actions/github-script@v5
  env:
    STEP_OUTPUT: ${{ steps.myscript.outputs.stdout }}
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    script: |
      // indicates whether the script succeeded or not
      let comment = `Script finished with \`${{ steps.myscript.outcome }}\`\n`;
      // adds stdout, unescaping newlines again to make it readable
      comment += `<details><summary>Show Output</summary>
      \`\`\`
      ${process.env.STEP_OUTPUT.replace(/\\n/g, '\n')}
      \`\`\`
      </details>`;
      // add the whole damn thing as an issue comment
      github.rest.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: comment
      })
Edit: there is also an action to accomplish this with much less bootstrapping, which I only just found.
October 2022 update: GitHub is deprecating set-output and recommends using the GITHUB_OUTPUT environment file instead. Only the way a step defines its outputs changes; the syntax for referencing them in other steps and jobs stays the same.
An example from the docs:
- name: Set color
  id: random-color-generator
  run: echo "SELECTED_COLOR=green" >> $GITHUB_OUTPUT
- name: Get color
  run: echo "The selected color is ${{ steps.random-color-generator.outputs.SELECTED_COLOR }}"