What's the difference between GITHUB_REPOSITORY and github.repository, both in value and in usage, in GitHub Actions?
Found the answer in the official GitHub docs:
Determining when to use default environment variables or contexts
GitHub Actions includes a collection of variables called contexts and a similar collection of variables called default environment variables. These variables are intended for use at different points in the workflow:
Default environment variables: These variables exist only on the runner that is executing your job. For more information, see "Default environment variables."
Contexts: You can use most contexts at any point in your workflow, including when default environment variables would be unavailable. For example, you can use contexts with expressions to perform initial processing before the job is routed to a runner for execution; this allows you to use a context with the conditional if keyword to determine whether a step should run. Once the job is running, you can also retrieve context variables from the runner that is executing the job, such as runner.os. For details of where you can use various contexts within a workflow, see "Context availability."
The following example demonstrates how these different types of environment variables can be used together in a job:
name: CI
on: push
jobs:
  prod-check:
    if: ${{ github.ref == 'refs/heads/main' }}
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying to production server on branch $GITHUB_REF"
In this example, the if statement checks the github.ref context to determine the current branch name; if the name is refs/heads/main, then the subsequent steps are executed. The if check is processed by GitHub Actions, and the job is only sent to the runner if the result is true. Once the job is sent to the runner, the step is executed and refers to the $GITHUB_REF environment variable from the runner.
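To relate this back to the question: GITHUB_REPOSITORY and github.repository carry the same owner/repo value; the difference is where each can be evaluated. A minimal sketch (the repository name is illustrative):
jobs:
  show-repo:
    # context: evaluated by GitHub Actions before the job reaches a runner
    if: ${{ github.repository == 'octocat/hello-world' }}
    runs-on: ubuntu-latest
    steps:
      # default environment variable: only exists on the runner itself
      - run: echo "Running in $GITHUB_REPOSITORY"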
We use the "extends" templates feature of Azure YAML pipelines for security purposes. One requirement we have is to not allow the use of a specific task inside a job more than once; if that happens, validation should simply fail. I know I can iterate through the job steps and look for the existence of a specific task, but is there a way to check how many times a task appears inside a job?
I checked the length function, but it doesn't meet my needs, as I'm not able to filter the steps for a specific task.
An example of a check we do today:
${{ each step in parameters.steps }}:
  ${{ if contains(step.task, 'TaskA') }}:
    'TaskA is not allowed etc.' : error
It depends on how you define the task in the YAML. It's recommended to check the definition and avoid duplicating the tasks directly.
Alternatively, you could put the task in the main YAML rather than in the template, and use a script to validate the task in the template: if it exists, fail the pipeline.
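A hedged sketch of that script approach, assuming the template lives at templates/steps.yml and the forbidden task is TaskA (both names are illustrative):
steps:
- powershell: |
    # Count how many times TaskA appears in the template and fail on duplicates
    $count = (Select-String -Path 'templates/steps.yml' -Pattern 'task:\s*TaskA' -AllMatches).Matches.Count
    if ($count -gt 1) {
      Write-Host "##vso[task.logissue type=error]TaskA appears $count times; at most one use is allowed."
      exit 1
    }
  displayName: Validate TaskA usage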
I wonder whether it is possible to create runtime parameters that can be shared with other pipelines in Azure Pipelines.
For instance, I have the following parameters in 1 pipeline (e.g. my-app):
parameters:
- name: Environment
  default: DEV
  values:
  - DEV
  - UAT
- name: Branch
  displayName: Check Out Branch
  type: string
  default: main
I'd like these parameters to be shared with another pipeline (e.g. my-svc).
Will that be possible?
This is not possible. You could approximate it by defining the same input parameters in your second pipeline and calling the second pipeline from the first, passing along the parameter values provided to the first pipeline. There is no shared mechanism for parameters across multiple pipelines.
There is, however, a way to share variables between multiple pipelines: you can define variables in a Library variable group that is used across pipelines. To do that, you reference the variable group in each pipeline.
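A minimal sketch of what that reference looks like in each pipeline's YAML (the group name my-shared-variables is illustrative; the group itself is defined under Pipelines > Library):
variables:
- group: my-shared-variables
steps:
- script: echo $(Environment)  # any variable defined in the group is now available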
I have a bunch of GitHub Actions workflows for services that each build a container if something changes in their corresponding directories. Then I have a final workflow that sets up a cloud environment with Terraform and uses the containers created by the other workflows to deploy the microservices and API for my project.
Currently I'm running that final action manually after all the other actions complete. However, I was wondering if there is any way to automate this. I realize I can chain actions but I'm unsure how to handle this since any number of actions might be running and might finish in any order.
You can turn each of your GitHub Actions workflows into a composite action for more reusability and readability.
But you can also just combine all the existing actions into one workflow, each in a separate job if they need different runners.
Then you can set dependencies between those jobs using the needs syntax.
At the end, when they all complete, you can have one job that needs each of the combined jobs, checks whether they succeeded, and based on that executes what you need.
As an example:
jobs:
  job1:
    # ...
  job2:
    # set this dependency to queue the jobs; remove it to run job1 and job2 in parallel
    needs: job1
    # ...
  final_action:
    runs-on: ubuntu-latest
    needs: [job1, job2]
    if: ${{ needs.job1.result == 'success' && needs.job2.result == 'success' }}
    steps:
      - run: echo "publish stuff"  # replace with your deployment steps
  notify_error:
    runs-on: ubuntu-latest
    needs: [job1, job2]
    if: ${{ always() && !cancelled() && (needs.job1.result != 'success' || needs.job2.result != 'success') }}
    steps:
      - run: echo "notify failure"  # replace with your notification steps
In one workflow? You can run all actions in parallel, each in its own job. Then use needs: for the final job to depend on all the other jobs, and deploy the application in that job.
I have two repos on my Azure DevOps project. One for the Cloud Infrastructure deployment and another that contains my application code.
I have a YAML pipeline that is triggered after either of those repos' build pipelines finishes. The pipeline looks a bit like this:
resources:
  pipelines:
  - pipeline: MyProject-Code
  - pipeline: MyProject-Infrastructure
jobs:
- job: DeployInfrastructure
  steps:
  # Here are the tasks that deploy the project infrastructure
- job: DeployCode
  steps:
  # Here are the tasks that deploy the code
I would like to put a condition on the DeployInfrastructure job so it is only executed if the triggering pipeline is the infrastructure one as I do not need to redeploy it if the change only affects the application code.
However, when reading the documentation from Microsoft there does not seem to be a very straightforward way of doing this.
Have a look at pipeline resource variables:
In each run, the metadata for a pipeline resource is available to all jobs in the form of predefined variables. The <Alias> is the identifier that you gave for your pipeline resource. Pipeline resource variables are only available at runtime.
There is also a number of predefined variables called Build.TriggeredBy.*, among them Build.TriggeredBy.DefinitionName; however, the documentation suggests that for YAML pipelines with pipeline triggers, the resource variables should be used instead (a sketch follows the quoted passage):
If the build was triggered by another build, then this variable is set
to the name of the triggering build pipeline. In Classic pipelines,
this variable is triggered by a build completion trigger.
This variable is agent-scoped, and can be used as an environment
variable in a script and as a parameter in a build task, but not as
part of the build number or as a version control tag.
If you are triggering a YAML pipeline using resources, you should use
the resources variables instead.
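Building on that, one possible way to express the condition from the question (a hedged sketch; Resources.TriggeringAlias is a predefined variable holding the alias of the resource that triggered the run, and is empty for manual runs):
jobs:
- job: DeployInfrastructure
  condition: eq(variables['Resources.TriggeringAlias'], 'MyProject-Infrastructure')
  steps:
  # Here are the tasks that deploy the project infrastructure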
I have a workflow for CI in a monorepo; for this workflow, two projects end up being built. The jobs run fine; however, I'm wondering if there is a way to remove the duplication in this workflow.yml file around setting up the runner for each job. I have them split so they run in parallel, as they do not rely on one another, and so the CI completes faster. It's a big difference waiting 5 minutes vs. 10+ for the CI to finish.
jobs:
  job1:
    name: PT.W Build
    runs-on: macos-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v1
      - name: Setup SSH-Agent
        uses: webfactory/ssh-agent@v0.2.0
        with:
          ssh-private-key: |
            ${{ secrets.SSH_PRIVATE_KEY }}
      - name: Setup JDK 1.8
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
      - name: Setup Permobil-Client
        run: |
          echo no | npm i -g nativescript
          tns usage-reporting disable
          tns error-reporting disable
          npm run setup.all
      - name: Build PT.W Android
        run: |
          cd apps/wear/pushtracker
          tns build android --env.uglify
  job2:
    name: SD.W Build
    runs-on: macos-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v1
      - name: Setup SSH-Agent
        uses: webfactory/ssh-agent@v0.2.0
        with:
          ssh-private-key: |
            ${{ secrets.SSH_PRIVATE_KEY }}
      - name: Setup JDK 1.8
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
      - name: Setup Permobil-Client
        run: |
          echo no | npm i -g nativescript
          tns usage-reporting disable
          tns error-reporting disable
          npm run setup.all
      - name: Build SD.W Android
        run: |
          cd apps/wear/smartdrive
          tns build android --env.uglify
You can see the jobs follow an almost identical process; the only difference is which app gets built. I'm wondering if there is a way to take the duplicated blocks in the jobs, write them only once, and reuse them in both jobs.
There are three main approaches to code reuse in GitHub Actions:
Reusable Workflows
Dispatched workflows
Composite Actions <-- it's the best one in your case
The following details are from my article describing their pros and cons:
🔸 Reusing workflows
The obvious option is using the "Reusable workflows" feature that allows you to extract some steps into a separate "reusable" workflow and call this workflow as a job in other workflows.
🥡 Takeaways:
Nested reusable workflow calls are allowed (up to 4 levels) while loops are not permitted.
Env variables are not inherited. Secrets can be inherited by using the special secrets: inherit job parameter.
It's not convenient if you need to extract and reuse several steps inside one job.
Since it runs as a separate job, you have to use build artifacts to share files between a reusable workflow and your main workflow.
You can call a reusable workflow in a synchronous or asynchronous manner (managing it via job ordering using needs keys).
A reusable workflow can define outputs that extract outputs/outcomes from executed steps. They can be easily used to pass data to the "main" workflow.
🔸 Dispatched workflows
Another possibility that GitHub gives us is the workflow_dispatch event, which can trigger a workflow run. Simply put, you can trigger a workflow manually or through the GitHub API and provide its inputs (see the sketch after the takeaways below).
There are actions available on the Marketplace which allow you to trigger a "dispatched" workflow as a step of the "main" workflow.
Some of them also allow doing it in a synchronous manner (waiting until the dispatched workflow is finished). It is worth saying that this feature is implemented by polling the statuses of repo workflows, which is not very reliable, especially in a concurrent environment. It is also bounded by GitHub API usage limits, and therefore there is a delay in finding out the status of the dispatched workflow.
🥡 Takeaways
You can have multiple nested calls, triggering a workflow from another triggered workflow. If done carelessly, this can lead to an infinite loop.
You need a special token with the "workflow" scope; your usual secrets.GITHUB_TOKEN doesn't allow you to dispatch a workflow.
You can trigger multiple dispatched workflows inside one job.
There is no easy way to get some data back from dispatched workflows to the main one.
Works better in a "fire and forget" scenario. Waiting for a dispatched workflow to finish has some limitations.
You can observe dispatched workflows runs and cancel them manually.
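As an illustration, here is a hedged sketch of dispatching another workflow from a step via the REST API (build.yml and the WORKFLOWS_PAT secret are assumed names):
- name: Dispatch build.yml
  run: |
    curl -X POST \
      -H "Authorization: Bearer ${{ secrets.WORKFLOWS_PAT }}" \
      -H "Accept: application/vnd.github+json" \
      https://api.github.com/repos/${{ github.repository }}/actions/workflows/build.yml/dispatches \
      -d '{"ref":"main"}'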
🔸 Composite Actions
In this approach we extract steps to a distinct composite action, that can be located in the same or separate repository.
From your "main" workflow it looks like a usual action (a single step), but internally it consists of multiple steps each of which can call own actions.
🥡 Takeaways:
Supports nesting: each step of a composite action can use another composite action.
Poor visualisation of internal step runs: in the "main" workflow it's displayed as a single step run. In the raw logs you can find the details of the internal steps' execution, but it doesn't look very friendly.
Shares environment variables with a parent job, but doesn't share secrets, which should be passed explicitly via inputs.
Supports inputs and outputs. Outputs are prepared from outputs/outcomes of internal steps and can be easily used to pass data from composite action to the "main" workflow.
A composite action runs inside the job of the "main" workflow. Since they share a common file system, there is no need to use build artifacts to transfer files from the composite action to the "main" workflow.
You can't use continue-on-error option inside a composite action.
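For the workflow in the question, a hedged sketch of extracting the duplicated setup into a composite action (the path .github/actions/setup-permobil-client/action.yml is illustrative):
name: Setup Permobil-Client
description: Install NativeScript and set up the workspace
runs:
  using: composite
  steps:
    - run: |
        echo no | npm i -g nativescript
        tns usage-reporting disable
        tns error-reporting disable
        npm run setup.all
      shell: bash
Each job would then replace its Setup Permobil-Client step with uses: ./.github/actions/setup-permobil-client.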
As far as I know, there is currently no way to reuse steps, but in this case you can use a strategy matrix to run the build variations in parallel:
jobs:
  build:
    name: Build
    runs-on: macos-latest
    strategy:
      matrix:
        build-dir: ['apps/wear/pushtracker', 'apps/wear/smartdrive']
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v1
      - name: Setup SSH-Agent
        uses: webfactory/ssh-agent@v0.2.0
        with:
          ssh-private-key: |
            ${{ secrets.SSH_PRIVATE_KEY }}
      - name: Setup JDK 1.8
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
      - name: Setup Permobil-Client
        run: |
          echo no | npm i -g nativescript
          tns usage-reporting disable
          tns error-reporting disable
          npm run setup.all
      - name: Build Android
        run: |
          cd ${{ matrix.build-dir }}
          tns build android --env.uglify
For more information please visit https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idstrategy
Since Nov. 2021, "Reusable workflows are generally available":
Reusable workflows are now generally available.
Reusable workflows help you reduce duplication by enabling you to reuse an entire workflow as if it were an action. A number of improvements have been made since the beta was released in October:
You can utilize outputs to pass data from reusable workflows to other jobs in the caller workflow
You can pass environment secrets to reusable workflows
The audit log includes information about which reusable workflows are used
See "Reusing workflows" for more.
A workflow that uses another workflow is referred to as a "caller" workflow.
The reusable workflow is a "called" workflow.
One caller workflow can use multiple called workflows.
Each called workflow is referenced in a single line.
The result is that the caller workflow file may contain just a few lines of YAML, but may perform a large number of tasks when it's run. When you reuse a workflow, the entire called workflow is used, just as if it was part of the caller workflow.
Example:
In the reusable workflow, use the inputs and secrets keywords to define inputs or secrets that will be passed from a caller workflow.
# .github/workflows/my-reusable-workflow.yml
# (a reusable workflow must live under .github/workflows; the file name is illustrative)
# Note the special trigger 'on: workflow_call:'
on:
  workflow_call:
    inputs:
      username:
        required: true
        type: string
    secrets:
      envPAT:
        required: true
Reference the input or secret in the reusable workflow.
jobs:
  reusable_workflow_job:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: ./.github/actions/my-action
        with:
          username: ${{ inputs.username }}
          token: ${{ secrets.envPAT }}
Here ./.github/actions/my-action points to a local action in your own repository (a directory containing an action.yml file).
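For completeness, a hedged sketch of the caller side (file names assumed from the comments above; a reusable workflow is referenced at the job level with uses):
# .github/workflows/caller.yml
name: Caller
on: push
jobs:
  call-reusable:
    uses: ./.github/workflows/my-reusable-workflow.yml
    with:
      username: mona
    secrets:
      envPAT: ${{ secrets.envPAT }}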
A reusable workflow does not have to be in the same repository, and can be in another public one.
Davide Benvegnù aka CoderDave illustrates that in "Avoid Duplication! GitHub Actions Reusable Workflows" where:
n3wt0n/ActionsTest/.github/workflows/reusableWorkflowsUser.yml references
n3wt0n/ReusableWorkflow/.github/workflows/buildAndPublishDockerImage.yml#main