Suppose I have three workflows: build_backend, build_frontend, and deploy. The first two should trigger in parallel, but the third should only trigger once both of those workflows have finished.
Currently the deploy workflow triggers twice; I suspect that's once for each of the two completed workflows.
# .github/workflows/build-xxx.yml
name: Build and Test - Backend
on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      # ...
# .github/workflows/deploy.yml
name: Deploy
on:
  workflow_run:
    workflows:
      - "Build and Test - Backend"
      - "Build and Test - Frontend"
    types:
      - completed
    branch: master
jobs:
  deploy:
    # ...
I haven't found a solution in the docs:
https://docs.github.com/en/actions/reference/events-that-trigger-workflows#workflow_run
https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions
GitHub Actions does not support trigger definitions like this for whole workflows.
However, you can use the needs keyword at the job level, so you could consolidate all of these workflows into one workflow file. It seems like this could work for you, since these workflows all share the same (branch) trigger and the build_xxx workflows are only a single job each.
There is also a GitHub Roadmap item describing that they are working on adding workflow partials. That would enable you to separate these parts out again in the future if you want to, but it seems that is not available yet.
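A minimal sketch of what that consolidated workflow could look like (the file name, job names, and step bodies are placeholders):
# .github/workflows/build-and-deploy.yml (hypothetical consolidated workflow)
name: Build, Test and Deploy
on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
jobs:
  build_backend:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      # ... backend build and test steps ...
  build_frontend:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      # ... frontend build and test steps ...
  deploy:
    # Runs only after both build jobs have finished successfully.
    needs: [build_backend, build_frontend]
    # Optional: only deploy for pushes to master, not for pull requests.
    if: github.event_name == 'push' && github.ref == 'refs/heads/master'
    runs-on: ubuntu-latest
    steps:
      # ... deploy steps ...
      - run: echo "deploying"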
Related
In Azure DevOps, I have two Git repositories in the MyProject project:
MyProject.Web is the .NET Core server-side code for an Angular application.
MyProject.UI is the associated Angular project.
Pull requests to specified branches in both of those projects trigger a build pipeline. The pipeline YAML files are in the MyProject.Web repo. Successful completion of the build pipeline triggers a deployment pipeline. The build is working fine; the deployment is fine when a MyProject.Web branch is the trigger, but it isn't always doing what's expected when a MyProject.UI branch is the trigger.
Details
We currently have three deployment environments: Dev, QA, Test.
I have a build pipeline that pulls the code from each of these repos together and builds the application (into a Docker image, for what it's worth).
That pipeline is triggered by completed pull requests to any master or main or release/* branch in either the Web or UI repo. (In the Web project, it's master;
in the UI project, it's main). Whether the trigger comes from the Web repo or the UI repo, the pipeline knows to combine it with the corresponding branch
from the other repo (i.e., master with main, main with master, release/1.0 with release/1.0, release/1.2 with release/1.2, etc.).
I have a deployment pipeline that is triggered by completion of the build pipeline. It has one stage per deployment environment. It is supposed to know whether the source branch was master or main, or whether it was a release/* branch.
If it was master or main, it should deploy to Dev and, after manager approval, to QA. It should never deploy to Test.
If it was a release/* branch, it should deploy, after manager approval, to Test. It should never deploy to Dev or QA.
Expectations in Each Triggering Scenario
Here is a summary of my expectations/requirements for each possible type of build trigger, and the reality in the deployment in the fourth scenario.
| Triggering Project | Triggering Branch | MyProject.Web Branch to Build | MyProject.UI Branch to Build | MyProject.Web Branch to Deploy From | Environment(s) to Deploy To |
|---|---|---|---|---|---|
| MyProject.Web | master | master | main | master | Dev, QA |
| MyProject.Web | release/1.0 | release/1.0 | release/1.0 | release/1.0 | Test |
| MyProject.UI | main | master | main | master | Dev, QA |
| MyProject.UI | release/1.0 | release/1.0 | release/1.0 | release/1.0 ❌ (reality: seems to pull from master) | Test ❌ (reality: deploys to Dev, QA) |
The Problem
My setup performs as expected, as set out in the table, except for the deployment process for the final case. While I can tell from the DevOps pipeline logs that the builds are
correctly choosing the branches to get the code from in all four scenarios, the deployment in the final scenario seems to be taking a build associated
with the MyProject.Web master branch (the log shows that the pipeline was run against master) and it is deploying whatever build it's finding on master to Dev and QA instead of to Test.
Any thoughts about what's going on here? Guidance for a solution? Details below.
The Pipeline Files
Three pipeline files come into play here:
docker-build.yml, the YAML file behind a pipeline named Docker-Build.
It resides in MyProject.Web.
It is triggered by completed pull requests to master or release/* branches on MyProject.Web.
It is also triggered by completed PRs to main or release/* branches on MyProject.UI.
After setting things up and setting template parameter values, the meat of its work is performed by an invoked pipeline template, but there's no need to go into that here.
kubernetes-deploy.yml, the YAML file behind a pipeline named Kubernetes-Deploy.
It resides in MyProject.Web.
It is triggered by successful completion of Docker-Build.
After setting things up and setting one template parameter value (the pipeline ID of the associated Docker-Build pipeline), it invokes the pipeline template deploy-pipeline-template.yml, discussed next.
deploy-pipeline-template.yml, a pipeline template.
It is shared by multiple applications/repos in this same project.
It resides in a separate repository called MyProject.Pipeline.
The content of these files, with irrelevant parts omitted as noted:
docker-build.yml:
# This is the YAML for the "Docker-Build" pipeline. It resides in the MyProject.Web repository.
resources:
  repositories:
    - repository: self
      type: git
      name: MyProject.Web
      trigger:
        - master
        - release/*
    - repository: UiRepo
      type: git
      name: MyProject.UI
      trigger:
        - main
        - release/*
    - repository: PipelineRepo
      type: git
      name: MyProject.Pipeline
# [variables and pool omitted]
steps:
  - ${{ if in(variables['Build.SourceBranchName'], 'master', 'main') }}:
      - checkout: git://MyProject/MyProject.Web@refs/heads/master
      - checkout: git://MyProject/MyProject.UI@refs/heads/main
  - ${{ if not(in(variables['Build.SourceBranchName'], 'master', 'main')) }}:
      - checkout: git://MyProject/MyProject.Web@${{ variables['Build.SourceBranch'] }}
      - checkout: git://MyProject/MyProject.UI@${{ variables['Build.SourceBranch'] }}
  # [some details omitted]
  - template: build-pipeline-template.yml@PipelineRepo
    parameters:
      relativeSolutionPath: MyProject.Web
      relativeProjectPath: MyProject.Web/MyProject.Web
      # [other parameters omitted]
kubernetes-deploy.yml:
resources:
  repositories:
    - repository: PipelineRepo
      type: git
      name: MyProject.Pipeline
  pipelines:
    - pipeline: buildPipeline
      source: 'Docker-Build'
      trigger: true
trigger:
  - none
# [pool omitted]
stages:
  - template: deploy-pipeline-template.yml@PipelineRepo
    parameters:
      buildPipelineId: '123'
      # I can probably replace '123' with variables['resources.pipeline.buildPipeline.PipelineID'] or the
      # same thing with another one of Azure DevOps' multitudinous syntaxes, but I haven't tested it yet.
deploy-pipeline-template.yml:
# This is a pipeline template that resides in the MyProject.Pipeline repository.
parameters:
  - name: buildPipelineId
    displayName: ID of the pipeline that produced the artifacts to download
    type: string
stages:
  - template: deploy-pipeline-job-template.yml
    parameters:
      stageName: Development
      canRun: and(not(or(failed(), canceled())), in(variables['resources.pipeline.buildPipeline.sourceBranch'], 'refs/heads/master', 'refs/heads/main'))
      variableGroup: myproject-web-variables-dev
      buildPipelineId: ${{ parameters.buildPipelineId }}
      devOpsEnvironment: myproject-dev
      kubernetesServiceConnection: myproject-dev-kubeconfig
  - template: deploy-pipeline-job-template.yml
    parameters:
      stageName: QA
      canRun: and(not(or(failed(), canceled())), in(variables['resources.pipeline.buildPipeline.sourceBranch'], 'refs/heads/master', 'refs/heads/main'))
      variableGroup: myproject-web-variables-qa
      buildPipelineId: ${{ parameters.buildPipelineId }}
      devOpsEnvironment: myproject-qa
      kubernetesServiceConnection: myproject-dev-kubeconfig
  - template: deploy-pipeline-job-template.yml
    parameters:
      stageName: Test
      canRun: and(not(or(failed(), canceled())), startsWith(variables['resources.pipeline.buildPipeline.sourceBranch'], 'refs/heads/release/'))
      variableGroup: myproject-web-variables-test
      buildPipelineId: ${{ parameters.buildPipelineId }}
      devOpsEnvironment: myproject-test
      kubernetesServiceConnection: myproject-dev-kubeconfig
I have 2 jobs in my workflow 'Plan and Apply'. I want Plan to run when there is a pull request on my testing branch, and Apply to run when there is a pull request on the master branch. Below is a snippet of my code. This workflow doesn't run; I am getting the message "This check was skipped". What am I doing wrong?
on:
  pull_request:
    branches:
      - testing
      - master
jobs:
  plan:
    name: "Terraform Plan"
    if: ${{ github.head_ref == 'testing' }}
  Apply:
    name: "Run Terraform Apply"
    if: ${{ github.head_ref == 'master' }}
I'd recommend creating two separate workflow files instead of putting it all into one; in each you then only specify the branch you want, for example:
`plan_testing.yml`
on:
  pull_request:
    branches:
      - testing
jobs:
  plan:
    name: "Terraform Plan"
This achieves the same outcome while keeping the two workflows cleanly separated and making their run history in GitHub Actions simpler to audit.
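For completeness, the companion workflow for master could look something like this (a sketch; `apply_master.yml` is an assumed name and the steps are placeholders):
# apply_master.yml (assumed name)
name: Terraform Apply
on:
  pull_request:
    branches:
      - master
jobs:
  apply:
    name: "Run Terraform Apply"
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # ... terraform init / terraform apply steps ...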
You can find out more about GitHub Actions and how to configure them by checking out their awesome docs.
name: master builder
on:
  push:
    branches:
      - master
I have a workflow like this, so whenever I push to the master branch, the action runs.
But I want the build to run only for the last push.
For example,
master branch - feature1 (person1)
master branch - feature2 (person2)
master branch - feature3 (person3)
In this structure, if feature1, feature2, and feature3 are merged at almost the same time, the build will run 3 times.
But I want the master branch to be built only on the last merge. Just once.
Is there any way to do this? For example, run the build only once after waiting about a minute after a push.
Here is sample code where I proceeded the way you answered, but I get the error "The key 'concurrency' is not allowed". What's wrong?
name: test
on:
  push:
    branches:
      - feature/**
concurrency:
  group: ${{ github.ref }}
  cancel-in-progress: true
jobs:
You may try to achieve this with concurrency and cancel-in-progress: true
Concurrency ensures that only a single job or workflow using the same concurrency group will run at a time. A concurrency group can be any string or expression. The expression can only use the github context. For more information about expressions, see "Context and expression syntax for GitHub Actions."
You can also specify concurrency at the job level. For more information, see jobs.<job_id>.concurrency.
When a concurrent job or workflow is queued, if another job or workflow using the same concurrency group in the repository is in progress, the queued job or workflow will be pending. Any previously pending job or workflow in the concurrency group will be canceled. To also cancel any currently running job or workflow in the same concurrency group, specify cancel-in-progress: true.
However
Note: Concurrency is currently in beta and subject to change.
Here is an example workflow:
name: Deploy
on:
  push:
    branches:
      - main
      - production
    paths-ignore:
      - '**.md'
# Ensures that only one deploy task per branch/environment will run at a time.
concurrency:
  group: environment-${{ github.ref }}
  cancel-in-progress: true
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Extract commit
        run: |
          echo "Sending commit $GITHUB_SHA for $GITHUB_REPOSITORY"
Context
I'm creating a CI/CD configuration for an application with this repository setup (each repository is in the same organization and project):
Frontend repository (r1)
API Service repository (r2)
Infrastructure As Code repo (r3)
Within repository r3 are the solution's Azure DevOps pipelines; each of them has been configured for manual and scheduled triggers on the develop branch:
Frontend CI Pipeline p1
Backend CI Pipeline p2
Deployment Pipeline p3
The behavior I want is
Git commit on r1 repo
Pipeline p1 on repo r3 triggered (this will create artifacts, apply a tag and notify)
Pipeline p3 triggered by p1 completion (this will deploy the artifacts)
Pipeline p1 looks like the following
trigger: none
resources:
  containers:
    - container: running-image
      image: ubuntu:latest
      options: "-v /usr/bin/sudo:/usr/bin/sudo -v /usr/lib/sudo/libsudo_util.so.0:/usr/lib/sudo/libsudo_util.so.0 -v /usr/lib/sudo/sudoers.so:/usr/lib/sudo/sudoers.so -v /etc/sudoers:/etc/sudoers"
  repositories:
    - repository: frontend
      name: r1
      type: git
      ref: develop
      trigger:
        branches:
          include:
            - develop
          exclude:
            - main
name: $(SourceBranchName)_$(date:yyyyMMdd)$(rev:.r) - Frontend App [CI]
variables:
  - name: imageName
    value: fronted-app
  - name: containerRegistryConnection
    value: apps-registry-connection
pool:
  vmImage: "ubuntu-latest"
stages:
  - stage: Build
    displayName: Build and push
    jobs:
      - job: JobBuild
        displayName: Build job
        container: running-image
        steps:
          - checkout: frontend
            displayName: Checkout Frontend repository
            path: fe
            persistCredentials: true
          ...
Pipeline p3 looks like the following
name: $(SourceBranchName)_$(date:yyyyMMdd)$(rev:.r) - App [CD]
trigger: none
resources:
  containers:
    - container: running-image
      image: ubuntu:latest
      options: "-v /usr/bin/sudo:/usr/bin/sudo -v /usr/lib/sudo/libsudo_util.so.0:/usr/lib/sudo/libsudo_util.so.0 -v /usr/lib/sudo/sudoers.so:/usr/lib/sudo/sudoers.so -v /etc/sudoers:/etc/sudoers"
  pipelines:
    - pipeline: app-fe-delivery
      source: "p1"
      trigger:
        stages:
          - Build
        branches:
          include:
            - develop
pool:
  vmImage: "ubuntu-latest"
stages:
  - stage: Delivery
    jobs:
      - job: JobDevelopment
        steps:
          - template: ../templates/template-setup.yaml # Template reference
            parameters:
              serviceDisplayName: ${{ variables.serviceDisplayName }}
              serviceName: ${{ variables.serviceName }}
          ...
Issue
Even though I followed, step by step, all the rules set out in the official documentation:
Pipeline p1 is never triggered by any commit on the develop branch of the r1 repository
Even if I run Pipeline p1 manually, Pipeline p3 is never triggered
Remarks
As stated in the pipelines YAML reference, triggers are enabled by default
In the same documentation: if no branch include filter is expressed, the trigger fires on all branches
As stated in the triggers section of "Check out multiple repositories in your pipeline", triggers only fire for repositories hosted in Azure DevOps
Is it possible to disable the pipeline's own CI trigger (trigger: none) and still have the resource repositories' triggers fire?
The build agent user has been authorized to access and queue new builds
A couple of possible solutions.
First off, I believe your issue is with:
trigger: none
This means the pipeline will only run manually. From the documentation you referenced:
Triggers are enabled by default on all the resources. However, you can choose to override/disable triggers for each resource.
The way this is configured, all push triggers are disabled.
One possible way to achieve what you are attempting, I believe, is to remove trigger: none from p1 and p3.
If I read your question correctly, you are trying to do a CI/CD build and deployment on the repository. If so, and if the scenario is appropriate (i.e., a build should always trigger a deployment), may I suggest combining these pipelines into one and putting an if statement around the deployment stage, similar to:
- ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/master')}}:
Also, if deploying to multiple environments, this can be followed by a loop indented one level in:
- ${{ each environmentName in parameters.environmentNames }}:
I noticed you are already using a template, so this would just mean moving the template call up from the job to the stage and having it act as a wrapper. Feel free to provide feedback; if this answer isn't appropriate, I can update it accordingly.
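A rough sketch of how that combined pipeline could be laid out (the stage names, the parameter, and the script steps are placeholders, not your actual templates):
parameters:
  - name: environmentNames
    type: object
    default: [Development, QA]
trigger:
  - master
stages:
  - stage: Build
    jobs:
      - job: JobBuild
        steps:
          - script: echo "build and push the image"
  # The deployment stage is only compiled into the pipeline for runs on master.
  - ${{ if eq(variables['Build.SourceBranch'], 'refs/heads/master') }}:
      - stage: Deploy
        dependsOn: Build
        jobs:
          - job: JobDeploy
            steps:
              # One step per environment, expanded at compile time.
              - ${{ each environmentName in parameters.environmentNames }}:
                  - script: echo "deploy to ${{ environmentName }}"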
I have a workflow for CI in a monorepo; for this workflow, two projects end up being built. The jobs run fine; however, I'm wondering if there is a way to remove the duplication in this workflow.yml file when setting up the runner for each job. I have them split so they run in parallel, as they do not rely on one another, and so the CI finishes faster. It's a big time difference: 5 minutes vs. 10+ when waiting for the CI to finish.
jobs:
  job1:
    name: PT.W Build
    runs-on: macos-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v1
      - name: Setup SSH-Agent
        uses: webfactory/ssh-agent@v0.2.0
        with:
          ssh-private-key: |
            ${{ secrets.SSH_PRIVATE_KEY }}
      - name: Setup JDK 1.8
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
      - name: Setup Permobil-Client
        run: |
          echo no | npm i -g nativescript
          tns usage-reporting disable
          tns error-reporting disable
          npm run setup.all
      - name: Build PT.W Android
        run: |
          cd apps/wear/pushtracker
          tns build android --env.uglify
  job2:
    name: SD.W Build
    runs-on: macos-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v1
      - name: Setup SSH-Agent
        uses: webfactory/ssh-agent@v0.2.0
        with:
          ssh-private-key: |
            ${{ secrets.SSH_PRIVATE_KEY }}
      - name: Setup JDK 1.8
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
      - name: Setup Permobil-Client
        run: |
          echo no | npm i -g nativescript
          tns usage-reporting disable
          tns error-reporting disable
          npm run setup.all
      - name: Build SD.W Android
        run: |
          cd apps/wear/smartdrive
          tns build android --env.uglify
You can see that the jobs follow an almost identical process; it's just the apps being built that differ. I'm wondering if there is a way to take the duplicate blocks in the jobs, write them only once, and reuse them in both jobs.
There are 3 main approaches to code reuse in GitHub Actions:
Reusable Workflows
Dispatched workflows
Composite Actions <-- it's the best one in your case
The following details are from my article describing their pros and cons:
🔸 Reusing workflows
The obvious option is using the "Reusable workflows" feature that allows you to extract some steps into a separate "reusable" workflow and call this workflow as a job in other workflows.
🥡 Takeaways:
Nested reusable workflow calls are allowed (up to 4 levels) while loops are not permitted.
Env variables are not inherited. Secrets can be inherited by using special secrets: inherit job param.
It's not convenient if you need to extract and reuse several steps inside one job.
Since it runs as a separate job, you have to use build artifacts to share files between a reusable workflow and your main workflow.
You can call a reusable workflow in a synchronous or asynchronous manner (managing this via job ordering with the needs key; see the sketch after this list).
A reusable workflow can define outputs that extract outputs/outcomes from its executed steps. These can easily be used to pass data to the "main" workflow.
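For example, a caller could chain two reusable workflows like this (the file names and the artifact-name output are assumptions; same-repo local references are used here, though the owner/repo/path@ref form also works):
name: Caller
on:
  push:
    branches:
      - main
jobs:
  build:
    uses: ./.github/workflows/reusable-build.yml
    secrets: inherit   # pass the caller's secrets through to the reusable workflow
  deploy:
    needs: build       # synchronous ordering: wait for the reusable build to finish
    uses: ./.github/workflows/reusable-deploy.yml
    with:
      artifact-name: ${{ needs.build.outputs.artifact-name }}   # output assumed to be defined by the reusable workflow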
🔸 Dispatched workflows
Another possibility that GitHub gives us is the workflow_dispatch event, which can trigger a workflow run (a minimal example follows the takeaways below). Simply put, you can trigger a workflow manually or through the GitHub API and provide its inputs.
There are actions available on the Marketplace which allow you to trigger a "dispatched" workflow as a step of the "main" workflow.
Some of them also allow doing it in a synchronous manner (waiting until the dispatched workflow is finished). It is worth saying that this feature is implemented by polling the statuses of repo workflows, which is not very reliable, especially in a concurrent environment. Also, it is bounded by GitHub API usage limits and therefore has a delay in finding out the status of a dispatched workflow.
🥡 Takeaways
You can have multiple nested calls, triggering a workflow from another triggered workflow. If done carelessly, this can lead to an infinite loop.
You need a special token with "workflows" permission; your usual secrets.GITHUB_TOKEN doesn't allow you to dispatch a workflow.
You can trigger multiple dispatched workflows inside one job.
There is no easy way to get some data back from dispatched workflows to the main one.
Works better in a "fire and forget" scenario. Waiting for a dispatched workflow to finish has some limitations.
You can observe dispatched workflow runs and cancel them manually.
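For illustration, a dispatched workflow is just a normal workflow with a workflow_dispatch trigger and optional inputs (the file name and input below are made up):
# .github/workflows/deploy-on-demand.yml (hypothetical)
name: Deploy on demand
on:
  workflow_dispatch:
    inputs:
      environment:
        description: "Target environment"
        required: true
        default: "staging"
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying to ${{ github.event.inputs.environment }}"
It can then be started from another workflow via the API, or locally with the GitHub CLI, e.g. gh workflow run deploy-on-demand.yml -f environment=staging, assuming a token with sufficient permissions.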
🔸 Composite Actions
In this approach we extract steps into a distinct composite action, which can be located in the same or a separate repository (a sketch follows the takeaways below).
From your "main" workflow it looks like a usual action (a single step), but internally it consists of multiple steps, each of which can call its own actions.
🥡 Takeaways:
Supports nesting: each step of a composite action can use another composite action.
Poor visualisation of internal step runs: in the "main" workflow it's displayed as a single step run. In the raw logs you can find details of the internal steps' execution, but it doesn't look very friendly.
Shares environment variables with its parent job, but doesn't share secrets, which must be passed explicitly via inputs.
Supports inputs and outputs. Outputs are prepared from outputs/outcomes of internal steps and can be easily used to pass data from composite action to the "main" workflow.
A composite action runs inside the job of the "main" workflow. Since they share a common file system, there is no need to use build artifacts to transfer files from the composite action to the "main" workflow.
You can't use continue-on-error option inside a composite action.
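A minimal sketch of how the shared setup steps from the question could move into a composite action (the path .github/actions/setup-permobil/action.yml is an assumption):
# .github/actions/setup-permobil/action.yml
name: "Setup Permobil-Client"
description: "Shared setup steps for the wear builds"
runs:
  using: "composite"
  steps:
    - name: Setup JDK 1.8
      uses: actions/setup-java@v1
      with:
        java-version: 1.8
    - name: Setup Permobil-Client
      shell: bash
      run: |
        echo no | npm i -g nativescript
        tns usage-reporting disable
        tns error-reporting disable
        npm run setup.all
Each job would then keep its own checkout and SSH-agent steps and replace the shared block with uses: ./.github/actions/setup-permobil before running its own tns build command.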
As far as I know, there is currently no way to reuse steps,
but in this case you can use a build matrix strategy to run the builds in parallel with the different variations:
jobs:
  build:
    name: Build
    runs-on: macos-latest
    strategy:
      matrix:
        build-dir: ['apps/wear/pushtracker', 'apps/wear/smartdrive']
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v1
      - name: Setup SSH-Agent
        uses: webfactory/ssh-agent@v0.2.0
        with:
          ssh-private-key: |
            ${{ secrets.SSH_PRIVATE_KEY }}
      - name: Setup JDK 1.8
        uses: actions/setup-java@v1
        with:
          java-version: 1.8
      - name: Setup Permobil-Client
        run: |
          echo no | npm i -g nativescript
          tns usage-reporting disable
          tns error-reporting disable
          npm run setup.all
      - name: Build Android
        run: |
          cd ${{ matrix.build-dir }}
          tns build android --env.uglify
For more information please visit https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idstrategy
Since Nov. 2021, "Reusable workflows are generally available":
Reusable workflows are now generally available.
Reusable workflows help you reduce duplication by enabling you to reuse an entire workflow as if it were an action. A number of improvements have been made since the beta was released in October:
You can utilize outputs to pass data from reusable workflows to other jobs in the caller workflow
You can pass environment secrets to reusable workflows
The audit log includes information about which reusable workflows are used
See "Reusing workflows" for more.
A workflow that uses another workflow is referred to as a "caller" workflow.
The reusable workflow is a "called" workflow.
One caller workflow can use multiple called workflows.
Each called workflow is referenced in a single line.
The result is that the caller workflow file may contain just a few lines of YAML, but may perform a large number of tasks when it's run. When you reuse a workflow, the entire called workflow is used, just as if it was part of the caller workflow.
Example:
In the reusable workflow, use the inputs and secrets keywords to define inputs or secrets that will be passed from a caller workflow.
# .github/workflows/my-reusable-workflow.yml (reusable workflows must live in .github/workflows/)
# Note the special trigger 'on: workflow_call:'
on:
  workflow_call:
    inputs:
      username:
        required: true
        type: string
    secrets:
      envPAT:
        required: true
Reference the input or secret in the reusable workflow.
jobs:
  reusable_workflow_job:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: ./.github/actions/my-action
        with:
          username: ${{ inputs.username }}
          token: ${{ secrets.envPAT }}
Here ./.github/actions/my-action refers to a local action (a directory in your repository containing an action.yml); the reusable workflow itself must live under .github/workflows/.
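A matching caller workflow then references the reusable workflow at the job level with uses (the file and secret names below follow the example above and are assumptions):
# .github/workflows/caller.yml
name: Call a reusable workflow
on:
  push:
    branches:
      - main
jobs:
  call-reusable:
    uses: ./.github/workflows/my-reusable-workflow.yml
    with:
      username: octocat
    secrets:
      envPAT: ${{ secrets.ENV_PAT }}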
A reusable workflow does not have to be in the same repository, and can be in another public one.
Davide Benvegnù aka CoderDave illustrates that in "Avoid Duplication! GitHub Actions Reusable Workflows" where:
n3wt0n/ActionsTest/.github/workflows/reusableWorkflowsUser.yml references
n3wt0n/ReusableWorkflow/.github/workflows/buildAndPublishDockerImage.yml@main