Trigger Gitlab Pipeline Jobs when Merge Succeeds - kubernetes

I have three K8s clusters: staging, sandbox, and production. I would like to:
- Trigger a pipeline that builds and deploys an image to staging when a merge request targeting master is created
- Upon a successful staging deploy, have the branch merged into master
- Reuse the image already built in the build job (before the staging deploy) for the sandbox and production deploys
Something like this:
build:
  ... (stuff that builds and pushes "$CI_REGISTRY_IMAGE:$IMAGE_TAG")
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'

staging:
  ...
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'

sandbox:
  ...
  ?

production:
  ...
  ?
What I can't figure out is how to end up with a successful MR at the end of the staging job (so that the pipeline can merge the branch into master), and then pass whatever $CI_REGISTRY_IMAGE:$IMAGE_TAG was down to the subsequent sandbox and production deploy jobs.

Trigger a pipeline to build and deploy an image to staging, if a merge
request to master is created
For the first requirement you can create rules like:
only:
  - merge_requests
except:
  variables:
    - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME != "master"
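If you would rather stay with the rules: syntax from the question, a roughly equivalent job could look like the sketch below (assuming the MR target branch is master):

build:
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$IMAGE_TAG" .
    - docker push "$CI_REGISTRY_IMAGE:$IMAGE_TAG"
  rules:
    # Run only for merge request pipelines that target master
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"'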
You can run a curl command (or otherwise hit the API) to approve the MR:
https://gitlab.example.com/api/v4/projects/:id/merge_requests/:merge_request_iid/approve
Reference : https://stackoverflow.com/a/58036578/5525824
Document: https://docs.gitlab.com/ee/api/merge_requests.html#accept-mr
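As a sketch, the approval call could be wrapped in a job that runs once the staging deploy has succeeded. GITLAB_API_TOKEN here is a hypothetical masked CI/CD variable holding a token that is allowed to approve MRs (CI_JOB_TOKEN normally is not), and the stage name is an assumption:

approve-mr:
  stage: approve              # assumption: a stage placed after the staging deploy stage
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - >
      curl --fail --request POST
      --header "PRIVATE-TOKEN: $GITLAB_API_TOKEN"
      "$CI_API_V4_URL/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/approve"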
I would like to use the same image I already built in the build job
before the staging deploy, to be used to deploy to sandbox and
production
You can pass the tag across the stages as an environment variable, e.g. TAG_NAME: $CI_COMMIT_REF_NAME.
You are making it more complicated than it needs to be; ideally you would use a TAG, which makes it easy to manage and deploy with CI.
When the MR gets merged, create a TAG, build the Docker image with that TAG name, and deploy that same TAG across the environments - simple.
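A minimal sketch of that tag-driven flow (the deployment name my-app and the kubectl-based deploy step are assumptions, not part of the original setup):

build:
  stage: build
  rules:
    - if: $CI_COMMIT_TAG
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"

.deploy-template:
  rules:
    - if: $CI_COMMIT_TAG
  script:
    # Deploy the exact image that was built for this tag
    - kubectl set image deployment/my-app my-app="$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"

deploy-sandbox:
  extends: .deploy-template
  stage: deploy
  environment: sandbox

deploy-production:
  extends: .deploy-template
  stage: deploy
  environment: production
  when: manual      # keep production behind a manual gate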

Related

How to schedule stage deployments in Azure DevOps Pipelines?

With the classic Azure DevOps release pipeline our release flow was very easy to setup.
We had a build pipeline running many times during the day. On success it deployed to our development environment. Every night the latest successful deployment to dev was released to our test environment (running automated tests for hours), before it deployed to UAT. But often we also need to deploy to test during the day, if we have a new change which needs to go directly into test or UAT. The classic pipelines allowed us to skip a stage, or deploy if the previous was only partly successful.
1) Development - automatic
2) Test - nightly or manually
3) UAT - nightly or manually
4) Staging - manual approval
5) Production - manual approval
With the multi-stage pipelines the same flow seems to be very difficult to do, at least when it comes to making it a single deployment pipeline. The first part is fine: we can have our build trigger the development deployment. But how can we hold back the release to the test environment until 0:30am, while still retaining the ability to also release it manually? If I created a separate test-environment pipeline, it could work if it had no triggers, only a schedule.
The same applies to UAT: since we also need the flexibility to run UAT deployments manually, it would also have to go into its own pipeline. Releases to our staging and production environments we "gate" with manual approvals, which is fine.
While this could technically work, if we split the deployment pipeline into multiple pipelines it becomes really difficult to manage "a release". Not to mention that a separate pipeline per stage somewhat goes against the whole multi-stage pipeline principle.
But since this was so easy to set up in the classic pipelines, I cannot really imagine that other companies have not run into the same limitations. Is it just me who cannot see the light, or can this really not be done with multi-stage pipelines?
manually run UAT deployments
We could add Azure DevOps Multi-Stage Pipelines Approval Strategies in the yaml build.
Steps:
Open the Environments tab and click the New environment button -> click the Approvals and checks button -> my environment name is TEST.
Then use it in the YAML pipeline (just a sample):
trigger: none

pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: A
  jobs:
  - deployment: 'MyDeployment'
    displayName: MyDeployment
    environment: 'TEST'
  - job: A1
    steps:
    - script: echo "##vso[task.setvariable variable=skipsubsequent;isOutput=true]false"
      name: printvar

- stage: B
  condition: and(succeeded(), ne(stageDependencies.A.A1.outputs['printvar.skipsubsequent'], 'true'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo hello from Stage B
We could also configure a schedule trigger and use it in the multi-stage pipeline.
Note: the schedule trigger and the approval strategies are used at the stage level.
For scheduled jobs: you can use something like this in your YAML:
(Copied from Microsoft documentation)
schedules:
- cron: string # cron syntax defining a schedule
  displayName: string # friendly name given to a specific schedule
  branches:
    include: [ string ] # which branches the schedule applies to
    exclude: [ string ] # which branches to exclude from the schedule
  always: boolean # whether to always run the pipeline or only if there have been source code changes since the last successful scheduled run. The default is false.
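For example, a nightly schedule matching the 0:30 release window from the question might look like the sketch below (cron times are UTC; the branch name main is an assumption):

schedules:
- cron: "30 0 * * *"            # every day at 00:30 UTC
  displayName: Nightly test/UAT rollout
  branches:
    include: [ main ]
  always: false                 # only run if there were changes since the last scheduled run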
For manual jobs, you can use the Create Release button to create and deploy a release manually. Do note that sometimes this can conflict with the schedule. Also, to "hold back a release", put an approver on the release; when approving, defer the release, noting that the time is in UTC and defaults to tomorrow - you can change it to any time after now.

Azure DevOps - Pull Request Workflow and Deployments

I'm a bit confused with how to set this workflow up using pull requests.
I have in place an existing multi-stage YAML build pipeline that in summary does the following:
- Runs a build from any branch
- Runs a deployment job to a Development Environment/Resource, if the source branch is feature/* or release/* and the build succeeds
- Runs a deployment job to a UAT Environment/Resource, if the source branch is release/*
- Runs a deployment job to the Live/Production Environment/Resource, if the source branch is master
So off the back of CI this workflow seems to work fine, the correct stages are run depending on the branch etc.
I then decided that branch policies and pull requests might be a better option for code quality, rather than letting any of the main branches be committed to directly, so I started to rework the YAML to account for this - mainly by changing the trigger to
trigger: none
This now works correctly in line with a branch policy: the build only gets kicked off when a pull request onto develop or master is opened.
However, this is where I'm a bit confused about how this is supposed to work versus how I think it works...
Firstly - is it not possible to trigger the multi-stage YAML off the back of pull requests (using Azure Repos) ? In my head all I want to do is introduce the pull request and branch policies but keep the multi-stage deployments to environments as is.
However, the deployment job stages all get skipped now - but this might be to do with my conditions within the YAML, which are as follows:
- stage: 'Dev'
  displayName: 'Development Deployment'
  dependsOn: 'Build'
  condition: |
    and
    (
      succeeded(),
      eq(variables['Build.Reason'], 'PullRequest'),
      ne(variables['System.PullRequest.PullRequestId'], 'Null'),
      or
      (
        startsWith(variables['Build.SourceBranch'], 'refs/heads/feature/'),
        startsWith(variables['Build.SourceBranch'], 'refs/heads/release/')
      )
    )
  jobs:
  - deployment: Deploy
    pool:
      name: 'Development Server Agent Pool'
    variables:
      Parameters.WebsitePhysicalPath: '%SystemDrive%\inetpub\wwwroot\App'
      Parameters.VirtualPathForApplication: ''
      Parameters.VirtualApplication: ''
    environment: 'Development.Resource-Name'
    .....
Is there something I am missing?
Or do I have to remove the multi-stage deployments from YAML and revert back to using Release Pipelines for pull requests (maybe with approval gates??)
Thanks in advance!
It looks like the issue is with the conditions within the YAML.
If the pipeline is triggered by a PR, the value of variables['Build.SourceBranch'] will be refs/pull/<PR id>/merge. The expression in the above condition, startsWith(variables['Build.SourceBranch'], 'refs/heads/feature/'), will therefore be false, which causes the stage to be skipped. See build variables for more information.
You can try using variables['System.PullRequest.SourceBranch'] instead, which evaluates to the source branch of the PR. Check System variables for more information. See below:
condition: |
  and
  (
    succeeded(),
    eq(variables['Build.Reason'], 'PullRequest'),
    ne(variables['System.PullRequest.PullRequestId'], ''),
    or
    (
      startsWith(variables['System.PullRequest.SourceBranch'], 'refs/heads/feature/'),
      startsWith(variables['System.PullRequest.SourceBranch'], 'refs/heads/release/')
    )
  )
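A quick way to confirm this behaviour is to print both variables from a step in a PR-triggered run (a minimal sketch):

steps:
- script: |
    echo "Build.SourceBranch:              $(Build.SourceBranch)"
    echo "System.PullRequest.SourceBranch: $(System.PullRequest.SourceBranch)"
  displayName: 'Show branch variables during a PR build'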

Specify order of pipelines and dependencies

I'm having a hard time getting a grasp on this to be honest.
Right now my lab project is as follows:
PR to master -> Triggers Pre-Build Pipeline as condition to merge the code ->
On merge Infrastructure pipe runs only if any changes happen in my Infrastructure folder ->
On merge I want to run my deploy pipeline to deploy my web app to Azure.
The pipes in question do the things they ought to, i.e.
Pre build builds, publishes artifact, runs Unit tests, validates ARM templates.
Infra pipe deploys the necessary infra for my web app such as ResourceGroup, App plan, app service, key vault.
Deploy Pipe downloads the artifact produced in pre deploy and deploys to a stage slot and swaps it to production slot.
What I can't seem to get to work is chaining the pipelines through dependencies: if changes happen to both infra and web app code in master, I want the infra pipe to run first and the deploy pipe only if it succeeds.
If I merge only app code I want only the deploy pipe to run regardless if the infra pipe ran or not.
If I merge only infra code I want only the infra pipe to run.
If I merge both app and infra code I want both infra and deploy pipe to run in specific order.
I feel this shouldn't be all that hard to accomplish, but I've spent way too much time trying to solve this to no avail, anyone able to help? :)
Edit:
Hey, sorry #HughLin-MSFT, I've been trying to work around this a bit since we're trying to avoid running scripts left and right. :)
I saw you have Build Queuing planned in an upcoming release so for now I think we might have to wait for that.
If I were to merge my deploy and infra pipe, can I use:
trigger:
  branches:
    include:
    - master
  paths:
    include:
    - Infrastructure/*
at the stage level and somehow skip a stage instead?
I've seen multiple articles mention "Continue if skipped", but I can't find any information on how to actually skip a stage.
For the first and second cases, you just need to set Path filters in Triggers; the pipeline then only triggers when a file at the specified path is changed. Please refer to this.
For the third case, you can try adding two agent jobs to the infra pipe: add a Trigger Azure DevOps Pipeline task to the second agent job to trigger the deploy pipe, and set Only when all previous jobs have succeeded in the Run this job drop-down box for job2. In addition, add a PowerShell task before the Trigger Azure DevOps Pipeline task and use a script to detect whether there is app code; run job2 if there is, and cancel job2 if not.
Update:
First, create a new pipeline and create a variable: changedcode.
Use the Builds - Get REST API to get the commit, then get the changed code folder with the Commits - Get Changes REST API.
Assign the changed code folder name as the value of the changedcode variable.
Set custom conditions for the agent jobs. In the Infra job, if the changedcode variable value contains Infra, run the Infra job and use the Builds - Queue REST API or the Trigger Azure DevOps Pipeline task to trigger the Infra pipeline. The same is true for the Deploy job; the only difference is the custom condition expression.
Here is a sample structure in yaml:
variables:
  changedcode: ""

jobs:
- job: GetChanges
  steps:
  - powershell: |
      # Get the changed code folder with the REST API and assign it to the changedcode variable

- job: Infra
  dependsOn: GetChanges
  condition: contains(variables['changedcode'], 'Infra')
  steps:
  - powershell: |
      # Queue the Infra pipeline with the Builds - Queue REST API or the Trigger Azure DevOps Pipeline task

- job: Deploy
  dependsOn: Infra
  condition: and(contains(variables['changedcode'], 'deploy'), ...)
  steps:
  - powershell: |
      # Queue the Deploy pipeline with the REST API or the Trigger Azure DevOps Pipeline task
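As a rough sketch of the GetChanges job above, the Commits - Get Changes REST API can be called from a PowerShell step. The folder-extraction logic and the output-variable plumbing here are assumptions, and the step needs access to System.AccessToken:

- job: GetChanges
  steps:
  - powershell: |
      # Ask Azure DevOps which files changed in the commit that triggered this run
      $headers = @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }
      $url = "$(System.CollectionUri)$(System.TeamProject)/_apis/git/repositories/$(Build.Repository.ID)/commits/$(Build.SourceVersion)/changes?api-version=6.0"
      $changes = Invoke-RestMethod -Uri $url -Headers $headers
      # Collapse the changed file paths into unique top-level folder names, e.g. "Infrastructure;src"
      $folders = ($changes.changes.item.path | ForEach-Object { ($_ -split '/')[1] } | Sort-Object -Unique) -join ';'
      Write-Host "Changed folders: $folders"
      # Expose the result to later jobs as an output variable
      Write-Host "##vso[task.setvariable variable=changedcode;isOutput=true]$folders"
    name: detect
    env:
      SYSTEM_ACCESSTOKEN: $(System.AccessToken)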

How to write CI/CD pipeline to run integration testing of java micro services on Google kubernetes cluster?

Background:
I have 8-9 private ClusterIP Spring-based microservices in a GKE cluster. All of the microservices have integration tests bundled with them. I am using Bitbucket and Maven as the build tool.
All of the microservices talk to each other via REST calls with URLs like: http://:8080/rest/api/fetch
Requirement: I have a testing environment ready with all the Docker images up on the GKE test cluster. I want that, as soon as I merge code to master for service-A, the pipeline deploys the image to test-env and runs the integration test cases. If the test cases pass, it should deploy to the QA environment; otherwise it should roll back the image of service-A to the previous one.
Issue: On every code merge to master, I am able to run the JUnit test cases of service-A, build its Docker image, push it to GCR and deploy it to the test-env cluster. But how can I trigger the integration test cases after the deployment, and roll back to the previously deployed image if the integration test cases fail? Is there any way?
TIA
You can create different steps for each part:
pipelines:
  branches:
    BRANCH_NAME:
      - step:
          script:
            - BUILD
      - step:
          script:
            - DEPLOY
      - step:
          script:
            - First set of JUnit tests
      - step:
          script:
            - Run integration tests (here you can add the rollback if they fail)
      - step:
          script:
            - Upload to QA
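For the rollback itself, a hedged option (assuming the service is a standard Kubernetes Deployment, kubectl is already authenticated against the test cluster, and the deployment name service-a is hypothetical) is to undo the rollout when the integration tests fail:

- step:
    name: Integration tests with rollback
    script:
      # If the test suite fails, roll the Deployment back to the previous image and fail the step
      - ./integration_test.sh || (kubectl rollout undo deployment/service-a && exit 1)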
There are many ways you can do it. From the above information it's not clear which build tool you are using.
Let's say you are using Bamboo: you can create a task for this and include it in the SDLC process. The task would mostly be a Bamboo script or an Ansible script.
You could also create a separate shell script to run the integration test suite after deployment.
You should probably check what Tekton is offering.
The Tekton Pipelines project provides k8s-style resources for declaring CI/CD-style pipelines.
If you use GitLab CI/CD you can break the stages up as follows:
stages:
  - compile
  - build
  - test
  - push
  - review
  - deploy
where you compile the code in the first stage, build the Docker images from it in the next, and then pull the images and run them to do all your tests (including the integration tests).
Here is a mock-up of how it will look:
compile-stage:
  stage: compile
  script:
    - echo 'Compiling Application'
    # - bash my compile script
  # Compile artifacts can be used in the build stage.
  artifacts:
    paths:
      - out/dist/dir
    expire_in: 1 week

build-stage:
  stage: build
  script:
    - docker build . -t "${CI_REGISTRY_IMAGE}:testversion"  ## Dockerfile should make use of out/dist/dir
    - docker push "${CI_REGISTRY_IMAGE}:testversion"

test-stage1:
  stage: test
  script:
    - docker run ${CI_REGISTRY_IMAGE}:testversion bash unit_test.sh

test-stage2:
  stage: test
  script:
    - docker run -d ${CI_REGISTRY_IMAGE}:testversion
    - ./integration_test.sh

## You will only push the latest image if the build has passed all the tests.
push-stage:
  stage: push
  script:
    - docker pull ${CI_REGISTRY_IMAGE}:testversion
    - docker tag ${CI_REGISTRY_IMAGE}:testversion ${CI_REGISTRY_IMAGE}:latest
    - docker push ${CI_REGISTRY_IMAGE}:latest

## An app will be deployed on staging if it has passed all the tests.
## The concept of CI/CD is generally that you should do all the automated tests before even deploying to staging. Staging can be used for user acceptance and quality assurance tests etc.
deploy-staging:
  stage: review
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://$CI_ENVIRONMENT_SLUG.$KUBE_INGRESS_BASE_DOMAIN
    on_stop: stop_review
  only:
    - branches
  script:
    - kubectl apply -f deployments.yml

## The deployment to the production environment is manual and only happens when a version tag is committed.
deploy-production:
  stage: deploy
  environment:
    name: prod
    url: https://$CI_ENVIRONMENT_SLUG.$KUBE_INGRESS_BASE_DOMAIN
  only:
    - tags
  script:
    - kubectl apply -f deployments.yml
  when: manual
I hope the above snippet will help you. If you want to learn more about deploying microservices with GitLab CI/CD on GKE, read this

Travis CI: How to conditionally run provider deployment jobs?

I have a travis script deploying to different S3 buckets based on 2 conditions:
1. the branch name
2. the $TRAVIS_BRANCH env variable
... travis stuff
deploy:
  - provider: s3
    ... other config
    bucket: my-staging-bucket
    on:
      repo: MyOrg/my-repo
      branch: staging
      condition: $TRAVIS_BRANCH = staging
  - provider: s3
    ... other config
    bucket: my-prod-bucket
    on:
      repo: MyOrg/my-repo
      branch: production
      condition: $TRAVIS_BRANCH = production
It's working as expected:
When I deploy to staging, the first config successfully builds and deploys and I'm given appropriate messaging in Travis' job log.
It also tries to deploy to production and is stopped by the on: conditions, again providing messaging that indicates as much. The resulting log messages look like this, with the first two lines indicating a successful deployment to staging and the last line the skipped deployment to production:
-Preparing deploy
-Deploying application
-Skipping a deployment with the s3 provider because a custom condition was not met
This is consistent when the situation is reversed:
-Skipping a deployment with the s3 provider because this branch is not permitted: production
-Skipping a deployment with the s3 provider because a custom condition was not met
...
-Preparing deploy
-Deploying application
This has led to some confusion amongst the team, as the messaging appears to be a false negative, indicating the deployment failed when it's actually functioning as intended. What I would like to do is set up Travis so that it only runs the deployment script appropriate for that branch and env variable combo.
Is there a way to do that? I was under the impression this was the method for conditional deployment.
If there's no way to prevent both deploy jobs from running, is there a way to at least suppress the messaging in the job log?
The best way to do this would be to use Travis' stages and jobs features. Stages are groups of jobs. Jobs inside a stage run in parallel. Stages run in sequence, one after the other. Entire stages can be conditional, and stages can also contain conditional jobs. Jobs in a stage can be deploy jobs too (i.e. the entire deploy: in your travis.yml can be nested inside a conditional stage). Most importantly for your goals, conditional stages and their included jobs are silently skipped if the condition is not met.
This is very different from the standard deploy: matrix that you already have, i.e. your current deploy step contains 2 deployments, so you get the message that it is skipping a deployment.
Instead, you can change that into separate deploy stages with conditional jobs.
The downside to using stages like this is that each stage runs in its own VM, so you can't share data from one stage to the next (i.e. build artifacts from previous stages do not propagate to subsequent stages). You can get around this by sharing the build results of a lengthy compile stage via S3, for example.
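As a rough sketch of that workaround (the bucket name, directories, and the presence of the AWS CLI on the later stage's image are all assumptions), the compile stage could upload its output with the S3 deploy provider and a later stage could pull it back down:

jobs:
  include:
    - stage: compile
      script: bash scripts/compile.sh
      # Push the build output to S3 so later stages (which run on fresh VMs) can fetch it
      deploy:
        provider: s3
        access_key_id: $AWS_ACCESS_KEY_ID
        secret_access_key: $AWS_SECRET_ACCESS_KEY
        bucket: my-build-artifacts
        local_dir: build
        skip_cleanup: true
        on:
          all_branches: true
    - stage: deploy-staging
      script:
        # Pull the artifacts back down before deploying (assumes the AWS CLI is available)
        - aws s3 sync "s3://my-build-artifacts" build
        - bash scripts/deploy.sh staging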
More information can be found here:
https://docs.travis-ci.com/user/build-stages
I have a working example here in my github: https://github.com/brianonn/travis-test
jobs:
  include:
    - stage: compile
      script: bash scripts/compile.sh
    - stage: test
      script: bash scripts/test.sh
    - stage: deploy-staging
      if: branch = staging
      name: "Deploy to staging S3"
      script: skip
      deploy:
        provider: script
        script: bash scripts/deploy.sh staging
        on:
          branch: staging
          condition: $TRAVIS_BRANCH = staging
    - stage: deploy-prod
      if: branch = production
      name: "Deploy to production S3"
      script: skip
      deploy:
        provider: script
        script: bash scripts/deploy.sh production
        on:
          branch: production
          condition: $TRAVIS_BRANCH = production
This produces a Travis job log that is specific to each of staging and production.