We use an Azure DevOps process to deploy to our staging slot, and then we do a straight swap between staging and production.
We note that through the portal interface we can direct a percentage of traffic to staging. Is there a method to do this through the Azure DevOps CD process, so that we direct a percentage of traffic to a specific slot and then gradually increase it, essentially a canary deployment method via the pipeline using Azure Web App slots?
Thank you in advance.
Looking at the documentation, it looks like this is possible:
Next to the Azure portal, you can also use the az webapp traffic-routing set command in the Azure CLI to set the routing percentages from CI/CD tools like DevOps pipelines or other automation systems.
You can use this command to set the traffic percentage:
az webapp traffic-routing set --distribution staging=50 --name MyWebApp --resource-group MyResourceGroup
and by pausing between invocations (for example with PowerShell's Start-Sleep -Seconds 10) you can gradually increase the percentage.
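Putting the pieces together, a pipeline step could ramp traffic up in stages and finish with a slot swap. This is a minimal sketch: MyWebApp and MyResourceGroup are placeholders, the percentages and wait time are arbitrary, and a real canary rollout would gate each step on health checks rather than a fixed sleep.

```shell
#!/usr/bin/env bash
# Hypothetical canary ramp-up for an Azure Web App staging slot.
set -euo pipefail

APP_NAME="MyWebApp"            # placeholder
RESOURCE_GROUP="MyResourceGroup"  # placeholder

for PERCENT in 10 25 50 100; do
  # Route the given percentage of production traffic to the staging slot.
  az webapp traffic-routing set \
    --distribution staging="$PERCENT" \
    --name "$APP_NAME" \
    --resource-group "$RESOURCE_GROUP"

  # Wait before increasing further; replace with real health checks in practice.
  sleep 600
done

# Once staging has taken 100% of traffic without issues, complete the rollout:
az webapp deployment slot swap \
  --slot staging \
  --name "$APP_NAME" \
  --resource-group "$RESOURCE_GROUP"
```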
I was wondering if there is a way to know, see, and confirm that deploying resources using Azure CLI scripts in Azure DevOps CI/CD pipelines deploys the defined resources and options incrementally.
I know that with ARM templates this can be set as an option in the pipeline task:
But I haven't found anything similar for Azure CLI deployments.
You can set the deployment mode in the Azure CLI with the --mode parameter.
For example, for Complete deployment mode:
az deployment group create \
--mode Complete \
--name ExampleDeployment \
--resource-group ExampleResourceGroup \
--template-file storage.json
See the documentation for details.
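To confirm the incremental behavior the question asks about: Incremental is the default mode for az deployment group create, so the following two invocations are equivalent (deployment and resource group names are the example placeholders from above).

```shell
# Incremental mode only adds/updates resources defined in the template,
# leaving other resources in the group untouched. It is the default:
az deployment group create \
  --name ExampleDeployment \
  --resource-group ExampleResourceGroup \
  --template-file storage.json

# Equivalent, with the mode stated explicitly:
az deployment group create \
  --mode Incremental \
  --name ExampleDeployment \
  --resource-group ExampleResourceGroup \
  --template-file storage.json
```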
I am trying to stand up a Kubernetes cluster, then run tests and publish the results. Do you have any idea how this can be done?
I created a pipeline but I do not know which YAML to use, or which task to add first (a Kubernetes deploy task or something else).
We have a Kubernetes deployment.yml file. It references the container image (exampleacr.io/sampleapp) that we are going to publish to AKS (app version: app/v1). The service.yml file just exposes the application (app version: v1).
Both YAML files need to be added; please refer to WAY 2 below for modifying them manually.
WAY 1:
Quick way: the Deploy to Azure Kubernetes Service template will do everything that's needed, because when you use that template these variables get defined for you.
Steps:
1. Create an AKS cluster and an ACR (container registry) in Azure.
2. In Azure DevOps, create a pipeline and choose any source; for example, select an application hosted in GitHub.
3. Select Deploy to Azure Kubernetes Service, select your AKS subscription, select the existing cluster, then select the container registry you want to push the Docker image into. Keep the remaining settings as default.
4. Click Validate and configure; Azure Pipelines will generate a YAML file.
5. In the review step for azure-pipelines.yml you will see two stages: Build and Deploy.
6. Click Save and run. This saves the YAML file to the master branch, creates the manifest files (deployment.yml and service.yml) for the Kubernetes deployment, and triggers a build.
Reference
WAY 2: Using a Docker image
To modify the azure-pipelines.yml file yourself, in the third step above select Docker instead of Deploy to Azure Kubernetes Service.
Under Configure your pipeline, if the Dockerfile is under Build.SourcesDirectory in your application, it will appear as, say, $(Build.SourcesDirectory)/app/Dockerfile. That is the Dockerfile the pipeline builds.
In the review step for azure-pipelines.yml, a few things can be modified:
You can change the tag variable to the repository name, and the deployment.yml and service.yml files can be added to the pipeline YAML with a few modifications.
The build stage is generated automatically; there is no need to modify it.
You have to add push and deploy stages to the YAML file, as shown in the article.
And you can get the source code here.
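As a rough illustration of what the generated Build and Deploy stages do, the flow can be sketched with the CLI. The registry and image names (exampleacr, sampleapp) come from the answer above; the resource group, cluster name, tag, and manifest paths are placeholders.

```shell
# Build the image from the Dockerfile and push it to ACR in one step
# (exampleacr, sampleapp, and the v1 tag are placeholders).
az acr build --registry exampleacr --image sampleapp:v1 .

# Point kubectl at the AKS cluster, then apply the generated manifests.
az aks get-credentials --resource-group MyResourceGroup --name MyAksCluster
kubectl apply -f manifests/deployment.yml
kubectl apply -f manifests/service.yml
```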
We have containerized our .NET Core Web API using Docker and are deploying it to OpenShift. Below is the process we are following:
1. A Docker image is created.
2. The image is pushed to JFrog Artifactory from the Azure DevOps build pipeline.
3. We have around 40 OpenShift clusters where we have to deploy our app.
4. We created a generic YAML template (it references the Docker image with TAG_NAME as a variable).
5. We created the template using the oc command line on all sites.
6. Using an Azure DevOps release pipeline, we have created around 40 stages.
Each stage contains the tasks below to trigger the templates:
oc project $(project_name)
oc process template_name TAG_NAME="new_tag" | oc apply -f -
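The per-stage task above can be sketched as one parameterized script; project_name, template_name, and new_tag are all placeholders supplied by the release pipeline, exactly as in the commands above.

```shell
#!/usr/bin/env bash
# Per-stage deployment task; all three arguments are pipeline-supplied values.
set -euo pipefail

PROJECT_NAME="$1"    # e.g. $(project_name) from the release pipeline
TEMPLATE_NAME="$2"   # the generic template created with oc on each site
NEW_TAG="$3"         # the new Docker image tag to roll out

# Switch to the target project, then instantiate the template with the new
# tag and apply the rendered objects to the cluster.
oc project "$PROJECT_NAME"
oc process "$TEMPLATE_NAME" TAG_NAME="$NEW_TAG" | oc apply -f -
```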
Problem:
If I need to deploy a new version of the Docker image (tag), running all the stages takes a lot of time. Is there any better way to run these stages in parallel?
With IIS deployments we can make use of deployment groups, but is there a better way of doing it in this case?
I know there is a way for all the clusters to automatically pull an image as soon as it is pushed to Artifactory, but I do not want that; I need more control over which set of clusters I am deploying to.
I am trying to deploy an application to a Kubernetes cluster using a Jenkins multibranch pipeline and a Jenkinsfile, but I am unable to make the connection between Jenkins and Kubernetes. I can't share more details of the code here.
I just want to know if there is any way to make this connection (Jenkins to Kubernetes) using the Jenkinsfile, so that I can use it to deploy the application to Kubernetes.
The following technology stack might clarify my issue:
The Jenkinsfile is kept at the root of the project in GitHub.
A separate Jenkins server hosts the pipeline that deploys the application to Kubernetes.
The Kubernetes cluster is on premises.
You need credentials to talk to Kubernetes. When you have automation like Jenkins running jobs, it's best to create a service account for Jenkins; look here for some documentation. Once you create the Jenkins service account, you can extract an authentication token for that account, which you put into Jenkins. Since your Jenkins is not a pod inside your Kubernetes cluster, what I would recommend is uploading a working kubectl config as a secret file in the Jenkins credentials manager.
Then, in your Jenkins job configuration, you can use that secret. Jenkins can put the file somewhere your job can access it, and in your Jenkinsfile you can run commands with "kubectl --kubeconfig= ...".
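As a rough sketch of the service-account setup described above, run on a machine that already has admin access to the cluster. The namespace (ci), account name (jenkins), role binding, and file paths are all assumptions; scope the role to what your jobs actually need.

```shell
# Create a dedicated service account for Jenkins and grant it deploy rights
# (namespace, names, and the 'edit' role here are assumptions, not requirements).
kubectl create namespace ci
kubectl -n ci create serviceaccount jenkins
kubectl create clusterrolebinding jenkins-deploy \
  --clusterrole=edit --serviceaccount=ci:jenkins

# On Kubernetes 1.24+, request an authentication token for the account;
# this token goes into the Jenkins credential (or into the kubeconfig file).
kubectl -n ci create token jenkins

# In the Jenkins job, once the secret kubeconfig file has been written to
# disk (e.g. to $KUBECONFIG_FILE), commands reference it explicitly:
kubectl --kubeconfig="$KUBECONFIG_FILE" apply -f k8s/deployment.yml
```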
We have a NodeJS Cloud Foundry application with a DevOps Delivery Pipeline enabled.
Initially we set up the pipeline to use the Active Deploy extension to the Delivery Pipeline in Bluemix to deploy app updates without any downtime (also called rolling deployments, blue-green deployments, or red-black deployments).
https://www.ibm.com/developerworks/cloud/library/cl-bluemix-rollingpipeline/cl-bluemix-rollingpipeline-pdf.pdf
Unfortunately, the Active Deploy service was abruptly retired as of June 23, 2017, so we are now incurring downtime upon deployment.
https://www.ibm.com/blogs/bluemix/2017/05/retirement-ibm-active-deploy-beta-service/
How do we go back to the process by which a new version of an application is deployed into an environment with no disruption in service for the consumer? UrbanCode? Other options?
A good way of doing this is to use the IBM Cloud Garage's blue-green-deploy cf plugin. In your deploy script, add:
cf add-plugin-repo CF-Community https://plugins.cloudfoundry.org
cf install-plugin blue-green-deploy -f -r CF-Community
Then, instead of doing cf push <app_name>, do:
cf blue-green-deploy <app_name>
You can also specify a manifest, or specify a smoke test (if the smoke test fails, the build will be marked as failed and the original version will continue running).
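Putting that together, a deploy script might look like the sketch below. The app name, manifest path, and smoke-test script are assumptions; the plugin invokes the smoke-test script with the URL of the new (not yet live) version, and if the script exits non-zero the old version keeps serving traffic.

```shell
# Install the plugin once (idempotent in a fresh pipeline environment).
cf add-plugin-repo CF-Community https://plugins.cloudfoundry.org
cf install-plugin blue-green-deploy -f -r CF-Community

# Deploy with a manifest and gate the cutover on a smoke test
# (my-app, manifest.yml, and smoke-test.sh are placeholders).
cf blue-green-deploy my-app -f manifest.yml --smoke-test ./smoke-test.sh
```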