How to run a script which starts a kubernetes cluster on azure devops - kubernetes

I am trying to start a Kubernetes cluster, then run tests and publish the results. Do you have any idea how this can be done?
I created a pipeline but I do not know which YAML to use, or which task to add first: Kubernetes deploy or something else.

We have a Kubernetes deployment.yml file: it takes the container image (exampleacr.io/sampleapp) that we are going to publish to AKS (app version: app/v1).
The service.yml just exposes the application (app version: v1).
Both YAML files need to be added. Please refer to WAY 2 below for modifying them manually.
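For orientation, a minimal sketch of those two manifests, reusing the example image name from above; the port, replica count and label names are illustrative assumptions, not values from the question:

# deployment.yml - runs the container image on AKS
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sampleapp
  labels:
    app: sampleapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sampleapp
  template:
    metadata:
      labels:
        app: sampleapp
    spec:
      containers:
      - name: sampleapp
        image: exampleacr.io/sampleapp   # image built and pushed by the pipeline
        ports:
        - containerPort: 80              # assumed application port
---
# service.yml - exposes the application
apiVersion: v1
kind: Service
metadata:
  name: sampleapp
spec:
  type: LoadBalancer
  selector:
    app: sampleapp
  ports:
  - port: 80
    targetPort: 80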
WAY 1: Quick way
Deploy to Azure Kubernetes Service will do everything that is needed, because if you use the Deploy to Azure Kubernetes Services template these variables get defined for you.
Steps:
1. Create an AKS cluster and an ACR (container registry) in Azure.
2. In Azure DevOps, create a pipeline and choose the source, for example an application hosted in GitHub.
3. Select Deploy to Azure Kubernetes Service, select your Azure subscription, select the existing cluster, then select the container registry that you want to put the Docker image into. Keep the remaining settings as default.
4. Click Validate and configure; Azure Pipelines will generate a YAML file.
5. In the review pipeline YAML view of azure-pipelines.yml you have two stages: Build and Deploy.
6. Click Save and run: this saves the YAML file to the master branch, creates the manifest files (deployment.yml and service.yml) for the Kubernetes deployment, and also triggers the build.
Reference
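As a rough, trimmed sketch of the kind of azure-pipelines.yml this template generates (the service connection name, environment name and manifest paths are placeholders, not the exact values the wizard produces):

trigger:
- master

variables:
  dockerRegistryServiceConnection: '<acr-service-connection>'   # placeholder
  imageRepository: 'sampleapp'
  containerRegistry: 'exampleacr.io'
  dockerfilePath: '**/Dockerfile'
  tag: '$(Build.BuildId)'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: Docker@2
      displayName: Build and push the image to ACR
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)

- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  jobs:
  - deployment: Deploy
    environment: 'sampleapp'        # placeholder; the wizard wires this environment to the AKS cluster
    pool:
      vmImage: 'ubuntu-latest'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: Deploy to AKS
            inputs:
              action: deploy
              manifests: |
                manifests/deployment.yml
                manifests/service.yml
              containers: |
                $(containerRegistry)/$(imageRepository):$(tag)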
WAY 2: Using the Docker template
To make the modifications in the azure-pipelines.yml file yourself, in the 3rd step from above select the Docker template instead of Deploy to Azure Kubernetes Service.
Under Configure pipeline, if the Dockerfile is in Build.SourcesDirectory of your application it will appear as, say, $(Build.SourcesDirectory)/app/Dockerfile; that is the Dockerfile the pipeline builds.
In the review pipeline YAML view of azure-pipelines.yml a few things can be modified:
You can change the tag variable to the repository name, and then the deployment.yml and service.yml files can be added to the pipeline with a few modifications.
The Build stage is generated automatically, so there is no need to modify it.
You have to add Push and Deploy stages to the YAML file as shown in the article.
And get the source code here.
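As a small sketch of one reading of the variable change described above (the repository name here just reuses the example image name and is a placeholder); the Push and Deploy stages added afterwards can follow the same task layout as the WAY 1 sketch:

variables:
  dockerfilePath: '$(Build.SourcesDirectory)/app/Dockerfile'   # path shown when configuring the Docker template
  tag: 'sampleapp'                                             # 'tag' changed to the repository name, per the note above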

Related

Container deployment with self-managed kubernetes in AWS

I am relatively new to AWS and kubernetes. I have created a self-managed kubernetes cluster running in AWS (not using EKS). I have successfully created a pipeline in AWS CodePipeline that builds my container and uploads it to ECR. Currently I am manually deploying the created image in the cluster by running the following commands:
kubectl delete deployment my-service
kubectl apply -f my-service_deployment.yaml
How can I automate this manual step in AWS CodePipeline? How can I run the above commands as part of the pipeline?
Regarding my deployment yaml files, where should I store these files? (Currently I store them locally on the master node.)
I am missing some best practices for this process.
Your YAML manifests shouldn't be on your master node (never); they should be stored in a version control system (such as GitHub/GitLab/Bitbucket etc.).
To automate the deployment of your Docker image based on a new artifact version in ECR, you can use a great tool named FluxCD. It is actually very simple to install (https://fluxcd.io/docs/get-started/) and you can easily configure it to automatically deploy your images into your cluster each time there is a new image in your ECR registry.
This way your CodePipeline will build the code, run the tests, build the image, tag it and push it to ECR, and FluxCD will deploy it to Kubernetes. It can also natively be configured to reconcile your cluster every X minutes (based on your configuration), so even if you make a small change to your manifests it will be deployed automatically!
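As a rough sketch of the image-scanning side of that setup (a minimal example assuming Flux's image automation controllers are installed; the names, ECR URL and semver range are placeholders, and the apiVersion may differ between Flux releases):

apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: my-service                 # hypothetical name
  namespace: flux-system
spec:
  image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-service   # your ECR repository (placeholder)
  interval: 1m                                                     # how often Flux scans for new tags
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: my-service
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-service
  policy:
    semver:
      range: '>=1.0.0'             # pick the newest semver tag pushed by CodePipeline

An ImageUpdateAutomation object (not shown) can then commit the new tag back to the Git repository that holds your manifests, which is what Flux applies to the cluster.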
bguess
You can also make use of Argo CD; it is very easy to install and use compared to AWS CodePipeline.
Argo CD was specifically designed for Kubernetes and thus offers a much better way to deploy to K8s.

Azure DevOps OpenShift parallel release

We have containerized our .NET Core Web API using Docker and are deploying it to OpenShift.
Below is the process we are following:
The Docker image is created.
The image is pushed to JFrog Artifactory from an Azure DevOps build pipeline.
We have around 40 OpenShift clusters where we have to deploy our app.
We created a generic YAML template (it references the Docker image with TAG_NAME as a variable).
We created the template using the oc command line on all sites.
Now, using an Azure DevOps release pipeline, we have created around 40 stages.
Each stage contains the tasks below to trigger the templates:
oc project $(project_name)
oc process template_name TAG_NAME="new_tag" | oc apply -f -
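For context, such a generic template might look roughly like this (a sketch; apart from the TAG_NAME parameter and the oc process flow above, all names and object details are assumptions):

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: template_name
parameters:
- name: TAG_NAME
  description: Image tag to deploy
  required: true
objects:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: webapi                                  # hypothetical app name
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: webapi
    template:
      metadata:
        labels:
          app: webapi
      spec:
        containers:
        - name: webapi
          image: myrepo.jfrog.io/webapi:${TAG_NAME}   # image pushed to Artifactory (placeholder registry)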
Problem:
If I need to deploy a new version of the Docker image (tag), running all the stages takes a lot of time. Is there a better way to run these stages in parallel?
With IIS deployments we can make use of deployment groups, but in this case is there a better way of doing it?
I know that there is a way that, if an image is pushed to Artifactory, all the clusters can automatically pull that image, but I do not want it that way; I need more control over which set of clusters I am deploying to.

Application deployment over EKS using Jenkins

Can anyone tell me the deployment flow for deploying an application to a Kubernetes or EKS cluster using Jenkins? How are the deployment files updated based on a change of the Docker image? If we have multiple deployment files and we change the image for one of them, are all of them redeployed?
Can anyone tell me the deployment flow for deploying an application to a Kubernetes or EKS cluster using Jenkins?
Make sure that your Jenkins instance has an IAM role and an updated kubeconfig so that it can access the Kubernetes cluster. If you consider running the pipeline on the Kubernetes cluster, Jenkins X or Tekton Pipelines may be good alternatives that are better designed for Kubernetes.
How are the deployment files updated based on a change of the Docker image?
It is a good practice to also keep the deployment manifest in version control, e.g. Git. This can be in the same repository or in a separate repository. For updating the image after a new image is built, consider using yq. An example yq command to update the image in a deployment manifest (one line):
yq write --inplace deployment.yaml 'spec.template.spec.containers(name==<myapp>).image' \
  <my-registry-host>/<my-image-repository>/<my-image-name>:<my-tag-name>
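For reference, the yq path above (yq v3 write syntax) targets this part of a deployment manifest; the container name, registry and tag here are hypothetical:

# deployment.yaml (fragment) - the command rewrites the image field of the container named myapp
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/myapp:1.2.3   # value written by yq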
If we have multiple deployment files and we change the image for one of them, are all of them redeployed?
Nope. Kubernetes YAML is declarative, so it "understands" what has changed and only "drives" the necessary deployments to their "desired state", since the other deployments are already in their desired state.

How can I restart only one service by using skaffold?

I use Skaffold for a k8s-based microservices app. I use skaffold dev and skaffold run to run them, and skaffold delete to restart all microservices.
If I need to restart only one service, what must I do?
According to the docs:
Use skaffold dev to build and deploy your app every time your code changes,
Use skaffold run to build and deploy your app once, similar to a CI/CD pipeline
1. Deploy your services:
skaffold run --filename=skaffold_test_1.yaml
(In addition, you can have multiple workflow configurations.)
2. Change your Skaffold workflow configuration and run:
skaffold delete --filename=skaffold_test2.yaml
Using this approach your deployments will not be removed after stopping Skaffold, unlike with the skaffold dev command.
Basically, managing the content of the Skaffold workflow configuration (by adding or removing entries) allows you to deploy or remove a particular service.
apiVersion: skaffold/v1
kind: Config
.
.
.
deploy:
  kubectl:
    manifests:
    - k8s-service1.yaml
    # - k8s-service2.yaml
You can use skaffold dev's --watch-image flag to restrict the artifacts to monitor. This takes a comma-separated list of images, specified by the artifact.image.
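For example (a sketch; the image name is a hypothetical artifact.image value from your skaffold.yaml):
skaffold dev --watch-image=exampleacr.io/service1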

How to connect an on-premise kubernetes cluster using a Jenkinsfile

I am trying to deploy an application to a Kubernetes cluster using a Jenkins multibranch pipeline and a Jenkinsfile, but I am unable to make the connection between Jenkins and Kubernetes. From the code side I can't share more details here.
I just want to know if there is any way to make this connection (Jenkins and Kubernetes) using the Jenkinsfile, so that I can use it to deploy the application to Kubernetes.
The following is the technology stack, which might clarify my issue:
The Jenkinsfile is kept at the root location of the project in GitHub.
A separate Jenkins server where the pipeline is created to deploy the application to Kubernetes.
An on-premise Kubernetes cluster.
You need credentials to talk to Kubernetes. When you have automation like Jenkins running jobs, it's best to create a service account for Jenkins; look here for some documentation. Once you create the Jenkins service account, you can extract an authentication token for that account, which you put into Jenkins. Since your Jenkins is not a pod inside your Kubernetes cluster, what I would recommend is uploading a working kubectl config as a secret file in the Jenkins credential manager.
Then, in your Jenkins job configuration, you can use that secret. Jenkins can put the file somewhere for your job to access; then, in your Jenkinsfile, you can run commands with "kubectl --kubeconfig= ...".
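A minimal sketch of the service-account side of that on the cluster (the namespace, account name and role choice are assumptions; scope the permissions to what the pipeline actually deploys):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-deployer          # hypothetical name
  namespace: my-app               # hypothetical namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-deployer-edit
  namespace: my-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                      # built-in ClusterRole; tighten if needed
subjects:
- kind: ServiceAccount
  name: jenkins-deployer
  namespace: my-app

The token for that service account (or a kubeconfig built around it) is then what gets stored in the Jenkins credential manager, as described above.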