How to connect an on-premises Kubernetes cluster using a Jenkinsfile - kubernetes

I am trying to deploy an application to a Kubernetes cluster using a Jenkins multibranch pipeline and a Jenkinsfile, but I am unable to make a connection between Jenkins and Kubernetes. I can't share more details from the code side here.
I just want to know whether there is any way to make this connection (Jenkins to Kubernetes) using a Jenkinsfile, so that I can use it to deploy the application to Kubernetes.
The following technology stack might clarify my setup:
The Jenkinsfile is kept at the root of the project in GitHub.
A separate Jenkins server where the pipeline is created to deploy the application to Kubernetes.
An on-premises Kubernetes cluster.

You need credentials to talk to Kubernetes. When you have automation like Jenkins running jobs, it's best to create a service account for Jenkins; look here for some documentation. Once you create the Jenkins service account, you can extract an authentication token for that account, which you put into Jenkins. Since your Jenkins is not a pod inside your Kubernetes cluster, what I would recommend is uploading a working kubectl config as a secret file in the Jenkins credentials manager.
Then, in your Jenkins job configuration, you can use that secret: Jenkins puts the file somewhere for your job to access, and in your Jenkinsfile you can run commands with "kubectl --kubeconfig= ...".
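For example, here is a minimal declarative Jenkinsfile sketch, assuming the kubeconfig has been uploaded as a "Secret file" credential; the credential ID (kubeconfig), manifest path, and deployment name below are placeholders, not values from the question:

pipeline {
    agent any
    stages {
        stage('Deploy to Kubernetes') {
            steps {
                // Bind the uploaded kubeconfig secret file to a temporary path for this block.
                withCredentials([file(credentialsId: 'kubeconfig', variable: 'KUBECONFIG_FILE')]) {
                    sh 'kubectl --kubeconfig="$KUBECONFIG_FILE" apply -f k8s/deployment.yaml'
                    sh 'kubectl --kubeconfig="$KUBECONFIG_FILE" rollout status deployment/my-app'
                }
            }
        }
    }
}

The withCredentials file binding comes from the Credentials Binding plugin, which is installed by default on most Jenkins setups.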

Related

Jenkins cron job to run selenium & k8s

I am working on a project in which I have created a k8s cluster to run a Selenium grid locally. I want to schedule the tests to run, and so far I have tried to create a Jenkins cron job to do so. For that I am using the Kubernetes plugin in Jenkins.
However, I am not sure about the steps to follow. Where should I upload the kubeconfig file? There are a few options here:
Build Environment in Jenkins
Any ideas or suggestions?
Thanks
Typically, I believe you can choose either option, depending on how you want to manage the system:
The secret text or file option lets you copy/paste a secret (with a token) into Jenkins, which will be used to access the k8s cluster. Token-based access works by adding an HTTP header to your requests to the k8s API server, as follows: Authorization: Bearer $YOUR_TOKEN. This authenticates you to the server and is the programmatic way to access the k8s API (a minimal sketch follows below).
The configure kubectl option lets you specify the config file within the Jenkins UI, where you can set the kubeconfig. This is the imperative/scripted way of configuring access to the k8s API. The kubeconfig itself contains a set of key-pair-based credentials that are issued to a username and signed by the API server's CA.
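As a minimal sketch of the token-based approach (the API server address and namespace are placeholders):

# Query the Kubernetes API directly, authenticating with the service account token.
TOKEN="<your-service-account-token>"
curl -sk -H "Authorization: Bearer $TOKEN" \
  https://<k8s-api-server>:6443/api/v1/namespaces/default/pods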
Any way would work fine! Hope this helps!
If Jenkins is running in Kubernetes as well, I'd create a service account, create the necessary Role and RoleBinding that only allow creating CronJobs, and attach that service account to your Jenkins Deployment or StatefulSet. You can then use the service account's token (by default mounted under /var/run/secrets/kubernetes.io/serviceaccount/token) and query your API endpoint to create your CronJobs.
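A rough sketch of that RBAC setup with kubectl (the namespace and names are placeholders):

# Service account for Jenkins, plus a Role/RoleBinding limited to CronJobs in one namespace.
kubectl create serviceaccount jenkins -n ci
kubectl create role cronjob-manager -n ci \
  --verb=get,list,watch,create,update,delete --resource=cronjobs.batch
kubectl create rolebinding jenkins-cronjob-manager -n ci \
  --role=cronjob-manager --serviceaccount=ci:jenkins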
However, if Jenkins is running outside of your Kubernetes cluster, I'd authenticate against your cloud provider in Jenkins using one of the available plugins, for example with:
Service account (GCP)
Service principal (Azure)
AWS access key and secret key, or an instance profile (AWS)
and then run any of these CLI commands to generate a kubeconfig file (sketches with the typical flags are shown below):
gcloud container clusters get-credentials
az aks get-credentials
aws eks update-kubeconfig
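For example (cluster, project, resource group, zone, and region names are placeholders):

gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>
az aks get-credentials --resource-group <resource-group> --name <cluster-name>
aws eks update-kubeconfig --name <cluster-name> --region <region>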

Container deployment with self-managed kubernetes in AWS

I am relatively new to AWS and Kubernetes. I have created a self-managed Kubernetes cluster running in AWS (not using EKS). I have successfully created a pipeline in AWS CodePipeline that builds my container and uploads it to ECR. Currently I am manually deploying the created image in the cluster by running the following commands:
kubectl delete deployment my-service
kubectl apply -f my-service_deployment.yaml
How can I automate this manual step in AWS CodePipeline? How can I run the above commands as part of the pipeline?
Regarding my deployment YAML files, where should I store them? (Currently I store them locally on the master node.)
I am missing some best practices for this process.
Your YAML manifests shouldn't be on your master node (ever); they should be stored in a version control system (GitHub/GitLab/Bitbucket, etc.).
To automate the deployment of your Docker image based on a new artifact version in ECR, you can use a great tool named FluxCD. It is actually very simple to install (https://fluxcd.io/docs/get-started/) and you can easily configure it to automatically deploy your images into your cluster each time there is a new image in your ECR registry.
This way your CodePipeline will build the code, run the tests, build the image, tag it and push it to ECR, and FluxCD will deploy it to Kubernetes. (Flux can also natively be configured to reconcile your cluster every X minutes, based on your configuration, so even if you make a small change to your manifests, it will be deployed automatically!)
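As a rough idea of the setup (the repository owner, name, and path are placeholders; see the linked get-started guide for the full walkthrough):

# Bootstrap Flux into the cluster, pointing it at the Git repository that holds your manifests.
flux bootstrap github \
  --owner=<your-github-user-or-org> \
  --repository=<your-fleet-repo> \
  --branch=main \
  --path=./clusters/my-cluster \
  --personal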
You can also make use of Argo CD; it is very easy to install and use compared to AWS CodePipeline.
Argo CD was designed specifically for Kubernetes and thus offers a much better way to deploy to K8s.
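For reference, the standard install from the Argo CD getting-started guide is roughly:

# Install Argo CD into its own namespace from the upstream stable manifest.
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml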

Best practice for sanity testing a K8s cluster? (ideally all from the command line)

I am new here; I tried to search for the topic before posting. This may have been discussed before, so please let me know before being too harsh on me :)
In my project, after performing changes to either the DevOps tool set or the infrastructure, we always do some manual sanity testing, which normally includes:
Building a new image and updating the Helm chart
Pushing the image to Artifactory, performing a "helm upgrade", and seeing if it runs.
I want to automate the whole thing and would like advice from the community. Here are the requirements:
Validate that the Jenkins agent is able to talk to the cluster (I can do this with kubectl get all -n <some_namespace_jenkins_user_has_access_to>)
Validate that the cluster has access to GitHub (let's say I am using Argo CD to sync the YAMLs)
Validate that the cluster has access to Artifactory and is able to pull an image (I don't want to build a new image with a new tag and update the Helm chart just to force the cluster to pull a new image)
All of the above should be doable from the command line (so that I can implement it in Jenkins Groovy)
Any suggestion is welcome.
Thanks guys
Your best bet is probably a combination of custom Jenkins scripts (i.e. running kubectl in Jenkins) and some in-cluster checks (e.g. using Kuberhealthy).
So, when your Jenkins pipeline is triggered, it could do the following:
Check connectivity to the cluster
Build and push an image, etc.
Trigger in-cluster checks to test whether the cluster has access to GitHub and Artifactory, e.g. by launching a custom Job in the cluster, or by creating a KuberhealthyCheck custom resource if you use Kuberhealthy
During all this, the Jenkins pipeline writes the results of its tests as metrics to a Pushgateway, which is scraped by your Prometheus. The in-cluster checks also push their results as metrics to the Pushgateway, or expose them via Kuberhealthy if you decide to use it. In the end, you have the results of all checks in the same Prometheus instance, where you can react to them, e.g. with Prometheus alerts or Grafana dashboards.
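As a minimal sketch of that last part (the Pushgateway address, job name, and metric name are placeholders):

# Push the result of one sanity check as a gauge to the Pushgateway, from where Prometheus scrapes it.
echo "sanity_check_cluster_reachable 1" | \
  curl --data-binary @- http://<pushgateway-host>:9091/metrics/job/jenkins_sanity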

How to run a script which starts a Kubernetes cluster on Azure DevOps

I am trying to start a Kubernetes cluster, then run tests and publish the results. Do you have any idea how this can be done?
I created a pipeline, but I do not know which YAML to use.
Which task should I add first: Kubernetes deploy, or something else?
We have a Kubernetes deployment.yml file; it takes the container image (exampleacr.io/sampleapp) that we are going to publish to AKS. App version: app/v1.
The service.yml just exposes the application. App version: v1.
Both YAML files need to be added. Please refer to WAY 2 for modifying them manually.
WAY 1:
Quick way: the Deploy to Azure Kubernetes Service template will do everything that's needed, because if you use that template, the required variables get defined for you.
Steps:
Create an AKS cluster and an ACR (container registry) in Azure.
In Azure DevOps:
Create a pipeline > choose any source, e.g. select an application hosted in GitHub.
Then select Deploy to Azure Kubernetes Service > select your AKS subscription > select the existing cluster > then select the container registry that you want to put the Docker image into. Keep the rest as default.
Click on Validate and configure; Azure Pipelines will generate a YAML file.
In the review pipeline YAML step of azure-pipelines.yml you have two stages: Build and Deploy.
Click Save and run: this saves the YAML file in the master branch, creates the manifest files (deployment.yml and service.yml) for the Kubernetes deployment, and also triggers the build. A rough sketch of the generated two-stage pipeline is shown below.
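Roughly, the generated azure-pipelines.yml has this shape (the registry, image, and environment names here are illustrative placeholders, not the exact file the template produces):

trigger:
- master

stages:
- stage: Build
  jobs:
  - job: Build
    pool:
      vmImage: ubuntu-latest
    steps:
    - task: Docker@2
      inputs:
        command: buildAndPush
        repository: sampleapp
        dockerfile: '$(Build.SourcesDirectory)/app/Dockerfile'
        containerRegistry: 'exampleacr-connection'   # service connection name (placeholder)
        tags: '$(Build.BuildId)'
- stage: Deploy
  dependsOn: Build
  jobs:
  - deployment: Deploy
    environment: 'sampleapp-aks'
    pool:
      vmImage: ubuntu-latest
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            inputs:
              action: deploy
              manifests: |
                manifests/deployment.yml
                manifests/service.yml
              containers: 'exampleacr.io/sampleapp:$(Build.BuildId)'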
WAY 2: Using a Docker image
To modify the azure-pipelines.yml file yourself, in the third step above select Docker image instead of Deploy to Azure Kubernetes Service.
Under Configure pipeline, if the Dockerfile is under Build.SourcesDirectory in your application, it will appear as, say, $(Build.SourcesDirectory)/app/Dockerfile. That is the Dockerfile the pipeline builds.
In the review pipeline YAML step of azure-pipelines.yml a few things can be modified:
You can change the tag variable to the repository name, and the deployment.yml and service.yml files can be added to the pipeline YAML with a few modifications.
The build stage is generated automatically and does not need modification.
You have to add the push and deploy stages to the YAML file as shown in the article, where you can also get the source code.

Service Fabric doesn't run a docker pull on deployment

I've set up VSTS to deploy a Service Fabric app with a Docker guest container. All goes well, but Service Fabric doesn't download the latest version of my image; a docker pull doesn't seem to be performed.
I've added the 'Service Fabric PowerShell script' task with a 'docker pull' command, but this is then only run on one of the nodes.
Is there a way to run a PowerShell script/command during deployment, either in VSTS or Service Fabric, to run a command across all the nodes to do a docker pull?
Please use an explicit version tag; don't rely on 'latest'. An easy way to do this in VSTS: in the 'Push Services' task, add $(Build.BuildId) in the 'Additional Image Tags' field to tag your image.
Next, you can use a tokenizer to replace the ServiceManifest.xml image tag value in your release pipeline. One of my favorites is this one.
To deploy Docker containers to Service Fabric, you have to provide either a Docker Compose file or a Service Fabric application package with manifests.
For containers, the Service Fabric hosting system controls the Docker host on the nodes to run the containers.
For VSTS deployments, there is a 'Service Fabric Deploy' task and a 'Service Fabric Compose Deploy' task covering both paths.
Container quick starts for Service Fabric:
See here for Windows: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-quickstart-containers
Here for Linux: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-quickstart-containers-linux