Application deployment over EKS using Jenkins

Can anyone tell me the deployment flow for deploying an application to a Kubernetes or EKS cluster using Jenkins? How are the deployment files updated when the Docker image changes? If we have multiple deployment files and we change the image in one of them, are all of them redeployed?

Can anyone tell me the deployment flow for deploying an application to a Kubernetes or EKS cluster using Jenkins?
Make sure that your Jenkins instance has an IAM role and an up-to-date kubeconfig so that it can access the Kubernetes cluster. If you are considering running the pipeline itself on the Kubernetes cluster, Jenkins X or Tekton Pipelines may be good alternatives that are designed for Kubernetes.
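For example, on the Jenkins agent you can refresh the kubeconfig with the AWS CLI (cluster name and region are placeholders):
aws eks update-kubeconfig --name <cluster-name> --region <region>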
How are the deployment files updated when the Docker image changes?
It is good practice to keep the deployment manifests in version control as well, e.g. in Git. They can live in the same repository as the application or in a separate one. To update the image reference after a new image is built, consider using yq. An example yq command to update the image in a deployment manifest (one line):
yq write --inplace deployment.yaml 'spec.template.spec.containers.(name==<myapp>).image' \
  <my-registry-host>/<my-image-repository>/<my-image-name>:<my-tag-name>
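Note that this is yq v3 syntax; if you are on yq v4, a rough equivalent (same placeholders) would be:
yq -i '(.spec.template.spec.containers[] | select(.name == "<myapp>")).image = "<my-registry-host>/<my-image-repository>/<my-image-name>:<my-tag-name>"' deployment.yaml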
If we have multiple deployment files and we change the image in one of them, are all of them redeployed?
No. Kubernetes YAML is declarative, so the cluster understands what has changed and only drives the affected Deployment toward its new desired state; the other Deployments are already in their desired state and stay untouched.
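As a quick sanity check (directory and Deployment names here are made up), apply the whole set and watch only the changed one roll:
kubectl apply -f k8s/                      # prints "unchanged" for untouched resources
kubectl rollout status deployment/my-app   # only the Deployment whose spec changed rolls out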

Related

Container deployment with self-managed kubernetes in AWS

I am relatively new to AWS and Kubernetes. I have created a self-managed Kubernetes cluster running in AWS (not using EKS). I have successfully created a pipeline in AWS CodePipeline that builds my container and uploads it to ECR. Currently I am manually deploying the created image in the cluster by running the following commands:
kubectl delete deployment my-service
kubectl apply -f my-service_deployment.yaml
How can I automate this manual step in AWS CodePipeline? How can I run the above commands as part of the pipeline?
Regarding my deployment YAML files, where should I store them? (Currently I store them locally on the master node.)
I am missing some best practices for this process.
Your YAML manifests shouldn't live on your master node (ever); they should be stored in a version control system (GitHub/GitLab/Bitbucket, etc.).
To automate the deployment of your Docker image based on a new artifact version in ECR, you can use a great tool named FluxCD. It is actually very simple to install (https://fluxcd.io/docs/get-started/) and you can easily configure it to automatically deploy your images to your cluster each time there is a new image in your ECR registry.
This way your CodePipeline will build the code, run the tests, build the image, tag it and push it to ECR, and FluxCD will deploy it to Kubernetes. (It can also natively be configured to sync your cluster every X minutes, based on your configuration, so even if you make a small change to your manifests it will be deployed automatically!)
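As a rough sketch of the Flux side (names, namespace and the ECR URL are placeholders; a full setup also needs an ImageUpdateAutomation object and a bootstrapped Git repository):
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: my-service
  namespace: flux-system
spec:
  image: <account-id>.dkr.ecr.<region>.amazonaws.com/my-service
  interval: 1m0s       # scan ECR every minute for new tags
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: my-service
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-service
  policy:
    semver:
      range: '>=1.0.0'   # pick the newest semver tag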
You can also make use of Argo CD; it is very easy to install and use compared to AWS CodePipeline.
Argo CD was specifically designed for Kubernetes and thus offers a much better way to deploy to K8s.
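As an illustration (repo URL, path and names are placeholders), a minimal Argo CD Application that keeps a cluster in sync with a Git directory might look like:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<org>/<repo>.git
    targetRevision: main
    path: k8s                 # directory containing the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}             # auto-sync on every Git change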

Kubernetes single deployment yaml file for spinning up the application

I am setting up Kubernetes for an application with 8 microservices, ActiveMQ, Postgres, Redis and MongoDB.
After the entire configuration of pods and deployments, is there any way to create a single master deployment YAML file which will create the entire set of services, replicas etc. for the entire application?
Note: I will be using multiple deployment YAML files, StatefulSets etc. for all the above-mentioned services.
You can use this script to export the existing resources of a namespace to YAML files:
#!/usr/bin/env bash
NAMESPACE="your_namespace"
RESOURCES="configmap secret daemonset deployment service hpa"

for resource in ${RESOURCES}; do
  # List the names of all objects of this resource type in the namespace
  rsrcs=$(kubectl -n "${NAMESPACE}" get -o json "${resource}" | jq -r '.items[].metadata.name')
  for r in ${rsrcs}; do
    dir="${NAMESPACE}/${resource}"
    mkdir -p "${dir}"
    # Dump each object's manifest to <namespace>/<resource>/<name>.yaml
    kubectl -n "${NAMESPACE}" get -o yaml "${resource}" "${r}" > "${dir}/${r}.yaml"
  done
done
Remember to specify what resources you want exported in the script.
Is there any way to create a single master deployment YAML file which will create the entire set of services, replicas etc. for the entire application?
Since you already mentioned kubernetes-helm, why don't you actually use it for that exact purpose? In short, Helm is a sort of package manager for Kubernetes; some say it is similar to yum or apt. It deploys charts, which you can think of as packaged applications: a pack of all your pre-configured applications which can be deployed as one unit. It's not entirely one file but rather a collection of files that build a so-called Helm chart.
What are Helm charts?
Well, they are basically K8s YAML manifests combined into a single package that can be installed to your cluster. And installing the package is just as simple as running a single command such as helm install. Once done, the charts are highly reusable, which reduces the time for creating dev, test and prod environments.
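For reference, this is roughly the layout that helm create scaffolds:
mychart/
  Chart.yaml        # chart metadata (name, version, description)
  values.yaml       # default configuration values
  charts/           # chart dependencies
  templates/        # templated Kubernetes manifests
    deployment.yaml
    service.yaml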
As an example of a complex Helm chart deploying multiple resources, you may want to check StackStorm.
Basically, once deployed without any custom config, this chart will deploy 2 replicas for each component of StackStorm as well as backends like RabbitMQ, MongoDB and Redis.

No YAML Files in K8s Deployment

TLDR: My understanding from learning all about K8s is that you need lots and lots of YAML files; however, I just deployed an app to a K8s cluster with 0 YAML files and it succeeded. Why is that? Does Google Cloud or K8s have defaults it uses when the app does not have any YAML file settings?
Longer:
I have a dockerized Spring app that I deployed to a Google Cloud cluster I created via the UI.
It had 0 YAML files in there, so my expectation was that the deployment would fail; however, it succeeded and my stateless app is up there chugging away.
How does that work?
Well, GCP created it for you in the background. I assume you pushed your Docker image (or had CI push it) to the cluster and from there you just did a few clicks, right? You can do the same on an OpenShift environment. In the background a YAML file gets generated; if you edit the pod in your UI you will see that YAML file.
As @Volodymyr Bilyachat said above, you can create a deployment in an imperative way or in a declarative way (YAML). I would suggest always using the declarative way.
You can see the deployment YAML file which you created from the UI by running:
kubectl get deployment <deployment_name> -o yaml
kubectl get deployment <deployment_name> -o yaml > name.yaml #This will output your yaml file into name.yaml file
You can run your containers/pods using plain imperative commands:
kubectl run podname --image=name
As you said, 0 YAML files. But the main idea of those files is that you push them to source control and test them across different environments using CI/CD.
Another benefit of YAML files is that you can share configuration, and someone else will be able to create the infrastructure without having to write anything. Here is an example of how you can run Elasticsearch with one command:
kubectl apply -f https://download.elastic.co/downloads/eck/1.2.0/all-in-one.yaml

Deleting kubernetes yaml: how to prevent old objects from floating around?

I'm working on a continuous deployment routine for a Kubernetes application: every time I push a git tag, a GitHub Action is activated which calls kubectl apply -f kubernetes to apply a bunch of YAML Kubernetes definitions.
Let's say I add YAML for a new service, and deploy it -- kubectl will add it.
But then later on, I simply delete the YAML for that service, and redeploy -- kubectl will NOT delete it.
Is there any way that kubectl can recognize that the service YAML is missing, and respond by deleting the service automatically during continuous deployment? In my local test, the service remains floating around.
Does the developer have to know to connect kubectl to the production cluster and delete the service manually, in addition to deleting the YAML definition?
Is there a mechanism for Kubernetes to "know what's missing"?
You need to use a CI/CD tool for Kubernetes to achieve what you need. As mentioned by Sithroo, Helm is a very good option.
Helm lets you fetch, deploy and manage the lifecycle of applications, both 3rd party products and your own.
No more maintaining random groups of YAML files (or very long ones) describing pods, replica sets, services, RBAC settings, etc. With helm, there is a structure and a convention for a software package that defines a layer of YAML templates and another layer that changes the templates called values. Values are injected into templates, thus allowing a separation of configuration, and defines where changes are allowed. This whole package is called a Helm Chart.
Essentially you create structured application packages that contain everything they need to run on a Kubernetes cluster; including dependencies the application requires. Source
Before you start, I recommend these articles explaining its quirks and features.
The missing CI/CD Kubernetes component: Helm package manager
Continuous Integration & Delivery (CI/CD) for Kubernetes Using CircleCI & Helm
There's no such way. You can deploy resources from a YAML file from anywhere, as long as you can reach the cluster and have a configured kubeconfig, so Kubernetes has no way to react to a file deletion. If you still want this, you can write a program (e.g. in Go) which watches the files in one place and deletes the corresponding resource whenever a file is removed.
One Kubernetes-native approach is to use an operator: whenever there is a change in your files, you update the CRD that the operator uses to deploy the resources.
Before deleting the YAML file, you can run kubectl delete -f file.yaml; this way all the resources created by this file will be deleted.
However, what you are looking for is achieving a desired state in K8s. You can do this by using tools like Helmfile.
Helmfile allows you to specify the resources you want to have all in one file, and it will converge to the desired state every time you run helmfile apply.
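A minimal helmfile.yaml sketch (release and chart names here are made up):
releases:
  - name: my-service
    namespace: default
    chart: ./charts/my-service   # local chart; could also be a repo chart
    values:
      - values.yaml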

Dockerhub registry Image accessing from Helm Chart using deployment YAML file

I am trying to implement a CI/CD pipeline for my microservice by using Jenkins, Kubernetes and Kubernetes Helm. Here I am using a Helm chart for packaging the YAML files and deployment into the Kubernetes cluster. I am now learning the implementation of Helm charts and deployment, and I noticed the image name definition in the deployment YAML file.
I have two questions:
If we only define the image name, will it automatically be pulled from Docker Hub? Or do we need to define anything additional in the deployment chart YAML file for pulling?
How does the Helm Tiller communicate with the Docker Hub registry?
Docker image names in Kubernetes manifests follow the same rules as everywhere else. If you have an image name like postgres:9.6 or myname/myimage:foo, those will be looked up on Docker Hub like normal. If you're using a third-party repository (Google GCR, Amazon ECR, quay.io, ...) you need to include the repository name in the image name. It's the exact same string you'd give to docker run or docker build -t.
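To illustrate with a (made-up) container spec, both forms work the same way as in plain Docker:
containers:
  - name: myapp
    image: myname/myimage:foo           # looked up on Docker Hub
  - name: sidecar
    image: quay.io/myname/myimage:foo   # looked up on quay.io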
Helm doesn't directly talk to the Docker registry. The Helm flow here is:
1. The local Helm client sends the chart to the Helm Tiller.
2. Tiller applies any templating in the chart, and sends it to the Kubernetes API.
3. This creates a Deployment object with an embedded Pod spec.
4. Kubernetes creates Pods from the Deployment, which have image name references.
So if your Helm chart names an image that doesn't exist, all of this flow will run normally, until it creates Pods that wind up in ImagePullBackOff state.
P.S.: if you're not already doing this, you should make the image tag (the part after the colon) configurable in your Helm chart, and declare your image name as something like myregistry.io/myname/myimage:{{ .Values.tag }}. Your CD system can then give each build a distinct tag and pass it into helm install. This makes it possible to roll back fairly seamlessly.
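A sketch of that, assuming a chart value named tag: in templates/deployment.yaml you would write
image: myregistry.io/myname/myimage:{{ .Values.tag }}
and the CD system would pass a distinct per-build tag at deploy time:
helm install ./mychart --set tag=build-1234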
Run the command below. It will generate a blank chart with a values.yaml; add key-value pairs inside values.yaml and use them in your deployment.yaml file as variables.
helm create mychart
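For example (hypothetical values), after helm create mychart you could put this in values.yaml:
image:
  repository: myname/myimage
  tag: "1.0.0"
and reference it in templates/deployment.yaml:
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"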