About CI/CD for Kubernetes

I am using Kubernetes.
I can already build Docker images from GitHub and push them to Docker Hub on my own.
However, I would like to automate the creation and updating of pods.
Would something like CircleCI work for this?
Or is it possible to use a Kubernetes client library to update the pods?

You can use the Argo CD Image Updater.
The Argo CD Image Updater can check for new versions of the container images that are deployed with your Kubernetes workloads and automatically update them to their latest allowed version using Argo CD. It works by setting appropriate application parameters for Argo CD applications, i.e. similar to argocd app set --helm-set image.tag=v1.0.1 - but in a fully automated manner.
Auto-update features
With automatic image updates, you just need to push a new image to the Docker registry and the Image Updater will take care of the rest.
Here are the minimal annotations required for the Image Updater to consider a specific application:
annotations:
  argocd-image-updater.argoproj.io/image-list: image-alias=1234.dkr.ecr.us-east-1.amazonaws.com/staging-app
  argocd-image-updater.argoproj.io/image-alias.update-strategy: latest
  argocd-image-updater.argoproj.io/image-alias.force-update: "true"
  argocd-image-updater.argoproj.io/image-alias.allow-tags: "regexp:^build-version-tag-[0-9]+$"
  argocd-image-updater.argoproj.io/image-alias.pull-secret: pullsecret:argocd/aws-registry-secret
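For reference, a minimal sketch of where those annotations live on an Argo CD Application. The application name, repo URL, path, and target namespace below are hypothetical placeholders:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: staging-app                      # hypothetical application name
  namespace: argocd
  annotations:
    argocd-image-updater.argoproj.io/image-list: image-alias=1234.dkr.ecr.us-east-1.amazonaws.com/staging-app
    argocd-image-updater.argoproj.io/image-alias.update-strategy: latest
spec:
  project: default
  source:
    repoURL: https://github.com/example/staging-app-manifests.git   # hypothetical manifests repo
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: staging
  syncPolicy:
    automated: {}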

Related

Container deployment with self-managed kubernetes in AWS

I am relatively new to AWS and kubernetes. I have created a self-managed kubernetes cluster running in AWS (not using EKS). I have successfully created a pipeline in AWS CodePipeline that builds my container and uploads it to ECR. Currently I am manually deploying the created image in the cluster by running the following commands:
kubectl delete deployment my-service
kubectl apply -f my-service_deployment.yaml
How can I automate this manual step in AWS CodePipeline? How can I run the above commands as part of the pipeline?
Regarding my deployment yaml files, where should I store these files? (currently I store them locally in the master node.)
I am missing some best practices for this process.
Your YAML manifests shouldn't live on your master node (ever); they should be stored in a version control system (GitHub, GitLab, Bitbucket, etc.).
To automate the deployment of your Docker image whenever a new artifact version lands in ECR, you can use a great tool named FluxCD. It is very simple to install (https://fluxcd.io/docs/get-started/) and you can easily configure it to automatically deploy your images to your cluster each time there is a new image in your ECR registry.
This way your CodePipeline builds the code, runs the tests, builds the image, tags it and pushes it to ECR, and FluxCD deploys it to Kubernetes. FluxCD can also be configured natively to reconcile your cluster every X minutes (based on your configuration), so even a small change to your manifests will be deployed automatically.
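As a rough sketch of what the Flux image-automation side can look like (the resource names, ECR URL, and semver range are placeholders, and the API version may differ depending on your Flux release):

apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: my-service
  namespace: flux-system
spec:
  image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service   # placeholder ECR repo
  interval: 1m
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: my-service
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-service
  policy:
    semver:
      range: ">=1.0.0"   # pick the newest tag matching this range

An ImageUpdateAutomation object then commits the new tag back to your Git repository, and Flux applies the change to the cluster.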
You can also make use of Argo CD; it is very easy to install and use compared to AWS CodePipeline.
Argo CD was designed specifically for Kubernetes, so it offers a much better way to deploy to K8s.

Improving Web Performance Deployed on Okteto

I'm new to website deployment. I have an assignment where I have to deploy an existing website using Kubernetes and improve its performance with autoscaling. I chose this app: https://github.com/IBM/MAX-Image-Caption-Generator-Web-App. I deployed the app using Okteto, and here are the steps I've done:
Download the source code into my local computer
okteto up
okteto build
kubectl apply -f https://raw.githubusercontent.com/IBM/MAX-Image-Caption-Generator/master/max-image-caption-generator.yaml
kubectl apply -f https://raw.githubusercontent.com/IBM/MAX-Image-Caption-Generator-Web-App/master/max-image-caption-generator-web-app.yaml
App is successfully deployed
My question is how can I auto scale the app? Should I create my own docker image or I could just change the existing yaml configuration on my local and re-deploy the app?
I'm sorry if my explanation and question are not clear enough.
I checked your YAML files; I think you will need to build your own Docker image so that you can deploy your version of the website.
This is their Docker image: image: quay.io/codait/max-image-caption-generator-web-app:latest
You can use their YAML as a reference and just update the image to deploy your website.
For autoscaling you have to use the HPA (HorizontalPodAutoscaler) object in Kubernetes; you can configure the autoscaling with YAML only and simply apply those changes.
Read more about HPA : https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
A simple walkthrough of scaling a web application with HPA using YAML: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
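For illustration, a minimal HPA manifest for the web app might look roughly like this. The Deployment name and thresholds below are assumptions, and it also requires metrics-server in the cluster and CPU requests set on the pods:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: max-image-caption-generator-web-app   # must match your Deployment's name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: max-image-caption-generator-web-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU usage exceeds 70% of requests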

Application deployment over EKS using Jenkins

Can anyone tell me the deployment flow for deploying an application to a Kubernetes or EKS cluster using Jenkins? How are the deployment files updated when the Docker image changes? If we have multiple deployment files and we change the image for only one of them, are all of them redeployed?
Can anyone tell me the deployment flow for deploying an application to a Kubernetes or EKS cluster using Jenkins?
Make sure that your Jenkins instance has an IAM role and an updated kubeconfig so that it can access the Kubernetes cluster. If you are considering running the pipeline on the Kubernetes cluster itself, Jenkins X or Tekton Pipelines may be good alternatives that are better designed for Kubernetes.
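As a rough sketch, the deploy stage in Jenkins often comes down to shell steps like these (the region, cluster name, manifest path, and Deployment name are placeholders, and the Jenkins agent's IAM role must be mapped in the cluster's aws-auth ConfigMap):

aws eks update-kubeconfig --region us-east-1 --name my-eks-cluster   # placeholder region and cluster
kubectl apply -f k8s/deployment.yaml                                  # placeholder manifest path
kubectl rollout status deployment/my-app                              # wait for the rollout to finish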
How are the deployment files updated when the Docker image changes?
It is good practice to also keep the deployment manifest in version control, e.g. in Git. This can be in the same repository or in a separate one. To update the image after a new image has been built, consider using yq. An example yq command to update the image in a deployment manifest (one line):
yq write --inplace deployment.yaml 'spec.template.spec.containers(name==<myapp>).image' \
<my-registy-host>/<my-image-repository>/<my-image-name>:<my-tag-name>
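If you are on the newer yq v4 syntax, the equivalent in-place update would presumably look something like this (the container name, image, and tag are placeholders):

yq -i '(.spec.template.spec.containers[] | select(.name == "myapp")).image = "my-registry/my-repo/my-image:my-tag"' deployment.yaml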
If we have multiple deployment files and we change the image for only one of them, are all of them redeployed?
No. Kubernetes YAML is declarative, so the control plane "understands" what changed and only drives the affected Deployment to its desired state; the other Deployments are already in their desired state.

Using kubectl rollouts to update my images, but need to also keep my deployment object in version control

In my CI/CD, I am:
generating a new image with a unique tag, e.g. foo:dev-1339, and pushing it to my image repo (ECR).
Then I am using a rolling update to update my deployment.
kubectl rolling-update frontend --image=foo:dev-1339
But I have a conflict here.
What if I also need to update some part of my deployment object as stored in a deployment.yaml file? Let's say harden a health check or add a parameter?
Then when I re-apply my deployment object as a whole, it will not be in sync with the current replica set; the tag will get reverted and I will lose that image update as it exists in the cluster.
How do I avoid this race condition?
A typical solution here is to use a templating layer like Helm or Kustomize.
In Helm, you'd keep your Kubernetes YAML specifications in a directory structure called a chart, but with optional templating. You can specify things like
image: myname/myapp:{{ .Values.tag | default "latest" }}
and then deploy the chart with
helm install myapp --name myapp --set tag=20191211.01
Helm keeps track of these values (in Secret objects in the cluster) so they don't get tracked in source control. You could check in a YAML-format file with settings and use helm install -f to reference that file instead.
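For example, a checked-in values file along those lines might look like this (the file name is illustrative, and the install command follows the same Helm 2-style flags as above):

# image-values.yaml -- checked into source control alongside the chart
tag: "20191211.01"

helm install myapp --name myapp -f image-values.yaml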
In Kustomize, your CI tool would need to create a kustomization.yaml file for per-deployment settings, but then could set
images:
  - name: myname/myapp
    newTag: 20191211.01
If you trust your CI tool to commit to source control then it can check this modified file in as part of its deployment sequence.
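If the CI tool has the kustomize CLI available, it can make that edit without hand-editing the file by running the following from the directory containing kustomization.yaml (the image and tag reuse the illustrative values above):

kustomize edit set image myname/myapp=myname/myapp:20191211.01   # updates newTag in kustomization.yaml
kubectl apply -k .                                               # apply the kustomization to the cluster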
Imperative vs Declarative workflow
There are two fundamental ways of using kubectl to apply changes to your cluster. The imperative way, where you run commands directly, is good for experimentation and development environments. kubectl rolling-update is an example of an imperative command. See Managing Kubernetes using Imperative Commands.
For a production environment, it is recommended to use a declarative workflow: edit manifest files, store them in a Git repository, and automatically start a CI/CD job when you commit or merge. kubectl apply -f <file>, or the more interesting kubectl apply -k <directory>, is an example of this workflow. See Declarative Management using Config Files or, more interesting, Declarative Management using Kustomize.
CI/CD for building the image and deploying it
Building an artifact from source code, including a container image, may be done in a CI/CD pipeline. Managing application config and applying it to the Kubernetes cluster may also be done in a CI/CD pipeline. You may want to automate it all, e.g. for Continuous Deployment, and combine both pipelines into a single longer pipeline. This is a more complicated setup and there is no single answer on how to do it. When the build part is done, it may trigger an update of the image field in the app configuration repository, which in turn triggers the configuration pipeline.
Unfortunately there is no solution for this, either from the command line or through the YAML files.
As per the doc here, "...a Deployment is a higher-level controller that automates rolling updates of applications declaratively, and therefore is recommended" over the use of Replication Controllers and kubectl rolling-update. Updating the image of a Deployment will trigger the Deployment's rollout.
An approach could be to update the Deployment configuration YAML (or JSON) under version control in the source repo and apply the changed Deployment configuration from version control to the cluster.
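In practice that boils down to changing the image tag in the checked-in manifest and then applying it, roughly:

kubectl apply -f deployment.yaml                 # manifest from version control, with the new tag
kubectl rollout status deployment/frontend       # "frontend" is the Deployment name used in the question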

Auto update pod on every image push to GCR

I have a docker image pushed to Container Registry with docker push gcr.io/go-demo/servertime and a pod created with kubectl run servertime --image=gcr.io/go-demo-144214/servertime --port=8080.
How can I enable an automatic update of the pod every time I push a new version of the image?
I would suggest switching to some kind of CI to manage the process and, instead of triggering on docker push, triggering the process when a commit is pushed to the Git repository. Also, if you switch to a higher-level Kubernetes construct such as a Deployment, you will be able to run a rolling update of your pods to the new image version. Our process is roughly as follows:
git commit #triggers CI build
docker build -t yourimage:gitsha1 .
docker push yourimage:gitsha1
sed -i 's/{{TAG}}/gitsha1/g' deployment.yml
kubectl apply -f deployment.yml
Where deployment.yml is a template for our deployment that gets updated to the new tag version.
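The relevant part of such a deployment.yml template might look roughly like this ({{TAG}} is the placeholder the sed step above replaces; the container name is an assumption, and the image and port follow the question):

spec:
  template:
    spec:
      containers:
        - name: servertime                                   # assumed container name
          image: gcr.io/go-demo-144214/servertime:{{TAG}}    # tag filled in by the CI step
          ports:
            - containerPort: 8080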
If you do it manually, it might be easier to simply update the image in an existing deployment by running kubectl set image deployment/yourdeployment <containernameinpod>=yourimage:gitsha1
I'm on the Spinnaker team.
Might be a bit heavy, but without knowing your other areas of consideration, Spinnaker is a CD platform from which you can trigger k8s deployments from registry updates.
Here's a codelab to get you started.
If you'd rather shortcut the setup process, you can get a starter Spinnaker instance with k8s and GCR integration pre-setup via the Cloud Launcher.
You can find further support on our slack channel (I'm #stevenkim).
It would need some glue, but you could use Docker Hub, which lets you define a webhook for each repository that fires when a new image is pushed or a new tag is created.
This would mean you'd have to build your own web API server to handle the incoming notifications and use them to update the pod. And you'd have to use Docker Hub, not Google Container Registry, which doesn't allow webhooks.
So, probably too many changes for the problem you're trying to solve.