Improving Web Performance Deployed on Okteto - kubernetes

I'm new to website deployment. I have an assignment where I have to deploy an existing website using Kubernetes and improve its performance by adding auto scaling. I chose this app: https://github.com/IBM/MAX-Image-Caption-Generator-Web-App. I deployed the app using Okteto, and here are the steps I've done:
Downloaded the source code to my local computer
okteto up
okteto build
kubectl apply -f https://raw.githubusercontent.com/IBM/MAX-Image-Caption-Generator/master/max-image-caption-generator.yaml
kubectl apply -f https://raw.githubusercontent.com/IBM/MAX-Image-Caption-Generator-Web-App/master/max-image-caption-generator-web-app.yaml
The app is successfully deployed.
My question is: how can I auto scale the app? Should I create my own Docker image, or can I just change the existing YAML configuration locally and redeploy the app?
I'm sorry if my explanation and question are not clear enough.

I checked your YAML file; I think you will need to build your own Docker image so that you can deploy your website.
This is the image they use: image: quay.io/codait/max-image-caption-generator-web-app:latest
You can use their YAML as a reference and update the image field to deploy your own website.
For autoscaling, use the Horizontal Pod Autoscaler (HPA) object in Kubernetes. You can configure the autoscaling with YAML alone and simply apply those changes.
Read more about HPA: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
A simple walkthrough of scaling a web application with the HPA: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
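As a rough sketch of what that looks like in practice (the Deployment name, replica bounds, and CPU target below are placeholders, not values taken from the project's YAML), an HPA manifest could be:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: image-caption-web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: image-caption-web-app   # must match the name of the Deployment you applied
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU usage exceeds 70% of requests
Save it as hpa.yaml and apply it with kubectl apply -f hpa.yaml. Note that CPU-based scaling requires the metrics-server add-on in the cluster and CPU requests set on the Deployment's containers.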

Related

About CI/CD for kubernetes

I am using kubernetes.
I can do Docker builds from GitHub and upload them to Docker Hub on our own.
However, I would like to automate the creation and updating of pods.
How about Circle CI for example?
Or is it possible to use the k8s library to update the pods?
You can use the Argo CD Image Updater.
The Argo CD Image Updater can check for new versions of the container images that are deployed with your Kubernetes workloads and automatically update them to their latest allowed version using Argo CD. It works by setting appropriate application parameters for Argo CD applications, i.e. similar to argocd app set --helm-set image.tag=v1.0.1 - but in a fully automated manner.
With auto image update, you just need to push an updated image to the Docker registry and the image updater will take care of the rest.
Here are the minimal annotations required for the image updater to consider a specific application:
annotations:
  argocd-image-updater.argoproj.io/image-list: image-alias=1234.dkr.ecr.us-east-1.amazonaws.com/staging-app
  argocd-image-updater.argoproj.io/image-alias.update-strategy: latest
  argocd-image-updater.argoproj.io/image-alias.force-update: "true"
  argocd-image-updater.argoproj.io/image-alias.allow-tags: "regexp:^build-version-tag-[0-9]+$"
  argocd-image-updater.argoproj.io/image-alias.pull-secret: pullsecret:argocd/aws-registry-secret
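For orientation, these annotations go on the Argo CD Application resource itself; a rough sketch of such an Application (the repo URL, path, and namespaces are placeholders, not from your setup) might look like this:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: staging-app
  namespace: argocd
  annotations:
    argocd-image-updater.argoproj.io/image-list: image-alias=1234.dkr.ecr.us-east-1.amazonaws.com/staging-app
    argocd-image-updater.argoproj.io/image-alias.update-strategy: latest
    # ...plus the force-update, allow-tags, and pull-secret annotations shown above
spec:
  project: default
  source:
    repoURL: https://github.com/example/staging-app-manifests.git   # placeholder Git repo holding your manifests
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: staging
  syncPolicy:
    automated: {}   # let Argo CD sync detected changes automatically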

Container deployment with self-managed kubernetes in AWS

I am relatively new to AWS and kubernetes. I have created a self-managed kubernetes cluster running in AWS (not using EKS). I have successfully created a pipeline in AWS CodePipeline that builds my container and uploads it to ECR. Currently I am manually deploying the created image in the cluster by running the following commands:
kubectl delete deployment my-service
kubectl apply -f my-service_deployment.yaml
How can I automate this manual step in AWS CodePipeline? How can I run the above commands as part of the pipeline?
Regarding my deployment yaml files, where should I store these files? (currently I store them locally in the master node.)
I am missing some best practices for this process.
Your YAML manifests shouldn't (ever) be on your master node; they should be stored in a version control system (GitHub/GitLab/Bitbucket, etc.).
To automate the deployment of your Docker image whenever a new artifact version lands in ECR, you can use a great tool named Flux CD. It is very simple to install (https://fluxcd.io/docs/get-started/) and you can easily configure it to automatically deploy your images to your cluster each time there is a new image in your ECR registry.
This way your CodePipeline builds the code, runs the tests, builds the image, tags it, and pushes it to ECR, and Flux CD deploys it to Kubernetes. (Flux CD also reconciles your cluster every X minutes, based on your configuration, so even a small change to your manifests will be deployed automatically.)
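As a minimal sketch of the image-watching side, assuming Flux v2's image automation controllers are installed (the apiVersion, registry URL, and names below are assumptions and may differ for your Flux version):
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: my-service
  namespace: flux-system
spec:
  image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service   # placeholder ECR repository
  interval: 1m                                                     # how often Flux scans ECR for new tags
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: my-service
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-service
  policy:
    semver:
      range: '>=1.0.0'   # pick the newest tag that satisfies this range
On top of this, an ImageUpdateAutomation resource (plus a small marker comment on the image field in your Deployment manifest) tells Flux to commit the new tag back to your Git repository, from which it is then deployed.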
You can also make use of Argo CD; it is very easy to install and use compared to AWS CodePipeline.
Argo CD was designed specifically for Kubernetes and thus offers a much better way to deploy to K8s.

Application deployment over EKS using Jenkins

Can anyone tell me the deployment flow for deploying an application to a Kubernetes or EKS cluster using Jenkins? How are the deployment files updated when the Docker image changes? If we have multiple deployment files and we change the image for one of them, are all of them redeployed?
Can anyone tell me the deployment flow for deploying an application to a Kubernetes or EKS cluster using Jenkins?
Make sure that your Jenkins instance has an IAM role and an updated kubeconfig so that it can access the Kubernetes cluster. If you are considering running the pipeline on the Kubernetes cluster itself, Jenkins X or Tekton Pipelines may be good alternatives that are better designed for Kubernetes.
How are the deployment files updated when the Docker image changes?
It is a good practice to also keep the deployment manifest in version control, e.g. Git. This can be in the same repository or in a separate repository. For updating the image after a new image is built, consider using yq. An example yq command to update the image in a deployment manifest (one line):
yq write --inplace deployment.yaml 'spec.template.spec.containers(name==<myapp>).image' \
<my-registry-host>/<my-image-repository>/<my-image-name>:<my-tag-name>
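For context, this is the kind of manifest that command rewrites; everything below is a placeholder mirroring the yq path above, not an actual manifest from the question:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp   # matched by the (name==<myapp>) selector in the yq path
          image: <my-registry-host>/<my-image-repository>/<my-image-name>:<my-tag-name>   # the field yq overwrites
After yq rewrites the image tag, the pipeline can run kubectl apply -f deployment.yaml, or commit the change for a GitOps tool to pick up.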
If we have multiple deployment files and we change the image for one of them, are all of them redeployed?
No. Kubernetes YAML is declarative, so it "understands" what has changed and only "drives" the affected Deployments to their "desired state", since the other Deployments are already in their desired state.

No YAML Files in K8s Deployment

TL;DR: My understanding from learning about K8s is that you need lots and lots of YAML files; however, I just deployed an app to a K8s cluster with zero YAML files and it succeeded. Why is that? Does Google Cloud or K8s have defaults it uses when the app does not provide any YAML settings?
Longer:
I have a Dockerized Spring app that I deployed to a Google Cloud cluster I created via the UI.
There were zero YAML files in there, so my expectation was that deploying with kubectl would fail; however, it succeeded and my stateless app is up there chugging away.
How does that work?
Well, GCP created it for you in the background. I assume you pushed your Docker image (or your CI did) to the cluster and from there you just did a few clicks, right? You can do the same on an OpenShift environment, but in the background a YAML file gets generated. If you edit the pod in the UI you will see that YAML file.
As @Volodymyr Bilyachat said above, you can create a deployment the imperative way or the declarative way (YAML). I would suggest always using the declarative way.
You can see the deployment YAML file that was created from the UI by running:
kubectl get deployment <deployment_name> -o yaml
kubectl get deployment <deployment_name> -o yaml > name.yaml   # writes the YAML to the file name.yaml
You can run your containers/pods using plain commands.
kubectl run podname --image=name
As you said, zero YAML files. But the main idea of those files is that you push them to source control and test them across different environments using CI/CD.
Another benefit of YAML files is that you can share configuration, and someone else will be able to create the infrastructure without having to write anything. Here is an example of how you can run Elasticsearch with one command:
kubectl apply -f https://download.elastic.co/downloads/eck/1.2.0/all-in-one.yaml
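To make the imperative-versus-declarative point concrete, a rough declarative equivalent of the kubectl run command above (written as a Deployment, with the same placeholder names) would be:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podname
spec:
  replicas: 1
  selector:
    matchLabels:
      app: podname
  template:
    metadata:
      labels:
        app: podname
    spec:
      containers:
        - name: podname
          image: name   # same placeholder image as in the kubectl run example
You would apply it with kubectl apply -f deployment.yaml, and the file itself can live in source control alongside the rest of your configuration.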

Jenkins plugin to deploy a docker image into a k8s cluster?

I have been looking for a Jenkins plugin to deploy a Docker image to a Kubernetes cluster using the K8s API. It would access the REST API with a YAML file and credentials that are already configured. If there is no such plugin, please let me know about other simple examples. Thanks for reading.
I think you are looking for plugins similar to these:
https://wiki.jenkins-ci.org/display/JENKINS/Kubernetes+CI+Plugin
https://wiki.jenkins.io/display/JENKINS/Kubernetes+Pipeline+Plugin
I'm using a simple Execute Shell step in Jenkins with the following command:
kubectl --server="https://kubeapi.example.com" --token=$ACCESS_TOKEN set image deployment/deployment_name container_name=repo/image:"$BUILD_NUMBER-$SHORT_GIT_COMMIT"
You can save your $ACCESS_TOKEN as Secret text and use it as a variable in Jenkins.
The job builds, tags, and publishes a Docker image to the Docker repo, then sets the image in the Kubernetes cluster.
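If you need to mint that $ACCESS_TOKEN yourself, one common approach (a sketch only; every name here is hypothetical) is a dedicated ServiceAccount with just enough RBAC to update Deployments:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-deployer
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-deployer
  namespace: default
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "patch", "update"]   # enough for kubectl set image
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-deployer
  namespace: default
subjects:
  - kind: ServiceAccount
    name: jenkins-deployer
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-deployer
On recent Kubernetes versions you can then issue a token with kubectl create token jenkins-deployer and store it as the Secret text in Jenkins.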