I'm new to Kubernetes, and after I've seen how huge it is I thought I'd ask for a bit of help.
The purpose of my company is to deploy a set of apps independently for each of our clients. Say we have an app A: we want to deploy a first version for client 1, another version for client 2, etc. We will have a lot of clients in the future (maybe around 50). Of course we want to be able to manage them easily.
Which part of Kubernetes should I explore to achieve this, or if Kubernetes is not fit for this, what else should I consider?
Thanks!
Kubernetes has a concept called namespaces, which are isolated from each other and provide isolation between the deployments inside them. You can use and explore namespaces to keep each client's versions and deployments separate from one another.
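For example, a minimal sketch of a per-client namespace with its own deployment of app A could look like this (the names, the namespace, and the image tag are placeholders made up for illustration):

apiVersion: v1
kind: Namespace
metadata:
  name: client-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a
  namespace: client-1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app-a
  template:
    metadata:
      labels:
        app: app-a
    spec:
      containers:
        - name: app-a
          image: registry.example.com/app-a:client1-v1  # per-client version tag (placeholder)

With one namespace per client, kubectl commands, resource quotas, and RBAC can all be scoped per client, which keeps around 50 clients manageable.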
if Kubernetes is not fit for this, what else should I consider?
I don't think that will be an issue for your requirement. Kubernetes has lots of options for zero-downtime deployments, and you can implement CI/CD on top of it, so I think Kubernetes will be easy to set up and will let you manage any application.
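For what it's worth, zero-downtime rollouts mostly come down to the Deployment's rolling-update strategy plus a readiness probe. A rough sketch (the image name and the /healthz endpoint are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count during a rollout
      maxSurge: 1         # bring up one extra pod at a time
  selector:
    matchLabels:
      app: app-a
  template:
    metadata:
      labels:
        app: app-a
    spec:
      containers:
        - name: app-a
          image: registry.example.com/app-a:v2   # placeholder image
          readinessProbe:
            httpGet:
              path: /healthz                     # assumed health endpoint
              port: 8080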
I would like to migrate an application from one GKE cluster to another, and I'm wondering how to accomplish this while avoiding any downtime for this process.
The application is an HTTP web backend.
How I'd usually handle this in a non-GCP/K8s context is to have a load balancer in front of the application, set up a new web backend, and then just update the appropriate IP address in the load balancer to point from the old IP to the new IP. This would give essentially zero downtime while also allowing for a seamless rollback if anything goes wrong.
I do not see why this should not work in this context as well; however, I'm not 100% sure. And if there is a more robust or alternative (GCP/GKE-friendly) way to do this, I'd like to investigate that.
So to summarize my question: does GCP/GKE support this type of migration functionality? If not, are there any implications I need to be aware of with my usual load balancer approach mentioned above?
The reason for migrating is that the current k8s cluster is running quite an old version (1.18), and if I do a GKE version upgrade to something more recent like 1.22, I suspect a lot of incompatibilities as well as risk.
I see 2 approaches:
In the new cluster, get a new IP address and update the DNS record to point to the new load balancer (see the sketch after this list)
See if you can switch to multi-cluster gateways; however, that would probably require you to use approach 1 for the switch as well: https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-multi-cluster-gateways
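For approach 1, it helps to reserve a static IP in GCP and reference it from the new cluster's Ingress, so the address you point DNS at does not change if the Ingress is recreated. A rough sketch, assuming a global static IP was already reserved under the name web-backend-ip and the backend Service is called web-backend (both placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-backend
  annotations:
    kubernetes.io/ingress.global-static-ip-name: web-backend-ip  # reserved ahead of time in GCP
spec:
  defaultBackend:
    service:
      name: web-backend
      port:
        number: 80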
A few pain points you'll run into:
As someone used to DIY Kubernetes, I hate GKE's managed Ingress certs, because they make it very hard to pre-provision HTTPS certs on the new cluster. (GKE's de facto method of provisioning HTTPS certs is to update DNS to point to the LB and then wait 10-60 minutes. That means if you cut over to a new cluster, the new cluster's HTTPS cert, supplied by a managedcertificate Custom Resource, won't be ready in advance; see the sketch after this list.)
It is possible to pre-provision HTTPS certs using an ACME DNS challenge on GCP, but it's poorly documented and a god-awful UX (user experience): there's no GUI, and the CLI API is terrible.
You can do it using gcloud services enable certificatemanager.googleapis.com, but I'd highly recommend against that Certificate Manager service, which went GA in June 2022. The UX is painful.
GKE's official docs are pretty bad when it comes to this scenario
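For reference, the managedcertificate Custom Resource mentioned above is roughly this small (the domain is a placeholder); the catch is that GKE only starts provisioning it once DNS for that domain already points at the new load balancer's IP:

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: web-backend-cert
spec:
  domains:
    - app.example.com   # placeholder domain; must already resolve to the LB before the cert turns Active

It gets attached to the Ingress via the networking.gke.io/managed-certificates annotation.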
You basically want to do 2 things:
Follow this how-to guide for a zero-downtime HTTPS cutover from cluster1 to cluster2 by leveraging a free Let's Encrypt cert:
https://gist.github.com/neoakris/4aafeac7628995da8dd423f1702c975b
(I know link-only answers are bad, but it's GitHub (great uptime) and it's way too long and nuanced to post here.)
Use Velero to migrate workloads from cluster1 to cluster2. (It can migrate CRDs, CRs, generic YAML objects, and PVs/PVCs. One thing to note is that Velero works best when you're migrating to and from clusters of the same version; if you go from a really old version to a really new version, you could encounter issues where Kubernetes YAML APIs were removed in the new version. Going from an old version to a new version can be done, but it's best left to an experienced hand. For happy-path results, migrating to and from clusters of the same version is best.)
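For what it's worth, the Velero side boils down to running a Backup on cluster1 and a Restore on cluster2 against the same object storage location. A minimal Backup sketch, assuming Velero is installed in the velero namespace and the workloads live in a namespace called my-app (both placeholders):

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: cluster1-migration
  namespace: velero
spec:
  includedNamespaces:
    - my-app                      # placeholder: the namespace(s) to migrate
  includeClusterResources: true   # also capture cluster-scoped objects such as CRDs
  storageLocation: default        # assumes a default BackupStorageLocation is configured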
I am wondering how to deploy multiple applications, such as a Spring Boot app, a Node.js app, etc., on a single Kubernetes cluster that has a single Istio load balancer.
Is it possible?
I am a beginner in DevOps, so I need some guidance on this.
Thank you for your suggestions.
Yes, it's possible. Moreover, this is the exact purpose of the load balancer: to be a single point of entrance for multiple applications.
If you deploy the example application, you will create three versions of the reviews application (reviews-v1, reviews-v2, reviews-v3; as far as K8s and Istio are concerned, those are three different apps). With the use of Virtual Services and Destination Rules, Istio manages traffic between those three applications.
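As a rough sketch (loosely following the Bookinfo sample; the subset names and weights are illustrative), the DestinationRule defines the versions and the VirtualService splits traffic between them:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
    - name: v3
      labels:
        version: v3
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 80   # illustrative split
        - destination:
            host: reviews
            subset: v2
          weight: 10
        - destination:
            host: reviews
            subset: v3
          weight: 10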
Since you are a beginner, I would strongly recommend a thorough read of the Istio documentation, especially the Tasks and Examples sections.
Since I am unable to find anything on Google or in the official docs, I have a question.
I have a local minikube cluster with deployment, service and ingress, which is working fine. Now when the load on my local cluster becomes too high I want to automatically switch to a remote cluster.
Is this possible?
How would I achieve this?
Thank you in advance
EDIT:
A remote cluster in my case would be a Rancher Kubernetes cluster, but as long as the resources on my local one are sufficient I want to stay there.
So let's say my local cluster has enough resources to run two replicas of my application, but when a third one is needed to distribute the load, it should be deployed to the remote Rancher cluster. (I hope that is clearer now.)
I imagine it would be doable with kubefed (https://github.com/kubernetes-sigs/kubefed) when using the ReplicaSchedulingPreferences (https://github.com/kubernetes-sigs/kubefed/blob/master/docs/userguide.md#replicaschedulingpreference) and just weighting the local cluster very high and the remote one very low and then setting spec.rebalance to true to distribute it in case of high loads, but that approach seems a bit like a workaround.
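For illustration, the ReplicaSchedulingPreference I have in mind would look roughly like this (the cluster names, namespace, and weights are placeholders based on my reading of the kubefed user guide):

apiVersion: scheduling.kubefed.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: my-app
  namespace: my-namespace
spec:
  targetKind: FederatedDeployment
  totalReplicas: 3
  rebalance: true        # redistribute replicas when cluster conditions change
  clusters:
    local-cluster:
      weight: 100        # strongly prefer the local cluster
    rancher-cluster:
      weight: 1          # spill over to the remote cluster only under load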
Your idea of using Kubefed sounds good, but there is another option: Multicluster-Scheduler.
Multicluster-scheduler is a system of Kubernetes controllers that intelligently schedules workloads across clusters. It is simple to use and simple to integrate with other tools.
To be able to make a better choice for your use case you can read through the Comparison with Kubefed (Federation v2).
All the necessary info can be found in the provided GitHub thread.
Please let me know if that helped.
I'm migrating a number of applications from AWS ECS to Azure AKS and being the first production deployment for me in Kubernetes I'd like to ensure that it's set up correctly from the off.
The applications being moved all use resources at varying degrees with some being more memory intensive and others being more CPU intensive, and all running at different scales.
After some research, I'm not sure which would be the best approach out of running a single large cluster and running them all in their own Namespace, or running a single cluster per application with Federation.
I should note that I'll need to monitor resource usage per application for cost management (amongst other things), and communication is needed between most of the applications.
I'm able to set up both layouts and I'm sure both would work, but I'm not sure of the pros and cons of each approach, whether I should be avoiding one altogether, or whether I should be considering other options?
Because you are at the beginning of your Kubernetes journey, I would go with separate clusters for each stage you have (or at least separate dev and prod). You can very easily take a cluster down (I did it several times through resource starvation). Also, if you don't set your network policies correctly, you might find that services from different stages/namespaces (like test and sandbox) communicate with each other, or that a pipeline that should deploy to dev changes something in another namespace.
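If you do end up sharing a cluster, a default deny-other-namespaces policy is one way to get some of that isolation back. A sketch, assuming your CNI actually enforces NetworkPolicy (the namespace name is a placeholder; you'd apply one per namespace):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: test
spec:
  podSelector: {}            # selects every pod in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # only pods from this same namespace may connect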
Why risk production being affected by dev work?
Even if you don't have to upgrade the control plane yourself, AKS still has its versions and flags, and it is better to test them on a separate cluster before moving to production.
So my initial decision would be to set some hard boundaries: different clusters. Later, once you gain more knowledge of AKS and Kubernetes, you can revisit that decision.
As you said that communication is needed among the applications, I suggest you go with one cluster. Application isolation can be achieved by deploying each application in a separate namespace. You can collect metrics at the namespace level and set resource quotas at the namespace level. That way you can take action at the application level.
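As a sketch, such a per-application quota could look like this (the namespace and the limits are placeholders):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-quota
  namespace: app-one        # one quota per application namespace
spec:
  hard:
    requests.cpu: "4"       # total CPU the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "50"              # cap on pod count for this application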
A single cluster (with namespaces and RBAC) is easier to set up and manage. A single k8s cluster does support high load.
If you really want multiple clusters, you could try Istio multi-cluster (the Istio service mesh spanning multiple clusters) too.
It depends... Be aware that AKS still doesn't support multiple node pools (it's on the short-term roadmap), so you'll need to run those workloads in a single pool with one VM type. Also, when thinking about multiple clusters, think about multi-tenancy requirements and the blast radius of a single cluster. I typically see users deploying multiple clusters even though there is some management overhead, but good SCM and configuration management practices can help with this overhead.
I am trying to deploy my Docker images using Kubernetes orchestration tools. When reading about Kubernetes, I see documentation and many YouTube video tutorials on working with Kubernetes. In them I only found the creation of pods and services and the creation of the corresponding .yml files. I have some doubts, which I've added below:
When I am using Kubernetes, how can I create clusters and nodes?
Can I deploy the image from my current docker-compose build directly using pods only? Why do I need to create a Service .yml file?
I am new to containerization and the Docker and Kubernetes world.
My favorite way to create clusters is kubespray, because I find Ansible very easy to read and troubleshoot, unlike more monolithic "run this binary" mechanisms for creating clusters. The kubespray repo has a Vagrant configuration file, so you can even try out a full cluster on your local machine to see what it will do "for real".
But with the popularity of Kubernetes, I'd bet if you ask 5 people you'll get 10 answers to that question, so ultimately pick the one you find easiest to reason about, because almost without fail you will need to debug those mechanisms when something inevitably goes wrong.
The short version, as Hitesh said, is "yes", but the long version is that one will need to be careful, because local Docker containers and Kubernetes clusters are trying to solve different problems, and (as a general rule) one cannot easily swap one in place of the other.
As for the second part of your question, a Service in kubernetes is designed to decouple the current provider of some networked functionality from the long-lived "promise" that such functionality will exist and work. That's because in kubernetes, the Pods (and Nodes, for that matter) are disposable and subject to termination at almost any time. It would be severely problematic if the consumer of a networked service needed to constantly update its IP address/ports/etc to account for the coming-and-going of Pods. This is actually the exact same problem that AWS's Elastic Load Balancers are trying to solve, and kubernetes will cheerfully provision an ELB to represent a Service if you indicate that is what you would like (and similar behavior for other cloud providers)
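To make that concrete, here is a minimal sketch of a Service for a hypothetical web app (the name, labels, and ports are placeholders); with type LoadBalancer, the cloud provider provisions the external load balancer for you, as described above:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # ask the cloud provider (e.g. AWS) for an external LB
  selector:
    app: web           # matches the labels on your pods, whatever they come and go as
  ports:
    - port: 80         # port exposed by the Service / load balancer
      targetPort: 8080 # port your container actually listens on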
If you are not yet comfortable with containers and docker as concepts, then I would strongly recommend starting with those topics, and moving on to understanding how kubernetes interacts with those two things after you have a solid foundation. Else, a lot of the terminology -- and even the problems kubernetes is trying to solve -- may continue to seem opaque