Application Load Balancers in an EKS cluster - kubernetes

I'm trying to figure out ways to automate k8s deployments in an EKS cluster. I'm setting up a namespace for each environment: one for dev, one for staging, and one for production. My production namespace is in a separate region and a separate cluster (dev & staging share one cluster). I'm a little new to this concept, but does it make sense to have each Application Load Balancer in its respective namespace? Is that common or best practice? Any ideas on automating deployments would be appreciated.

Hi Dave Michaels,
I assume there are two questions in your post above:
If we use a dedicated namespace per environment in the same cluster (the dev & staging setup), can we use a dedicated load balancer for each of these namespaces? Is this good practice?
Answer: Yes. Since you are using a namespace per environment in the same cluster, it is fine to create a dedicated load balancer (promise me you will use an Ingress :)) in each of these namespaces, as you need an easy way to access those environments. To be frank, I am not a fan of using namespaces for environments, because as your cluster grows and lots of microservices get added to it, you might want to use namespaces for other reasons, e.g. a namespace per team or domain to get granular access rights. But I have seen teams use namespaces for environments successfully as well.
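For example, with the AWS Load Balancer Controller installed in the cluster, one Ingress per environment namespace gives each environment its own ALB. This is only a minimal sketch; the names (web-ingress, web, staging) are illustrative.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress              # illustrative name
      namespace: staging             # one Ingress (and hence one ALB) per environment namespace
      annotations:
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-type: ip
    spec:
      ingressClassName: alb          # handled by the AWS Load Balancer Controller
      rules:
        - http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web        # illustrative Service in the same namespace
                    port:
                      number: 80

Repeat the same manifest (with different hostnames or schemes if needed) in the dev namespace and, in the production cluster, in the production namespace.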
Can you suggest options for automating Kubernetes deployments?
This is a large topic by itself.
As your microservices grow, you will have many Kubernetes manifests to handle. The first thing I would suggest is to use either a configuration manager like Kustomize or a package manager like Helm to separate variables from the actual manifests; this makes it much easier to automate deployments across environments (same cluster or different clusters). As for the actual deployment automation, if there is no existing CD in place, I would suggest exploring tools that natively support Kubernetes and GitOps, such as Flux CD or Argo CD.
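As a rough illustration of the Kustomize approach, a common layout is a shared base plus one overlay per environment (the directory and file names here are just an example):

    base/
      deployment.yaml
      service.yaml
      kustomization.yaml
    overlays/
      dev/kustomization.yaml
      staging/kustomization.yaml
      production/kustomization.yaml

    # overlays/staging/kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    namespace: staging            # everything from base lands in the staging namespace
    resources:
      - ../../base
    patches:
      - path: replica-count.yaml  # illustrative per-environment patch

Deploying an environment is then a single command, e.g. kubectl apply -k overlays/staging, which a CD tool like Argo CD or Flux CD can run (or reconcile) from Git.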

Related

Kubernetes: How to manage multiple separate deployments of the same app

We're migrating our app's deployments from VMs to Kubernetes, and as my knowledge of Kubernetes is very limited, I'm not sure how to set up deployments for multiple clients.
Right now we have a separate VM for each client, but how do we separate the clients in Kubernetes in a way that is cost- and resource-efficient and easy to manage?
I managed to create dev and staging environments using namespaces and this is working great.
To update dev and staging deployment I just use kubectl apply -f <file> --namespace staging.
Now I need to deploy the app to production for several clients (50+). They should be completely separate from each other (using separate environment variables and secrets) while the code stays the same, and I don't know the best way to achieve that.
Could you point me to the right way to do this in Kubernetes?
You can use Kustomize. It provides a purely declarative approach to configuration customization and lets you manage an arbitrary number of distinctly customized Kubernetes configurations.
https://github.com/kubernetes-sigs/kustomize/tree/master/examples
one namespace (or a set of namespaces) per customer
Kustomize has a very good patch/overlay system for keeping a generic base configuration with several per-client adaptations
use NetworkPolicy to isolate the network between clients (see the sketch below)
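For the NetworkPolicy point, a minimal sketch (assuming your CNI plugin enforces NetworkPolicy) is a per-client policy that only allows traffic from within the same namespace; the names here are illustrative:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-same-namespace-only    # illustrative name
      namespace: client-a                # repeated in every client namespace
    spec:
      podSelector: {}                    # applies to all pods in the namespace
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector: {}            # only pods from this same namespace may connect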

Deploying multiple versions of the same software in Kubernetes

I'm planning to migrate the deployment process from a traditional deployment tool (Octopus) to Kubernetes, and as my knowledge of Kubernetes is very limited, I'm not sure how to set up deployments for multiple clients. I have a CMS-like web site and I need to deploy it to dev/stage/production for several clients (different servers). Could you point me to the right abstraction for that in Kubernetes?
Option 1 (the easiest): Kubernetes namespaces.
Create different namespaces for dev/stage/production. Install resources with the same names/labels in each of them, and they will not overlap.
Option 2: a Helm chart with the release name tied to every resource. Example chart: https://github.com/helm/charts/tree/master/stable/wordpress. When the release name is part of every resource name, as in https://github.com/helm/charts/blob/master/stable/wordpress/templates/deployment.yaml#L19, resources do not overlap even in the same namespace (see the sketch after this list).
Option 3: do both at the same time :)
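To illustrate option 2, here is a simplified template (not the WordPress chart's actual one) showing the release name baked into the resource name; myapp and the image are placeholders:

    # templates/deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ .Release.Name }}-myapp              # release name tied to the resource name
      labels:
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      selector:
        matchLabels:
          app.kubernetes.io/instance: {{ .Release.Name }}
      template:
        metadata:
          labels:
            app.kubernetes.io/instance: {{ .Release.Name }}
        spec:
          containers:
            - name: myapp
              image: myorg/myapp:1.0               # illustrative image

With that in place, helm install client-a ./mychart and helm install client-b ./mychart create client-a-myapp and client-b-myapp, which can coexist even in one namespace.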

Kubernetes - Single Cluster or Multiple Clusters

I'm migrating a number of applications from AWS ECS to Azure AKS and being the first production deployment for me in Kubernetes I'd like to ensure that it's set up correctly from the off.
The applications being moved all use resources at varying degrees with some being more memory intensive and others being more CPU intensive, and all running at different scales.
After some research, I'm not sure which would be the best approach out of running a single large cluster and running them all in their own Namespace, or running a single cluster per application with Federation.
I should note that I'll need to monitor resource usage per application for cost management (amongst other things), and communication is needed between most of the applications.
I'm able to set up both layouts and I'm sure both would work, but I'm not sure of the pros and cons of each approach, whether I should avoid one altogether, or whether I should be considering other options.
Because you are at the beginning of your Kubernetes journey, I would go with separate clusters for each stage you have (or at least separate dev and prod). You can very easily take a cluster down (I have done it several times through resource starvation). Also, if network policies are not set up correctly, you might find that services from different stages/namespaces (like test and sandbox) talk to each other, or that a pipeline that should only deploy to dev changes something in another namespace.
Why risk production being affected by dev work?
Even if you don't have to upgrade the control plane yourself, AKS still has its versions and feature flags, and it is better to test them on a separate cluster before moving to production.
So my initial decision would be to set some hard boundaries: different clusters. Later, once you gain more experience with AKS and Kubernetes, you can revisit that decision.
Since you said that communication is needed among the applications, I suggest you go with one cluster. Application isolation can be achieved by deploying each application in a separate namespace. You can collect metrics at the namespace level and set resource quotas at the namespace level, so you can take action at the application level.
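A minimal ResourceQuota sketch for one application namespace (all names and numbers are illustrative and need tuning per application):

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: app-a-quota           # illustrative name
      namespace: app-a            # one quota per application namespace
    spec:
      hard:
        requests.cpu: "4"         # total CPU the namespace may request
        requests.memory: 8Gi
        limits.cpu: "8"           # total CPU limit across all pods in the namespace
        limits.memory: 16Gi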
A single cluster (with namespaces and RBAC) is easier to set up and manage, and a single Kubernetes cluster can handle high load.
If you really want multiple clusters, you could also try Istio multi-cluster (the Istio service mesh spanning multiple clusters).
It depends... Be aware that AKS still doesn't support multiple node pools (it is on the short-term roadmap), so you'll need to run all those workloads on a single node pool with one VM type. Also, when thinking about multiple clusters, think about multi-tenancy requirements and the blast radius of a single cluster. I typically see users deploying multiple clusters even though there is some management overhead, but good SCM and configuration management practices can help with that overhead.

Kubernetes, Automatic Service fallback to another namespace

I have multiple environments, each represented by a namespace in my Kubernetes cluster.
Every application has its Service endpoints defined in each namespace.
We have three environments: dev, alpha, and beta (equivalent to dev, test, and stage). These environments are permanent, which means all the applications run in them.
Now, in my team there are a few parallel development streams happening, for which we plan to create extra environments per release that will contain only the few applications that are part of that release.
Consider this example: I am building feature1, which impacts app1 and app2.
There are 10 other apps which are not impacted.
So for my development and parallel testing, the majority of services should point to the existing alpha or beta environment, and only app1 and app2 should resolve in the new namespace.
I could achieve this by creating an ExternalName mapping for every other service.
But if I have more than 100 services, managing those external endpoints in YAML becomes very difficult.
Is there any way to route all traffic to another namespace if no Service with that name exists in the current one?
Is there something like a global ExternalName for a whole namespace?
As far as I know, it is not possible to redirect traffic to a different namespace based on which pods or Services exist in the current namespace. You can only change a Service's destination by editing its YAML configuration, and a Service's selector can only match pods in the same namespace.
You can simplify the deployment procedure by using Helm charts. Helm allows you to put variables into the YAML configuration of Deployments, Services, etc., and use a separate values file to substitute them during installation to the cluster. See, for example, the blog post "Using Helm to deploy to Kubernetes".
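For reference, the ExternalName mapping described in the question would look roughly like this. It is only a minimal sketch; the namespace and service names (feature1, app3, alpha) are illustrative, and the FQDN must point at the permanent environment's Service.

    apiVersion: v1
    kind: Service
    metadata:
      name: app3                                  # same name the application code already resolves
      namespace: feature1                         # the temporary release namespace
    spec:
      type: ExternalName
      externalName: app3.alpha.svc.cluster.local  # CNAME to the Service in the permanent alpha namespace

With Helm, a template plus a values file listing the unchanged services could generate one such Service per entry instead of maintaining 100+ of them by hand.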

Linking kubernetes namespace to nodes

It seems to be good practice to map environments such as dev, qa and production to Kubernetes namespaces. To achieve "true" separation, it seems to be a good idea to label nodes as exclusively dedicated to one of those namespaces and to ensure resources in those environments get scheduled only on those nodes. That's at least our current thinking. There may be manifests one might want to use in those namespaces that should/must not be tampered with. Kubernetes does not seem to support associating namespaces with nodes out of the box. The PodNodeSelector admission controller seems close but is not quite what we are looking for. The only option to achieve what we want seems to be a custom mutating admission webhook as documented here.
I guess other people have been here before and there is a solution to address our primary concern which is that we don't want load on dev or qa environments impacting production performance.
Is there any off-the-shelf solution for linking namespaces to nodes?
If not, are there any alternative approaches ensuring that environments do not interfere in terms of load/performance?
I guess other people have been here before and there is a solution to address our primary concern which is that we don't want load on dev or qa environments impacting production performance.
Been there, got burned by it.
Multiple environments in one cluster might be a good idea under certain circumstances, but mixing dev/qa/stage with production in a single cluster spells trouble. Load itself might not be the main issue, especially if you mitigate its effects with proper resource allocation, but any tweak, modification or dev-process-induced outage of kube-system pods affects production directly. You can't test updates of Kubernetes system components beforehand, any CNI issue on dev can slow production down or render it inoperable, and so on... We went down that path and don't recommend it.
With that being said, the separation as such is rather easy. On one of our clusters we do keep dev/qa/stage environments for some projects in a single cluster and separate some of the resources with labels. Strictly speaking it is not really env-separated, but we do have dedicated nodes for ELK covering all three environments, separate GitLab runner nodes, database nodes and so on; the principle is the same. We label nodes and use nodeAffinity with nodeSelectorTerms to target the group of nodes carrying the same label for a certain task/service (or environment, in your case). As a side note, nodeSelector is deprecated according to the official documentation.
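A rough sketch of that nodeAffinity approach, assuming nodes have been labelled with something like kubectl label nodes <node> environment=qa (the label key/value, names and image are all illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: api                        # illustrative
      namespace: qa
    spec:
      selector:
        matchLabels:
          app: api
      template:
        metadata:
          labels:
            app: api
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: environment     # illustrative node label
                        operator: In
                        values:
                          - qa
          containers:
            - name: api
              image: myorg/api:1.0           # illustrative image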
In my opinion having multiple environments in one cluster is a bad idea, for many reasons.
If you are sure you want to do it and don't want dev work to kill the performance of production pods, you can easily attach resource requests and limits to Deployments/Pods.
Another approach is to attach labels to nodes and force particular pods onto them using the PodNodeSelector admission controller.
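For the requests/limits suggestion above, a minimal sketch (values are illustrative and should be sized per workload):

    apiVersion: v1
    kind: Pod
    metadata:
      name: web                  # illustrative
      namespace: dev
    spec:
      containers:
        - name: web
          image: nginx:1.25
          resources:
            requests:            # what the scheduler reserves for the pod
              cpu: 250m
              memory: 256Mi
            limits:              # hard cap enforced at runtime
              cpu: 500m
              memory: 512Mi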
In general it is not recommended to use namespaces to separate software environments (dev, test, staging, prod..).
The best practice is to use a dedicated cluster for each environment.
To save on costs, you can accept the compromise of using:
1 cluster: dev, test, staging
1 cluster: prod
With this setup, creating and managing namespaced resources in the cluster shared by dev, test, and staging gets a little more annoying.
I found the motivations for using namespaces from the docs very useful.
Still, if you need to ensure a set of nodes is dedicated only to the resources in a namespace, you should use a combination of:
a nodeSelector (for example injected by the PodNodeSelector admission controller) to force scheduling of the namespace's resources only onto nodes in the set
and a taint to deny scheduling of any resource outside the namespace onto those nodes (a rough sketch follows after the link below)
How to create a namespace dedicated to use only a set of nodes: https://stackoverflow.com/a/74617601/5482942
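A rough sketch of that combination, assuming a set of nodes is reserved for a prod namespace (the label/taint key dedicated=prod and the node name are illustrative; the linked answer covers the full setup):

    # Label and taint the nodes reserved for the namespace:
    kubectl label nodes node-1 dedicated=prod
    kubectl taint nodes node-1 dedicated=prod:NoSchedule

    # Pod spec fragment for workloads in that namespace: they must both
    # select the labelled nodes and tolerate the taint.
    spec:
      nodeSelector:
        dedicated: prod
      tolerations:
        - key: dedicated
          operator: Equal
          value: prod
          effect: NoSchedule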