How to check if my Kubernetes cluster has enough resources to deploy all my applications

I want to deploy many applications in a Kubernetes cluster. I have the configuration of each application: the number of pods, and the CPU and RAM requests and limits for each one.
My requirement is that either all the applications are provisioned successfully or none of them are; even a single failure (for example, because the cluster does not have enough resources) should mean none are provisioned.
How do I check whether my cluster has sufficient resources to provision all the applications before actually deploying them?

AFAIK Kubernetes does not support an all-or-nothing deployment of a set of applications.
I think you have to do the math yourself.
You said you already have all the information you need (the requirements for all the services).
That should help you plan your cluster's dimensions.
Note that you should calculate this on a per-node basis. Let's say you need 16 GB of memory and each node provides 8 GB. Your cluster should then provide at least 24 GB (3 nodes) of memory for your applications (besides monitoring tools etc.).
Always add some headroom, because the OS and monitoring tools will take a bit of each node's resources.
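To make the math concrete, here is a minimal sketch, assuming the official `kubernetes` Python client and a reachable cluster; the "required" totals are invented example numbers:

```python
# A rough capacity check: sum the requests of everything you plan to deploy
# and compare them with the allocatable CPU/memory reported by the nodes.
# Assumes the official `kubernetes` Python client and a working kubeconfig;
# the required totals below are made-up example numbers.
from decimal import Decimal

from kubernetes import client, config
from kubernetes.utils import parse_quantity

# Totals of all requests you intend to deploy (example values).
required_cpu = parse_quantity("6")        # 6 cores
required_memory = parse_quantity("16Gi")  # 16 GiB

config.load_kube_config()
nodes = client.CoreV1Api().list_node().items

allocatable_cpu = sum(parse_quantity(n.status.allocatable["cpu"]) for n in nodes)
allocatable_mem = sum(parse_quantity(n.status.allocatable["memory"]) for n in nodes)

# Keep ~10% headroom for the OS, kube-system pods and monitoring agents.
headroom = Decimal("0.9")
fits = (required_cpu <= allocatable_cpu * headroom
        and required_memory <= allocatable_mem * headroom)
print("enough aggregate capacity" if fits else "not enough aggregate capacity")
```

Note that this only checks aggregate capacity: each pod's request must still fit on a single node, and requests of workloads already running in the cluster have to be subtracted, so treat it as a pre-check rather than a guarantee that the scheduler will place everything.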

Related

Scaling from local VM minikube/microk8s testing to testing on real cluster on AWS

All -
I am a K8s newbie. I am trying to get a test setup going with the following intention: I want a setup configuration that I can use without much modification regardless of whether I run K8s locally with microk8s/minikube or on EKS on AWS. Is this possible?
Specifically, I want to set up a cluster with N components. For local VM testing each of these components may ask for a small amount of resources (like 2 GB disk, 1 GB RAM, local DNS resolution, etc.), while on EKS the same components will request much larger resources (255 GB disk, 8 GB RAM, a real load balancer instead of just DNS resolution, etc.).
Are there resources to help me accomplish this? Any links to articles or books on this topic would help.
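To make the idea concrete, a hypothetical sketch of keeping one component definition and swapping only the resource numbers per environment, using the official `kubernetes` Python client; the component name, image and sizes are invented, and in practice the same idea is usually expressed with Helm values files or Kustomize overlays:

```python
# Hypothetical sketch: one Deployment definition, with only the resource
# numbers swapped per environment. Names, image and sizes are examples.
from kubernetes import client, config

RESOURCES = {
    "local": client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "1Gi"}),
    "eks": client.V1ResourceRequirements(
        requests={"cpu": "2", "memory": "8Gi"}),
}

def make_deployment(env: str) -> client.V1Deployment:
    container = client.V1Container(
        name="my-component",
        image="example.com/my-component:latest",  # made-up image
        resources=RESOURCES[env],
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="my-component"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "my-component"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "my-component"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

config.load_kube_config()
client.AppsV1Api().create_namespaced_deployment("default", make_deployment("local"))
```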

How to manage the resource-hungry Istio default/SDS installation?

I'm using Istio at the moment, combined with cert-manager. Because I need multiple certificates, I'm using SDS instead of the volume-mount approach.
But the hardware requirements for this are really high. For GKE it is recommended to use a node pool of 4x n1-standard-2 machines, which sums up to about $200 per month just for Istio. The recommendation for EKS is 2x m5.large machines, so it is a little cheaper but still around $150. What confuses me is that Minikube "just" needs 4 vCPUs and 16 GB of memory in total, which is roughly half the requirements of GKE and EKS.
You can see the resource-hungry components by looking at the istio-system namespace, especially at the limits. For me it is:
istio-telemetry > 1100m / 6800m (requested / limits)
istio-policy (I have 5 of them) > 110m / 2000m
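A minimal sketch of one way to pull those numbers, assuming the official `kubernetes` Python client and a kubeconfig pointing at the cluster:

```python
# Dump the per-container requests and limits in the istio-system namespace.
from kubernetes import client, config

config.load_kube_config()
pods = client.CoreV1Api().list_namespaced_pod("istio-system").items
for pod in pods:
    for c in pod.spec.containers:
        print(f"{pod.metadata.name}/{c.name}: "
              f"requests={c.resources.requests} limits={c.resources.limits}")
```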
My questions are:
Did you manage to reduce the limits without facing issues in production?
What node-pool size / machine type are you running your Istio control plane on?
Has anyone tried auto-scaling for this node pool? Did it reduce the costs?
Kind regards from Berlin.
Managed Istio for GKE is offered by Google as a pre-configured bundle. 4x n1-standard-2 is recommended to provide enough resources for all the Istio components being installed.
Downsizing a cluster below the recommended size does not make sense:
1. Installation of managed Istio onto a standard GKE cluster (3x n1-standard-1) will fail due to lack of resources. Besides that, you wouldn't have any free computing capacity for your workloads. The recommended cluster size seems reasonable.
2. Apart from the recommended hardware configuration (4x n1-standard-2), managed Istio can be installed and run on a cluster of 8x n1-standard-1.
3. Taking into account what was mentioned in point 1, autoscaling could be beneficial mostly for volatile workloads, but won't help that much in saving the resources allocated for Istio.
If managed Istio for GKE seems too resource-consuming, you could install the upstream version of Istio and select an installation profile with only the components you actually need, as described here:
Customizable Install with Helm
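A different, heavier-handed option than the customizable install above, sketched here only as an illustration with the official `kubernetes` Python client: patch the limits of a component directly and watch whether it stays healthy. The container name and numbers are made-up examples and have to be verified against your own installation.

```python
# Hypothetical sketch: lower the resource limits of istio-telemetry by
# patching its Deployment with a strategic merge patch. The container name
# and values are examples only; verify them against your installation.
from kubernetes import client, config

config.load_kube_config()

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "mixer",  # assumed container name inside istio-telemetry
                    "resources": {
                        "requests": {"cpu": "300m", "memory": "512Mi"},
                        "limits": {"cpu": "1000m", "memory": "1Gi"},
                    },
                }],
            },
        },
    },
}

client.AppsV1Api().patch_namespaced_deployment(
    name="istio-telemetry", namespace="istio-system", body=patch)
```

Keep in mind that a managed installation may reconcile such changes back, and limits that are too low can lead to throttling or OOM kills under load.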

What's the maximum number of Kubernetes namespaces?

Is there a maximum number of namespaces supported by a Kubernetes cluster? My team is designing a system to run user workloads via K8s and we are considering using one namespace per user to offer logical segmentation in the cluster, but we don't want to hit a ceiling with the number of users who can use our service.
We are using Amazon's EKS managed Kubernetes service and Kubernetes v1.11.
This is quite difficult to answer because it depends on a lot of factors. Here are some figures from the Kubernetes scalability thresholds document (kubernetes-thresholds), based on a k8s 1.7 cluster: the number of namespaces (ns) is 10000, with a few assumptions.
There are no limits from the code point of view, because a namespace is just a Go type that gets instantiated as a variable.
In addition to the link that @SureshVishnoi posted, the limits will depend on your setup, but some of the factors that affect how your namespaces (and resources in the cluster) scale are:
The physical or VM hardware size of the machines your masters are running on (unfortunately, EKS doesn't expose that; it's a managed service, after all)
The number of nodes your cluster is handling.
The number of pods in each namespace
The number of overall K8s resources (deployments, secrets, service accounts, etc)
The hardware size of your etcd database.
Storage: how many resources can you persist.
Raw performance: how much memory and CPU you have.
The network connectivity between your master components and etcd store if they are on different nodes.
If they are on the same nodes then you are bound by the server's memory, CPU and storage.
There is no hard limit on the number of namespaces. You can create as many as you want. A namespace object itself doesn't really consume cluster resources like CPU or memory.
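As a quick sanity check you can watch how the count grows as you add users; a minimal sketch with the official `kubernetes` Python client, assuming a kubeconfig is available:

```python
# Count the namespaces currently in the cluster.
from kubernetes import client, config

config.load_kube_config()
namespaces = client.CoreV1Api().list_namespace().items
print(f"{len(namespaces)} namespaces")
for ns in namespaces:
    print(" -", ns.metadata.name)
```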

Kubernetes - Single Cluster or Multiple Clusters

I'm migrating a number of applications from AWS ECS to Azure AKS, and as this is my first production deployment on Kubernetes, I'd like to ensure that it's set up correctly from the start.
The applications being moved all use resources at varying degrees with some being more memory intensive and others being more CPU intensive, and all running at different scales.
After some research, I'm not sure which would be the better approach: running a single large cluster with each application in its own namespace, or running a separate cluster per application with Federation.
I should note that I'll need to monitor resource usage per application for cost management (amongst other things), and communication is needed between most of the applications.
I'm able to set up both layouts and I'm sure both would work, but I'm not sure of the pros and cons of each approach, whether I should be avoiding one altogether, or whether I should be considering other options?
Because you are at the beginning of your Kubernetes journey, I would go with separate clusters for each stage you have (or at least separate dev and prod). You can very easily take a cluster down (I did it several times through resource starvation). Also, if network policies are not set up correctly, you might find that services from different stages/namespaces (like test and sandbox) communicate with each other, or that a pipeline that should deploy to dev changes something in another namespace.
Why risk production being affected by dev work?
Even though you don't have to upgrade the control plane yourself, AKS still has its own versions and flags, and it is better to test them on a separate cluster before moving to production.
So my initial decision would be to set some hard boundaries: different clusters. Later, once you have more experience with AKS and Kubernetes, you can revisit that decision.
As you said that communication is needed among the applications, I suggest you go with one cluster. Application isolation can be achieved by deploying each application in a separate namespace. You can collect metrics at the namespace level and set resource quotas at the namespace level; that way you can take action at the application level.
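To make the quota point concrete, a minimal sketch of a per-application ResourceQuota with the official `kubernetes` Python client; the namespace name and numbers are invented:

```python
# Hypothetical sketch: give each application's namespace its own ResourceQuota
# so one application cannot starve the others. Names and sizes are examples.
from kubernetes import client, config

config.load_kube_config()

quota = client.V1ResourceQuota(
    api_version="v1",
    kind="ResourceQuota",
    metadata=client.V1ObjectMeta(name="app-a-quota"),
    spec=client.V1ResourceQuotaSpec(hard={
        "requests.cpu": "4",
        "requests.memory": "8Gi",
        "limits.cpu": "8",
        "limits.memory": "16Gi",
    }),
)
client.CoreV1Api().create_namespaced_resource_quota(namespace="app-a", body=quota)
```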
A single cluster (with namespaces and RBAC) is easier to set up and manage. A single k8s cluster can support a high load.
If you really want multiple clusters, you could try Istio multi-cluster (the Istio service mesh across multiple clusters) too.
It depends... Be aware that AKS still doesn't support multiple node pools (it's on the short-term roadmap), so you'll need to run those workloads on a single pool with one VM type. Also, when thinking about multiple clusters, consider multi-tenancy requirements and the blast radius of a single cluster. I typically see users deploying multiple clusters even though there is some management overhead; good SCM and configuration management practices can help with that overhead.

Building low power HA cluster for self hosted services/blog

I would like to set up an HA Swarm / Kubernetes cluster based on a low-power architecture (ARM).
My main objective is to learn how an HA web cluster works, how it reacts to failures and recovers from them, and how easy it is to scale.
I would like to host a blog on it as well as other services once it is working (git / custom services / home automation / CI server / ...).
Here are my first questions:
Regarding the hardware, which is more appropriate: RPi 3, Odroid-C2, or something else? I intend to start with 4-6 nodes. Low power consumption is important to me since it will be running 24/7 at home.
What is the best architecture to follow? I would like to run everything in containers (for scalability and redundancy), and have redundant load balancers, web servers and databases. Something like this: architecture
Would it be possible to have the web servers / databases distributed across the whole cluster, and load balancing on 2-3 nodes? Or is it better to separate them physically?
Which technology is better suited (Swarm / Kubernetes / Ansible to deploy / Flocker for storage)? I have read a lot about this topic lately, but there are a lot of choices.
Thanks for your answers!
EDIT1: infrastructure deployment and management
I have almost all the hardware and I am now looking for a way to easily manage and deploy the 5 (or more) Pis. I want the procedure to be as scalable as possible.
Is there some way to:
retrieve an image from the network the first time (PXE-boot-like)
apply custom settings for each node: network config (IP), SSH access, ...
automatically deploy / update new software on the servers
easily add new nodes to the cluster
I can have a dedicated Pi or my PC act as a deployment server.
Thanks for your input!
Raspberry Pi, ODroid, CHIP, BeagleBoard are all suitable hardware.
Note that flash cards have a limited lifetime if you constantly read/write to them.
Kubernetes is a great option to learn clustering containers.
Docker Swarm is also good.
None of these solutions provide distributed storage, so if you're talking about a PHP type web server and SQL database which are not distributed, then you can't really be redundant even with Kubernetes or Swarm.
To be effectively redundant, you need a master/slave setup for the DB, or better, a clustered database like Elasticsearch or maybe the cluster version of MariaDB for SQL, so that redundancy is provided by the database cluster itself (which is not a replacement for backups, but is better than a single container).
For real distributed storage, you need to look at technologies like Ceph or GlusterFS. These do not work well with Kubernetes or Swarm because they need to be tied to the hardware. There is a docker/kubernetes Ceph project on Github, but I'd say it is still a bit hacky.
Better provision this separately, or directly on the host.
As far as load balancing is concerned, you may want to have a couple of nodes with external load balancers for redundancy. If you build a Kubernetes cluster, you don't really choose what else may run on the same node, except by specifying CPU/RAM quotas and limits, or affinity.
If you want to give Raspberry Pi 3 a try with Kubernetes, here is a step-by-step tutorial to set up your Kubernetes cluster with Raspberry Pi 3:
To prevent the read/write issue, you might consider purchasing an additional NAS device and mounting it as a volume in your pods.
Totally agree with MrE about the distributed storage for PHP-like setups. A volume's lifespan is per pod and is tied to the pod, so you cannot share one volume between pods.
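To illustrate the NAS suggestion above, a hypothetical sketch of a pod mounting an NFS export from such a device, using the official `kubernetes` Python client; the server address, export path and image are invented:

```python
# Hypothetical sketch: mount an NFS export from a NAS device into a pod.
# Server address, export path and image are made-up examples.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="blog"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="blog",
            image="nginx:alpine",
            volume_mounts=[client.V1VolumeMount(
                name="nas-data", mount_path="/usr/share/nginx/html")],
        )],
        volumes=[client.V1Volume(
            name="nas-data",
            nfs=client.V1NFSVolumeSource(server="192.168.1.50", path="/export/blog"),
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```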