We need to achieve time synchronization across all nodes and applications in the cluster. Would k8tz be the best option?
If not, how can we achieve time sync in the cluster with the least effort?
We would like to pack as many pods as possible onto each node in our cluster, to reduce the number of nodes we have in some of our environments. I saw the https://github.com/kubernetes-sigs/descheduler HighNodeUtilization strategy, which seems to fit the bill for what we need. However, it seems the cluster's scheduler needs the MostAllocated scoring strategy for this to work.
I believe the kube-scheduler in EKS is not accessible for configuration. How do I then configure the MostAllocated scoring strategy?
Better yet, how do I configure this automated packing of pods onto as few nodes as possible without using the Descheduler?
I tried deploying the Descheduler as is, without the MostAllocated scoring strategy configured. Unsurprisingly, it did not produce the expected results.
Most of my digging online pointed to creating a custom scheduler, but I have found few and unclear resources on how to do so.
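For reference, the approach usually cited is to run a second, self-managed scheduler alongside the EKS-managed one and point the pods you want bin-packed at it. A minimal sketch of such a configuration (the profile name is illustrative, and deploying the scheduler itself is left out):

```yaml
# KubeSchedulerConfiguration for a self-hosted second scheduler.
# Scores nodes by how full they already are, so pods get packed
# onto the fewest nodes instead of being spread out.
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: bin-packing          # illustrative name
    pluginConfig:
      - name: NodeResourcesFit
        args:
          scoringStrategy:
            type: MostAllocated         # what HighNodeUtilization expects
            resources:
              - name: cpu
                weight: 1
              - name: memory
                weight: 1
```

Pods that should be packed would then set spec.schedulerName: bin-packing in their pod spec.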
Since I am unable to find anything on Google or in the official docs, I have a question.
I have a local minikube cluster with a Deployment, Service and Ingress, which is working fine. Now, when the load on my local cluster becomes too high, I want to automatically switch over to a remote cluster.
Is this possible?
How would I achieve this?
Thank you in advance
EDIT:
A remote cluster in my case would be a Rancher Kubernetes cluster, but as long as the resources on my local one are sufficient, I want to stay there.
So let's say my local cluster has enough resources to run two replicas of my application; when a third one is needed to distribute the load, it should be deployed to the remote Rancher cluster. (I hope that is clearer now.)
I imagine it would be doable with kubefed (https://github.com/kubernetes-sigs/kubefed) using ReplicaSchedulingPreferences (https://github.com/kubernetes-sigs/kubefed/blob/master/docs/userguide.md#replicaschedulingpreference): weight the local cluster very high and the remote one very low, and set spec.rebalance to true to redistribute replicas under high load. That approach seems a bit like a workaround, though.
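A hedged sketch of that kubefed idea (the cluster names local-cluster and remote-cluster are illustrative, and it assumes the workload is already federated as a FederatedDeployment):

```yaml
apiVersion: scheduling.kubefed.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: my-app              # must match the FederatedDeployment name
  namespace: my-namespace
spec:
  targetKind: FederatedDeployment
  totalReplicas: 3
  rebalance: true           # allow replicas to be moved under load
  clusters:
    local-cluster:
      weight: 100           # strongly prefer the local cluster...
      maxReplicas: 2        # ...but cap it at what it can actually run
    remote-cluster:
      weight: 1             # overflow lands on the remote Rancher cluster
```

The maxReplicas cap expresses the "local fits two replicas" constraint more directly than weights alone, since weights only control the proportional split.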
Your idea of using Kubefed sounds good, but there is another option: Multicluster-Scheduler.
Multicluster-scheduler is a system of Kubernetes controllers that intelligently schedules workloads across clusters. It is simple to use and simple to integrate with other tools.
To make a better choice for your use case, you can read through its Comparison with Kubefed (Federation v2).
All the necessary info can be found in the linked GitHub repo.
Please let me know if that helped.
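As an illustration that is not from the original answer: with Multicluster-Scheduler installed, a pod typically opts in to cross-cluster scheduling through an annotation, roughly like this (names and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    # Marks the pod for multicluster scheduling: the controller replaces
    # it with a proxy pod and runs a delegate pod in the best-fit cluster.
    multicluster.admiralty.io/elect: ""
spec:
  containers:
    - name: my-app
      image: nginx   # placeholder image
```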
We have a requirement to set up a geo-redundant cluster. I am looking at sharing an external etcd cluster between two Kubernetes clusters. It may sound absurd at first, but the requirements have come down to it. I am seeking some direction on whether it is possible, and if not, what the challenges are.
Yes, it is possible: you can have a single etcd cluster with multiple k8s clusters attached to it. The key to achieving it is the --etcd-prefix flag of the Kubernetes API server. This way each cluster uses a different root path for storing its resources and avoids conflicts with the other cluster in etcd. In addition, you should also set up appropriate RBAC rules and certificates for each k8s cluster. You can find more detailed information in the following article: Multi-tenant external etcd for Kubernetes clusters.
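As a hedged illustration (endpoint and prefix values are mine, not from the article): each cluster's API server points at the same etcd endpoints but under its own prefix:

```yaml
# Excerpt from cluster A's kube-apiserver static-pod manifest.
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --etcd-servers=https://etcd.example.com:2379
        - --etcd-prefix=/registry-cluster-a   # cluster B would use e.g. /registry-cluster-b
        # ... the usual remaining apiserver flags ...
```

The default prefix is /registry, so giving each cluster its own value keeps their resource trees fully separated inside the shared etcd.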
EDIT: Oh wait, I just noticed that you want those two clusters to behave as master and slave. In that case you could assign the slave cluster a read-only role in etcd and change it to read-write when it has to become the master. Theoretically it should work, but I have never tried it, and I think the better option is to use built-in k8s mechanisms for high availability, such as leader election.
I have three nodes: a master that is geographically located elsewhere, and two other nodes that are close to each other but not on the same network. I've created a cluster with those three, and now I want to set up a tunnel between the two (close) nodes to compare the benefit of communicating directly, without going to the master and back.
I've searched a little and found this chart:
https://github.com/helm/charts/tree/master/stable/openvpn
Can I use it to create a VPN between the two worker nodes?
Thanks for the help
It is not a good idea to use a Helm chart for a VPN if you are trying to use it for Kubernetes-internal communication.
My advice is to configure the VPN on the nodes themselves, but that comes with its own automation and availability problems.
What is the main idea behind that setup? Can you use an external VPN service instead of installing one inside the cluster? Have you tried peering instead of a VPN?
Some cloud providers offer easy turnkey clusters; have you tried those?
UPDATE
As per the comments, two more solutions may be good on their own or in combination (a sketch of the Istio mTLS route follows below):
Istio https://istio.io/
gRPC https://grpc.io/ in conjunction with mTLS
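To illustrate the Istio option (my sketch, not part of the original answer): once Istio is installed, mesh-wide mutual TLS can be enforced with a single resource, which encrypts pod-to-pod traffic between the worker nodes without any VPN:

```yaml
# Require mutual TLS for all workloads in the mesh.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the root namespace makes this mesh-wide
spec:
  mtls:
    mode: STRICT
```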
I'm new to Kubernetes, and after seeing how huge it is, I thought I'd ask for a bit of help.
My company's goal is to deploy a set of apps independently for each of our clients. Say we have an app A: we want to deploy one version of it for client 1, another version for client 2, and so on. We will have a lot of clients in the future (maybe around 50). Of course, we want to be able to manage them easily.
Which part of Kubernetes should I explore to achieve this, or, if Kubernetes is not fit for this, what else should I consider?
Thanks!
Kubernetes has a concept called namespaces, which are isolated from one another and provide isolation between the deployments inside them.
So you can use namespaces to keep each client's versions and deployments separate; a sketch of that layout follows.
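As a hedged sketch (names and image are illustrative): each client gets its own namespace, and the same app is deployed into each one at whatever version that client is on:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: client-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a
  namespace: client-1      # isolates client 1's copy of the app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-a
  template:
    metadata:
      labels:
        app: app-a
    spec:
      containers:
        - name: app-a
          image: registry.example.com/app-a:1.2.0   # per-client version pin
```

A templating tool such as Helm or Kustomize can then stamp this out once per client, so 50 clients stay manageable.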
if Kubernetes is not fit for this, what else should I consider?
I do not think that will be necessary for your requirement: Kubernetes has lots of options for zero-downtime deployments, and you can implement CI/CD with it, so I think Kubernetes will be easy to set up and will manage any application well.