Let's say I have bare-metal servers in New York, London, Delhi, and Beijing. My requirement is to join all four servers in a distributed environment and run services on top of Kubernetes. How do I achieve this?
The question is too broad, but here's my take, with insights on a global datastore:
Option 1: Set up a K8s cluster in each region and have them talk to each other through services. Have each cluster talk to its own datastore, keep data separate per region, and use something like GSLB to route your traffic (see the sketch after this list).
Option 2: Set up a K8s cluster in each region and use a global database such as DynamoDB Global Tables, Cloud Spanner, or Cosmos DB.
Option 3: Use Kubernetes Federation. Federation doesn't necessarily solve the multi-region datastore challenge, and it's also in beta as of this writing. However, it will help you manage your Kubernetes services across multiple regions.
Option 4: Something else that you create on your own. Fun, fun!
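As a rough illustration of Option 1's cross-region service communication, an ExternalName Service lets pods in one cluster address a remote region by a local name. A minimal sketch, assuming a GSLB/DNS name already exists per region; the names and hostname here are hypothetical:

```yaml
# Hypothetical sketch: expose the London region's API to the New York
# cluster as a local Service name. Cluster DNS resolves "london-api"
# to a CNAME pointing at the remote region's endpoint.
apiVersion: v1
kind: Service
metadata:
  name: london-api
  namespace: default
spec:
  type: ExternalName
  externalName: api.london.example.com   # assumed GSLB/DNS name
```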
I am currently running two Kubernetes clusters.
The first cluster runs on bare metal, and the second runs on EKS.
Since maintaining EKS costs a lot, I am looking for a way to turn this into a single cluster that autoscales on AWS.
I have considered several solutions such as RHACM, Rancher, and Anthos, but those solutions are for controlling multiple clusters.
I just want to turn this into an on-premises-based cluster that autoscales on AWS when it runs out of resources.
I found the EKS Anywhere solution, but since its price is too high, I want to build a similar architecture myself.
I need advice on ingress controllers, (physical) load balancers, or any other architecture that could satisfy those conditions.
Cluster API is probably what you need. Its concept is to create Clusters from Machine objects, which are then provisioned by a Provider. This provider can be the Bare Metal Operator provider for your bare-metal nodes and the Cluster API Provider AWS for your AWS nodes, all resting in a single cluster (see the docs below for many other provider types).
You will run a local Kubernetes management cluster with Cluster API installed. It includes components that let you create different Machine objects and tell Kubernetes how to provision those machines.
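To make that concrete, here is a minimal sketch of what a worker-pool definition in the management cluster might look like. The names, the Kubernetes version, and the exact apiVersion strings are assumptions that vary between Cluster API and provider releases:

```yaml
# Hypothetical sketch: a pool of AWS worker machines managed by Cluster API.
# A second MachineDeployment pointing at a Metal3MachineTemplate would
# describe the bare-metal pool the same way.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: aws-workers            # assumed name
  namespace: default
spec:
  clusterName: hybrid-cluster  # assumed cluster name
  replicas: 3
  selector:
    matchLabels:
      pool: aws-workers
  template:
    metadata:
      labels:
        pool: aws-workers
    spec:
      clusterName: hybrid-cluster
      version: v1.28.0         # assumed Kubernetes version
      bootstrap:
        configRef:             # how new machines bootstrap into the cluster
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: aws-workers
      infrastructureRef:       # which provider creates the machines
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
        kind: AWSMachineTemplate
        name: aws-workers
```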
Here is some more reading:
Cluster API Book: Excellent reading on the topic.
Documentation for CAPI Provider - AWS.
Documentation for the Bare Metal Operator: I worked on this project for a couple of years, and the community is pretty amazing. This GitHub repository hosts the CAPI provider for bare-metal nodes.
This should definitely get you going. You can start by running the different providers individually to get a taste of how they work, and then move on to Cluster API and see it in action.
We are currently designing a microservices-based architecture by splitting a monolithic app into microservices.
Earlier, the monolith was deployed in two regions, the US and Asia. US instances would get requests from the USA, and Asian instances would get requests from Asian nations.
Now, we want to use AWS EKS for deploying the microservices. I know that an EKS cluster is deployed across three AZs within a region to maintain HA. In our scenario, do we need two separate EKS setups, one each in the US and Asia?
Is it possible to use only one EKS cluster with worker nodes in different regions?
The architecture comprises AWS EKS, Spring Boot microservices, Angular 5 apps, Docker, and Kubernetes.
As far as I know, you cannot simply use worker nodes from different regions in an EKS cluster. There is also the still-unanswered question "How can we achieve Multi Region Worker nodes for EKS cluster?" on GitHub.
What you can do is use Kubernetes Cluster Federation. You will need the kubefedctl tool to join the clusters. I have never done this on EKS myself, but please check the two articles below, which show ways to create a federation specifically on EKS (see the sketch at the end of this answer):
Federation v2 and EKS
Build a Federation of Multiple Kubernetes Clusters With Kubefed V2
Or create two separate clusters in the required regions.
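If you go the kubefed route, after joining the member clusters with kubefedctl, you describe a workload once and choose which clusters it runs in. A minimal sketch; the member cluster names, namespace, and image are hypothetical:

```yaml
# Hypothetical sketch: one Deployment template, placed into two member
# clusters (one per region), with a per-cluster replica override.
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  template:                    # an ordinary Deployment spec
    metadata:
      labels:
        app: my-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: nginx:1.25  # assumed image
  placement:
    clusters:
    - name: eks-us-east-1      # assumed member cluster names
    - name: eks-ap-south-1
  overrides:
  - clusterName: eks-ap-south-1
    clusterOverrides:
    - path: "/spec/replicas"   # run more replicas in one region
      value: 3
```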
Another option to explore is CAPE, which provides KubeFed functionality through an intuitive UI, extended for multi-cluster use. You can set up a host cluster to deploy and manage cluster configuration and app deployment to multiple clusters across regions with a few button clicks. CAPE is free for life for up to 10 nodes :)
Is HA across multiple cloud providers possible, i.e., ONE Kubernetes cluster made from a mix of Azure nodes, AWS nodes, and VMware nodes? (Assume all have the same OS image.)
If so, how does dynamic provisioning work?
Can Kubernetes CSI (Container Storage Interface) help me with this?
That will not work very well. The cloud provider needs to be set on the apiserver and controller-manager, and you can't run multiple copies of those with different configurations.
Now, if you don't need a cloud provider, as in you are just using these as generic VMs, you will not have access to cloud storage via the Kubernetes API. Otherwise it's workable, but it is still not a great setup: this would essentially be a cross-region cluster, which is not a supported use case. You are meant to use one cluster per region and arrange for load balancing somehow (yes, this is the tricky bit). The sketch below illustrates why the cloud provider is a single, cluster-wide choice.
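With kubeadm, for example, the (legacy in-tree) cloud provider is wired into the control plane once, so there is no per-node-group variant of it. A sketch, not a recommended config:

```yaml
# Hypothetical sketch (kubeadm ClusterConfiguration): the cloud-provider
# flag is set once for the control plane; you cannot set "aws" for some
# nodes and "azure" for others in the same cluster.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: aws        # one value for the whole cluster
controllerManager:
  extraArgs:
    cloud-provider: aws
```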
I wish to know how to create a GCP Kubernetes Engine cluster spanning two regions. For instance, a cluster with some instances in the us-west1 region and others in the us-central1 region.
My use case is to verify that the "failure-domain.beta.kubernetes.io/region" topology key works as expected. I am aware of:
1. Cluster federation: not yet supported for Kubernetes Engine.
2. Multi-cluster ingress: in development, but may not be what I am looking for.
3. Regional clusters: not applicable, as they focus on replication within only one region.
I am aware that my use case is atypical.
It is possible, but I cannot say it will be a stable and fully functional configuration.
There are no standard tools to do what you want, but you can manually connect external nodes from a different region to your cluster. It will not work with kubeadm, but if you set up the kubelet manually, it will work, albeit with many limitations:
No auto-updates.
You have to manage the connection between regions yourself (you need a private network with direct routing between all your nodes).
You can have problems with logs, monitoring, load balancing, etc.
You will pay for the traffic between internal and external nodes at external-traffic rates.
Finally, although it is possible, I cannot recommend using it. If you really want a multi-region setup, build the clusters yourself with kubeadm and use kubefed to create a federation.
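For the original goal of exercising the region topology key, this is roughly the spec you would test once nodes in two regions are joined. The label key is the deprecated beta one from the question (newer clusters use topology.kubernetes.io/region), and the app name and image are placeholders:

```yaml
# Sketch: require the two replicas of "web" to land in different regions.
# Only meaningful if nodes actually carry different values for the
# region label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web
            topologyKey: failure-domain.beta.kubernetes.io/region
      containers:
      - name: web
        image: nginx:1.25      # assumed image
```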
We want to run a multi-tenant scenario that requires tenant separation on a network level.
The idea is that every tenant receives a dedicated node and a dedicated network that the tenant's other nodes can join. A tenant's nodes should be able to interact with each other on that network.
Networks should not be able to talk with each other (true network isolation).
Are there any architectural patterns to achieve this?
One Kubernetes cluster per tenant?
One Kubernetes cluster for all tenants, with one subnet per tenant?
One Kubernetes cluster across VPCs (speaking in AWS terms)?
The regular way to deal with multi-tenancy inside Kubernetes is to use namespaces. But this is within one cluster, meaning you still have the same underlying networking solution shared by all tenants. That is actually fine, as you have Network Policies to restrict networking in the cluster.
You can obviously run autonomous clusters per tenant, yet this is not exactly multi-tenancy then, just multiple clusters. Networking can be configured at the node level to route as expected, but you'd still be left with the issue of cross-cluster service discovery, etc. Federation can help a bit with that, but I would still advise pursuing the Namespaces + Network Policies approach, as sketched below.
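As a minimal sketch of that approach (the tenant namespace name is assumed), a policy that admits only intra-namespace traffic gives each tenant its own network island; in practice you would also allow egress to cluster DNS:

```yaml
# Hypothetical sketch: lock namespace "tenant-a" down to itself.
# A podSelector in from/to without a namespaceSelector matches only
# pods in this policy's own namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-isolation
  namespace: tenant-a
spec:
  podSelector: {}              # applies to every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector: {}          # allow traffic from same-namespace pods only
  egress:
  - to:
    - podSelector: {}          # allow traffic to same-namespace pods only
```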
I see four ways to run multi-tenant k8s clusters at the network level (an Ingress sketch follows the list):
Namespaces
Ingress rules
allow/deny and ingress/egress Network Policies
Network-aware Zones
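For the Ingress-rules option, per-tenant host rules in the tenant's own namespace keep HTTP routing separated as well. A sketch with hypothetical hostnames and service names:

```yaml
# Hypothetical sketch: tenant-a's traffic enters only via its own host
# and is routed only to a service in tenant-a's namespace.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tenant-a-web
  namespace: tenant-a
spec:
  rules:
  - host: tenant-a.example.com # assumed per-tenant hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web          # assumed service in tenant-a's namespace
            port:
              number: 80
```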