Kubernetes split-brain / HA across AZs

The Kubernetes HA documentation shows that you can ensure availability in the case of the failure of an apiserver by having multiple instances behind a load balancer.
However, it doesn't cover what happens if Kubernetes is deployed across multiple availability zones. There is some documentation here, but it doesn't really go into failure scenarios.
What is best practice here? Should you pin the api-servers to instances inside each AZ? What happens in the event of a split brain? If I have a pod running in one AZ and it becomes unavailable to the rest of the world, what happens to it?
I specifically want to know about a custom on-premise installation, not AWS or GCE.
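For reference, the setup from the HA docs that I am referring to (several apiservers behind one load balancer) would look roughly like the kubeadm sketch below; the endpoint name and version are only placeholders, not part of my actual setup:

    # Hypothetical kubeadm ClusterConfiguration for a stacked HA control plane.
    # The controlPlaneEndpoint points at a load balancer (placeholder name) that
    # fronts the apiservers running in each AZ; the version is also a placeholder.
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: v1.28.0
    controlPlaneEndpoint: "apiserver-lb.example.internal:6443"
    etcd:
      local:
        dataDir: /var/lib/etcd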

Related

Hybrid nodes on single kubernetes cluster

I am currently running two Kubernetes clusters.
The first cluster runs on bare metal, and the second runs on EKS.
Since maintaining EKS costs a lot, I am looking for ways to turn this into a single cluster that autoscales onto AWS.
I have considered several solutions such as RHACM, Rancher and Anthos,
but those solutions are for managing multiple clusters.
I just want to turn this into an "on-premise based cluster that autoscales (onto AWS) when it runs out of resources".
I did find the "EKS Anywhere" solution, but since its price is too high, I want to build a similar architecture myself.
I need advice on ingress controllers, (physical) load balancers, or any other architecture that could satisfy these conditions.
Cluster API is probably what you need. It is a concept of creating clusters with Machine objects. These Machine objects are then provisioned using a provider: this can be the Bare Metal Operator provider for your bare-metal nodes and the Cluster API Provider AWS for your AWS nodes, all managed from a single cluster (see the docs below for many other provider types).
You will run a local Kubernetes cluster which has Cluster API running in it. This includes the components that allow you to create different Machine objects and also tell Kubernetes how to provision those machines.
Here is some more reading:
Cluster API Book: Excellent reading on the topic.
Documentation for CAPI Provider - AWS.
Documentation for the Bare Metal Operator. I worked on this project for a couple of years and the community is pretty amazing; this GitHub repository hosts the CAPI provider for bare-metal nodes.
This should definitely get you going. You can start by running different providers individually to get a taste of how they work and then work with Cluster API and see it in function.
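To give a rough flavour of what that looks like, a MachineDeployment backed by the AWS provider might be sketched like this; the names, namespace and exact API versions below are assumptions (they depend on the provider versions you install), not something taken from your setup:

    # Illustrative MachineDeployment whose machines are provisioned by the AWS
    # infrastructure provider; names, namespace and API versions are assumptions.
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: MachineDeployment
    metadata:
      name: aws-burst-workers          # hypothetical name
      namespace: default
    spec:
      clusterName: my-cluster          # hypothetical cluster name
      replicas: 3
      selector:
        matchLabels: null              # defaulted by the Cluster API webhook
      template:
        spec:
          clusterName: my-cluster
          version: v1.28.0             # placeholder Kubernetes version
          bootstrap:
            configRef:
              apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
              kind: KubeadmConfigTemplate
              name: aws-burst-workers
          infrastructureRef:
            apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
            kind: AWSMachineTemplate
            name: aws-burst-workers

The same pattern applies to the bare-metal side: the infrastructureRef simply points at the template type provided by the Bare Metal Operator provider instead.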

Does Kubernetes (K8s) use multiple servers for load balancing?

Will Kubernetes use the same server, or can we use multiple servers with k8s? If yes, how does that work?
If one instance is full, will it create a new instance and route everything to the new server?
If anyone can show a real example of K8s, that would be great!
For this I can suggest the Kubernetes docs as a starting point, but briefly:
Kubernetes handles resources and networking in the master nodes (the control plane).
Worker nodes simply run kube-proxy and the basic control mechanisms provided by the kubelet service; you still cannot control your cluster from the worker nodes.
And yes, K8s can use multiple servers for load balancing.
With K8s you do not have to work within a single zone, so you do not have to keep all the pods on the same server.
So, in a single zone, if you have one master and multiple worker nodes, you will be using the master's scheduler and load balancer to manage the resources and the traffic where necessary. If you have multiple master nodes, you will be using all of their schedulers, and so on.
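As a small, purely hypothetical illustration of how pods end up spread across several servers and load-balanced, a Deployment with a few replicas plus a Service might look like this (the names and image are made up):

    # Hypothetical example: three nginx replicas scheduled across the worker
    # nodes, with a Service load balancing traffic over all of them.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: nginx
            image: nginx:1.25
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: LoadBalancer        # or NodePort/ClusterIP on bare metal
      selector:
        app: web
      ports:
      - port: 80
        targetPort: 80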
For real-world examples of full cluster layouts, search for highly available Kubernetes clusters and switch to the image results; that will give you a visual picture of how they are laid out.
I hope I was a little bit of help, but the docs could be more helpful, I suppose.

Will the master know the data on workers/nodes in k8s

I am trying to deploy k8s in the cloud, and there are two options: the masters are either entrusted to the cloud provider or maintained by myself.
So I wonder: if the masters are entrusted to the provider, can they leak the data on the workers?
In short, will the master know the data on the workers/nodes?
The abstractions in Kubernetes are very well defined with clear boundaries. You have to understand the concept of Volumes first. As defined here,
A Kubernetes volume is essentially a directory accessible to all containers running in a pod. In contrast to the container-local filesystem, the data in volumes is preserved across container restarts.
Volumes are attached to the containers in a pod, and there are several types of volumes.
You can see the layers of abstraction in this diagram (source).
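As a small illustrative sketch of that volume concept (the names and images below are made up, not from the question), a pod with an emptyDir volume shared between its containers looks like this:

    # Example pod: both containers see the same /data directory, and the data in
    # the emptyDir volume survives container restarts (though not pod deletion).
    apiVersion: v1
    kind: Pod
    metadata:
      name: volume-demo
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}
      containers:
      - name: writer
        image: busybox:1.36
        command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
        volumeMounts:
        - name: shared-data
          mountPath: /data
      - name: reader
        image: busybox:1.36
        command: ["sh", "-c", "sleep 5 && cat /data/msg && sleep 3600"]
        volumeMounts:
        - name: shared-data
          mountPath: /data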
Master to Cluster communication
There are two primary communication paths from the master (apiserver) to the cluster. The first is from the apiserver to the kubelet process which runs on each node in the cluster. The second is from the apiserver to any node, pod, or service through the apiserver’s proxy functionality.
Also, you should check the CCM: the cloud controller manager (CCM) concept (not to be confused with the binary) was originally created to allow cloud-specific vendor code and the Kubernetes core to evolve independently of one another. The cloud controller manager runs alongside other master components such as the Kubernetes controller manager, the API server, and the scheduler. It can also be started as a Kubernetes addon, in which case it runs on top of Kubernetes.
Hope this answers all your questions related to Master accessing the data on Workers.
If you are still looking for more secure ways, check 11 Ways (Not) to Get Hacked
Short answer: yes the control plane can access all of your data.
Longer and more realistic answer: probably don't worry about it. It is far more likely that any successful attack against the control plane would be just as successful as if you were running it yourself. The exact internal details of GKE/AKS/EKS are a bit fuzzy, but all three providers have a lot of experience running multi-tenant systems and it wouldn't be negligent to trust that they have enough protections in place against lateral escalations between tenants on the control plane.

How can I deploy a service fabric cluster with nodes that span multiple locations?

I am thinking of creating a Service Fabric cluster with nodes that span multiple locations, for example one cluster that has nodes in eastus and westus2. Do you know how I can do this? Are there any ARM template examples? I saw an MSDN document mention this under Service Fabric cluster disaster recovery, but I haven't found anything else useful.
Thanks,
This is not officially supported at this time. The main problem is designating VM scale sets with their proper fault domains. You need a way to make sure the Stateful Services & Actors data is always replicated to the other region, so that you can actually fail over.

Does Kubernetes provision new VMs for pods on my cloud platform?

I'm currently learning about Kubernetes and still trying to figure it out. I get the general use of it, but I think there are still plenty of things I'm missing; here's one of them. If I want to run Kubernetes on a public cloud, like GCE or AWS, will Kubernetes spin up new VMs by itself in order to provide more compute for new pods that might be needed? Or will it only use a fixed set of VMs that were pre-configured as the compute pool? I heard Brendan say, in his talk at CoreOS Fest, that Kubernetes sees the VMs as a "sea of compute" and the user doesn't have to worry about which VM is running which pod. I'm interested to know where that pool of compute comes from: is it configured when setting up Kubernetes, or will it scale by itself and create new machines as needed?
I hope I managed to be coherent.
Thanks!
Kubernetes supports scaling, but not auto-scaling. The addition and removal of pods (not VMs) in a Kubernetes cluster is performed by replication controllers. The size of a replication controller can be changed by updating the replicas field. This can be done in a couple of ways:
Using kubectl, you can use the scale command.
Using the Kubernetes API, you can update your config with a new value in the replicas field.
Kubernetes has been designed for auto-scaling to be handled by an external auto-scaler. This is discussed in responsibilities of the replication controller in the Kubernetes docs.
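As a hypothetical illustration of that replicas field (the names here are made up), a replication controller could look like the sketch below; scaling it just means changing spec.replicas, either in the config or with the kubectl scale command:

    # Hypothetical ReplicationController; changing spec.replicas (for example with
    # "kubectl scale rc frontend --replicas=5") adds or removes pods, not VMs.
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: frontend
    spec:
      replicas: 3
      selector:
        app: frontend
      template:
        metadata:
          labels:
            app: frontend
        spec:
          containers:
          - name: web
            image: nginx:1.25
            ports:
            - containerPort: 80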