Is Kubernetes high availability using kubeadm possible without failover/load balancer?

I am trying to achieve Kubernetes high availability using kubeadm. I am following the document k8s HA using kubeadm.
The official document recommends a failover mechanism/load balancer in front of the kube-apiserver. I tried keepalived, but on AWS/GCP instances it ends up in a split-brain situation because multicast is not supported, so I cannot use it. Is there any way out of this?

Kubernetes is a container-orchestration system for automating deployment, scaling, and management of containerized applications.
Kubernetes works best in highly available and load-balanced environments.
As @jaxxstorm mentioned, cloud providers give you the option of using native load balancers, and I also suggest that is a good starting point for a high-availability attempt. You may be interested in the GCP documentation.
Kubeadm on a home-brewed Kubernetes environment requires some additional work, and from my point of view it is good to set up Kubernetes The Hard Way first, then start to play with kubeadm.
OK, I assume the servers for the installation are ready. To create a simple multi-master installation, you need 3 master nodes (10.0.0.50-52) and a load balancer (10.0.0.200).
Generate a token and save the output to a file:
kubeadm token generate
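For example, to keep the token handy for the config file and the later join command (the file name is arbitrary):
# save the generated token for later use
TOKEN=$(kubeadm token generate)
echo "$TOKEN" > kubeadm-token.txt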
Create a kubeadm config file:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - "http://10.0.0.50:2379"
  - "http://10.0.0.51:2379"
  - "http://10.0.0.52:2379"
apiServerExtraArgs:
  apiserver-count: "3"
apiServerCertSANs:
- "10.0.0.50"
- "10.0.0.51"
- "10.0.0.52"
- "10.0.0.200"
- "127.0.0.1"
token: "YOUR KUBEADM TOKEN"
tokenTTL: "0"
Copy the config file to all nodes.
Do initialization on the first master instance:
kubeadm init --config /path/to/config.yaml
The new master instance will have all the certificates and keys necessary for our master cluster.
Copy the directory structure /etc/kubernetes/pki to the other masters, to the same location.
On other master servers:
kubeadm init --config /path/to/config.yaml
Now let's start to set up the load balancer:
Copy /etc/kubernetes/admin.conf into $HOME/.kube/config,
then edit $HOME/.kube/config and replace
server: 10.0.0.50
with
server: 10.0.0.200
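The load balancer configuration itself is not shown above; as a minimal sketch, assuming HAProxy runs on 10.0.0.200 (server names and the file path are illustrative), it would forward TCP 6443 to the three masters:
# /etc/haproxy/haproxy.cfg (excerpt)
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master-1 10.0.0.50:6443 check
    server master-2 10.0.0.51:6443 check
    server master-3 10.0.0.52:6443 check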
Check if nodes are working fine:
kubectl get nodes
On all workers execute:
kubeadm join --token YOUR_CLUSTER_TOKEN 10.0.0.200:6443 --discovery-token-ca-cert-hash sha256:89870e4215b92262c5093b3f4f6d57be8580c3442ed6c8b00b0b30822c41e5b3
And that's it! If everything was set up cleanly, you should now have a highly available cluster.
I found "HA Kubernetes cluster via Kubeadm" tutorial useful, thank you #Nate Baker for inspiration.

No, you need a loadbalancer to have HA with kubeadm.
If you're using AWS/GCP, why not consider using the native loadbalancers for those environments, like ELB or a Google Cloud Load Balancer?
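For example, on GCP a regional TCP pass-through load balancer in front of the masters could be created roughly like this (instance names, zone and region are assumptions, not from the answer):
# create a target pool containing the three master VMs
gcloud compute target-pools create kube-apiserver --region us-central1
gcloud compute target-pools add-instances kube-apiserver \
    --instances master-1,master-2,master-3 --instances-zone us-central1-a
# forward TCP 6443 to the pool
gcloud compute forwarding-rules create kube-apiserver-lb \
    --region us-central1 --ports 6443 --target-pool kube-apiserver
On AWS, the equivalent would be a TCP listener on an ELB/NLB forwarding to port 6443 on the master instances.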

You definitely need nginx/haproxy + keepalived for failover and high availability.

Related

How to set node allocatable computation on kubernetes?

I'm reading the Reserve Compute Resources for System Daemons task in the Kubernetes docs, and it briefly explains how to reserve compute resources on a node using the kubelet flags --kube-reserved, --system-reserved and --eviction-hard.
I'm learning on Minikube for macOS, and as far as I can tell, Minikube is configured to be used with the kubectl command alongside the minikube command.
For local learning purposes on Minikube I don't need to set this (maybe it can't even be done on Minikube), but:
How could this be done, say, in a K8s development environment on a node?
This could be done by:
1. Passing a config file during cluster initialization, or initializing the kubelet with additional parameters via a config file (see the sketch after this list).
For cluster initialization using a config file, it should contain at least:
kind: InitConfiguration
kind: ClusterConfiguration
plus additional configuration types like:
kind: KubeletConfiguration
To get a basic config file you can use kubeadm config print init-defaults.
2. For a live cluster, please consider reconfiguring the current cluster using the steps "Generate the configuration file" and "Push the configuration file to the control plane" as described in "Reconfigure a Node's Kubelet in a Live Cluster".
3. I didn't test it, but for Minikube please take a look here:
Note:
Minikube has a “configurator” feature that allows users to configure the Kubernetes components with arbitrary values. To use this feature, you can use the --extra-config flag on the minikube start command.
This flag is repeated, so you can pass it several times with several different values to set multiple options.
This flag takes a string of the form component.key=value, where component is one of the strings from the below list, key is a value on the configuration struct and value is the value to set.
Valid keys can be found by examining the documentation for the Kubernetes componentconfigs for each component. Here is the documentation for each supported configuration:
kubelet
apiserver
proxy
controller-manager
etcd
scheduler
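To tie this back to point 1: a minimal sketch of a KubeletConfiguration block that could be appended to the kubeadm config file (the resource values below are only illustrative, not recommendations):
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# equivalent of the --kube-reserved flag
kubeReserved:
  cpu: "200m"
  memory: "256Mi"
# equivalent of the --system-reserved flag
systemReserved:
  cpu: "200m"
  memory: "256Mi"
# equivalent of the --eviction-hard flag
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
Passing this file to kubeadm init --config applies the reservations to the kubelet on that node.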
Hope this helped.
Additional community resources:
Memory usage in kubernetes cluster

Change proxy mode in kubernetes

I have installed a Kubernetes cluster using this tutorial on Ubuntu 16.
Everything works, but I need to change the proxy mode to ipvs, and I don't know how to change the kube-proxy mode using kubectl or something else.
kubectl is more for managing the Kubernetes workload. You need to modify the control plane itself. Since you created your cluster with kubeadm, you can use that to enable ipvs. You'd add this to your config file for kubeadm init.
...
kubeProxy:
  config:
    featureGates:
      SupportIPVSProxyMode: true
    mode: ipvs
...
Here's an article from github.com/kubernetes with more detailed instructions. Depending on your Kubernetes version, you can pass it as a flag to kubeadm init instead of using the above configuration.
Edit: here's a link on how to use kubeadm to edit an existing cluster: How to use kubeadm upgrade to change some features in kubeadm-config
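On an already-running kubeadm cluster, a rough sketch of the same change without re-initializing (verify the field names against your kube-proxy version) is to edit the kube-proxy ConfigMap and recreate the kube-proxy pods:
# set mode: "ipvs" in the KubeProxyConfiguration held in the ConfigMap
kubectl -n kube-system edit configmap kube-proxy
# delete the kube-proxy pods so the DaemonSet recreates them with the new config
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
Note that the IPVS kernel modules (ip_vs, ip_vs_rr, etc.) must be available on every node for ipvs mode to work.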

How to change --horizontal-pod-autoscaler-sync-period field in kube-controller-manager to 5sec in gke

I am trying to set up horizontal pod autoscaling in GKE. I could not find proper documentation on reducing --horizontal-pod-autoscaler-sync-period to 5 sec via kube-controller-manager.
In the below link it says there is a possibility of changing the flags:
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/
Are there any proper implementation steps for this?
You are not able to do this on GKE, EKS and other managed clusters.
In order to change/add flags in kube-controller-manager, you need access to the /etc/kubernetes/manifests/ directory on the master node and need to be able to modify the parameters in /etc/kubernetes/manifests/kube-controller-manager.yaml.
GKE, EKS and other managed clusters are administered only by their providers, and they do not give you access to the master nodes.
But you can create a cluster with kubeadm init and configure/change it as you like.
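On a kubeadm-managed master this boils down to adding the flag in that static pod manifest; a rough excerpt (the surrounding fields are abbreviated, keep your existing flags):
# /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-controller-manager
    - --horizontal-pod-autoscaler-sync-period=5s
    # ...the rest of the existing flags stay unchanged
The kubelet watches this directory and restarts the kube-controller-manager static pod automatically after the file is saved.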
you can stop your minikube cluster and start it with your extra configs ...
minikube start --extra-config 'controller-manager.horizontal-pod-autoscaler-sync-period=5s'
for more details, you can go through https://minikube.sigs.k8s.io/docs/handbook/config/#modifying-kubernetes-defaults

Nginx Ingress Controller Installation Error, "dial tcp 10.96.0.1:443: i/o timeout"

I'm trying to set up a Kubernetes cluster with kubeadm and Vagrant. I faced an error while installing the nginx ingress controller: a timeout when the pod tries to retrieve the configmap through the Kubernetes API. I have looked around and tried to apply the suggested solutions, still no luck, which is the reason for this post.
Environment:
I'm using Vagrant to set up 2 nodes with the ubuntu/xenial image.
kmaster
-------------------------------------------
network:
Adapter1: NAT
Adapter2: HostOnly-network, IP:192.168.2.71
kworker1
-------------------------------------------
network:
Adapter1: NAT
Adapter2: HostOnly-network, IP:192.168.2.72
I followed the kubeadm guide to set up the cluster
[Setup kubernetes with kubeadm]
and my cluster init command was as below:
kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.2.71
and apply calico network plugin policy:
kubectl apply -f \
https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/etcd.yaml
kubectl apply -f \
https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml
(Calico is the plugin I currently managed to install successfully; I will write another post about the flannel plugin, with which pods are unable to access the service.)
I'm using helm to install the ingress controller, following the tutorial:
https://kubernetes.github.io/ingress-nginx/deploy/
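For reference, a sketch of the Helm 3 style install from that page (chart, repo and release names are the current upstream defaults; the Helm 2 syntax at the time differed):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
    --namespace ingress-nginx --create-namespace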
This is the error that occurred once I applied the helm deploy command, shown when I describe the pod.
I'd appreciate it if someone can help. As far as I know, the reason is that the pod is unable to access the Kubernetes API, but shouldn't this already be enabled by Kubernetes by default?
My kube-system pods status is as below:
Other solutions suggested on the official Kubernetes website:
1) Install kube-proxy with a sidecar. I'm still new to Kubernetes and I'm looking for an example of how to install kube-proxy with a sidecar. I'd appreciate it if someone could provide an example.
2) Use client-go. I'm very confused when I read this post; it seems to use the go command to pull a Go script, and I have no clue how it works with Kubernetes pods.
You guys are right. I have tested with a DigitalOcean droplet and it works as expected; I now hit another error, "forbidden, user service account not permitted", so it looks like the pods are able to access the Kubernetes API. I also tested installing Istio, which had hit the same issue before, and it now works on the DigitalOcean droplet.
Thank you guys.

Can't run Kubernetes dashboard after installing Kubernetes cluster on rancher/server

Docker: 1.12.6
rancher/server: 1.5.10
rancher/agent: 1.2.2
Tried two ways to install Kubernetes cluster on rancher/server.
Method 1: Use Kubernetes environment
Infrastructure/Hosts
Agent hosts disconnected sometimes.
Stacks
All green except kubernetes-ingress-lbs. It has 0 containers.
Method 2: Use Default environment
Infrastructure/Hosts
Set some labels to rancher server and agent hosts.
Stacks
All green except kubernetes-ingress-lbs. It has 0 containers.
Both of them have this issue: kubernetes-ingress-lbs 0 services 0 containers. Then can't access Kubernetes dashboard.
Why wasn't it installed by Rancher?
And is it necessary to add those labels for the Kubernetes cluster?
Here is what a correctly deployed Kubernetes cluster on Rancher server looks like:
After turning on Show System, you can find the kubernetes-dashboard service under the kube-system namespace.
If you are using Kubernetes v1.5.4, you should prepare in advance by pulling the required Docker images.
By reading rancher/catalog and rancher/kubernetes-package, you can understand and even modify the config files (like docker-compose.yml, rancher-compose.yml and so on) yourself.
When you enable the "Show System" containers in the UI, you should be able to see the dashboard container running under the kube-system namespace. If this container is not running, the dashboard will not be able to load.
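A quick way to confirm that from the command line (the label is the usual upstream default and may differ in the Rancher packaging):
kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard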
You might have to enable the Kubernetes add-on service within the Rancher environment template:
Manage Environments >> edit the Kubernetes default template >> enable the add-on service and save the new template with a preferred name.
Now launch the cluster using the customized template.