System hangs after running Kubernetes master cluster

Whenever I try to run the Kubernetes cluster from the master machine using the command below, my system hangs and becomes very slow; I can't even open Explorer or Firefox. I have two VMs, configured with NAT and host-only network adapters.
kubeadm init --apiserver-advertise-address=<ip-address-of-kmaster-vm> --pod-network-cidr=192.168.0.0/16
Any help is highly appreciated.

As discussed in the comments, the root cause of the issue was insufficient resources available to this virtual machine.
The kubeadm documentation lists the minimum requirements:
To follow this guide, you need:
One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
2 GiB or more of RAM per machine--any less leaves little room for your apps.
At least 2 CPUs on the machine that you use as a control-plane node.
Full network connectivity among all machines in the cluster. You can use either a public or a private network.
The OP's VM had its RAM set to 2 GB, which caused the performance issues seen across different applications.
Once the VM's RAM was increased to 4 GB, the virtual machine worked perfectly fine.
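As a quick sanity check before running kubeadm init again, you can confirm the VM actually meets those minimums. A minimal sketch, using plain shell and nothing kubeadm-specific:
nproc        # number of CPUs the VM sees; kubeadm expects at least 2 on the control-plane node
free -h      # total and available memory; at least 2 GiB is expected
kubeadm also runs its own preflight checks and will refuse to proceed if the CPU count is below the minimum (this can be overridden with --ignore-preflight-errors, but on an under-sized VM the hang described above is the likely result).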

Related

Rancher: High CPU utilization even with zero clusters under management

I am seeing a continuous 8 to 15% CPU usage on Rancher-related processes while there is not a single cluster being managed by it, nor is any user interacting with it. What explains this high CPU usage when idle? Also, there are several "rancher-agent" containers perpetually running and restarting, which does not look right. There is no Kubernetes cluster running on this machine (unless Rancher is creating its own single-node cluster for whatever reason).
I am using Rancher 2.3.
Output of docker stats, docker ps, and htop (screenshots not reproduced here).
I'm not sure I would call 15% "high", but Kubernetes has a lot of ongoing work even when the cluster looks entirely quiet: processing node heartbeats, etcd election traffic, controllers with time-based conditions that have to be evaluated, and so on. K3s probably streamlines that a bit, but 0% CPU usage is not a design goal even in that fork.
Rancher (2.3.x) does not do anything involving k3s. These pictures are not "just Rancher".
k3s is separately installed and running.
The agents further suggest that this node has been added to a cluster (maybe the same Rancher instance running on it, maybe not).
Its constant restarting is not helping CPU usage, especially if it is registered to that local Rancher instance.
Also, you're running an arbitrary commit from HEAD instead of an actual release.
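If you want to pin down what those agents are actually doing, a rough set of commands to start with (the container name is just a placeholder; use whatever docker ps shows on your host):
docker ps --filter "status=restarting"           # list containers stuck in a restart loop
docker stats --no-stream                         # one-shot CPU/memory snapshot per container
docker logs --tail 50 <rancher-agent-container>  # last log lines of a suspect agent
docker inspect --format '{{.RestartCount}}' <rancher-agent-container>  # how often it has restarted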
FWIW... in my case, I built the Raspberry Pi-based Rancher/k3s lab as designed by Network Chuck on YouTube. The VM on my Linux host that runs Rancher starts off fairly quiet, then over the course of a couple of days the rancherd process consistently hits near 100% CPU usage (I gave it 3 vCPUs) and stays there, even though I have no pods running on either the Pi cluster or the local Rancher VM cluster. A reboot starts the process over, but within a few days it's back to 100% CPU usage.
While writing this I noticed that, due to a DHCP issue, the original external IP for the local Rancher cluster node had changed from .163 to .151 (I had reserved .151 in Pi-hole but never updated the Rancher config). I just fixed it in the Rancher GUI; we'll see if that clears up some of the errors I saw in the logs and keeps the CPU usage normal at idle.

Run k8s on single node without minikube

Is it possible to run k8s on a single node without using Minikube? Today I use kubeadm with two hosts, but I would like to know whether it is possible to run with only one host.
You can run the kubeadm init command to initialize a single-node cluster, and you can add or remove nodes later. Then remove the master taint so that the node can run regular containers, using the command below:
kubectl taint nodes --all node-role.kubernetes.io/master-
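For context, a rough end-to-end sequence for a single-node cluster might look like the following; the pod CIDR and the Calico manifest URL are only examples, so substitute whichever CNI plugin you prefer:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16            # initialize the control plane
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config                # make kubectl usable as a regular user
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml   # install a pod network (Calico as an example)
kubectl taint nodes --all node-role.kubernetes.io/master-      # allow workloads on the control-plane node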
You need to look into the hardware requirements for running a single-node cluster. You would need to run
etcd, which is the backing store for all cluster data;
the control plane software (scheduler, controller manager, API server, kubeadm);
and the worker node software (kubelet, kube-proxy)
all on one node.
When installing kubeadm, the hardware requirements (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/) are:
2 GB or more of RAM per machine (any less will leave little room for your apps)
and 2 CPUs or more.
Example configurations for etcd (https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/hardware.md#example-hardware-configurations).
For the CKA exam training material, the recommended node size for a single machine is 2 vCPUs and 7.5 GB of memory, with a note of caution that you may experience slowness.
I am basing my recommendations on Ubuntu 18.04 Linux. Another thing you need to do is disable swap (https://serverfault.com/questions/881517/why-disable-swap-on-kubernetes); this is necessary because Kubernetes expects to manage the machine's memory and CPU resources without swap getting in the way.
So if it is for your learning, go ahead and start with 2 vCPUs and 7.5 GB of memory.
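On the swap point above, disabling it is typically a two-step operation; the sed pattern below is a common convenience, so double-check /etc/fstab before editing it:
sudo swapoff -a                            # turn swap off immediately
sudo sed -i '/ swap / s/^/#/' /etc/fstab   # comment out swap entries so it stays off after a reboot
free -h                                    # the Swap line should now show 0B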
You could check
k3s
KinD
MicroK8s
for single-node Kubernetes installations.
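For reference, each of those typically boils down to a single command on a fresh machine (install steps may have changed since, so check each project's docs):
curl -sfL https://get.k3s.io | sh -      # k3s: installs and starts a single-node cluster
kind create cluster                      # KinD: runs a cluster inside a Docker container (needs Docker)
sudo snap install microk8s --classic     # MicroK8s: single-node cluster via snap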

Minikube VM Driver: None vs Virtualbox/KVM

What are the differences between running Minikube with a VM hypervisor (VirtualBox/KVM) and with none?
I am not asking whether Minikube can run without a hypervisor. I know that running with '--vm-driver=none' is possible; it runs on the local machine and requires Docker to be installed.
I am asking what the performance differences are. There is not a lot of documentation on how '--vm-driver=none' works, and I am wondering whether running without the VM affects the functionality of Minikube.
This is how I explain it to myself:
driver!=none mode
In this case minikube provisions a new docker-machine (Docker daemon/Docker host) using one of the supported providers. For instance:
a) local provider = your Windows/Mac local host: it typically uses VirtualBox as the hypervisor and creates inside it a VM based on the boot2docker image (configurable). In this case the k8s bootstrapper (kubeadm) creates all Kubernetes components inside this isolated VM. In this setup you usually have two Docker daemons: your local one for development (if you installed it beforehand), and one running inside the minikube VM.
b) cloud hosts - not supported by minikube
driver=none mode
In this mode, your local docker host is re-used.
In the first case there is a performance penalty, because each VM generates some overhead by running the system processes required by the VM itself, in addition to those required by the k8s components running inside it. I think driver=none is similar to the "kind" flavor of k8s bootstrapper, meant for CI/integration tests.
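To make the contrast concrete, the two modes are selected at start time roughly like this (flag names as they were when this question was asked; newer minikube releases use --driver instead of --vm-driver):
minikube start --vm-driver=virtualbox   # provisions a boot2docker-style VM and runs Kubernetes inside it
sudo minikube start --vm-driver=none    # reuses the local Docker host directly; needs root and a local Docker install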

Kubernetes in vSphere virtual machines

Dears,
Sorry, this may be a basic question for some of you. If I have a vSphere environment and am allowed to access only two virtual machines inside it, can I set up a Kubernetes cluster with one VM as master and one VM as minion, without interacting with the hypervisor or vCenter?
In this case, what are the requirements?
I already set up an environment on my laptop, but there I had to define a host-only network in VirtualBox and also register the machines with the host. Should it be the same in the case of vSphere?
There are some requirements for a Kubernetes cluster. According to the official documentation, it is necessary to have:
One or more machines running one of:
Ubuntu 16.04+
Debian 9
CentOS 7
RHEL 7
Fedora 25/26 (best-effort)
HypriotOS v1.0.1+
Container Linux (tested with 1800.6.0)
2 GB or more of RAM per machine (any less will leave little room for your apps)
2 CPUs or more
Full network connectivity between all machines in the cluster (public or private network is fine)
Unique hostname, MAC address, and product_uuid for every node. See here for more details.
Certain ports are open on your machines. See here for more details.
Swap disabled. You MUST disable swap in order for the kubelet to work properly.
Also, the IP subnets used for Services and for Pods must not overlap with other IP subnets in the same VPC.
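A few of those points can be verified directly on each VM over SSH; a small sketch (6443 is the default API server port, and <master-ip> is a placeholder):
sudo cat /sys/class/dmi/id/product_uuid   # must differ between the two VMs
ip link show                              # compare MAC addresses across the nodes
hostnamectl                               # confirm the hostnames are unique
sudo swapoff -a                           # disable swap (and remove the entry from /etc/fstab)
nc -vz <master-ip> 6443                   # from the minion, once the master is up, check the API server port is reachable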
To set up a Kubernetes cluster it is enough to have SSH access to the VMs; additional network interfaces are not required.
If you already have the VMs, the most convenient tool for cluster creation is kubeadm. Please consider reading the following part of the official documentation:
Creating a single master cluster with kubeadm
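At a high level, with two VMs the kubeadm flow would look roughly like the sketch below; the IP, token, and hash are placeholders that kubeadm init prints for you:
# On the master VM
sudo kubeadm init --apiserver-advertise-address=<master-vm-ip> --pod-network-cidr=192.168.0.0/16
# On the minion VM, using the join command printed at the end of init
sudo kubeadm join <master-vm-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# If you lose the join command, regenerate it on the master
kubeadm token create --print-join-command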

Kubernetes network performance issue: moving a service from a physical machine to Kubernetes causes RPS to drop by half

I set up a Kubernetes cluster with two powerful physical servers (32 cores + 64 GB memory). Everything runs very smoothly except for the poor network performance I have observed.
As a comparison: I ran my service directly on one of these physical machines (one instance), with a client machine in the same network subnet calling the service. The RPS easily reaches 10k. When I put the exact same service into Kubernetes version 1.1.7, one pod (instance) of the service is launched and the service is exposed via ExternalIP in the service YAML file. With the same client, the RPS drops to 4k. Even after I switched kube-proxy to iptables mode, it doesn't seem to help much.
While searching around, I found this document: https://www.percona.com/blog/2016/02/05/measuring-docker-cpu-network-overhead/
It seems that Docker port forwarding is the network bottleneck, while other Docker network modes, such as --net=host, bridge networking, or containers sharing a network namespace, do not show such a performance drop. Is the Kubernetes team already aware of this network performance drop? Since the Docker containers are launched and managed by Kubernetes, is there any way to tune Kubernetes to use one of the other Docker network modes?
You can configure Kubernetes networking in a number of different ways when configuring the cluster, and in a few different ways on a per-pod basis. If you want to verify whether the Docker networking arrangement is the problem, set hostNetwork to true in your pod specification and give it another try (example here). This is the equivalent of the Docker --net=host setting.
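As a quick experiment, you could launch a throwaway copy of the pod with hostNetwork enabled and re-run the benchmark against it; a minimal sketch, where the pod name, image, and port are placeholders for your own service:
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: my-service-hostnet        # placeholder name
spec:
  hostNetwork: true               # equivalent of docker --net=host
  containers:
  - name: my-service
    image: my-service:latest      # placeholder image
    ports:
    - containerPort: 8080         # placeholder port
EOF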