Configure MicroK8s

I am migrating from minikube to MicroK8s, and I want to change MicroK8s' configuration to control the resources it can use (CPU, memory, etc.).
In minikube, you can use commands like the following to set the resources available to it:
minikube config set memory 8192
minikube config set cpus 2
But I don't know how to do this in MicroK8s. I tried the commands below (with and without sudo):
microk8s.config set cpus 4
microk8s.config set cpu 4
And they returned:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: VORCBDRVJUSUZJQ0FURS0tLS0...
    server: https://10.203.101.163:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    username: admin
    password: ...
But when I describe that node, I see that MicroK8s is using all 8 CPUs:
Capacity:
  cpu:                8
  ephemeral-storage:  220173272Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32649924Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  219124696Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32547524Ki
  pods:               110
How can I change MicroK8s' configuration?

You have a misunderstanding of how MicroK8s works.
Unlike minikube, MicroK8s does not provision a VM for you; it runs directly on your host machine, so all of the host's resources are available to it.
So, to keep your cluster's resource usage within bounds, you have to manage it with Kubernetes pod/container resource limits.
Say your host has 4 CPUs and you don't want your MicroK8s cluster to use more than half of its capacity.
You will then need to set limits based on the number of running pods. For a single pod, it would look as follows:
resources:
  requests:
    memory: "64Mi"
    cpu: 2
  limits:
    memory: "128Mi"
    cpu: 2
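If you don't want to repeat this in every manifest, a LimitRange can apply default requests and limits to every container in a namespace. A sketch using the values above (the object name is arbitrary):
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-mem-defaults
spec:
  limits:
  - type: Container
    defaultRequest:   # applied when a container specifies no request
      cpu: 2
      memory: "64Mi"
    default:          # applied when a container specifies no limit
      cpu: 2
      memory: "128Mi"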

On OS X, MicroK8s runs inside a multipass VM, so you change its resources through the multipass configuration.
First, stop multipassd:
sudo launchctl unload /Library/LaunchDaemons/com.canonical.multipassd.plist
Next edit the config file:
sudo su -
vi /var/root/Library/Application\ Support/multipassd/multipassd-vm-instances.json
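In that file, find the entry for your MicroK8s VM and adjust its CPU and memory fields. A sketch of the relevant part, assuming the instance is named microk8s-vm (field names may vary across multipass versions; mem_size is a byte count stored as a string):
{
    "microk8s-vm": {
        "deleted": false,
        "mem_size": "8589934592",
        "num_cores": 4,
        "state": 0
    }
}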
Start multipassd again
sudo launchctl load /Library/LaunchDaemons/com.canonical.multipassd.plist
Source: https://github.com/canonical/multipass/issues/1158

Related

Node Allocation on kubernetes nodes

I am managing an on-prem Kubernetes cluster with 10 nodes whose configurations are not identical: 5 nodes have 64 cores and 125G of RAM, and 5 nodes have 64 cores and 256G of RAM.
Most of the time I keep getting alerts saying the node CPU/memory usage is high, and I see pods getting restarted because they consume 92-95% of the CPU and memory on certain nodes. I want to apply CPU and memory allocation on the nodes so that utilization doesn't go that high.
I tried manually editing the node configuration but that did not work.
Any leads on this would be helpful!
In K8s, you can limit the resource usage of the pod's containers and reserve some CPU/memory for each container to avoid this problem:
---
apiVersion: v1
kind: Pod
metadata:
  name: <pod name>
spec:
  containers:
  - name: c1
    image: ...
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: c2
    image: ...
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
I found the Kubernetes documentation for setting node-level allocatable resources.
Fixed using the documents below:
https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable
https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/
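For reference, Node Allocatable is configured on the kubelet itself. A minimal KubeletConfiguration sketch (the reservation sizes here are illustrative, not recommendations):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
kubeReserved:      # reserved for Kubernetes daemons (kubelet, container runtime)
  cpu: "1"
  memory: "2Gi"
systemReserved:    # reserved for OS daemons (sshd, udev, ...)
  cpu: "1"
  memory: "2Gi"
evictionHard:      # evict pods before the node itself runs out of memory
  memory.available: "500Mi"
With this in place, Allocatable = Capacity - kubeReserved - systemReserved - evictionHard thresholds, and the scheduler only packs pods up to Allocatable.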

Why does DigitalOcean k8s node capacity shows subtracted value from node pool config?

I'm running a 4vCPU 8GB Node Pool, but all of my nodes report this for Capacity:
Capacity:
  cpu:                4
  ephemeral-storage:  165103360Ki
  hugepages-2Mi:      0
  memory:             8172516Ki
  pods:               110
I'd expect it to show 8388608Ki (the equivalent of 8192Mi/8Gi).
How come?
Memory can be reserved for both system services (system-reserved) and the kubelet itself (kube-reserved). https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ has the details, but DigitalOcean is probably setting this up for you.
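You can compare the two values on a node directly (the node name is a placeholder):
$ kubectl get node <node-name> \
    -o jsonpath='capacity: {.status.capacity.memory}{"\n"}allocatable: {.status.allocatable.memory}{"\n"}'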

How to ensure the container runtime is nvidia-docker for the kubernetes node?

I need to check whether a Kubernetes node is configured correctly; one of the worker nodes needs to use nvidia-docker.
Using: https://github.com/NVIDIA/k8s-device-plugin
How can I confirm that the configuration is correct for the device plugin?
$ kubectl describe node mynode
Roles:    worker
Capacity:
  cpu:                4
  ephemeral-storage:  15716368Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             62710736Ki
  nvidia.com/gpu:     1
  pods:               110
Allocatable:
  cpu:                3800m
  ephemeral-storage:  14484204725
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             60511184Ki
  nvidia.com/gpu:     1
  pods:               110
System Info:
  Machine ID:                 f32e0af35637b5dfcbedcb0a1de8dca1
  System UUID:                EC2A40D3-76A8-C574-0C9E-B9D571AA59E2
  Boot ID:                    9f2fa456-0214-4f7c-ac2a-2c62c2ef25a4
  Kernel Version:             3.10.0-957.1.3.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://18.9.1
  Kubelet Version:            v1.11.2
  Kube-Proxy Version:         v1.11.2
I can see nvidia.com/gpu under the node's resources, but the question is: is the Container Runtime Version supposed to say nvidia-docker if the node is configured correctly? Currently it shows docker, which seems fishy to me!
Not sure if you did it already, but it seems to be clearly described:
After installing the NVIDIA drivers and nvidia-docker, you need to enable the nvidia runtime on your node by editing /etc/docker/daemon.json as specified here.
So, as the instructions say, if you can see that the runtimes entry is correct, you just need to edit that config.
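For reference, the resulting /etc/docker/daemon.json from those instructions registers nvidia as the default runtime:
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
Restart Docker afterwards (e.g. sudo systemctl restart docker) so the change takes effect.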
Then deploy a DaemonSet (which is a way of ensuring that a pod runs on each node, with access to the host network and devices):
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.11/nvidia-device-plugin.yml
Now your containers are ready to consume the GPU - as described here.
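To verify end to end, you could run a throwaway pod that requests a GPU and prints the output of nvidia-smi (the pod name and image tag are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:10.0-base   # any CUDA image that ships nvidia-smi works
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1          # request one GPU from the device plugin
As for the runtime version looking fishy: docker://18.9.1 is expected even on a correctly configured node, because nvidia-docker registers an extra runtime inside Docker rather than replacing it, so the kubelet still reports plain docker.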

Minikube - how to increase ephemeral storage

I am trying to set up a copy of our app on my development machine using minikube, but an error shows up in the minikube dashboard:
0/1 nodes are available: 1 Insufficient ephemeral-storage
Any ideas as to how I fix this?
The relevant part of the YAML configuration file looks like this:
resources:
  requests:
    memory: 500Mi
    cpu: 1
    ephemeral-storage: 16Gi
  limits:
    memory: 4Gi
    cpu: 1
    ephemeral-storage: 32Gi
I have tried assigning extra disk space at startup with the following but the error persists:
minikube start --disk-size 64g
The issue is that minikube can't resize the disk of an existing VM.
How you resize the VM disk depends on the hypervisor driver (xhyve, virtualbox, hyper-v) and the disk type (qcow2, sparse, raw, etc.). For example, if you have:
/Users/username/.minikube/machines/minikube/minikube.rawdisk
You can do something like this:
$ cd /Users/username/.minikube/machines/minikube
$ mv minikube.rawdisk minikube.img
$ hdiutil resize -size 64g minikube.img
$ mv minikube.img minikube.rawdisk
$ minikube start
$ minikube ssh
Then in the VM:
$ sudo resize2fs /dev/vda1 # <-- or the disk of your VM
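You can confirm the new size took effect while still inside the VM:
$ df -h /    # should now report the enlarged filesystem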
Otherwise, if you don't care about the data in your VM:
$ rm -rf ~/.minikube
$ minikube start --disk-size 64g

Kubernetes MySQL pod getting killed due to memory issue

In my Kubernetes 1.11 cluster, a MySQL pod is getting killed due to an out-of-memory issue:
kernel: Out of memory: Kill process 8514 (mysqld) score 1011 or sacrifice child
kernel: Killed process 8514 (mysqld) total-vm:2019624kB, anon-rss:392216kB, file-rss:0kB, shmem-rss:0kB
kernel: java invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=828
kernel: java cpuset=dab20a22eebc2a23577c05d07fcb90116a4afa789050eb91f0b8c2747267d18e mems_allowed=0
kernel: CPU: 1 PID: 28667 Comm: java Kdump: loaded Not tainted 3.10.0-862.3.3.el7.x86_64 #1
My questions:
How do I prevent my pod from getting OOM-killed? Is there a deployment setting I need to enable?
What configuration prevents a new pod from being scheduled on a node when there is not enough memory available on that node?
We disabled the swap space. Do we also need to disable memory overcommitting on the host by setting /proc/sys/vm/overcommit_memory to 0?
When defining a Pod manifest, it is best practice to include a resources section with limits and requests for CPU and memory:
resources:
  limits:
    cpu: "1"
    memory: 512Mi
  requests:
    cpu: 500m
    memory: 256Mi
This definition places the pod into one of three Quality of Service (QoS) classes:
Guaranteed (every container has limits set and equal to its requests)
Burstable (some requests or limits are set, but the Guaranteed criteria aren't met)
BestEffort (no requests or limits at all)
and pods in the last class are the most expendable: they are the first to be evicted or OOM-killed when a node runs out of memory.
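Applied to the MySQL pod in question, a sketch of a Guaranteed-QoS spec (the image and sizes are illustrative, not tuned values):
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    resources:
      requests:      # requests equal to limits => Guaranteed QoS
        cpu: "1"
        memory: 1Gi
      limits:
        cpu: "1"
        memory: 1Gi
Setting requests also answers the scheduling question: the scheduler will not place the pod on a node whose allocatable memory can't cover the request.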