Node allocation on Kubernetes nodes

I am managing a Kubernetes cluster with 10 on-prem nodes whose configurations are not identical: 5 nodes have 64 cores and 125G RAM, and 5 nodes have 64 cores and 256G RAM.
I keep getting alerts that node CPU/memory usage is high, and I see pods being restarted because certain nodes reach 92-95% CPU and memory utilization. I want to apply CPU and memory allocation on the nodes so that utilization doesn't climb that high.
I tried manually editing the node configuration, but that did not work.
Any leads on this would be helpful!

In Kubernetes, you can limit resource usage for pod containers, and reserve some CPU/memory for each container, to avoid this problem:
---
apiVersion: v1
kind: Pod
metadata:
  name: <pod name>
spec:
  containers:
  - name: c1
    image: ...
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: c2
    image: ...
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

Found the Kubernetes documentation for setting node-level allocatable resources.
Fixed using the documents below:
https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable
https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/
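
The gist of those documents: each node's kubelet can carve reservations out of the node's capacity, and the scheduler only packs pods into what is left (Allocatable = Capacity minus kube-reserved, minus system-reserved, minus the eviction threshold). A minimal sketch of the relevant KubeletConfiguration fields; the values are illustrative, not tuned recommendations:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Reserve resources for Kubernetes daemons (kubelet, container runtime).
kubeReserved:
  cpu: "1"
  memory: "2Gi"
# Reserve resources for OS daemons (sshd, systemd, ...).
systemReserved:
  cpu: "1"
  memory: "2Gi"
# Evict pods before the node itself runs out of memory.
evictionHard:
  memory.available: "500Mi"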

Related

Why does DigitalOcean k8s node capacity show a value lower than the node pool config?

I'm running a 4vCPU 8GB Node Pool, but all of my nodes report this for Capacity:
Capacity:
  cpu:                4
  ephemeral-storage:  165103360Ki
  hugepages-2Mi:      0
  memory:             8172516Ki
  pods:               110
I'd expect it to show 8388608Ki (the equivalent of 8192Mi/8Gi).
How come?
Memory can be reserved for both system services (system-reserved) and the kubelet itself (kube-reserved). https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ has the details, but DigitalOcean is probably setting this up for you.
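
To see what was reserved on your own nodes, compare the Capacity and Allocatable sections in the node description (the node name is a placeholder):

kubectl describe node <node-name>

Allocatable is Capacity minus kube-reserved, minus system-reserved, minus the eviction threshold, per the document linked above.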

Kubernetes Pod CPU Resource Based on Instance Type

Suppose for a pod I define the resources as below:
resources:
  requests:
    memory: 500Mi
    cpu: 500m
  limits:
    memory: 1000Mi
    cpu: 1000m
This means I would be requesting a minimum of 1/2 CPU core (or 1/2 vCPU). In the cloud (AWS) we have different EC2 families. If we create a cluster using C4 or R4 instance types, does the performance change? Do we need to baseline the CPU usage based on the instance family the pod will run on?

Filebeat 7.3.2 pod OOMKilled with 1Gi memory

I am running Filebeat as a DaemonSet with a 1Gi memory setting, and my pods keep crashing with OOMKilled status.
Here is my limit setting:
resources:
  limits:
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 1Gi
What is the recommended memory setting to run Filebeat?
Thanks
The RAM usage of Filebeat is relative to how much work it is doing, in general. You can limit the number of harvesters to try to reduce it, but overall you just need to run it uncapped and measure what normal usage is for your use case and scenario.
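
If you do want to bound the harvester count, the log input exposes a harvester_limit setting. A minimal filebeat.yml sketch; the path and limit value are illustrative assumptions:

filebeat.inputs:
- type: log
  paths:
    - /var/log/containers/*.log   # illustrative path
  # Cap concurrent harvesters for this input; 0 (the default) means unlimited.
  harvester_limit: 50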

Configure MicroK8s

I am migrating from minikube to MicroK8s and I want to change the MicroK8s configuration to control the resources it can use (CPU, memory, etc.).
In minikube we can use commands like below to set the amount of resources for minikube:
minikube config set memory 8192
minikube config set cpus 2
But I don't know how to do this in MicroK8s. I tried the commands below (with and without sudo):
microk8s.config set cpus 4
microk8s.config set cpu 4
And they returned:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: VORCBDRVJUSUZJQ0FURS0tLS0...
    server: https://10.203.101.163:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    username: admin
    password: ...
But when I describe that node, I see that MicroK8s is using 8 CPUs:
Capacity:
  cpu:                8
  ephemeral-storage:  220173272Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32649924Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  219124696Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32547524Ki
  pods:               110
How can I change the config of MicroK8s?
This is a misunderstanding of how MicroK8s works.
Unlike minikube, MicroK8s does not provision any VM for you; it runs directly on your host machine, so all of the host's resources are available to MicroK8s.
So, to keep your cluster's resource usage within bounds, you have to manage it with Kubernetes pod/container resource limits.
Say your host has 4 CPUs and you don't want your MicroK8s cluster to use more than half of its capacity.
You will need to set the limits below based on the number of running pods. For a single pod, it would look like this:
resources:
  requests:
    memory: "64Mi"
    cpu: 2
  limits:
    memory: "128Mi"
    cpu: 2
On macOS, MicroK8s runs inside a Multipass VM, so you can resize that VM instead.
First, stop multipassd:
sudo launchctl unload /Library/LaunchDaemons/com.canonical.multipassd.plist
Next, edit the config file:
sudo su -
vi /var/root/Library/Application\ Support/multipassd/multipassd-vm-instances.json
Then start multipassd again:
sudo launchctl load /Library/LaunchDaemons/com.canonical.multipassd.plist
Source: https://github.com/canonical/multipass/issues/1158
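
For orientation, the instance entries in that file carry the VM sizing. A rough sketch of the shape only; the instance name, the keys (num_cores, mem_size, disk_space) and the values are assumptions based on the issue linked above, and the schema may differ between Multipass versions:

{
  "microk8s-vm": {
    "num_cores": 4,
    "mem_size": "8589934592",
    "disk_space": "53687091200"
  }
}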

Kubernetes: setting only container resource limits implies the same values for resource requests

I have a pod with only one container that has this resources configuration:
resources:
  limits:
    cpu: 1000m
    memory: 1000Mi
From the node where the pod is scheduled I read this:
CPU Requests  CPU Limits  Memory Requests  Memory Limits
1 (50%)       1 (50%)     1000Mi (12%)     1000Mi (12%)
Why the "resources requests" are setted when I dont' want that?
If you specify a container's limit but not its request, the request is set to match the limit, even if there is a default memory request for the namespace. (Kubernetes docs)
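
If you want a request lower than the limit, declare it explicitly. A minimal sketch; the values are illustrative:

resources:
  requests:
    cpu: 250m
    memory: 250Mi
  limits:
    cpu: 1000m
    memory: 1000Mi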