Kubernetes Pod CPU Resource Based on Instance Type

Suppose for a pod i define the resources as below
resources:
  requests:
    memory: 500Mi
    cpu: 500m
  limits:
    memory: 1000Mi
    cpu: 1000m
This means I would be requesting a minimum of 1/2 CPU core (i.e. 1/2 vCPU). In the cloud (AWS) there are different EC2 instance families. If we create a cluster using C4 or R4 instance types, does the performance change? Do we need to baseline the CPU usage against the instance family the pod will run on?
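For context, CPU requests are expressed in generic vCPU units, so 500m is half of whatever core the node provides, and throughput per vCPU generally does differ between instance families. If you want to baseline a workload against one specific family, a minimal sketch (assuming the cloud provider populates the well-known instance-type node label; the instance type, pod name, and image are placeholders) could pin the pod with a nodeSelector:
apiVersion: v1
kind: Pod
metadata:
  name: cpu-baseline-pod          # hypothetical name, for illustration
spec:
  nodeSelector:
    node.kubernetes.io/instance-type: c4.xlarge   # assumes the provider sets this label
  containers:
    - name: bench
      image: <your-benchmark-image>               # placeholder
      resources:
        requests:
          memory: 500Mi
          cpu: 500m
        limits:
          memory: 1000Mi
          cpu: 1000m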

Related

Node Allocation on kubernetes nodes

I am managing an on-prem Kubernetes cluster with 10 nodes whose configurations are not identical: 5 nodes have 64 cores and 125G RAM, and 5 nodes have 64 cores and 256G RAM.
I keep getting alerts that node CPU/memory usage is high, and I see pods getting restarted because certain nodes reach 92-95% CPU and memory usage. I want to apply CPU and memory allocation on the nodes so that utilization doesn't climb that high.
I tried manually editing the node configuration but that did not work.
Any leads for this will be helpful!
In K8s, you can limit the resource usage of pod containers and reserve some CPU/memory for each container to avoid this problem:
---
apiVersion: v1
kind: Pod
metadata:
  name: <pod name>
spec:
  containers:
    - name: c1
      image: ...
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
    - name: c2
      image: ...
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
Found the Kubernetes documentation for setting node-level allocatable resources.
Fixed it using the documents below:
https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable
https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/
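For reference, the node-allocatable mechanism from the first link boils down to reserving capacity in the kubelet configuration. A minimal sketch (the reservation and eviction values below are assumptions you would tune per node):
# KubeletConfiguration file passed to the kubelet via --config
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:              # held back for OS daemons (sshd, systemd, ...)
  cpu: "1"
  memory: 2Gi
kubeReserved:                # held back for the kubelet and container runtime
  cpu: "1"
  memory: 2Gi
evictionHard:                # kubelet evicts pods before the node itself is exhausted
  memory.available: "500Mi"
With this in place, the scheduler only sees the remaining allocatable capacity, so pods can no longer push the node to 92-95% usage.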

Why does DigitalOcean k8s node capacity show a subtracted value from the node pool config?

I'm running a 4vCPU 8GB Node Pool, but all of my nodes report this for Capacity:
Capacity:
  cpu:                4
  ephemeral-storage:  165103360Ki
  hugepages-2Mi:      0
  memory:             8172516Ki
  pods:               110
I'd expect it to show 8388608Ki (the equivalent of 8192Mi/8Gi).
How come?
Memory can be reserved for both system services (system-reserved) and the Kubelet itself (kube-reserved). https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ has details but DO is probably setting it up for you.
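To see how much of the raw capacity is actually schedulable on your nodes, compare the Capacity and Allocatable sections (a quick check, assuming kubectl access; replace <node-name> with one of your nodes):
kubectl describe node <node-name> | grep -A 6 -E "^(Capacity|Allocatable):"
# or just the allocatable figures as JSON:
kubectl get node <node-name> -o jsonpath='{.status.allocatable}'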

Filebeat 7.3.2 pod OOMKilled with 1Gi memory

I am running Filebeat as a DaemonSet with a 1Gi memory setting. My pods keep crashing with OOMKilled status.
Here is my limit setting:
resources:
  limits:
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 1Gi
What is the recommended memory setting to run Filebeat?
Thanks
In general, Filebeat's RAM usage is relative to how much work it is doing. You can limit the number of harvesters to try to reduce it, but overall you just need to run it uncapped and measure what normal usage is for your use case and scenario.
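If you do want to cap the number of harvesters, it is configured per input in filebeat.yml rather than in the pod spec. A minimal sketch (the path and values are illustrative assumptions):
filebeat.inputs:
  - type: log
    paths:
      - /var/log/containers/*.log   # illustrative path
    harvester_limit: 100            # cap concurrently open harvesters for this input
    close_inactive: 5m              # close idle file handles sooner so memory is released earlier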

Kubernetes MySQL pod getting killed due to memory issue

In my Kubernetes 1.11 cluster a MySQL pod is getting killed due to an out-of-memory issue:
> kernel: Out of memory: Kill process 8514 (mysqld) score 1011 or
> sacrifice child kernel: Killed process 8514 (mysqld)
> total-vm:2019624kB, anon-rss:392216kB, file-rss:0kB, shmem-rss:0kB
> kernel: java invoked oom-killer: gfp_mask=0x201da, order=0,
> oom_score_adj=828 kernel: java
> cpuset=dab20a22eebc2a23577c05d07fcb90116a4afa789050eb91f0b8c2747267d18e
> mems_allowed=0 kernel: CPU: 1 PID: 28667 Comm: java Kdump: loaded Not
> tainted 3.10.0-862.3.3.el7.x86_64 #1 kernel
My questions:
How do I prevent my pod from getting OOM-killed? Is there a deployment setting I need to enable?
What configuration prevents a new pod from being scheduled on a node when there is not enough memory available on that node?
We disabled swap space. Do we need to disable memory overcommitting at the host level by setting /proc/sys/vm/overcommit_memory to 0?
Thanks
SR
When defining a Pod manifest, it's a best practice to define the resources section with limits and requests for CPU and memory:
resources:
  limits:
    cpu: "1"
    memory: 512Mi
  requests:
    cpu: 500m
    memory: 256Mi
This definition places the pod into one of three Quality of Service (QoS) classes:
Guaranteed
Burstable
BestEffort
and pods in the last class are the most expendable when the node runs out of memory.
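For a database pod like MySQL, one common mitigation (a sketch, assuming the sizes fit your workload) is to set requests equal to limits so the pod lands in the Guaranteed class, which makes the OOM killer less likely to pick it over Burstable or BestEffort pods on the same node:
resources:
  requests:
    cpu: "1"
    memory: 2Gi      # illustrative size; match it to MySQL's real working set
  limits:
    cpu: "1"
    memory: 2Gi      # equal to the request, so the pod is classed as Guaranteed
Setting a memory request also addresses the second question: the scheduler only places a pod on a node whose allocatable memory still covers the request.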

Kubernetes: setting only container resource limits implies the same values for resource requests

I have a pod with only one container that has this resources configuration:
resources:
  limits:
    cpu: 1000m
    memory: 1000Mi
From the node where the pod is scheduled I read this:
CPU Requests  CPU Limits  Memory Requests  Memory Limits
1 (50%)       1 (50%)     1000Mi (12%)     1000Mi (12%)
Why are the resource requests set when I don't want that?
If you specify a limit for a container but no request, the container's request is set to match its limit, regardless of whether there is a default memory request for the namespace (see the Kubernetes docs on managing resources for containers).
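If you want the request to be lower than the limit, you have to set it explicitly (the values here are illustrative):
resources:
  requests:
    cpu: 250m        # explicit request, so it no longer defaults to the limit
    memory: 256Mi
  limits:
    cpu: 1000m
    memory: 1000Mi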