Why does DigitalOcean k8s node capacity show a value subtracted from the node pool config?

I'm running a 4vCPU 8GB Node Pool, but all of my nodes report this for Capacity:
Capacity:
cpu: 4
ephemeral-storage: 165103360Ki
hugepages-2Mi: 0
memory: 8172516Ki
pods: 110
I'd expect it to show 8388608Ki (the equivalent of 8192Mi/8Gi).
How come?

Memory can be reserved both for system services (system-reserved) and for the kubelet itself (kube-reserved). https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ has the details, but DigitalOcean is most likely setting this up for you.
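If you manage the kubelet yourself, these reservations live in the kubelet configuration. A minimal sketch, with illustrative values that are assumptions on my part (DigitalOcean's actual numbers will differ and are managed for you on DOKS):
# Minimal kubelet configuration sketch; the reservation values below are
# illustrative assumptions, not DigitalOcean's actual settings.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  cpu: 100m
  memory: 200Mi        # held back for OS daemons (systemd, sshd, ...)
kubeReserved:
  cpu: 100m
  memory: 300Mi        # held back for the kubelet and container runtime
evictionHard:
  memory.available: 100Mi
The gap between Capacity and Allocatable in kubectl describe node is roughly the sum of these reservations plus the hard eviction threshold.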

Related

Kubernetes Pod CPU Resource Based on Instance Type

Suppose for a pod I define the resources as below:
resources:
  requests:
    memory: 500Mi
    cpu: 500m
  limits:
    memory: 1000Mi
    cpu: 1000m
This means I would be requesting a minimum of 1/2 CPU core (or 1/2 vCPU). In the cloud (AWS) we have different EC2 families. If we create a cluster using C4 or R4 instance types, does the performance change? Do we need to baseline the CPU usage based on the instance family the pod is going to run on?

How can I dimension the Nodes (cpu, memory) in a Kind Cluster?

I am a newbie and I may be asking a stupid question, but I could not find answers on kind or on Stack Overflow, so I dare to ask:
I run kind (Kubernetes in Docker) on an Ubuntu machine with 32GB memory and a 120GB disk.
I need to run a Cassandra cluster on this Kind cluster, and each node needs at least 0.5 CPU and 1GB memory.
When I look at the node, it gives this:
Capacity:
cpu: 8
ephemeral-storage: 114336932Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32757588Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 114336932Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32757588Ki
pods: 110
So in theory, there are more than enough resources to go around. However, when I try to deploy the Cassandra deployment, the first Pod stays in 'Pending' status because of a lack of resources. And indeed, the node resources look like this:
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests   Limits
--------           --------   ------
cpu                100m (1%)  100m (1%)
memory             50Mi (0%)  50Mi (0%)
ephemeral-storage  0 (0%)     0 (0%)
hugepages-1Gi      0 (0%)     0 (0%)
hugepages-2Mi      0 (0%)     0 (0%)
The node does not actually get access to the available resources: it stays limited to 10% of a CPU and 50Mi of memory.
So, reading the exchange above and having read #887, I understand that I need to configure Docker on my host machine so that it allows the containers simulating the kind nodes to grab more resources. But then... how can I pass such parameters to kind so that they are taken into account when creating the cluster?
Sorry for this post: I finally found out that the issue was related to the storage class not being properly configured in the spec of the Cassandra cluster, and not to the dimensioning of the nodes.
I changed the cassandra-statefulset.yaml file to indicate the 'standard' storage class: this storage class is provisioned by default on a kind cluster since version 0.7. And it works fine.
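For reference, a sketch of the relevant volumeClaimTemplates section, assuming kind's default StorageClass named 'standard'; the claim name and requested size are illustrative:
# Sketch of the StatefulSet's volumeClaimTemplates pointing at kind's default
# StorageClass; claim name and requested size are illustrative assumptions.
volumeClaimTemplates:
- metadata:
    name: cassandra-data
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: standard   # provisioned by default on kind >= 0.7
    resources:
      requests:
        storage: 1Gi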
Since Cassandra is resource hungry, you may, depending on the machine, have to increase the timeout parameters so that the Pods are not considered faulty during the deployment of the Cassandra cluster. I had to increase the timeouts from 15s and 5s to 25s and 15s, respectively; a sketch of that change follows.
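Assuming the timeouts referred to are the readiness probe settings in the same manifest, the adjustment would look roughly like this (the probe command is an assumption; the timings follow the description above):
# Sketch of a relaxed readiness probe for the slow-starting Cassandra container;
# the probe command is an assumption, the timings follow the description above.
readinessProbe:
  exec:
    command: ["/bin/bash", "-c", "/ready-probe.sh"]
  initialDelaySeconds: 25   # raised from 15s
  timeoutSeconds: 15        # raised from 5s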
This topic should be closed.

What is the actual node's CPU and memory capacity here?

I would like to know what the actual memory and CPU capacity are, expressed in Mi and m, in the following results:
Capacity:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 2
memory: 2050168Ki
pods: 20
Allocatable:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 2
memory: 2050168Ki
pods: 20
2 CPUs (2 cores, i.e. 2000m) and 2050168Ki of RAM (about 2002Mi, roughly 2GB). That also happens to be the Minikube default.

Kubernetes MySQL pod getting killed due to memory issue

In my Kubernetes 1.11 cluster, a MySQL pod is getting killed due to an out-of-memory issue:
> kernel: Out of memory: Kill process 8514 (mysqld) score 1011 or
> sacrifice child kernel: Killed process 8514 (mysqld)
> total-vm:2019624kB, anon-rss:392216kB, file-rss:0kB, shmem-rss:0kB
> kernel: java invoked oom-killer: gfp_mask=0x201da, order=0,
> oom_score_adj=828 kernel: java
> cpuset=dab20a22eebc2a23577c05d07fcb90116a4afa789050eb91f0b8c2747267d18e
> mems_allowed=0 kernel: CPU: 1 PID: 28667 Comm: java Kdump: loaded Not
> tainted 3.10.0-862.3.3.el7.x86_64 #1 kernel
My questions:
How can I prevent my pod from getting OOM-killed? Is there a deployment setting I need to enable?
What configuration prevents a new pod from being scheduled on a node when there is not enough memory available on that node?
We disabled swap. Do we need to disable the memory overcommit setting on the host level by setting /proc/sys/vm/overcommit_memory to 0?
Thanks
SR
When defining a Pod manifest, it is a best practice to define a resources section with limits and requests for CPU and memory:
resources:
  limits:
    cpu: "1"
    memory: 512Mi
  requests:
    cpu: 500m
    memory: 256Mi
This definition determines which of the three Quality of Service (QoS) categories the pod falls into:
Guaranteed
Burstable
BestEffort
and pods in the last category (BestEffort) are the most expendable: they are the first to be killed when the node runs out of memory.
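For a pod you never want killed before others, such as the MySQL pod above, a sketch of a Guaranteed-QoS spec, with requests equal to limits; the name, image, and sizes are illustrative assumptions:
# Illustrative sketch only: requests == limits for every container puts the pod
# in the Guaranteed QoS class, making it the last candidate for eviction or OOM kill.
apiVersion: v1
kind: Pod
metadata:
  name: mysql-guaranteed      # hypothetical name
spec:
  containers:
  - name: mysql
    image: mysql:5.7          # illustrative image tag
    resources:
      requests:
        cpu: "1"
        memory: 1Gi
      limits:
        cpu: "1"
        memory: 1Gi
The memory request also addresses the scheduling question: the scheduler only places a pod on a node whose remaining allocatable memory covers the pod's request, so sizing requests realistically keeps new pods off nodes that are already short on memory.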

Kubernetes: setting only container resource limits implies the same values for resource requests

I have a pod with only one container that has this resources configuration:
resources:
  limits:
    cpu: 1000m
    memory: 1000Mi
From the node where the pod is scheduled I read this:
CPU Requests  CPU Limits  Memory Requests  Memory Limits
1 (50%)       1 (50%)     1000Mi (12%)     1000Mi (12%)
Why are the resource requests set when I don't want that?
If you specify a container's limit but not its request, the request is set to match the limit, regardless of whether there is a default memory request for the namespace (Kubernetes docs).
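If you want the requests lower than the limits, declare them explicitly; a sketch with illustrative values:
# Illustrative sketch: explicit requests lower than the limits (Burstable QoS),
# instead of letting the request default to the limit value.
resources:
  limits:
    cpu: 1000m
    memory: 1000Mi
  requests:
    cpu: 250m
    memory: 256Mi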