I'm trying to understand what the memory capacity means when running: kubectl describe node <mynode>
Snippet of the Result:
status:
  capacity:
    cpu: '16'
    ephemeral-storage: 129886128Ki
    hugepages-1Gi: '0'
    hugepages-2Mi: '0'
    memory: 65861880Ki
    pods: '110'
  allocatable:
    cpu: 15820m
    ephemeral-storage: '119703055367'
    hugepages-1Gi: '0'
    hugepages-2Mi: '0'
    memory: 61361400Ki
    pods: '110'
I know that the memory under allocatable is calculated from the documented resource reservations; my question is not about that.
Why is the capacity 65861880Ki when this is a VM with 64G of memory? I assume that means 64GiB rather than GB, but even then 65861880Ki is lower than 64GiB (67108864KiB), a difference of 1246984KiB (= 1.19GiB or about 1.3GB).
What am I missing?
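If you want to see where the reported number comes from, a quick cross-check (just a sketch, assuming you can open a shell on the node; <mynode> is the node from above) is to compare the kernel's MemTotal with what the kubelet reports:
# On the node itself: total memory as seen by the kernel
grep MemTotal /proc/meminfo
# From the cluster: the memory capacity the kubelet reports for that node
kubectl get node <mynode> -o jsonpath='{.status.capacity.memory}'
In my experience the two values match, i.e. the capacity reflects the kernel's MemTotal rather than the amount of RAM assigned to the VM.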
As the question states, what is the difference between these two forms?
kubespawner_override:
  memory:
    limit: 64G
    guarantee: 64G
  cpu:
    limit: 16
    guarantee: 16
--- or
kubespawner_override:
  mem_limit: 64G
  mem_guarantee: 64G
  cpu_limit: 16
  cpu_guarantee: 16
Set user memory limits
Spawner mem_guarantee
I want to configure the single-user pods so that they are guaranteed at least these values.
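For context, this is the shape of the profile I am trying to define (a sketch following the Zero to JupyterHub profileList layout; the display_name and description are placeholders, and I have used the flat key form here):
singleuser:
  profileList:
    - display_name: "Large server"            # placeholder name
      description: "16 CPU / 64G guaranteed"  # placeholder description
      kubespawner_override:
        mem_limit: 64G
        mem_guarantee: 64G
        cpu_limit: 16
        cpu_guarantee: 16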
I'm running a 4 vCPU / 8 GB node pool, but all of my nodes report this for Capacity:
Capacity:
  cpu:                4
  ephemeral-storage:  165103360Ki
  hugepages-2Mi:      0
  memory:             8172516Ki
  pods:               110
I'd expect it to show 8388608Ki (the equivalent of 8192Mi/8Gi).
How come?
Memory can be reserved both for system services (system-reserved) and for the kubelet itself (kube-reserved). https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ has the details, but DO is probably setting this up for you.
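For completeness, on a cluster where you control the kubelet yourself, those reservations are set in the KubeletConfiguration along these lines (just a sketch; the sizes below are made-up examples, not what DO actually reserves):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:          # reserved for OS system daemons
  cpu: 100m
  memory: 200Mi
kubeReserved:            # reserved for the kubelet and container runtime
  cpu: 100m
  memory: 300Mi
evictionHard:            # memory withheld as the hard eviction threshold
  memory.available: "100Mi"
Allocatable is then roughly the capacity minus these three reservations.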
Suppose for a pod I define the resources as below:
resources:
  requests:
    memory: 500Mi
    cpu: 500m
  limits:
    memory: 1000Mi
    cpu: 1000m
This means I would be requesting a minimum of 1/2 a CPU core (or 1/2 vCPU). In the cloud (AWS) we have different EC2 instance families. If we create a cluster using C4 or R4 instance types, does the performance change? Do we need to baseline the CPU usage based on the instance family on which we are going to run the pod?
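If the answer turns out to be "yes, it depends on the family", one way I imagine baselining on a single instance type is to pin the pod with a nodeSelector on the well-known instance-type label (a sketch; the pod name, image, and the c4.xlarge value are just examples):
apiVersion: v1
kind: Pod
metadata:
  name: cpu-baseline                              # example name
spec:
  nodeSelector:
    node.kubernetes.io/instance-type: c4.xlarge   # example instance type
  containers:
    - name: app
      image: nginx                                # placeholder image
      resources:
        requests:
          memory: 500Mi
          cpu: 500m
        limits:
          memory: 1000Mi
          cpu: 1000m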
I am a newbie and I may be asking a stupid question, but I could not find answers on Kind or on Stack Overflow, so I dare to ask:
I run kind (Kubernetes-in-Docker) on an Ubuntu machine with 32 GB of memory and a 120 GB disk.
I need to run a Cassandra cluster on this kind cluster, and each node needs at least 0.5 CPU and 1GB of memory.
When I look at the node, it gives this:
Capacity:
  cpu:                8
  ephemeral-storage:  114336932Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32757588Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  114336932Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32757588Ki
  pods:               110
so in theory, there are more than enough resources to go around. However, when I try to deploy the Cassandra deployment, the first Pod stays in 'Pending' status because of a lack of resources. And indeed, the node resources look like this:
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                100m (1%)   100m (1%)
  memory             50Mi (0%)   50Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
The node does not actually get access to the available resources: it stays limited to 10% of a CPU and 50MB of memory.
So, reading the exchange above and having read #887, I understand that I need to configure Docker on my host machine so that it allows the containers simulating the Kind nodes to grab more resources. But then... how can I give such parameters to Kind so that they are taken into account when creating the cluster?
Sorry for this post: I finally found out that the issue was related to the storage class not being properly configured in the spec of the Cassandra cluster, and not to the dimensioning of the nodes.
I changed the cassandra-statefulset.yaml file to point to the 'standard' storage class: this storage class has been provisioned by default on a KinD cluster since version 0.7. And it works fine.
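Concretely, the change boils down to pointing the volumeClaimTemplates at that class (a sketch of the relevant part of cassandra-statefulset.yaml; the claim name and size are what I remember from the tutorial, adjust as needed):
  volumeClaimTemplates:
    - metadata:
        name: cassandra-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: standard      # default provisioner on KinD >= 0.7
        resources:
          requests:
            storage: 1Gi                # example size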
Since Cassandra is resource hungry, and depending on the machine, you may have to increase the timeout parameters so that the Pods are not considered faulty during the deployment of the Cassandra cluster. I had to increase the timeouts from 15s and 5s to 25s and 15s, respectively.
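For reference, I am talking about the readiness probe in the same file; after the change it looked roughly like this (a sketch, assuming the two values are initialDelaySeconds and timeoutSeconds):
        readinessProbe:
          exec:
            command:
              - /bin/bash
              - -c
              - /ready-probe.sh
          initialDelaySeconds: 25   # was 15
          timeoutSeconds: 15        # was 5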
This topic should be closed.