Kubernetes: Memory consumption per namespace

Is there a way to get the memory consumption per namespace on Kubernetes?

At a high level we can get this from kubectl:
$ kubectl describe resourcequota -n my-namespace
Name:            compute-resources
Namespace:       my-namespace
Resource         Used   Hard
--------         ----   ----
limits.cpu       12     48
limits.memory    1024M  120Gi
requests.cpu     250m   24
requests.memory  512M   60Gi
Note: this will only work if you have created a ResourceQuota in the namespace.
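If you only need the actual current usage rather than quota accounting, and metrics-server is installed in the cluster, kubectl top reports per-pod consumption for a namespace; summing the memory column gives a rough per-namespace figure. A minimal sketch (assumes metrics-server is running and that all values are reported in Mi):
$ kubectl top pods -n my-namespace
$ kubectl top pods -n my-namespace --no-headers | awk '{sum += $3} END {print sum "Mi (approx.)"}'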

You can create a ResourceQuota object like this:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
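A ResourceQuota is a namespaced object, so apply it to the namespace you want to track (a sketch; the file name is a placeholder):
$ kubectl apply -f mem-cpu-quota.yaml -n my-namespace
$ kubectl describe resourcequota mem-cpu-demo -n my-namespace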
With that ResourceQuota in place, the namespace is subject to the following constraints:
- Every Container must have a memory request, memory limit, CPU request, and CPU limit.
- The memory request total for all Containers must not exceed 1 GiB.
- The memory limit total for all Containers must not exceed 2 GiB.
- The CPU request total for all Containers must not exceed 1 CPU.
- The CPU limit total for all Containers must not exceed 2 CPU.
Example Pod template:
apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo
spec:
  containers:
  - name: quota-mem-cpu-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "600Mi"
        cpu: "400m"
To check the resource consumption, use the following command:
kubectl --context <cluster_context> describe resourcequota -n my-namespace
Source:
https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/

Related

Kubernetes Limit Ranges Override

Let's say I set the following LimitRange in namespace X:
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range
spec:
  limits:
  - default:
      memory: 1Gi
      cpu: 0.5
    defaultRequest:
      memory: 256Mi
      cpu: 0.2
    type: Container
These limits are sufficient for most pods in namespace X. Some pods need more resources, but a pod requesting more than default.memory / default.cpu will be rejected.
My question is, is there any way (in a manifest or otherwise) to override these limits so that a pod can request more than the limit set for the namespace? I know it kind of defeats the purpose of Limit Ranges, but I'm still wondering if there's a way to do it.
In your example, you do not constrain memory/CPU to a minimum or maximum. You only set "defaults" that are applied to every Pod created without explicit values. With your given LimitRange, you can still override the defaults by setting custom limits/requests in the Deployment of your Pod.
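For instance, a Pod like this (a sketch; the pod and namespace names are placeholders) declares its own, larger values, and the LimitRange defaults are simply not applied to it:
apiVersion: v1
kind: Pod
metadata:
  name: big-pod
  namespace: x
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: 1Gi     # above the 256Mi defaultRequest
        cpu: "1"
      limits:
        memory: 2Gi     # above the 1Gi default limit; accepted because no max is set
        cpu: "1"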
If you would like to set a minimum/maximum, you have to add something like this to your LimitRange:
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-min-max-demo-lr
spec:
  limits:
  - max:
      cpu: "800m"
    min:
      cpu: "200m"
    type: Container
https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/#create-a-limitrange-and-a-pod
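With such a constraint in place, a container whose CPU limit exceeds the max is rejected by the LimitRanger admission plugin at creation time. A sketch of a Pod that would be refused (names are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: constraint-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "500m"
      limits:
        cpu: "900m"   # above the 800m max in the LimitRange, so admission fails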

Resource Quota applied before LimitRanger in Kubernetes for Pod without specified limits

While using Kubernetes v1.16.8, both the ResourceQuota and LimitRanger admission plugins are enabled by default, and I did not have to add them to the admission plugins of kube-apiserver.
In my case, I use the following LimitRange:
apiVersion: v1
items:
- apiVersion: v1
  kind: LimitRange
  metadata:
    name: mem-limit-range
    namespace: test
  spec:
    limits:
    - default:
        memory: 512Mi
      defaultRequest:
        memory: 256Mi
      type: Container
and it adds the default memory limit to a new Pod that has no limits specified, as expected.
The Pod's definition is as simple as possible:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod-ctr
    image: redis
When I describe the created Pod, it has acquired the limit value from the LimitRange.
Everything is fine!
The problem occurs when I try to enforce a ResourceQuota for the namespace.
The ResourceQuota looks like this:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
When I delete and recreate the pod, it will not be created.
The ResourceQuota results in the following error:
Error from server (Forbidden): error when creating "test-pod.yml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu
In other words, the resourcequota is applied before LimitRanger so it does not let me create pods without a specified limit.
Is there a way to enforce LimitRanger first and then the ResourceQuota?
How do you apply them to your namespaces?
I would like developers who do not specify limits in the pod definition to acquire the defaults, while still enforcing the resource quota.
TL;DR:
Error from server (Forbidden): error when creating "test-pod.yml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu
You didn't set a default limit for CPU. According to the ResourceQuota docs:
If quota is enabled in a namespace for compute resources like cpu and memory, users must specify requests or limits for those values; otherwise, the quota system may reject pod creation.
This is why the pod is not being created. Add a cpu-limit.yaml:
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
  namespace: test
spec:
  limits:
  - default:
      cpu: 1
    defaultRequest:
      cpu: 0.5
    type: Container
The LimitRanger admission plugin injects the defaults at Pod creation (admission) time, and yes, it injects the default values prior to the ResourceQuota validation.
Another minor issue I found is that not all of your YAMLs contain the namespace: test line under metadata; that's important to assign the resources to the right namespace. I fixed it in the example below.
Reproduction:
I created the namespace and first applied the memory LimitRange and the quota, as you mentioned:
$ kubectl create namespace test
namespace/test created
$ cat mem-limit.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: test
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
$ cat quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
  namespace: test
spec:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
$ kubectl apply -f mem-limit.yaml
limitrange/mem-limit-range created
$ kubectl apply -f quota.yaml
resourcequota/mem-cpu-demo created
$ kubectl describe resourcequota -n test
Name:          mem-cpu-demo
Namespace:     test
Resource       Used  Hard
--------       ----  ----
limits.cpu     0     2
limits.memory  0     2Gi
$ kubectl describe limits -n test
Name:       mem-limit-range
Namespace:  test
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   memory    -    -    256Mi            512Mi          -
Now if I try to create the pod:
$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: test
spec:
  containers:
  - name: test-pod-ctr
    image: redis
$ kubectl apply -f pod.yaml
Error from server (Forbidden): error when creating "pod.yaml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu
Same error you faced, because there is no default CPU limit set. We'll create and apply one:
$ cat cpu-limit.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
  namespace: test
spec:
  limits:
  - default:
      cpu: 1
    defaultRequest:
      cpu: 0.5
    type: Container
$ kubectl apply -f cpu-limit.yaml
limitrange/cpu-limit-range created
$ kubectl describe limits cpu-limit-range -n test
Name:       cpu-limit-range
Namespace:  test
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    -    500m             1              -
Now, with the CPU LimitRange in place, let's create the pod and inspect it:
$ kubectl apply -f pod.yaml
pod/test-pod created
$ kubectl describe pod test-pod -n test
Name:       test-pod
Namespace:  test
Status:     Running
...{{Suppressed output}}...
    Limits:
      cpu:     1
      memory:  512Mi
    Requests:
      cpu:     500m
      memory:  256Mi
Our pod was created with the enforced limitRange.
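You can also confirm the injected defaults straight from the Pod spec (same pod and namespace as above):
$ kubectl get pod test-pod -n test -o jsonpath='{.spec.containers[0].resources}'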
If you have any questions, let me know in the comments.
The error message clearly states how you are supposed to handle the issue.
Error from server (Forbidden): error when creating "test-pod.yml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu
Your LimitRange object defines a default memory limit, but not a CPU one. Your quota restricts both memory and CPU, so you must specify both CPU and memory limits for your Pod. The LimitRange takes care of the memory default, but there is no CPU default. In that case, either add explicit CPU (and memory) resources to the Pod manifest, or add a default CPU limit to your LimitRange.
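If you go with the first option, a sketch of the Pod manifest with explicit resources that fit inside the quota could look like this (the values are only an example):
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: test
spec:
  containers:
  - name: test-pod-ctr
    image: redis
    resources:
      requests:
        cpu: "250m"
        memory: 256Mi
      limits:
        cpu: "500m"     # satisfies the quota's limits.cpu requirement
        memory: 512Mi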

Kubernetes Issue with pod creation with resourcequota and limitrange

I'm having trouble creating a pod with a ResourceQuota and a LimitRange in place.
The ResourceQuota has limits of cpu=2,memory=2Gi and requests of cpu=1,memory=1Gi defined for CPU and memory.
The LimitRange has default requests and default limits, both set to cpu=1,memory=1Gi, which is within what is defined in the ResourceQuota.
When creating the pod with only limits (cpu=2,memory=2Gi) and no requests (cpu,memory), it fails with
forbidden: exceeded quota: compute-resources, requested: requests.cpu=2,requests.memory=2Gi, used: requests.cpu=0,requests.memory=0, limited: requests.cpu=1,requests.memory=1Gi
but as per the default request defined in the LimitRange it should be cpu=1,memory=1Gi; I am not sure where it is taking requests.cpu=2,requests.memory=2Gi from.
As I understand it, if resource requests are not mentioned when creating the pod, they should be taken from the LimitRange default requests, which are within the quota, so I am not sure why it is failing.
Please help here.
cloud_user@master-node:~$ k describe limitrange default-limitrange
Name:       default-limitrange
Namespace:  default
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   memory    -    -    1Gi              1Gi            -
Container   cpu       -    -    1                1              -
cloud_user@master-node:~$ k describe resourcequota compute-resources
Name:            compute-resources
Namespace:       default
Resource         Used  Hard
--------         ----  ----
limits.cpu       0     2
limits.memory    0     2Gi
pods             0     2
requests.cpu     0     1
requests.memory  0     1Gi
cloud_user@master-node:~$ k run nginx --image=nginx --restart=Never --limits=cpu=2,memory=2Gi
Error from server (Forbidden): pods "nginx" is forbidden: exceeded quota: compute-resources, requested: requests.cpu=2,requests.memory=2Gi, used: requests.cpu=0,requests.memory=0, limited: requests.cpu=1,requests.memory=1Gi
Here I'm adding the YAML files for the LimitRange and ResourceQuota:
apiVersion: v1
kind: LimitRange
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"LimitRange","metadata":{"annotations":{},"name":"default-limitrange","namespace":"default"},"spec":{"limits":[{"defaultRequest":{"cpu":"1","memory":"1Gi"},"type":"Container"}]}}
  creationTimestamp: "2020-03-28T08:05:40Z"
  name: default-limitrange
  namespace: default
  resourceVersion: "4966600"
  selfLink: /api/v1/namespaces/default/limitranges/default-limitrange
  uid: 3261f4d9-6339-478d-939c-395010b20aad
spec:
  limits:
  - default:
      cpu: "1"
      memory: 1Gi
    defaultRequest:
      cpu: "1"
      memory: 1Gi
    type: Container
---
apiVersion: v1
kind: ResourceQuota
metadata:
  creationTimestamp: "2020-03-28T07:40:03Z"
  name: compute-resources
  namespace: default
  resourceVersion: "4967263"
  selfLink: /api/v1/namespaces/default/resourcequotas/compute-resources
  uid: 8a94a396-0774-4b62-8140-5a5f463935ed
spec:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "2"
    requests.cpu: "1"
    requests.memory: 1Gi
status:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "2"
    requests.cpu: "1"
    requests.memory: 1Gi
  used:
    limits.cpu: "0"
    limits.memory: "0"
    pods: "0"
    requests.cpu: "0"
    requests.memory: "0"
This is documented behaviour: if you specify a container's limit but not its request, the container is not assigned the default request from the LimitRange; instead, the container's request is set to match the limit specified when creating the pod. That is why requests.cpu=2,requests.memory=2Gi is being set, matching the limit specified while creating the pod and exceeding the quota's requests.cpu=1,requests.memory=1Gi.
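So the quickest fix is to set the requests explicitly so they stay within the quota instead of being copied from the limits. A sketch using the same --requests/--limits flags as in the question (these flags are deprecated and removed in newer kubectl releases, where you would put the resources in a manifest instead):
$ kubectl run nginx --image=nginx --restart=Never --requests=cpu=1,memory=1Gi --limits=cpu=2,memory=2Gi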

Difference between "cpu" and "requests.cpu"

I am trying to create a resource quota for a namespace in Kubernetes. While writing the YAML file for the ResourceQuota, what should I specify for the CPU requests: "cpu" or "requests.cpu"? Also, is there any official documentation which explains the difference between the two? I went through the OpenShift docs, which say that both of these are the same and can be used interchangeably.
requests.cpu is used in a ResourceQuota, which is applied at the namespace level:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
whereas cpu (under resources in the container spec) is applied at the Pod level:
apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo
spec:
  containers:
  - name: quota-mem-cpu-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "600Mi"
        cpu: "400m"
For further details, please refer to the link below:
https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/
You can also use the plain cpu form if you follow the Kubernetes documentation.
The difference between prefixing memory or cpu with requests. or limits. in a quota is described here: https://kubernetes.io/docs/concepts/policy/resource-quotas/#requests-vs-limits
The end result is the same: in a ResourceQuota, cpu is equivalent to requests.cpu (and memory to requests.memory). Whichever form you use, once the quota tracks requests or limits for a resource, every container in the namespace must specify that request or limit.
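For illustration, a sketch of a quota using the plain form; cpu and memory here constrain the sum of requests, exactly as requests.cpu and requests.memory would:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    cpu: "1"          # equivalent to requests.cpu
    memory: 1Gi       # equivalent to requests.memory
    limits.cpu: "2"
    limits.memory: 2Gi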

Why is memory usage greater than what I set on the Kubernetes node?

I allocated resources to only 1 pod, with a 650MB memory limit (about 30% of node memory; together with the other built-in pods, the total memory limit comes to only 69% of the node).
However, while the pod is handling a task, its usage stays within 650MB, yet the overall usage of the node reaches 94%.
Why does this happen when the upper limit is supposed to be 69%? Is it due to the other built-in pods which do not set a limit? How do I prevent this, as my pod sometimes errors out when memory usage goes above 100%?
My allocation settings (kubectl describe nodes), and the kubectl top nodes / kubectl top pods output when idle and when running the task, were posted as screenshots (not reproduced here).
Further tested behaviour:
1. Prepare a deployment, pods and a service under the namespace test-ns.
2. Since only kube-system and test-ns have pods, assign a 1000Mi default limit to each of them (checked via kubectl describe nodes), aiming at less than 2GB in total.
3. Assuming the memory used in kube-system and test-ns stays below 2GB, i.e. less than 100%, why can the reported memory usage reach 106%?
In the .yaml file:
apiVersion: v1
kind: LimitRange
metadata:
  name: default-mem-limit
  namespace: test-ns
spec:
  limits:
  - default:
      memory: 1000Mi
    type: Container
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-mem-limit
  namespace: kube-system
spec:
  limits:
  - default:
      memory: 1000Mi
    type: Container
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devops-deployment
  namespace: test-ns
  labels:
    app: devops-pdf
spec:
  selector:
    matchLabels:
      app: devops-pdf
  replicas: 2
  template:
    metadata:
      labels:
        app: devops-pdf
    spec:
      containers:
      - name: devops-pdf
        image: dev.azurecr.io/devops-pdf:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        resources:
          requests:
            cpu: 600m
            memory: 500Mi
          limits:
            cpu: 600m
            memory: 500Mi
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: devops-pdf
  namespace: test-ns
spec:
  type: LoadBalancer
  ports:
  - port: 8007
  selector:
    app: devops-pdf
This effect is most likely caused by the 4 Pods that run on that node without a memory limit specified, shown as 0 (0%). Of course 0 doesn't mean the pod cannot use even a single byte of memory, as no program can start without using memory; instead it means that there is no limit and it can use as much as is available. Also, programs running outside pods (ssh, cron, ...) are included in the total used figure, but are not limited by Kubernetes (by cgroups).
Now Kubernetes sets up the kernel OOM adjustment values in a tricky way to favour containers that are under their memory request, making it more likely to kill processes in containers that are between their memory request and limit, and most likely to kill processes in containers with no memory limits. However, this only works fairly in the long run, and sometimes the kernel can kill your favourite process in your favourite container that is behaving well (using less than its memory request). See https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#node-oom-behavior
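The QoS class that Kubernetes assigns to each Pod (Guaranteed, Burstable or BestEffort) is what drives this OOM scoring, and you can inspect it directly, for example:
$ kubectl get pods --all-namespaces -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,QOS:.status.qosClass'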
The pods without a memory limit in this particular case come from the AKS system itself, so setting their memory limit in the pod templates is not an option, as there is a reconciler that will restore it (eventually). To remedy the situation I suggest that you create a LimitRange object in the kube-system namespace that will assign a memory limit to all pods without a limit (as they are created):
apiVersion: v1
kind: LimitRange
metadata:
  name: default-mem-limit
  namespace: kube-system
spec:
  limits:
  - default:
      memory: 150Mi
    type: Container
(You will need to delete the already existing Pods without a memory limit for this to take effect; they will get recreated)
This is not going to completely eliminate the problem, as you might still end up with an overcommitted node; however, the memory usage will make sense and the OOM events will be more predictable.
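A minimal sketch of rolling this out (the file name is a placeholder; deleting the unlimited pods lets their controllers recreate them with the new default applied):
$ kubectl apply -f kube-system-mem-limit.yaml
$ kubectl describe limits default-mem-limit -n kube-system
$ kubectl delete pod <pod-without-limit> -n kube-system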