Kubernetes: issue with pod creation with ResourceQuota and LimitRange

I'm having trouble creating a pod when a ResourceQuota and a LimitRange are in place.
The ResourceQuota defines limits of cpu=2,memory=2Gi and requests of cpu=1,memory=1Gi for CPU and memory.
The LimitRange defines default requests and default limits, both cpu=1,memory=1Gi, which is within what the ResourceQuota allows.
When I create a pod with only limits (cpu=2,memory=2Gi) and no requests, it fails with:
forbidden: exceeded quota: compute-resources, requested: requests.cpu=2,requests.memory=2Gi, used: requests.cpu=0,requests.memory=0, limited: requests.cpu=1,requests.memory=1Gi
But the default request defined in the LimitRange is cpu=1,memory=1Gi, so I'm not sure where requests.cpu=2,requests.memory=2Gi is coming from.
My understanding is that if resource requests are not specified when creating the pod, they should be taken from the LimitRange default requests, which are within the quota, so I'm not sure why it is failing.
Please help here.
cloud_user@master-node:~$ k describe limitrange default-limitrange
Name: default-limitrange
Namespace: default
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container memory - - 1Gi 1Gi -
Container cpu - - 1 1 -
cloud_user@master-node:~$ k describe resourcequota compute-resources
Name: compute-resources
Namespace: default
Resource Used Hard
-------- ---- ----
limits.cpu 0 2
limits.memory 0 2Gi
pods 0 2
requests.cpu 0 1
requests.memory 0 1Gi
cloud_user@master-node:~$ k run nginx --image=nginx --restart=Never --limits=cpu=2,memory=2Gi
Error from server (Forbidden): pods "nginx" is forbidden: exceeded quota: compute-resources, requested: requests.cpu=2,requests.memory=2Gi, used: requests.cpu=0,requests.memory=0, limited: requests.cpu=1,requests.memory=1Gi
Here are the YAML files for the LimitRange and ResourceQuota:
apiVersion: v1
kind: LimitRange
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"LimitRange","metadata":{"annotations":{},"name":"default-limitrange","namespace":"default"},"spec":{"limits":[{"defaultRequest":{"cpu":"1","memory":"1Gi"},"type":"Container"}]}}
  creationTimestamp: "2020-03-28T08:05:40Z"
  name: default-limitrange
  namespace: default
  resourceVersion: "4966600"
  selfLink: /api/v1/namespaces/default/limitranges/default-limitrange
  uid: 3261f4d9-6339-478d-939c-395010b20aad
spec:
  limits:
  - default:
      cpu: "1"
      memory: 1Gi
    defaultRequest:
      cpu: "1"
      memory: 1Gi
    type: Container
apiVersion: v1
kind: ResourceQuota
metadata:
  creationTimestamp: "2020-03-28T07:40:03Z"
  name: compute-resources
  namespace: default
  resourceVersion: "4967263"
  selfLink: /api/v1/namespaces/default/resourcequotas/compute-resources
  uid: 8a94a396-0774-4b62-8140-5a5f463935ed
spec:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "2"
    requests.cpu: "1"
    requests.memory: 1Gi
status:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "2"
    requests.cpu: "1"
    requests.memory: 1Gi
  used:
    limits.cpu: "0"
    limits.memory: "0"
    pods: "0"
    requests.cpu: "0"
    requests.memory: "0"

This is documented here. If you specify a container's limit but not its request, the container is not assigned the default request from the LimitRange; instead, the container's request is set to match the limit specified when creating the pod. This is why requests.cpu=2,requests.memory=2Gi is being set: it matches the limits specified when creating the pod.
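For illustration, one way to stay within this quota is to specify requests explicitly so they do not default to the limit values. A minimal, hedged sketch (the pod name and values are chosen to fit the quota above, not taken from the original post):
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        cpu: "1"       # stays within requests.cpu: "1" in the quota
        memory: 1Gi    # stays within requests.memory: 1Gi
      limits:
        cpu: "2"       # stays within limits.cpu: "2"
        memory: 2Gi    # stays within limits.memory: 2Gi
With explicit requests, the quota check sees requests.cpu=1,requests.memory=1Gi instead of copying the limits.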

Related

OpenShift memory quota never released

Just upgraded my cluster from OpenShift 4.8 to 4.9, and now, after resources (requests/limits) are reserved for scheduled pods, they are never released upon completion. For example, I have a 4.1G memory ResourceQuota:
apiVersion: v1
kind: ResourceQuota
metadata:
  creationTimestamp: "2023-01-16T15:04:52Z"
  name: quota-test
  namespace: quota-test
  resourceVersion: "295372"
  uid: 6b686072-27f8-463f-a43a-d497ce35bc70
spec:
  hard:
    cpu: "1"
    memory: 1G
status:
  hard:
    cpu: "1"
    memory: 4.1G
  used:
    cpu: "0"
    memory: "0"
If I request the same 1G naked pod 5 times (letting each complete), the fifth never runs:
pods "quota-test-296384742" is forbidden: exceeded quota: quota-test, requested: requests.memory=1Gi, used: requests.memory=4402341k, limited: requests.memory=4.1Gi
And now I see:
apiVersion: v1
kind: ResourceQuota
metadata:
  creationTimestamp: "2023-01-16T15:04:52Z"
  name: quota-test
  namespace: quota-test
  resourceVersion: "295372"
  uid: 6b686072-27f8-463f-a43a-d497ce35bc70
spec:
  hard:
    cpu: "1"
    memory: 1G
status:
  hard:
    cpu: "1"
    memory: 1G
  used:
    cpu: "0"
    memory: "4402341478"
Any idea what could be going on? Which openshift/kubernetes controller is responsible for freeing resources when pods complete? Any ideas where to start looking?

Can we add ephemeral-storage in LimitRange?

In a LimitRange in Kubernetes we can limit RAM and CPU; can we do the same for ephemeral-storage as well?
To set default requests and limits on ephemeral storage for each container in mytest namespace:
apiVersion: v1
kind: LimitRange
metadata:
  name: storage-limit
  namespace: mytest
spec:
  limits:
  - default:
      ephemeral-storage: 2Gi
    defaultRequest:
      ephemeral-storage: 1Gi
    type: Container
To change the scope to Pod, simply change it to type: Pod.
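As a quick check (a sketch, assuming the LimitRange above has been applied), a container created in mytest without any ephemeral-storage settings should get the defaults injected, which can be verified with:
# Confirm the defaults in the LimitRange
kubectl describe limitrange storage-limit -n mytest
# Create a pod without specifying ephemeral-storage (pod name is illustrative)
kubectl run storage-test --image=nginx -n mytest
# Inspect the resources injected into the container spec
kubectl get pod storage-test -n mytest -o jsonpath='{.spec.containers[0].resources}'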

Kubernetes HPA on AKS is failing with error 'missing request for cpu'

I am trying to setup HPA for my AKS cluster. Following is the Kubernetes manifest file:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: XXXXXX\tools\kompose.exe convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: loginservicedapr
  name: loginservicedapr
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: loginservicedapr
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: XXXXXX\kompose.exe convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.service: loginservicedapr
    spec:
      containers:
      - image: XXXXXXX.azurecr.io/loginservicedapr:latest
        imagePullPolicy: ""
        name: loginservicedapr
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: XXXXXXXXXX\kompose.exe convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: loginservicedapr
  name: loginservicedapr
spec:
  type: LoadBalancer
  ports:
  - name: "5016"
    port: 5016
    targetPort: 80
  selector:
    io.kompose.service: loginservicedapr
status:
  loadBalancer: {}
Following is my HPA yaml file:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: loginservicedapr-hpa
spec:
  maxReplicas: 10 # define max replica count
  minReplicas: 3  # define min replica count
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: loginservicedapr
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Pods
    pods:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
But the HPA is failing with the error 'FailedGetResourceMetric': 'missing request for cpu'.
I have also installed metrics-server (though I'm not sure whether that was required) using the following command:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
But still I am getting the following output when I do 'kubectl describe hpa':
Name: loginservicedapr-hpa
Namespace: default
Labels: fluxcd.io/sync-gc-mark=sha256.Y6dHhIOs-hNYbDmJ25Ijw1YsJ_8f0PH3Vlruj5rfbFk
Annotations: fluxcd.io/sync-checksum: d5c0d9eda6db0c40f1e5e23e1356d0268dbccc8f
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"autoscaling/v1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{"fluxcd.io/sync-checksum":"d5c0d9eda6db0c40f1e5...
CreationTimestamp: Wed, 08 Jul 2020 17:19:47 +0530
Reference: Deployment/loginservicedapr
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 50%
Min replicas: 3
Max replicas: 10
Deployment pods: 3 current / 3 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: missing request for cpu
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedComputeMetricsReplicas 33m (x1234 over 6h3m) horizontal-pod-autoscaler Invalid metrics (1 invalid out of 1), last error was: failed to get cpu utilization: missing request for cpu
Warning FailedGetResourceMetric 3m11s (x1340 over 6h3m) horizontal-pod-autoscaler missing request for cpu
I have 2 more services deployed along with 'loginservicedapr', but I have not written HPAs for them; I have included resource limits in their YAML files as well. How do I make this HPA work?
resources appears twice in your pod spec:
resources:       # once here
  requests:
    cpu: 250m
  limits:
    cpu: 500m
ports:
- containerPort: 80
resources: {}    # another here, clearing it
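A sketch of the corrected container spec, keeping only the first resources block and dropping the trailing resources: {} so the CPU request the HPA needs survives (values copied from the question):
containers:
- image: XXXXXXX.azurecr.io/loginservicedapr:latest
  name: loginservicedapr
  resources:
    requests:
      cpu: 250m      # the HPA uses this request to compute CPU utilization
    limits:
      cpu: 500m
  ports:
  - containerPort: 80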
I was able to resolve the issue by changing the following in my kubernetes manifest file from this:
resources:
  requests:
    cpu: 250m
  limits:
    cpu: 500m
to the following:
resources:
  requests:
    cpu: "250m"
  limits:
    cpu: "500m"
HPA worked after that. Following is the GitHub link which gave the solution:
https://github.com/kubernetes-sigs/metrics-server/issues/237
But I did not add any Internal IP address command or anything else.
This is typically related to the metrics server.
Make sure you are not seeing anything unusual about the metrics server installation:
# This should show you metrics (they come from the metrics server)
$ kubectl top pods
$ kubectl top nodes
or check the logs:
$ kubectl logs <metrics-server-pod>
Also, check your kube-controller-manager logs for HPA-related event entries.
Furthermore, if you'd like to check whether your pods are missing requests/limits, you can view the full spec of a running pod managed by the HPA:
$ kubectl get pod <pod-name> -o=yaml
Some other people have had luck deleting and recreating the HPA, too.
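For example, a hedged sketch of recreating the HPA imperatively with kubectl autoscale (names and thresholds taken from the question):
# Remove the existing HPA
kubectl delete hpa loginservicedapr-hpa
# Recreate it against the deployment with the same min/max/target
kubectl autoscale deployment loginservicedapr --cpu-percent=50 --min=3 --max=10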

Resource Quota applied before LimitRanger in Kubernetes for Pod without specified limits

With Kubernetes v1.16.8, both the ResourceQuota and LimitRanger admission plugins are enabled by default, so I did not have to add them to the admission plugins in kube-apiserver.
In my case, I use the following LimitRange:
apiVersion: v1
items:
- apiVersion: v1
  kind: LimitRange
  metadata:
    name: mem-limit-range
    namespace: test
  spec:
    limits:
    - default:
        memory: 512Mi
      defaultRequest:
        memory: 256Mi
      type: Container
and it adds the default limit for memory usage in a new Pod without specified limits, as expected.
The Pod's definition is as simple as possible:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod-ctr
    image: redis
When I describe the created pod, it has acquired the limit value from the LimitRange.
Everything is fine!
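For reference, the injected defaults can be confirmed like this (a sketch, assuming the pod was created in the test namespace):
# Expected to show the LimitRange defaults: limits memory 512Mi, requests memory 256Mi
kubectl get pod test-pod -n test -o jsonpath='{.spec.containers[0].resources}'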
The problem occurs when I try to enforce a ResourceQuota for the namespace.
The ResourceQuota looks like this:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
When I delete and recreate the pod it will not be created.
The resourcequota will result in the following error:
Error from server (Forbidden): error when creating "test-pod.yml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu
In other words, the resourcequota is applied before LimitRanger so it does not let me create pods without a specified limit.
Is there a way to enforce LimitRanger first and then the ResourceQuota?
How do you apply them to your namespaces?
I would like to have developers that do not specify limits in the pod definition to be able to acquire the defaults while enforcing the resource quota as well.
TL;DR:
Error from server (Forbidden): error when creating "test-pod.yml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu
You didn't set a default limit for CPU, according to ResourceQuota Docs:
If quota is enabled in a namespace for compute resources like cpu and memory, users must specify requests or limits for those values; otherwise, the quota system may reject pod creation.
This is why the pod is not being created. Add a cpu-limit.yaml:
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
  namespace: test
spec:
  limits:
  - default:
      cpu: 1
    defaultRequest:
      cpu: 0.5
    type: Container
The LimitRanger admission controller injects the defaults at pod creation time, and yes, it injects the default values prior to the ResourceQuota validation.
Another minor issue I found is that not all of your YAMLs contain the namespace: test line under metadata; that is important for assigning the resources to the right namespace, and I fixed it in the example below.
Reproduction:
Created the namespace, then applied the mem-limit and quota first, as you mentioned:
$ kubectl create namespace test
namespace/test created
$ cat mem-limit.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: test
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
$ cat quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
  namespace: test
spec:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
$ kubectl apply -f mem-limit.yaml
limitrange/mem-limit-range created
$ kubectl apply -f quota.yaml
resourcequota/mem-cpu-demo created
$ kubectl describe resourcequota -n test
Name: mem-cpu-demo
Namespace: test
Resource Used Hard
-------- ---- ----
limits.cpu 0 2
limits.memory 0 2Gi
$ kubectl describe limits -n test
Name: mem-limit-range
Namespace: test
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container memory - - 256Mi 512Mi -
Now if I try to create the pod:
$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: test
spec:
  containers:
  - name: test-pod-ctr
    image: redis
$ kubectl apply -f pod.yaml
Error from server (Forbidden): error when creating "pod.yaml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu
Same error you faced, because no default CPU limit is set. We'll create and apply one:
$ cat cpu-limit.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
  namespace: test
spec:
  limits:
  - default:
      cpu: 1
    defaultRequest:
      cpu: 0.5
    type: Container
$ kubectl apply -f cpu-limit.yaml
limitrange/cpu-limit-range created
$ kubectl describe limits cpu-limit-range -n test
Name: cpu-limit-range
Namespace: test
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container cpu - - 500m 1 -
Now with the cpu limitRange in action, let's create the pod and inspect it:
$ kubectl apply -f pod.yaml
pod/test-pod created
$ kubectl describe pod test-pod -n test
Name: test-pod
Namespace: test
Status: Running
...{{Suppressed output}}...
Limits:
cpu: 1
memory: 512Mi
Requests:
cpu: 500m
memory: 256Mi
Our pod was created with the enforced limitRange.
If you have any questions, let me know in the comments.
The error clearly tells you how you are supposed to handle the issue:
Error from server (Forbidden): error when creating "test-pod.yml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu
Your LimitRange object defines a default memory limit, but not a default CPU limit. Your quota restricts both memory and CPU, so you must specify CPU and memory requests/limits in your Pod manifest. The LimitRange takes care of the default memory, but there is no default for CPU, so either add a CPU request/limit to the Pod manifest or add a default CPU limit to your LimitRange.
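For illustration, a single LimitRange that provides defaults for both resources (a sketch combining the values used elsewhere in this thread; the object name is illustrative):
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limit-range   # illustrative name
  namespace: test
spec:
  limits:
  - default:                  # default limits injected when a container specifies none
      cpu: "1"
      memory: 512Mi
    defaultRequest:           # default requests injected when a container specifies none
      cpu: 500m
      memory: 256Mi
    type: Container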

Kubernetes -> Memory consumption per namespace

Is there a way to get the memory consumption per namespace on Kubernetes?
At a high level we can get this from kubectl:
$ kubectl describe resourcequota -n my-namespace
Name: compute-resources
Namespace: default
Resource Used Hard
-------- ---- ----
limits.cpu 12 48
limits.memory 1024M 120Gi
requests.cpu 250m 24
requests.memory 512M 60Gi
Note: this will only work if you create a ResourceQuota.
You can create a ResourceQuota object like this:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
However, this quota places the following requirements on pods in the namespace:
Every container must have a memory request, memory limit, CPU request, and CPU limit.
The memory request total for all containers must not exceed 1 GiB.
The memory limit total for all containers must not exceed 2 GiB.
The CPU request total for all containers must not exceed 1 cpu.
The CPU limit total for all containers must not exceed 2 cpu.
Pod example template
apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo
spec:
  containers:
  - name: quota-mem-cpu-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "600Mi"
        cpu: "400m"
To check the resource consumption use the following command:
kubectl --context <cluster_context> describe resourcequota -n my-namespace
Source:
https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/
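As a side note, if you want actual (not requested) memory usage and metrics-server is installed, kubectl top can be used per namespace; the aggregation below is only an illustrative shell sketch and assumes every value is reported in Mi:
# Per-pod usage in the namespace
kubectl top pods -n my-namespace
# Rough namespace total (illustrative; assumes all memory values are reported in Mi)
kubectl top pods -n my-namespace --no-headers | awk '{sum += $3} END {print sum " Mi (approx)"}'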