I just upgraded my cluster from OpenShift 4.8 to 4.9, and now the resources (requests/limits) charged against quota for scheduled pods are never released when those pods complete. For example, I have a 4.1G memory ResourceQuota:
apiVersion: v1
kind: ResourceQuota
metadata:
  creationTimestamp: "2023-01-16T15:04:52Z"
  name: quota-test
  namespace: quota-test
  resourceVersion: "295372"
  uid: 6b686072-27f8-463f-a43a-d497ce35bc70
spec:
  hard:
    cpu: "1"
    memory: 4.1G
status:
  hard:
    cpu: "1"
    memory: 4.1G
  used:
    cpu: "0"
    memory: "0"
If I request the same 1G naked pod five times (letting each run to completion), the fifth never runs:
pods "quota-test-296384742" is forbidden: exceeded quota: quota-test, requested: requests.memory=1Gi, used: requests.memory=4402341k, limited: requests.memory=4.1Gi
And now I see:
apiVersion: v1
kind: ResourceQuota
metadata:
  creationTimestamp: "2023-01-16T15:04:52Z"
  name: quota-test
  namespace: quota-test
  resourceVersion: "295372"
  uid: 6b686072-27f8-463f-a43a-d497ce35bc70
spec:
  hard:
    cpu: "1"
    memory: 4.1G
status:
  hard:
    cpu: "1"
    memory: 4.1G
  used:
    cpu: "0"
    memory: "4402341478"
Any idea what could be going on? Which OpenShift/Kubernetes controller is responsible for freeing quota usage when pods complete? Any ideas where to start looking?
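One quick check (a diagnostic sketch, assuming the quota-test namespace above; the resource quota controller runs inside kube-controller-manager and periodically resyncs usage): list pods in a terminal phase and compare their requests against what the controller still reports as used:

# Completed pods should no longer be charged against the quota
kubectl get pods -n quota-test --field-selector=status.phase=Succeeded \
  -o custom-columns=NAME:.metadata.name,MEM_REQ:.spec.containers[0].resources.requests.memory

# What the controller currently believes is in use
kubectl get resourcequota quota-test -n quota-test -o jsonpath='{.status.used.memory}{"\n"}'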
With a LimitRange in k8s we can simply limit RAM and CPU; can we do that for ephemeral-storage as well?
To set default requests and limits on ephemeral storage for each container in the mytest namespace:
apiVersion: v1
kind: LimitRange
metadata:
  name: storage-limit
  namespace: mytest
spec:
  limits:
  - default:
      ephemeral-storage: 2Gi
    defaultRequest:
      ephemeral-storage: 1Gi
    type: Container
To change the scope to Pod, simply change to type: Pod.
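To verify the defaults are being applied (a quick sketch; the pod name test-pod is made up), create a pod without a resources section and inspect what the LimitRange admission plugin filled in:

kubectl run test-pod -n mytest --image=nginx
kubectl get pod test-pod -n mytest -o jsonpath='{.spec.containers[0].resources}{"\n"}'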
Before I start: I'm running Kubernetes on a Mac.
minikube: 1.17.0
metrics-server: 1.8+
vpa: vpa-release-0.8
My issue is that VPA doesn't scale up my pod; it just keeps recreating pods. I followed the GKE VPA example: I set the deployment's resource requests to cpu: 100m, memory: 50Mi and deployed the VPA. It gives me a recommendation, and updateMode is Auto. But it keeps recreating the pod without changing the resource requests, which I can see by running kubectl describe pod on the recreated pod.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: my-app-deployment
  updatePolicy:
    updateMode: "Auto"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app-deployment
  template:
    metadata:
      labels:
        app: my-app-deployment
    spec:
      containers:
      - name: my-container
        image: k8s.gcr.io/ubuntu-slim:0.1
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        command: ["/bin/sh"]
        args: ["-c", "while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done"]
Status:
  Conditions:
    Last Transition Time:  2021-02-03T03:13:38Z
    Status:                True
    Type:                  RecommendationProvided
  Recommendation:
    Container Recommendations:
      Container Name:  my-container
      Lower Bound:
        Cpu:     25m
        Memory:  262144k
      Target:
        Cpu:     548m
        Memory:  262144k
      Uncapped Target:
        Cpu:     548m
        Memory:  262144k
      Upper Bound:
        Cpu:     100G
        Memory:  100T
Events:  <none>
I tried with kind as well. There it does recreate pods with the new resource requests, but they never run; they stay Pending because the node doesn't have enough resources. I think the reason VPA doesn't work properly is that minikube (or I) didn't create multiple nodes. Do you think that's related?
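One thing worth checking (a diagnostic sketch; this assumes a recent VPA release, which as far as I know records a vpaUpdates annotation on pods it has patched) is whether the VPA admission controller actually rewrote the requests on the recreated pods:

# Requests the scheduler actually sees on each replica
kubectl get pods -l app=my-app-deployment \
  -o custom-columns=NAME:.metadata.name,CPU:.spec.containers[0].resources.requests.cpu,MEM:.spec.containers[0].resources.requests.memory

# Pods patched by the VPA admission controller should carry a vpaUpdates annotation
kubectl get pods -l app=my-app-deployment \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.metadata.annotations.vpaUpdates}{"\n"}{end}'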
I'm having trouble creating a pod when a ResourceQuota and a LimitRange are in place.
The ResourceQuota defines limits of cpu=2,memory=2Gi and requests of cpu=1,memory=1Gi for CPU and memory.
The LimitRange has default requests and default limits, both cpu=1,memory=1Gi, which is within what the ResourceQuota defines.
When creating the pod with only limits (cpu=2,memory=2Gi) and no requests, it fails with:
forbidden: exceeded quota: compute-resources, requested: requests.cpu=2,requests.memory=2Gi, used: requests.cpu=0,requests.memory=0, limited: requests.cpu=1,requests.memory=1Gi
But the default request defined in the LimitRange is cpu=1,memory=1Gi, so I'm not sure where requests.cpu=2,requests.memory=2Gi is coming from.
As I understand it, if resource requests are not specified when creating the pod, they should be taken from the LimitRange's default requests, which are within range, so I'm not sure why it is failing.
Please help.
cloud_user@master-node:~$ k describe limitrange default-limitrange
Name:       default-limitrange
Namespace:  default
Type       Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----       --------  ---  ---  ---------------  -------------  -----------------------
Container  memory    -    -    1Gi              1Gi            -
Container  cpu       -    -    1                1              -
cloud_user@master-node:~$ k describe resourcequota compute-resources
Name:            compute-resources
Namespace:       default
Resource         Used  Hard
--------         ----  ----
limits.cpu       0     2
limits.memory    0     2Gi
pods             0     2
requests.cpu     0     1
requests.memory  0     1Gi
cloud_user@master-node:~$ k run nginx --image=nginx --restart=Never --limits=cpu=2,memory=2Gi
Error from server (Forbidden): pods "nginx" is forbidden: exceeded quota: compute-resources, requested: requests.cpu=2,requests.memory=2Gi, used: requests.cpu=0,requests.memory=0, limited: requests.cpu=1,requests.memory=1Gi
Here are the YAML files for the LimitRange and ResourceQuota:
apiVersion: v1
kind: LimitRange
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"LimitRange","metadata":{"annotations":{},"name":"default-limitrange","namespace":"default"},"spec":{"limits":[{"defaultRequest":{"cpu":"1","memory":"1Gi"},"type":"Container"}]}}
  creationTimestamp: "2020-03-28T08:05:40Z"
  name: default-limitrange
  namespace: default
  resourceVersion: "4966600"
  selfLink: /api/v1/namespaces/default/limitranges/default-limitrange
  uid: 3261f4d9-6339-478d-939c-395010b20aad
spec:
  limits:
  - default:
      cpu: "1"
      memory: 1Gi
    defaultRequest:
      cpu: "1"
      memory: 1Gi
    type: Container
apiVersion: v1
kind: ResourceQuota
metadata:
  creationTimestamp: "2020-03-28T07:40:03Z"
  name: compute-resources
  namespace: default
  resourceVersion: "4967263"
  selfLink: /api/v1/namespaces/default/resourcequotas/compute-resources
  uid: 8a94a396-0774-4b62-8140-5a5f463935ed
spec:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "2"
    requests.cpu: "1"
    requests.memory: 1Gi
status:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "2"
    requests.cpu: "1"
    requests.memory: 1Gi
  used:
    limits.cpu: "0"
    limits.memory: "0"
    pods: "0"
    requests.cpu: "0"
    requests.memory: "0"
This is documented here. If you specify a container's limit but not its request, the container is not assigned the default request from the LimitRange; instead, its request is set to match the limit specified when creating the pod. This is why requests.cpu=2,requests.memory=2Gi is being set: it matches the limits specified when creating the pod.
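A minimal illustration of that defaulting (a hypothetical pod, not from the question): with only limits set, the API server copies them into requests before the quota check runs, so the LimitRange default never applies:

apiVersion: v1
kind: Pod
metadata:
  name: limits-only-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      limits:        # no requests given, so requests are defaulted
        cpu: "2"     # to these same values, not to the LimitRange's
        memory: 2Gi  # defaultRequest of cpu=1,memory=1Gi

Setting requests explicitly (for example cpu=1,memory=1Gi) alongside the limits lets the pod pass the quota check.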
Is there a way to get the memory consumption per namespace on Kubernetes?
At a high level, we can get this from kubectl:
$ kubectl describe resourcequota -n my-namespace
Name:            compute-resources
Namespace:       default
Resource         Used   Hard
--------         ----   ----
limits.cpu       12     48
limits.memory    1024M  120Gi
requests.cpu     250m   24
requests.memory  512M   60Gi
Note: this works only if you have created a ResourceQuota in that namespace.
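If you haven't created one and only need a rough number right away, an alternative (assuming metrics-server is installed in the cluster) is to read actual usage from the metrics API:

# Per-pod CPU/memory usage; sum the MEMORY column for a namespace total
kubectl top pods -n my-namespace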
Otherwise, it's possible to create a ResourceQuota object like this:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
However, there are some prerequisites for the pods' consumption to be tracked under this quota:
Every container must have a memory request, memory limit, CPU request, and CPU limit.
The memory request total for all containers must not exceed 1 GiB.
The memory limit total for all containers must not exceed 2 GiB.
The CPU request total for all containers must not exceed 1 CPU.
The CPU limit total for all containers must not exceed 2 CPUs.
Example pod template:
apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo
spec:
  containers:
  - name: quota-mem-cpu-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "600Mi"
        cpu: "400m"
To check the resource consumption, use the following command:
kubectl --context <cluster_context> describe resourcequota -n my-namespace
Source:
https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/
I am trying to create a resource quota for a namespace in Kubernetes. While writing the YAML file for the ResourceQuota, what should I specify for the CPU requests: cpu or requests.cpu? Also, is there any official documentation that specifies the difference between the two? I went through the OpenShift docs, which say that both are the same and can be used interchangeably.
requests.cpu is used in a ResourceQuota, which is applied at the namespace level:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
whereas cpu is applied at the pod level:
apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo
spec:
  containers:
  - name: quota-mem-cpu-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "600Mi"
        cpu: "400m"
For further details, please refer to the link below:
https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/
You can use the cpu form if you follow the Kubernetes documentation.
The difference between adding requests or limit before memory or cpu in quota is described here: https://kubernetes.io/docs/concepts/policy/resource-quotas/#requests-vs-limits
The final result is the same, but whichever form you use, every container in an incoming pod must have that resource specified (explicitly or via a LimitRange default), or the pod will be rejected.
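For reference, this sketch shows the short form side by side with the limits form; per the resource-quotas page above, cpu and memory here are the same constraints as requests.cpu and requests.memory (the quota name is made up):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo-short
spec:
  hard:
    cpu: "1"      # same as requests.cpu: "1"
    memory: 1Gi   # same as requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi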