I created a simple pod with resource limits. The expectation is that the pod gets evicted when it uses more memory than the limit specified. To test this, how do I artificially fill the pod's memory? I can stress the CPU with dd if=/dev/zero of=/dev/null, but not memory. Can someone help me with this, please? I tried the stress utility, but no luck.
apiVersion: v1
kind: Pod
metadata:
  name: nginx # Name of our pod
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx:1.7.1 # Image version
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 100m
        memory: 256Mi
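One way to do this (a sketch, reusing the polinux/stress image that appears further down this page; the pod name and sizes are placeholders) is to ask stress to allocate more memory than the limit. Note that exceeding a container's memory limit normally gets the container OOM-killed and restarted rather than the pod evicted; eviction is what happens under node-level memory pressure.
apiVersion: v1
kind: Pod
metadata:
  name: memory-hog # hypothetical name
spec:
  containers:
  - name: memory-hog
    image: polinux/stress
    resources:
      requests:
        memory: 256Mi
      limits:
        memory: 256Mi
    command: ["stress"]
    # allocate ~300M, more than the 256Mi limit, so the kernel OOM-kills the container
    args: ["--vm", "1", "--vm-bytes", "300M", "--vm-hang", "1"]
Writing a large file into a tmpfs mount such as /dev/shm (for example with dd) should also count against the container's memory limit, which mirrors the dd trick you already use for CPU.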
Related
I am creating a Pod with one container. The container has a memory request of 200 MiB and a memory limit of 400 MiB. Here is the configuration file for the Pod:
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "200Mi"
      limits:
        memory: "400Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
The above YAML is not working as intended: the args section of the configuration file does not provide the arguments to the container when it starts. I tried to create the pod with these memory limits and requests, but it fails to be created.
Just upgraded my cluster from OpenShift 4.8 to 4.9 and now, after resources (requests/limits) are allocated to scheduled pods, the resources are never released when the pods complete. For example, I have a 4.1G memory ResourceQuota:
apiVersion: v1
kind: ResourceQuota
metadata:
  creationTimestamp: "2023-01-16T15:04:52Z"
  name: quota-test
  namespace: quota-test
  resourceVersion: "295372"
  uid: 6b686072-27f8-463f-a43a-d497ce35bc70
spec:
  hard:
    cpu: "1"
    memory: 1G
status:
  hard:
    cpu: "1"
    memory: 4.1G
  used:
    cpu: "0"
    memory: "0"
If I request the same 1G naked pod 5 times (letting each run to completion), the fifth never runs:
pods "quota-test-296384742" is forbidden: exceeded quota: quota-test, requested: requests.memory=1Gi, used: requests.memory=4402341k, limited: requests.memory=4.1Gi
And now I see:
apiVersion: v1
kind: ResourceQuota
metadata:
  creationTimestamp: "2023-01-16T15:04:52Z"
  name: quota-test
  namespace: quota-test
  resourceVersion: "295372"
  uid: 6b686072-27f8-463f-a43a-d497ce35bc70
spec:
  hard:
    cpu: "1"
    memory: 1G
status:
  hard:
    cpu: "1"
    memory: 1G
  used:
    cpu: "0"
    memory: "4402341478"
Any idea what could be going on? Which openshift/kubernetes controller is responsible for freeing resources when pods complete? Any ideas where to start looking?
I have a project to modify the scheduling policy. I deployed a large number of pods at the same time, but they do not seem to be scheduled as expected. I think kube-scheduler caches the resource usage of nodes, so the pods need to be deployed in two rounds.
The pod YAML is as follows; I create multiple pods through a shell loop:
apiVersion: v1
kind: Pod
metadata:
  name: ${POD_NAME}
  labels:
    name: multischeduler-example
spec:
  schedulerName: my-scheduler
  containers:
  - name: pod-with-second-annotation-container
    image: ibmcom/pause:3.1
    resources:
      requests:
        memory: "1Gi"
        cpu: "1"
      limits:
        memory: "2Gi"
        cpu: "2"
I want to know the interval at which kube-scheduler refreshes its cache of node resource usage.
I really appreciate any help with this.
In OpenShift, how can I enlarge the memory available to an existing pod from 2GB to 16GB? Currently it always runs out of memory.
You can raise the memory limit as described in step 1 below, and lower the OOM-kill priority as described in step 2.
Check if "resources.limits.memory" of your pod is configured sufficient size or not.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    resources:
      requests:
        memory: "2Gi"
      limits:
        memory: "16Gi" # <--- if your application reaches this memory usage, an Out of Memory event is triggered
  :
2. Configure resources.requests.memory and resources.limits.memory to the same size to give the pod the lowest OOM-kill priority (it is the least likely to be killed).
Refer to Quality of Service classes for more details.
// If limits and optionally requests are set (not equal to 0) for all resources and they are equal,
// then the container is classified as Guaranteed.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    resources:
      requests:
        memory: "2Gi" # <--- set the same size for memory
      limits:
        memory: "2Gi" # <--- in the requests and limits sections
  :
Add this section to your deploymentConfig file.
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: Never
    resources:
      limits:
        memory: "16Gi"
      requests:
        memory: "2Gi"
If the problem persists, I would suggest looking at the HPA (Horizontal Pod Autoscaler), which increases the number of pods based on CPU and memory utilization so that your application pods never get killed. Check out this link for more info:
https://docs.openshift.com/container-platform/3.11/dev_guide/pod_autoscaling.html
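A minimal sketch of such an autoscaler, assuming a cluster that serves the autoscaling/v2 API and a DeploymentConfig named frontend (the name, replica bounds, and threshold are placeholders):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: frontend # the workload to scale (placeholder)
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80 # add replicas when average memory use crosses 80% of requests
Keep in mind that an HPA adds replicas; it does not raise the limit of an individual pod, so a single pod that overruns its own limit can still be OOM-killed.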
Most OOMKilled problems occur with Java applications, where memory usage is usually driven by the heap. You can limit the application's memory by setting an environment variable in your deploymentConfig under the spec section.
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: Never
    resources:
      limits:
        memory: "16Gi"
      requests:
        memory: "2Gi"
    env:
    - name: JVM_OPTS
      value: "-Xms2048M -Xmx4048M"
Or you could use this:
env:
- name: JVM_OPTS
  value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1"
By using either of these environment variables, your application will respect the memory limit set at the pod/container level (it will not go beyond its limit and will run garbage collection as it approaches the limit).
I hope this solves your problem.
I am trying to create a resource quota for a namespace in Kubernetes. While writing the YAML file for the ResourceQuota, what should I specify for CPU requests: cpu or requests.cpu? Also, is there any official documentation that explains the difference between the two? I went through the OpenShift docs, which say that the two are the same and can be used interchangeably.
requests.cpu is used in a ResourceQuota, which is applied at the namespace level.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
whereas cpu is applied at the pod level.
apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo
spec:
  containers:
  - name: quota-mem-cpu-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "600Mi"
        cpu: "400m"
For further details, please refer to the link below:
https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/
You can use the cpu form if you follow the Kubernetes documentation.
The difference between adding requests or limit before memory or cpu in quota is described here: https://kubernetes.io/docs/concepts/policy/resource-quotas/#requests-vs-limits
The final result is the same, but if you use the requests or limits form, every container in the pod has to have those values specified.
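If you do not want every pod author to have to spell those out by hand, a LimitRange in the same namespace can inject defaults for containers that omit them; a minimal sketch with illustrative values:
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits # hypothetical name
spec:
  limits:
  - type: Container
    defaultRequest: # injected when a container omits resources.requests
      cpu: 250m
      memory: 256Mi
    default: # injected when a container omits resources.limits
      cpu: 500m
      memory: 512Mi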