How to specify the memory request and limit in a Kubernetes pod

I am creating a Pod with one container. The container has a memory request of 200 MiB and a memory limit of 400 MiB. Here is the configuration file for creating the Pod:
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "200Mi"
      limits:
        memory: "400Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
The YAML above is not working as intended: the args section is not providing arguments to the container when it starts. I tried to create the Kubernetes pod with these memory limits and requests, and it seems to fail to create.
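A note on the units, since they are easy to mix up: Mi and Gi are binary (power-of-two) suffixes, while M and G are decimal. A quick shell check of what this manifest's values mean in bytes (the arithmetic below is the only assumption):

```shell
# Mi = 1024*1024 bytes (binary); M = 1000*1000 bytes (decimal)
echo $((200 * 1024 * 1024))   # the 200Mi request -> 209715200 bytes
echo $((400 * 1024 * 1024))   # the 400Mi limit   -> 419430400 bytes
echo $((150 * 1000 * 1000))   # stress's --vm-bytes 150M -> 150000000 bytes, safely under the request
```

So the stress worker's 150M allocation stays below the 200Mi request, and the pod should run without being OOM-killed once the args quoting is fixed.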

Related

CPU and Memory Stress in Kubernetes

I created a simple pod with resource limits. The expectation is that the pod gets evicted when it uses more memory than the limit specified. To test this, how do I artificially fill the pod memory? I can stress CPU with dd if=/dev/zero of=/dev/null, but not memory. Can someone help me with this please? I tried with stress utility, but no luck.
apiVersion: v1
kind: Pod
metadata:
  name: nginx # Name of our pod
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx:1.7.1 # Image version
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 100m
        memory: 256Mi
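One detail first: a container that exceeds its memory limit is OOM-killed and restarted rather than evicted; eviction happens under node-level memory pressure. To artificially fill the container's memory from outside, one common trick is to make a process buffer an endless stream (a sketch; it assumes tail is present in the image, which it is for Debian-based nginx images):

```shell
# tail tries to buffer all of /dev/zero in memory, so the container's
# usage climbs until it crosses the 256Mi limit and the process is
# OOM-killed by the kernel.
kubectl exec nginx -- tail /dev/zero

# Afterwards, the container's last state should show the kill reason:
kubectl describe pod nginx | grep -A 3 'Last State'
```

You should see Reason: OOMKilled in the terminated state, and the restart count increase.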

How often does kube-scheduler refresh node resource data

I have a project to modify the scheduling policy. I deployed a large number of pods at the same time, but they do not seem to be scheduled as expected. I think kube-scheduler caches the resource usage of nodes, so the pods need to be deployed in two batches.
The Pod YAML is as follows; I run multiple pods through a shell loop:
apiVersion: v1
kind: Pod
metadata:
  name: ${POD_NAME}
  labels:
    name: multischeduler-example
spec:
  schedulerName: my-scheduler
  containers:
  - name: pod-with-second-annotation-container
    image: ibmcom/pause:3.1
    resources:
      requests:
        memory: "1Gi"
        cpu: "1"
      limits:
        memory: "2Gi"
        cpu: "2"
I want to know the interval at which kube-scheduler refreshes its cached node resource data for scheduling.
I really appreciate any help with this.
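The shell loop mentioned above can be sketched like this (the template filename, the rendered pod names, and the use of sed for substituting the ${POD_NAME} placeholder are all assumptions, not from the original post):

```shell
# A stand-in template with the ${POD_NAME} placeholder (the real
# template is the full pod manifest above).
printf 'metadata:\n  name: ${POD_NAME}\n' > /tmp/pod-template.yaml

# Render one manifest per pod by substituting the placeholder.
for i in 1 2 3; do
  sed "s/\${POD_NAME}/scheduler-test-$i/" /tmp/pod-template.yaml > "/tmp/pod-$i.yaml"
done

cat /tmp/pod-2.yaml
# kubectl apply -f /tmp/pod-2.yaml   # then submit each rendered manifest
```

The same substitution also works inline, piping sed's output straight into kubectl apply -f -.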

Kubernetes pod not starting up

I would like to create a Kubernetes Pod on a specific node pool (in AKS, k8s v1.18.19) that has the (only) taint for=devs:NoSchedule and the (only) label for: devs. The Pod should have at least 4 CPU cores and 12 GB of memory available. The node pool has size Standard_B12ms (so 12 vCPUs and 48 GB RAM) and the single node on it has version AKSUbuntu-1804gen2-2021.07.17. The node has status "ready".
When I start the Pod with kubectl apply -f mypod.yaml, the Pod is created but the status is stuck in ContainerCreating. When I reduce the resource requirements to 1 vCPU and 2 GB of memory it starts fine, so it seems that 4 vCPUs and 12 GB of memory is too large, but I don't see why.
kind: Pod
apiVersion: v1
metadata:
  name: user-ubuntu
  labels:
    for: devs
spec:
  containers:
  - name: user-ubuntu
    image: ubuntu:latest
    command: ["/bin/sleep", "3650d"]
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: "4"
        memory: 12G
      limits:
        cpu: "6"
        memory: 20G
    volumeMounts:
    - mountPath: "/mnt/azure"
      name: volume
  restartPolicy: Always
  volumes:
  - name: volume
    persistentVolumeClaim:
      claimName: pvc-user-default
  tolerations:
  - key: "for"
    operator: "Equal"
    value: "devs"
    effect: "NoSchedule"
  nodeSelector:
    for: devs
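When a pod is stuck in ContainerCreating (rather than Pending), the pod's events usually say why, and it is worth comparing the request against the node's allocatable resources, since AKS reserves CPU and memory for the kubelet and system daemons out of the node's raw capacity. A diagnostic sketch (the node-name placeholder is an assumption):

```shell
# The events section explains ContainerCreating; with a PVC mounted,
# a volume attach/bind failure is a common cause.
kubectl describe pod user-ubuntu

# Compare Capacity vs Allocatable: a 12-vCPU / 48 GB node allocates
# less than that after system and kube reservations.
kubectl describe node <node-name> | grep -A 7 'Allocatable'

# Check that the claim is actually bound:
kubectl get pvc pvc-user-default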

Kubernetes, minikube and vpa: vpa doesn't scale up to target

Before starting: I'm running Kubernetes on a Mac.
minikube: 1.17.0
metrics-server: 1.8+
vpa: vpa-release-0.8
My issue is that VPA doesn't scale my pod up; it just keeps recreating pods. I followed the GKE VPA example. I set the deployment's resource requests to cpu: 100m, memory: 50Mi and deployed the VPA. It gave me a recommendation, and updateMode is "Auto" as well. But it keeps recreating the pod without changing the resource requests, which I checked on the recreated pod with kubectl describe pod <podname>.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: my-app-deployment
  updatePolicy:
    updateMode: "Auto"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app-deployment
  template:
    metadata:
      labels:
        app: my-app-deployment
    spec:
      containers:
      - name: my-container
        image: k8s.gcr.io/ubuntu-slim:0.1
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        command: ["/bin/sh"]
        args: ["-c", "while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done"]
Status:
  Conditions:
    Last Transition Time:  2021-02-03T03:13:38Z
    Status:                True
    Type:                  RecommendationProvided
  Recommendation:
    Container Recommendations:
      Container Name:  my-container
      Lower Bound:
        Cpu:     25m
        Memory:  262144k
      Target:
        Cpu:     548m
        Memory:  262144k
      Uncapped Target:
        Cpu:     548m
        Memory:  262144k
      Upper Bound:
        Cpu:     100G
        Memory:  100T
Events:          <none>
I tried with kind as well. There it does recreate pods with the new resource requests, but they never run and stay Pending because the node doesn't have enough resources. I think the reason VPA doesn't work properly is that minikube (or I) didn't set up multiple nodes. Do you think that's related?

How to enlarge memory limit for an existing pod

In OpenShift, how can I enlarge the memory available to an existing pod from 2 GB to 16 GB? Currently it keeps running out of memory.
You can raise the OOM limit via step 1 below, and lower the OOM priority via step 2.
1. Check whether "resources.limits.memory" of your pod is configured with a sufficient size.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    resources:
      requests:
        memory: "2Gi"
      limits:
        memory: "16Gi"  # <--- if your application reaches this usage, an Out of Memory event is triggered
:
2. Configure the same size for "resources.requests.memory" and "resources.limits.memory" to give the pod the lowest OOM priority.
Refer to Quality of Service for more details.
// If limits and optionally requests are set (not equal to 0) for all resources and they are equal,
// then the container is classified as Guaranteed.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    resources:
      requests:
        memory: "2Gi"  # <--- set the same size for memory
      limits:
        memory: "2Gi"  # <--- in requests and limits sections
:
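With requests equal to limits for every resource, the pod is classified as Guaranteed, which is the class last to be OOM-killed under node memory pressure. You can confirm the class on a running pod (a sketch; the pod name frontend comes from the manifest above):

```shell
# Prints the QoS class assigned by Kubernetes: Guaranteed, Burstable,
# or BestEffort.
kubectl get pod frontend -o jsonpath='{.status.qosClass}'
```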
Add this section to your DeploymentConfig file:
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: Never
    resources:
      limits:
        memory: "16Gi"
      requests:
        memory: "2Gi"
If the problem persists, I would suggest looking at the HPA (Horizontal Pod Autoscaler), which increases the number of pods based on CPU and memory utilization so that your application pods never get killed. Check out this link for more info:
https://docs.openshift.com/container-platform/3.11/dev_guide/pod_autoscaling.html
Most OOMKilled problems occur with Java applications, where memory usage is normally driven by the heap. You can limit the application's memory usage by setting an env variable in your DeploymentConfig under the spec section.
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: Never
    resources:
      limits:
        memory: "16Gi"
      requests:
        memory: "2Gi"
    env:
    - name: JVM_OPTS
      value: "-Xms2048M -Xmx4048M"
Or you could use this:
env:
- name: JVM_OPTS
  value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1"
By using either of these env variables, your application will respect the memory limit set at the pod/container level: it will not go beyond the limit, and it will run garbage collection as it approaches the limit.
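Note that -XX:+UseCGroupMemoryLimitForHeap was experimental and was removed in newer JDKs; on JDK 10+ (and 8u191+), container awareness is built in and -XX:MaxRAMPercentage is the usual replacement. A sketch of the equivalent env for a modern JVM (the 75% figure is an assumption, tune it to your workload):

```yaml
env:
- name: JVM_OPTS
  value: "-XX:MaxRAMPercentage=75.0"  # heap may use up to 75% of the container memory limit
```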
I hope this will solve your problem.