How often does kube-scheduler refresh node resource data - kubernetes

I have a project that modifies the scheduling policy. I deployed a large number of pods at the same time, but they were not scheduled as expected. I think kube-scheduler caches the resource usage of nodes, so the pods would need to be deployed in two batches.
The pod YAML is as follows; I create the pods through a shell loop (a sketch of the loop follows the manifest):
apiVersion: v1
kind: Pod
metadata:
  name: ${POD_NAME}
  labels:
    name: multischeduler-example
spec:
  schedulerName: my-scheduler
  containers:
  - name: pod-with-second-annotation-container
    image: ibmcom/pause:3.1
    resources:
      requests:
        memory: "1Gi"
        cpu: "1"
      limits:
        memory: "2Gi"
        cpu: "2"
I want to know how often kube-scheduler refreshes its cached node resource data when pods are deployed.
I really appreciate any help with this.

Related

How to specify the memory request and limit in kubernetes pod

I am creating a Pod with one container. The container has a memory request of 200 MiB and a memory limit of 400 MiB. Here is the configuration file for the Pod:
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "200Mi"
      limits:
        memory: "400Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1]
The above YAML is not working as intended: the args section in the configuration file is not providing arguments to the container when it starts.
I tried to create a Kubernetes pod with memory limits and requests, and it seems to fail to create.
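One likely cause, visible in the manifest above, is that the closing quote after the final "1" in args is missing, which makes the YAML invalid, so the pod cannot be created at all. A hedged fix, plus a way to verify the arguments once the pod exists (names taken from the manifest above):

# corrected args line: note the closing quote on the last element
args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]

# after the pod is created, confirm the arguments reached the container
kubectl get pod memory-demo -n mem-example -o jsonpath='{.spec.containers[0].args}'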

CPU and Memory Stress in Kubernetes

I created a simple pod with resource limits. The expectation is that the pod gets evicted when it uses more memory than the limit specified. To test this, how do I artificially fill the pod's memory? I can stress the CPU with dd if=/dev/zero of=/dev/null, but not memory. Can someone help me with this, please? I tried the stress utility, but no luck.
apiVersion: v1
kind: Pod
metadata:
  name: nginx # Name of our pod
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx:1.7.1 # Image version
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 100m
        memory: 256Mi
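One hedged, image-independent way to fill memory inside the running container (assuming coreutils is present in the nginx image) is to make a process buffer /dev/zero in memory; with the 256Mi limit above, the kernel should OOM-kill that process once the limit is exceeded. Note that exceeding a memory limit OOM-kills the container rather than evicting the pod; eviction happens under node memory pressure.

# tail buffers the endless stream from /dev/zero while looking for line breaks,
# so its resident memory grows until the container's 256Mi limit is hit
kubectl exec -it nginx -- tail /dev/zero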

Kubernetes, minikube and vpa: vpa doesn't scale up to target

Before I start: I'm running Kubernetes on a Mac.
minikube: 1.17.0
metrics-server: 1.8+
vpa: vpa-release-0.8
My issue is that the VPA doesn't scale up my pods; it just keeps recreating them. I followed the GKE VPA example. I set the deployment's resource requests to cpu: 100m, memory: 50Mi and deployed the VPA. It gave me a recommendation, and updateMode is Auto, but it keeps recreating the pods without changing their resource requests when I inspect a recreated pod with kubectl describe pod <podname>.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: my-app-deployment
  updatePolicy:
    updateMode: "Auto"

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app-deployment
  template:
    metadata:
      labels:
        app: my-app-deployment
    spec:
      containers:
      - name: my-container
        image: k8s.gcr.io/ubuntu-slim:0.1
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        command: ["/bin/sh"]
        args: ["-c", "while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done"]
Status:
  Conditions:
    Last Transition Time:  2021-02-03T03:13:38Z
    Status:                True
    Type:                  RecommendationProvided
  Recommendation:
    Container Recommendations:
      Container Name:  my-container
      Lower Bound:
        Cpu:     25m
        Memory:  262144k
      Target:
        Cpu:     548m
        Memory:  262144k
      Uncapped Target:
        Cpu:     548m
        Memory:  262144k
      Upper Bound:
        Cpu:     100G
        Memory:  100T
Events:  <none>
I tried with kind as well. There the pods are recreated with the new resource requests, but they never run and stay Pending because the node doesn't have enough resources. I think the reason the VPA doesn't work properly is that minikube (or my setup) has only a single node. Do you think that is related?
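To compare what a recreated pod actually requests against the VPA target, a quick hedged check (the pod name is a placeholder):

# requests the scheduler sees for a recreated pod
kubectl get pod <recreated-pod-name> -o jsonpath='{.spec.containers[0].resources.requests}'
# current recommendation and conditions reported by the VPA
kubectl describe vpa my-vpa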

How to enlarge memory limit for an existing pod

In OpenShift, how can I enlarge the memory available to an existing pod from 2GB to 16GB? Currently it always runs out of memory.
You can raise the OOM limit with step 1 below, and lower the OOM priority with step 2.
1. Check whether resources.limits.memory of your pod is configured with a sufficient size.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    resources:
      requests:
        memory: "2Gi"
      limits:
        memory: "16Gi"   # <--- if your application reaches this usage, the Out of Memory event is triggered
  :
2. Configure the same size for resources.requests.memory and resources.limits.memory to give the pod the lowest OOM priority.
Refer to Quality of Service for more details.
// If limits and optionally requests are set (not equal to 0) for all resources and they are equal,
// then the container is classified as Guaranteed.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    resources:
      requests:
        memory: "2Gi"   # <--- set the same size for memory
      limits:
        memory: "2Gi"   # <--- in the requests and limits sections
  :
Add this section to your deploymentConfig file.
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: Never
    resources:
      limits:
        memory: "16Gi"
      requests:
        memory: "2Gi"
If the problem persists, I would suggest looking at the HPA (Horizontal Pod Autoscaler), which increases the number of pods based on CPU and memory utilization so that your application pods never get killed. Check out this link for more info:
https://docs.openshift.com/container-platform/3.11/dev_guide/pod_autoscaling.html
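A minimal sketch of such an HPA (using the autoscaling/v2 API; the target name, replica bounds, and utilization thresholds are placeholders, and older clusters such as OpenShift 3.11 may need an older API version or the oc autoscale command instead):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80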
Most OOM-killed problems occur with Java applications, where memory usage is normally driven by heap usage. You can limit the application's memory by setting an environment variable in your DeploymentConfig under the spec section.
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: Never
    resources:
      limits:
        memory: "16Gi"
      requests:
        memory: "2Gi"
    env:
    - name: JVM_OPTS
      value: "-Xms2048M -Xmx4048M"
Or you could use this:
env:
- name: JVM_OPTS
  value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1"
By using either of these environment variables, your application will respect the memory limit set at the pod/container level (it will not go beyond its limit and will garbage-collect as it approaches the memory limit).
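To confirm the JVM actually picked up a heap ceiling consistent with the container limit, one hedged check is to print the effective flags inside the container (pod and container names are placeholders):

# show the max heap size the JVM computed from the flags / cgroup limit
kubectl exec <java-pod> -- java -XX:+PrintFlagsFinal -version | grep -i maxheapsize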
I hope this will solve your problem.

Difference between "cpu" and "requests.cpu"

I am trying to create a resource quota for a namespace in Kubernetes. While writing the YAML file for the ResourceQuota, what should I specify for the CPU requests: cpu or requests.cpu? Also, is there any official documentation that specifies the difference between the two? I went through the OpenShift docs, which say that both are the same and can be used interchangeably.
requests.cpu is used for ResourceQuota which can be applied at the namespace level.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
whereas cpu is applied at the pod level.
apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo
spec:
  containers:
  - name: quota-mem-cpu-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "600Mi"
        cpu: "400m"
For further details please refer to the link below:
https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/
You can use the cpu form if you follow the Kubernetes documentation.
The difference between prefixing memory or cpu with requests. or limits. in a quota is described here: https://kubernetes.io/docs/concepts/policy/resource-quotas/#requests-vs-limits
The final result is the same, but if you use the requests. or limits. prefix, every container in the pod must have those values specified.
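For completeness, a sketch of the same quota written with the plain cpu / memory form, which the quota controller treats as equivalent to requests.cpu / requests.memory:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    cpu: "1"          # equivalent to requests.cpu
    memory: 1Gi       # equivalent to requests.memory
    limits.cpu: "2"
    limits.memory: 2Gi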