Difference between "cpu" and "requests.cpu" - kubernetes

I am trying to create a resource quota for a namespace in Kubernetes. While writing the YAML file for the ResourceQuota, what should I specify for the CPU requests: "cpu" or "requests.cpu"? Also, is there any official documentation that explains the difference between the two? I went through the OpenShift docs, which say that the two are the same and can be used interchangeably.

requests.cpu is used in a ResourceQuota, which is applied at the namespace level:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
whereas cpu is set per container in the Pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo
spec:
  containers:
  - name: quota-mem-cpu-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "600Mi"
        cpu: "400m"
For further details, please refer to the link below:
https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/
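Once both objects are applied in the same namespace, you can see the Pod's requests and limits counted against the quota; for example (the namespace here is just a placeholder):
kubectl describe resourcequota mem-cpu-demo -n <namespace>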

You can also use the plain cpu form, following the Kubernetes documentation.
The difference between prefixing memory or cpu with requests. or limits. in a quota is described here: https://kubernetes.io/docs/concepts/policy/resource-quotas/#requests-vs-limits
The end result is the same, but if you use the requests.* or limits.* form, every container in the pods of that namespace has to have the corresponding requests or limits specified explicitly.
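As a minimal sketch of the plain form (the object name here is illustrative), where cpu and memory count against the same totals as requests.cpu and requests.memory:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cpu-mem-plain-form
spec:
  hard:
    cpu: "1"      # equivalent to requests.cpu
    memory: 1Gi   # equivalent to requests.memory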

Related

How can I apply limit range to all namespaces in kubernetes

How can I apply this file to all namespaces:
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
spec:
  limits:
  - type: Pod
    max:
      cpu: 1000m
      memory: 1Gi
    min:
      cpu: 500m
      memory: 500Mi
By default it gets applied only to the namespace I am currently in. I want to make this a common, cluster-wide setting.
How can I do that, i.e. make this a global setting?
This can be done very easily using an admission controller like Kyverno. Kyverno has a "generate" capability that can create any Kubernetes resource in response to a trigger (e.g. the creation of a namespace).
Here is an example of a Kyverno policy to achieve this.
https://kyverno.io/policies/best-practices/add_ns_quota/add_ns_quota/
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-ns-quota
  annotations:
    policies.kyverno.io/title: Add Quota
    policies.kyverno.io/category: Multi-Tenancy
    policies.kyverno.io/subject: ResourceQuota, LimitRange
    policies.kyverno.io/description: >-
      To better control the number of resources that can be created in a given
      Namespace and provide default resource consumption limits for Pods,
      ResourceQuota and LimitRange resources are recommended.
      This policy will generate ResourceQuota and LimitRange resources when
      a new Namespace is created.
spec:
  rules:
  - name: generate-resourcequota
    match:
      resources:
        kinds:
        - Namespace
    generate:
      apiVersion: v1
      kind: ResourceQuota
      name: default-resourcequota
      synchronize: true
      namespace: "{{request.object.metadata.name}}"
      data:
        spec:
          hard:
            requests.cpu: '4'
            requests.memory: '16Gi'
            limits.cpu: '4'
            limits.memory: '16Gi'
  - name: generate-limitrange
    match:
      resources:
        kinds:
        - Namespace
    generate:
      apiVersion: v1
      kind: LimitRange
      name: default-limitrange
      synchronize: true
      namespace: "{{request.object.metadata.name}}"
      data:
        spec:
          limits:
          - default:
              cpu: 500m
              memory: 1Gi
            defaultRequest:
              cpu: 200m
              memory: 256Mi
            type: Container
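Once Kyverno is installed and this ClusterPolicy is applied, creating a new Namespace should be enough to trigger the generation; a quick way to verify (the namespace name here is just an example):
kubectl create namespace quota-test
kubectl get resourcequota,limitrange -n quota-test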
As far as I could find, this is not possible natively in the manifest file, but here is a small trick to do it with bash. If you are deploying the manifest with kubectl, you can use this auxiliary script:
#!/bin/bash
# apply the same manifest to every namespace in the cluster
namespaces=$(kubectl get namespaces -o=jsonpath='{range .items[*]}{.metadata.name} {end}')
for ns in $namespaces; do
  kubectl apply -f path-to-manifest-file.yaml --namespace "$ns"
done
You might ask why I apply this in a loop instead of a single command with as many --namespace flags as there are namespaces. Actually, I tried that, but it looks like kubectl does not honor multiple --namespace flags when they are passed via a variable, as follows:
╰─$ namespace_flags=`kubectl get namespaces -o=jsonpath='{range.items[*]} --namespace {.metadata.name}{end}'`
╰─$ echo $namespace_flags
--namespace default --namespace kube-node-lease --namespace kube-public --namespace kube-system --namespace newrelic
╰─$ kubectl get pods ${namespace_flags[#]}
Error from server (NotFound): pods " --namespace default --namespace kube-node-lease --namespace kube-public --namespace kube-system --namespace newrelic" not found
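For what it's worth, a one-liner is still possible by piping the namespace list through xargs instead of building flags in a variable; this is only a sketch using the same placeholder manifest path:
kubectl get namespaces -o name | cut -d/ -f2 | xargs -I{} kubectl apply -f path-to-manifest-file.yaml --namespace {}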
After doing a lot of research, I found a better solution: Kyverno, a CNCF project at the Incubating maturity level. It can implement cluster-level policies, which covers my use case. The link is here:
https://kyverno.io/

Is there a flag to specify a memory limit for Kubernetes pods using kubectl run?

I'm trying to create a pod and pass in a memory limit without creating a YAML file. Or is there a way to modify the memory limit on a pod which is already running?
I can't really think of a way to enforce a memory limit from kubectl run, but it can be enforced by applying the resource below:
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
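With this LimitRange in the namespace, a pod created by kubectl run without explicit resources should pick up the default request and limit; one way to verify (the file name, pod name, and namespace below are just examples):
# assumes the manifest above is saved as mem-limit-range.yaml and the namespace "demo" exists
kubectl apply -f mem-limit-range.yaml -n demo
kubectl run nginx --image=nginx -n demo
# the defaulted resources should now appear on the container spec
kubectl get pod nginx -n demo -o jsonpath='{.spec.containers[0].resources}'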

How often does kube-scheduler refresh node resource data?

I have a project to modify the scheduling policy. I deployed a large number of Pods at the same time, but they were not scheduled as I expected. I think kube-scheduler caches the resource usage of nodes, so the Pods seem to need to be deployed in two rounds.
The Pod YAML is as follows; I create multiple Pods from this template with a shell loop (a sketch of such a loop follows the template).
apiVersion: v1
kind: Pod
metadata:
  name: ${POD_NAME}
  labels:
    name: multischeduler-example
spec:
  schedulerName: my-scheduler
  containers:
  - name: pod-with-second-annotation-container
    image: ibmcom/pause:3.1
    resources:
      requests:
        memory: "1Gi"
        cpu: "1"
      limits:
        memory: "2Gi"
        cpu: "2"
I want to know the interval at which kube-scheduler refreshes its cached node resource data.
I really appreciate any help with this.

Kubernetes Limit Ranges Override

Let's say I set the following LimitRange in namespace X:
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range
spec:
  limits:
  - default:
      memory: 1Gi
      cpu: 0.5
    defaultRequest:
      memory: 256Mi
      cpu: 0.2
    type: Container
These limits are sufficient for most pods in namespace X. Some pods need more resources, but a pod requesting more than default.memory or default.cpu will be rejected.
My question is: is there any way (in a manifest or otherwise) to override these limits so that a pod can request more than the limit set for the namespace? I know it kind of defeats the purpose of Limit Ranges, but I'm still wondering if there is a way to do it.
In your example, you do not constrain memory/CPU with a minimum or maximum. You only set defaults, which are applied to containers created without explicit values. With your given LimitRange, you can still override them with your own limits/requests in the Deployment or Pod spec, as in the sketch below.
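For example, a Pod spec like this (names are illustrative) overrides the namespace defaults and is still admitted, since the LimitRange above defines no max:
apiVersion: v1
kind: Pod
metadata:
  name: big-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: 1Gi
        cpu: "1"
      limits:
        memory: 2Gi
        cpu: "2"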
If you would like to set a minimum/maximum you have to add something like this to your LimitRange:
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-min-max-demo-lr
spec:
  limits:
  - max:
      cpu: "800m"
    min:
      cpu: "200m"
    type: Container
https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/#create-a-limitrange-and-a-pod
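With a min/max like this in place, requests and limits outside that range are rejected at admission time; for instance, a container resources block like the following would be refused (a sketch):
resources:
  limits:
    cpu: "1"      # above the max of 800m, so the Pod is rejected
  requests:
    cpu: "100m"   # below the min of 200m, also rejected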

Kubernetes: Memory consumption per namespace

Is there a way to get the memory consumption per namespace on Kubernetes?
At a high level, we can get this from kubectl:
$ kubectl describe resourcequota -n my-namespace
Name:            compute-resources
Namespace:       default
Resource         Used   Hard
--------         ----   ----
limits.cpu       12     48
limits.memory    1024M  120Gi
requests.cpu     250m   24
requests.memory  512M   60Gi
Note: this will only work if you have created a ResourceQuota.
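Also worth noting: the Used column above reflects the requests and limits of running pods, not live memory usage. If the metrics-server is installed, actual per-pod consumption in a namespace can be inspected with kubectl top, for example:
kubectl top pods -n my-namespace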
It's possible to create a ResourceQuota object like this:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
This example ResourceQuota places the following constraints on the namespace:
Every Container must have a memory request, memory limit, cpu request, and cpu limit.
The memory request total for all Containers must not exceed 1 GiB.
The memory limit total for all Containers must not exceed 2 GiB.
The CPU request total for all Containers must not exceed 1 cpu.
The CPU limit total for all Containers must not exceed 2 cpu.
Example Pod template:
apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo
spec:
  containers:
  - name: quota-mem-cpu-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "600Mi"
        cpu: "400m"
To check the resource consumption, use the following command:
kubectl --context <cluster_context> describe resourcequota -n my-namespace
Source:
https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/