Kubernetes Limit Ranges Override

Let's say I set the following LimitRange on namespace X:
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range
spec:
  limits:
  - default:
      memory: 1Gi
      cpu: 0.5
    defaultRequest:
      memory: 256Mi
      cpu: 0.2
    type: Container
These limits are sufficient for most pods in namespace X. Some pods need more resources, but a pod requesting more than default.memory / default.cpu will be rejected.
My question is: is there any way (in the manifest or otherwise) to override these limits so that a pod can request more than the limit set on the namespace? I know it kind of defeats the purpose of Limit Ranges, but I'm still wondering if there's a way to do it.

In your example you do not constrain memory/cpu with a minimum/maximum; you only set defaults, which are applied to every container that does not declare its own values. With your given LimitRange you can still override the limits/requests in the Deployment or Pod spec.
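For example, here is a minimal sketch of a Pod that overrides those defaults by declaring its own resources (the name, image and values are placeholders, chosen to sit above the defaults):
apiVersion: v1
kind: Pod
metadata:
  name: big-pod            # placeholder name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: 1Gi        # above the 256Mi defaultRequest
        cpu: "0.5"
      limits:
        memory: 2Gi        # above the 1Gi default limit
        cpu: "1"
As long as the LimitRange sets no min/max, this Pod is admitted and the namespace defaults are simply not applied to it.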
If you would like to set a minimum/maximum you have to add something like this to your LimitRange:
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-min-max-demo-lr
spec:
  limits:
  - max:
      cpu: "800m"
    min:
      cpu: "200m"
    type: Container
https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/#create-a-limitrange-and-a-pod
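For illustration, a sketch of a container spec that would be admitted under such a constraint (name and image are placeholders); its cpu request and limit both fall inside the 200m–800m range:
apiVersion: v1
kind: Pod
metadata:
  name: constraint-demo    # placeholder name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "500m"        # at or above the 200m min
      limits:
        cpu: "800m"        # at or below the 800m max
A container whose cpu request or limit falls outside that range would be rejected by the LimitRange admission check.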

Related

Is there flag to specify memory limit for kubernetes pods using kubectl run?

I'm trying to create a pod and pass in a memory limit without creating a YAML file. Or is there a way to modify the memory limit on a pod which is already running?
I can't really think of a way to enforce a memory limit from kubectl run itself, but it can be enforced by applying the resource below.
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
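A possible workflow, assuming the manifest above is saved as mem-limit-range.yaml and the target namespace is my-namespace (both names are placeholders):
kubectl apply -f mem-limit-range.yaml -n my-namespace
kubectl run nginx --image=nginx -n my-namespace
kubectl describe pod nginx -n my-namespace
Because the container created by kubectl run declares no resources of its own, the LimitRange injects the 512Mi limit and 256Mi request, which then appear in the describe output.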

How often does kube-scheduler refresh node resource data

I have a project that modifies the scheduling policy. I deployed a large number of pods at the same time, but they are not scheduled as expected. I think kube-scheduler caches the resource usage of nodes, so the pods need to be deployed in two batches.
The Pod YAML is as follows; I create multiple pods through a shell loop, roughly like the sketch after the manifest.
apiVersion: v1
kind: Pod
metadata:
  name: ${POD_NAME}
  labels:
    name: multischeduler-example
spec:
  schedulerName: my-scheduler
  containers:
  - name: pod-with-second-annotation-container
    image: ibmcom/pause:3.1
    resources:
      requests:
        memory: "1Gi"
        cpu: "1"
      limits:
        memory: "2Gi"
        cpu: "2"
I want to know at what interval kube-scheduler refreshes its cache of node resource data.
I really appreciate any help with this.

Can we add ephemeral-storage in LimitRange?

With a LimitRange in k8s we can easily limit RAM and CPU; can we do the same for ephemeral-storage as well?
To set default requests and limits on ephemeral storage for each container in the mytest namespace:
apiVersion: v1
kind: LimitRange
metadata:
  name: storage-limit
  namespace: mytest
spec:
  limits:
  - default:
      ephemeral-storage: 2Gi
    defaultRequest:
      ephemeral-storage: 1Gi
    type: Container
To constrain resources at Pod scope instead, use type: Pod; note, though, that default and defaultRequest are only accepted for type: Container, so at Pod scope you would set min/max totals rather than defaults.
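To apply and verify the defaults (assuming the manifest above is saved as storage-limit.yaml):
kubectl apply -f storage-limit.yaml
kubectl describe limitrange storage-limit -n mytest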

Kubernetes -> Memory consumption per namespace

Is there a way to get the memory consumption per namespace on Kubernetes?
At a high level we can get this from kubectl:
$ kubectl describe resourcequota -n my-namespace
Name:            compute-resources
Namespace:       default
Resource         Used   Hard
--------         ----   ----
limits.cpu       12     48
limits.memory    1024M  120Gi
requests.cpu     250m   24
requests.memory  512M   60Gi
Note: this will only work if you have created a ResourceQuota.
You can create a ResourceQuota object like this:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
However, once this quota is in place, it puts some constraints on the pods in the namespace:
Every container must have a memory request, memory limit, cpu request, and cpu limit.
The memory request total for all containers must not exceed 1 GiB.
The memory limit total for all containers must not exceed 2 GiB.
The CPU request total for all containers must not exceed 1 cpu.
The CPU limit total for all containers must not exceed 2 cpu.
Example Pod template:
apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo
spec:
  containers:
  - name: quota-mem-cpu-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "600Mi"
        cpu: "400m"
To check the resource consumption use the following command:
kubectl --context <cluster_context> describe resourcequota -n my-namespace
Source:
https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/

Difference between "cpu" and "requests.cpu"

I am trying to create a resource quota for a namespace in Kubernetes. While writing the YAML file for the ResourceQuota, what should I specify for the cpu requests: "cpu" or "requests.cpu"? Also, is there any official documentation which specifies the difference between the two? I went through the OpenShift docs, which say that both are the same and can be used interchangeably.
requests.cpu is used in a ResourceQuota, which is applied at the namespace level:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
whereas cpu is set per container in the Pod spec, under resources.requests and resources.limits:
apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo
spec:
  containers:
  - name: quota-mem-cpu-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "600Mi"
        cpu: "400m"
For further details please refer to the link below:
https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/
You can use the cpu form if you follow the Kubernetes documentation.
The difference between adding the requests. or limits. prefix before memory or cpu in a quota is described here: https://kubernetes.io/docs/concepts/policy/resource-quotas/#requests-vs-limits
The final result is the same, but if you use the requests. or limits. form, every container in the pod must have those values specified.
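For comparison, a sketch of the same quota written with the bare form; per the resource quota documentation, cpu and memory here track the same thing as requests.cpu and requests.memory (the object name is a placeholder):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo-bare   # placeholder name
spec:
  hard:
    cpu: "1"       # same as requests.cpu: "1"
    memory: 1Gi    # same as requests.memory: 1Gi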