How can I apply a LimitRange to all namespaces in Kubernetes?

How can I apply this file to all namespaces:
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
spec:
  limits:
  - type: Pod
    max:
      cpu: 1000m
      memory: 1Gi
    min:
      cpu: 500m
      memory: 500Mi
By default it gets applied to the namespace I am currently in. I want to make this setting a common one.
How can I do that and make this a global setting?

This can be done very easily using an admission controller like Kyverno. Kyverno has a "generate" capability which can be used to generate any Kubernetes resource based on a trigger (e.g. namespace creation).
Here is an example of a Kyverno policy to achieve this:
https://kyverno.io/policies/best-practices/add_ns_quota/add_ns_quota/
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-ns-quota
  annotations:
    policies.kyverno.io/title: Add Quota
    policies.kyverno.io/category: Multi-Tenancy
    policies.kyverno.io/subject: ResourceQuota, LimitRange
    policies.kyverno.io/description: >-
      To better control the number of resources that can be created in a given
      Namespace and provide default resource consumption limits for Pods,
      ResourceQuota and LimitRange resources are recommended.
      This policy will generate ResourceQuota and LimitRange resources when
      a new Namespace is created.
spec:
  rules:
  - name: generate-resourcequota
    match:
      resources:
        kinds:
        - Namespace
    generate:
      apiVersion: v1
      kind: ResourceQuota
      name: default-resourcequota
      synchronize: true
      namespace: "{{request.object.metadata.name}}"
      data:
        spec:
          hard:
            requests.cpu: '4'
            requests.memory: '16Gi'
            limits.cpu: '4'
            limits.memory: '16Gi'
  - name: generate-limitrange
    match:
      resources:
        kinds:
        - Namespace
    generate:
      apiVersion: v1
      kind: LimitRange
      name: default-limitrange
      synchronize: true
      namespace: "{{request.object.metadata.name}}"
      data:
        spec:
          limits:
          - default:
              cpu: 500m
              memory: 1Gi
            defaultRequest:
              cpu: 200m
              memory: 256Mi
            type: Container
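For reference, a minimal usage sketch, assuming Kyverno is already installed in the cluster and the policy above is saved as add-ns-quota.yaml (the file and namespace names are illustrative):
$ kubectl apply -f add-ns-quota.yaml
$ kubectl create namespace demo
$ kubectl get resourcequota,limitrange -n demo
Creating the namespace triggers both generate rules, so the default ResourceQuota and LimitRange should show up in the new namespace. With synchronize: true, Kyverno also keeps the generated resources in sync if the policy changes later.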

As per my investigation, this is not possible natively in the manifest file, but here is a trick script to do it using bash. If you are deploying the manifest with kubectl, use this auxiliary script:
#!/bin/bash
# Apply the manifest to every namespace in the cluster, one at a time.
namespaces=$(kubectl get namespaces -o=jsonpath='{range .items[*]}{.metadata.name} {end}')
for ns in $namespaces; do
  kubectl apply -f path-to-manifest-file.yaml --namespace "$ns"
done
Maybe you'd ask why I'm applying this in a loop rather than in one line by adding as many --namespace flags as there are namespaces. I actually tried that, but it looks like kubectl does not honor multiple --namespace flags when they are passed via a variable, like this:
$ namespace_flags=$(kubectl get namespaces -o=jsonpath='{range .items[*]} --namespace {.metadata.name}{end}')
$ echo $namespace_flags
--namespace default --namespace kube-node-lease --namespace kube-public --namespace kube-system --namespace newrelic
$ kubectl get pods ${namespace_flags[#]}
Error from server (NotFound): pods " --namespace default --namespace kube-node-lease --namespace kube-public --namespace kube-system --namespace newrelic" not found
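If you do want a one-liner, a sketch that should work pipes the namespace names through xargs so each one becomes a separate kubectl invocation:
$ kubectl get namespaces -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | xargs -I{} kubectl apply -f path-to-manifest-file.yaml --namespace {}
Each namespace name is printed on its own line and xargs substitutes it into --namespace one call at a time, which sidesteps the variable-expansion problem above.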

After doing a lot of research, I found a better solution: Kyverno, which has been adopted as a CNCF project at the Incubating maturity level. It can implement cluster-level policies, which suffices for my use case. The link is here:
https://kyverno.io/
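For completeness, a typical install sketch via Helm (chart location as documented by Kyverno; verify against the current docs before using):
$ helm repo add kyverno https://kyverno.github.io/kyverno/
$ helm repo update
$ helm install kyverno kyverno/kyverno -n kyverno --create-namespace
Once the admission controller is running, cluster-wide policies such as the ClusterPolicy in the first answer take effect.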

Related

Assign memory resources to a running pod?

I would like to know how I can assign memory resources to a running pod.
I tried kubectl get po foo-7d7dbb4fcd-82xfr -o yaml > pod.yaml
but when I run the command kubectl apply -f pod.yaml I get:
The Pod "foo-7d7dbb4fcd-82xfr" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
Thanks in advance for your help.
A Pod is the minimal Kubernetes resource, and it does not support editing its resource fields in place as you are trying to do.
I suggest you use a Deployment to run your pod, since it is a "pod manager" that gives you a lot of additional features, like pod self-healing and liveness/readiness probes.
You can define the resources in your Deployment file like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: mendhak/http-https-echo
        resources:
          limits:
            cpu: 15m
            memory: 100Mi
          requests:
            cpu: 15m
            memory: 100Mi
        ports:
        - name: http
          containerPort: 80
As @KoopaKiller mentioned, you can't update the spec.containers[*].resources field; this is stated in the Container object spec:
Compute Resources required by this container. Cannot be updated.
Instead, you can deploy your Pods using a Deployment object. In that case, if you change the resource configuration of your Pods, the Deployment controller will roll out updated versions of them.
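If you just want to change the resources of an existing Deployment without editing the manifest, here is a sketch using kubectl set resources (the deployment name echo matches the example above; the values are illustrative):
$ kubectl set resources deployment echo --requests=cpu=20m,memory=128Mi --limits=cpu=30m,memory=128Mi
This patches the Deployment's pod template, so the controller rolls out new Pods with the updated values.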

Resource Quota applied before LimitRanger in Kubernetes for Pod without specified limits

While using Kubernetes v1.16.8, both the ResourceQuota and LimitRanger admission plugins are enabled by default, so I did not have to add them to my admission plugins in kube-apiserver.
In my case, I use the following LimitRange:
apiVersion: v1
items:
- apiVersion: v1
  kind: LimitRange
  metadata:
    name: mem-limit-range
    namespace: test
  spec:
    limits:
    - default:
        memory: 512Mi
      defaultRequest:
        memory: 256Mi
      type: Container
and it adds the default limit for memory usage in a new Pod without specified limits, as expected.
The Pod's definition is as simple as possible:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod-ctr
    image: redis
When I describe the created pod, it has acquired the limit value from the LimitRanger.
Everything is fine!
The problem occurs when I try to enforce a ResourceQuota for the namespace.
The ResourceQuota looks like this:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
When I delete and recreate the pod, it is not created.
The ResourceQuota results in the following error:
Error from server (Forbidden): error when creating "test-pod.yml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu
In other words, the ResourceQuota is applied before the LimitRanger, so it does not let me create pods without a specified limit.
Is there a way to enforce the LimitRanger first and then the ResourceQuota?
How do you apply them to your namespaces?
I would like developers who do not specify limits in the pod definition to be able to acquire the defaults, while the resource quota is enforced as well.
TL;DR:
Error from server (Forbidden): error when creating "test-pod.yml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu
You didn't set a default limit for CPU. According to the ResourceQuota docs:
If quota is enabled in a namespace for compute resources like cpu and memory, users must specify requests or limits for those values; otherwise, the quota system may reject pod creation.
This is why the pod is not being created. Add a cpu-limit.yaml:
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
  namespace: test
spec:
  limits:
  - default:
      cpu: 1
    defaultRequest:
      cpu: 0.5
    type: Container
The LimitRanger admission controller injects the defaults at pod creation time, and yes, it injects the default values prior to the ResourceQuota validation.
Another minor issue I found is that not all of your YAMLs contain the namespace: test line under metadata. That's important to assign the resources to the right namespace; I fixed it in the example below.
Reproduction:
I created the namespace and applied the memory LimitRange and the quota first, as you mentioned:
$ kubectl create namespace test
namespace/test created
$ cat mem-limit.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: test
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
$ cat quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
  namespace: test
spec:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
$ kubectl apply -f mem-limit.yaml
limitrange/mem-limit-range created
$ kubectl apply -f quota.yaml
resourcequota/mem-cpu-demo created
$ kubectl describe resourcequota -n test
Name:          mem-cpu-demo
Namespace:     test
Resource       Used  Hard
--------       ----  ----
limits.cpu     0     2
limits.memory  0     2Gi
$ kubectl describe limits -n test
Name:       mem-limit-range
Namespace:  test
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   memory    -    -    256Mi            512Mi          -
Now if I try to create the pod:
$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: test
spec:
  containers:
  - name: test-pod-ctr
    image: redis
$ kubectl apply -f pod.yaml
Error from server (Forbidden): error when creating "pod.yaml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu
This is the same error you faced, because there is no default limit for CPU set. We'll create and apply one:
$ cat cpu-limit.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
  namespace: test
spec:
  limits:
  - default:
      cpu: 1
    defaultRequest:
      cpu: 0.5
    type: Container
$ kubectl apply -f cpu-limit.yaml
limitrange/cpu-limit-range created
$ kubectl describe limits cpu-limit-range -n test
Name:       cpu-limit-range
Namespace:  test
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    -    500m             1              -
Now, with the CPU LimitRange in action, let's create the pod and inspect it:
$ kubectl apply -f pod.yaml
pod/test-pod created
$ kubectl describe pod test-pod -n test
Name:       test-pod
Namespace:  test
Status:     Running
...{{Suppressed output}}...
    Limits:
      cpu:     1
      memory:  512Mi
    Requests:
      cpu:     500m
      memory:  256Mi
Our pod was created with the defaults enforced by the LimitRange.
If you have any questions, let me know in the comments.
The error clearly indicates how you are supposed to handle the issue:
Error from server (Forbidden): error when creating "test-pod.yml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu
Your LimitRange object defines a default memory limit, but no CPU one, while your quota restricts both memory and CPU, so you must specify both in your Pod manifest. The LimitRange takes care of the default memory, but there is no default for CPU. So either add a CPU limit to the Pod manifest, or add a default CPU limit to your LimitRange.
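A sketch of that second option: a single LimitRange that defaults both CPU and memory (the name combined-limit-range is illustrative; the values match the examples above):
apiVersion: v1
kind: LimitRange
metadata:
  name: combined-limit-range
  namespace: test
spec:
  limits:
  - default:            # limits injected when a container specifies none
      cpu: "1"
      memory: 512Mi
    defaultRequest:     # requests injected when a container specifies none
      cpu: 500m
      memory: 256Mi
    type: Container
With this in place, a pod with no resources section passes the quota check, because the LimitRanger fills in both CPU and memory limits at admission time.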

Is it possible to set resource quotas only for namespaces? [kubernetes]

We have 3 namespaces on a Kubernetes cluster:
dev-test / build / prod
I want to limit the resource usage for dev-test & build only.
Can I set resource quotas only for these namespaces, without specifying (default) resource requests & limits at the pod/container level?
If the resource usage in the limited namespaces is low, prod can use the rest completely, while the others can only grow to a limited value, so prod's resource usage is protected.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-test
spec:
  hard:
    cpu: "2"
    memory: 8Gi
Is this enough?
Yes, you can set resource limits per namespace using a ResourceQuota object:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
From the Kubernetes documentation.
Yes, you can use the kubectl CLI as well to define a resource quota per namespace.
Example:
$ kubectl create namespace testquotaspace
namespace/testquotaspace created
$ kubectl create quota testquota -n testquotaspace --hard=cpu=1,memory=8Gi
resourcequota/testquota created
$ kubectl describe namespaces testquotaspace
Name:         testquotaspace
Labels:       <none>
Annotations:  <none>
Status:       Active

Resource Quotas
 Name:      testquota
 Resource   Used  Hard
 --------   ----  ----
 cpu        0     1
 memory     0     8Gi

No LimitRange resource.
You can choose to limit other objects as well, such as Pods, Services, PVCs, etc.
Just run help on the CLI and you will have all the details:
$ kubectl create quota --help
Create a resourcequota with the specified name, hard limits and optional scopes
Aliases:
  quota, resourcequota
Examples:
  # Create a new resourcequota named my-quota
  kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10
If you do need YAML, you can generate it as follows and pipe it to a file as needed:
$ kubectl create quota testquota -n testquotaspace --hard=cpu=1,memory=8Gi --dry-run -o yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  creationTimestamp: null
  name: testquota
  namespace: testquotaspace
spec:
  hard:
    cpu: "1"
    memory: 8Gi

Difference between "cpu" and "requests.cpu"

I am trying to create a resource quota for a namespace in Kubernetes. While writing the YAML file for the ResourceQuota, what should I specify for the CPU requests: "cpu" or "requests.cpu"? Also, is there any official documentation which specifies the difference between the two? I went through the OpenShift docs, which state that the two are the same and can be used interchangeably.
requests.cpu is used in a ResourceQuota, which is applied at the namespace level:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
whereas cpu is applied at the pod level:
apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo
spec:
  containers:
  - name: quota-mem-cpu-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "800Mi"
        cpu: "800m"
      requests:
        memory: "600Mi"
        cpu: "400m"
For further details, please refer to the link below:
https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/
You can use the cpu form if you follow the Kubernetes documentation.
The difference between adding the requests or limits prefix before memory or cpu in a quota is described here: https://kubernetes.io/docs/concepts/policy/resource-quotas/#requests-vs-limits
The final result is the same, but if you use the requests or limits form, each container in the pod will have to have those values specified explicitly.
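To make the equivalence concrete, here is a sketch of a quota written with the plain forms; per the quota documentation, cpu and memory mean the same as requests.cpu and requests.memory (the name is illustrative):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo-plain
spec:
  hard:
    cpu: "1"          # equivalent to requests.cpu: "1"
    memory: 1Gi       # equivalent to requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
Note that limits have no short form: limits.cpu and limits.memory must always be spelled out.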

How to remove LimitRange from default namespace in kubernetes?

For one of my requirements, I created a LimitRange in my default namespace using the below YAML file:
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-min-max-demo-lr1
spec:
  limits:
  - max:
      memory: 5Gi
    min:
      memory: 900Mi
    type: Container
Now I need to remove this LimitRange from the default namespace in Kubernetes. How can I do that?
You created a LimitRange named mem-min-max-demo-lr1 in the default namespace. To verify, run kubectl get limitrange -n default, then delete it with kubectl delete limitrange mem-min-max-demo-lr1. To further understand this scenario, please check https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/
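A minimal sketch of that verify-and-delete sequence (output abbreviated):
$ kubectl get limitrange -n default
NAME                   CREATED AT
mem-min-max-demo-lr1   ...
$ kubectl delete limitrange mem-min-max-demo-lr1 -n default
limitrange "mem-min-max-demo-lr1" deleted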