For one of my requirements, I created a new LimitRange in my default namespace using the YAML file below:
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-min-max-demo-lr1
spec:
  limits:
  - max:
      memory: 5Gi
    min:
      memory: 900Mi
    type: Container
Now I need to remove this LimitRange from the default namespace in Kubernetes. How do I do that?
You created a LimitRange named mem-min-max-demo-lr1 in the default namespace. To verify, run kubectl get limitrange -n default, then delete it with kubectl delete limitrange mem-min-max-demo-lr1 -n default. To understand this scenario further, see https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/
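In full:

$ kubectl get limitrange -n default
$ kubectl delete limitrange mem-min-max-demo-lr1 -n default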
Related
Since I cannot use RWX (ReadWriteMany) mode directly, I use 3 PVCs with the default storage class for 3 pods.
But I can see they are not all working in the pods; only 1 PVC worked as expected.
The PVC YAML is below, nothing special. When I run kubectl get pvc I can see all the statuses are Bound, but in the pod the error is "pod has unbound immediate PersistentVolumeClaims".
My understanding is that the PVCs should be separate and independent even if they use the same storageClass.
In reality, the PVCs cannot be bound in the pods, or only one pod can access its PVC; the others still have errors.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
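For reference, a minimal sketch (pod and image names are hypothetical) of how each pod would mount its own independent claim; with ReadWriteOnce, a claim can only be attached to a single node at a time, so every pod needs its own PVC:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: demo-pvc   # each pod must reference a distinct PVC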
How can I apply this file to all namespaces:
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
spec:
  limits:
  - type: Pod
    max:
      cpu: 1000m
      memory: 1Gi
    min:
      cpu: 500m
      memory: 500Mi
By default it gets applied to the namespace I am currently in. I want to make this setting a common one for the whole cluster.
How can I do that and make this a global setting?
This can be done very easily using an admission controller like Kyverno. Kyverno has a "generate" capability that can create any Kubernetes resource in response to a trigger (e.g. the creation of a namespace).
Here is an example of a Kyverno policy that achieves this:
https://kyverno.io/policies/best-practices/add_ns_quota/add_ns_quota/
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-ns-quota
  annotations:
    policies.kyverno.io/title: Add Quota
    policies.kyverno.io/category: Multi-Tenancy
    policies.kyverno.io/subject: ResourceQuota, LimitRange
    policies.kyverno.io/description: >-
      To better control the number of resources that can be created in a given
      Namespace and provide default resource consumption limits for Pods,
      ResourceQuota and LimitRange resources are recommended.
      This policy will generate ResourceQuota and LimitRange resources when
      a new Namespace is created.
spec:
  rules:
  - name: generate-resourcequota
    match:
      resources:
        kinds:
        - Namespace
    generate:
      apiVersion: v1
      kind: ResourceQuota
      name: default-resourcequota
      synchronize: true
      namespace: "{{request.object.metadata.name}}"
      data:
        spec:
          hard:
            requests.cpu: '4'
            requests.memory: '16Gi'
            limits.cpu: '4'
            limits.memory: '16Gi'
  - name: generate-limitrange
    match:
      resources:
        kinds:
        - Namespace
    generate:
      apiVersion: v1
      kind: LimitRange
      name: default-limitrange
      synchronize: true
      namespace: "{{request.object.metadata.name}}"
      data:
        spec:
          limits:
          - default:
              cpu: 500m
              memory: 1Gi
            defaultRequest:
              cpu: 200m
              memory: 256Mi
            type: Container
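To try it out (a sketch, assuming Kyverno is already installed in the cluster and the policy is saved as add-ns-quota.yaml):

$ kubectl apply -f add-ns-quota.yaml
$ kubectl create namespace demo
$ kubectl get resourcequota,limitrange -n demo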
As per my investigation, this is not possible natively in the manifest file, but here is a workaround I implemented using bash. If you are deploying the manifest with kubectl, use this auxiliary script:
#!/bin/bash
# Collect all namespace names, then apply the manifest to each one.
namespaces=$(kubectl get namespaces -o=jsonpath='{range .items[*]}{.metadata.name} {end}')
for ns in $namespaces; do
  kubectl apply -f path-to-manifest-file.yaml --namespace "$ns"
done
You might ask why I apply this in a loop rather than in a single command with as many --namespace flags as there are namespaces. I actually tried that, but it seems kubectl does not honor multiple --namespace flags passed via a variable, as shown here:
╰─$ namespace_flags=`kubectl get namespaces -o=jsonpath='{range.items[*]} --namespace {.metadata.name}{end}'`
╰─$ echo $namespace_flags
 --namespace default --namespace kube-node-lease --namespace kube-public --namespace kube-system --namespace newrelic
╰─$ kubectl get pods ${namespace_flags[#]}
Error from server (NotFound): pods " --namespace default --namespace kube-node-lease --namespace kube-public --namespace kube-system --namespace newrelic" not found
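For completeness, a variant that sidesteps the word-splitting problem by feeding the names through xargs (a sketch, assuming a POSIX shell):

kubectl get namespaces -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' \
  | xargs -I{} kubectl apply -f path-to-manifest-file.yaml --namespace {}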
After doing a lot of research, I found a better solution: Kyverno, a CNCF project at the Incubating maturity level. It can implement cluster-level policies, which suffices for my use case. The link is here:
https://kyverno.io/
While using Kubernetes v1.16.8, both the ResourceQuota and LimitRanger admission plugins are enabled by default, and I did not have to add them to the admission plugins in kube-apiserver.
In my case, I use the following LimitRange:
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: LimitRange
  metadata:
    name: mem-limit-range
    namespace: test
  spec:
    limits:
    - default:
        memory: 512Mi
      defaultRequest:
        memory: 256Mi
      type: Container
and it adds the default limit for memory usage in a new Pod without specified limits, as expected.
The Pod's definition is as simple as possible:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod-ctr
    image: redis
When I describe the created pod, it has acquired the limit value from the LimitRange.
Everything is fine!
The problem occurs when I try to enforce a ResourceQuota for the namespace.
The ResourceQuota looks like this:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
When I delete and recreate the pod, it will not be created.
The ResourceQuota results in the following error:
Error from server (Forbidden): error when creating "test-pod.yml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu
In other words, the ResourceQuota is applied before the LimitRanger, so it does not let me create pods without a specified CPU limit.
Is there a way to enforce the LimitRanger first and then the ResourceQuota?
How do you apply them to your namespaces?
I would like developers who do not specify limits in the pod definition to be able to acquire the defaults, while the resource quota is enforced as well.
TL;DR:
Error from server (Forbidden): error when creating "test-pod.yml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu
You didn't set a default limit for CPU. According to the ResourceQuota docs:
If quota is enabled in a namespace for compute resources like cpu and memory, users must specify requests or limits for those values; otherwise, the quota system may reject pod creation.
This is why the pod is not being created. Add a cpu-limit.yaml:
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
  namespace: test
spec:
  limits:
  - default:
      cpu: 1
    defaultRequest:
      cpu: 0.5
    type: Container
The LimitRanger admission controller injects the defaults at pod-creation (admission) time, and yes, it injects the default values prior to the ResourceQuota validation.
Another minor issue I found is that not all of your YAMLs contain the namespace: test line under metadata; that's important for assigning the resources to the right namespace. I fixed it in the example below.
Reproduction:
I created the namespace, then applied the mem-limit and quota first, as you mentioned:
$ kubectl create namespace test
namespace/test created
$ cat mem-limit.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: test
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
$ cat quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
  namespace: test
spec:
  hard:
    limits.cpu: "2"
    limits.memory: 2Gi
$ kubectl apply -f mem-limit.yaml
limitrange/mem-limit-range created
$ kubectl apply -f quota.yaml
resourcequota/mem-cpu-demo created
$ kubectl describe resourcequota -n test
Name:          mem-cpu-demo
Namespace:     test
Resource       Used  Hard
--------       ----  ----
limits.cpu     0     2
limits.memory  0     2Gi
$ kubectl describe limits -n test
Name:       mem-limit-range
Namespace:  test
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   memory    -    -    256Mi            512Mi          -
Now if I try to create the pod:
$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: test
spec:
  containers:
  - name: test-pod-ctr
    image: redis
$ kubectl apply -f pod.yaml
Error from server (Forbidden): error when creating "pod.yaml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu
This is the same error you faced, because there is no default CPU limit set. We'll create and apply one:
$ cat cpu-limit.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
  namespace: test
spec:
  limits:
  - default:
      cpu: 1
    defaultRequest:
      cpu: 0.5
    type: Container
$ kubectl apply -f cpu-limit.yaml
limitrange/cpu-limit-range created
$ kubectl describe limits cpu-limit-range -n test
Name:       cpu-limit-range
Namespace:  test
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    -    500m             1              -
Now, with the CPU LimitRange in action, let's create the pod and inspect it:
$ kubectl apply -f pod.yaml
pod/test-pod created
$ kubectl describe pod test-pod -n test
Name:       test-pod
Namespace:  test
Status:     Running
...{{Suppressed output}}...
    Limits:
      cpu:     1
      memory:  512Mi
    Requests:
      cpu:     500m
      memory:  256Mi
Our pod was created with the defaults enforced by the LimitRange.
If you have any questions, let me know in the comments.
The error clearly defines how you are supposed to handle the issue.
Error from server (Forbidden): error when creating "test-pod.yml": pods "test-pod" is forbidden: failed quota: mem-cpu-demo: must specify limits.cpu
Your LimitRange object defines a default memory limit, but not a CPU one, while your quota restricts both memory and CPU. So you must specify CPU and memory limits in your Pod manifest. The LimitRange takes care of the default memory, but there is no default CPU limit. In that case, either add a CPU limit to the Pod manifest or add a default CPU limit to your LimitRange.
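For the first option, a minimal sketch of the pod manifest with an explicit CPU limit (the value is illustrative; memory still comes from the LimitRange default):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: test
spec:
  containers:
  - name: test-pod-ctr
    image: redis
    resources:
      limits:
        cpu: "1"   # satisfies the quota's limits.cpu requirement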
We have 3 namespaces on a Kubernetes cluster:
dev-test / build / prod
I want to limit the resource usage for dev-test & build only.
Can I set resource quotas only for these namespaces, without specifying (default) resource requests & limits at the pod/container level?
If the resource usage in the limited namespaces is low, prod can use the rest completely, while the other two can only grow to a limited value, so prod's resource usage is protected.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-test
spec:
  hard:
    cpu: "2"
    memory: 8Gi
Is this enough?
Yes, you can set resource limits per namespace using a ResourceQuota object:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
From the Kubernetes documentation.
Yes, you can use the kubectl CLI as well to define a resource quota per namespace.
Example:
$ kubectl create namespace testquotaspace
namespace/testquotaspace created
$ kubectl create quota testquota -n testquotaspace --hard=cpu=1,memory=8Gi
resourcequota/testquota created
$ kubectl describe namespaces testquotaspace
Name:         testquotaspace
Labels:       <none>
Annotations:  <none>
Status:       Active

Resource Quotas
 Name:     testquota
 Resource  Used  Hard
 --------  ---   ---
 cpu       0     1
 memory    0     8Gi

No LimitRange resource.
You can choose to limit other objects as well, such as Pods, Services, PVCs, etc.
Just run the CLI help and you will have all the details:
$ kubectl create quota --help
Create a resourcequota with the specified name, hard limits and optional scopes

Aliases:
quota, resourcequota

Examples:
  # Create a new resourcequota named my-quota
  kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10
If you do need YAML, you can generate it as follows and pipe it to a file as needed:
$ kubectl create quota testquota -n testquotaspace --hard=cpu=1,memory=8Gi --dry-run -o yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  creationTimestamp: null
  name: testquota
  namespace: testquotaspace
spec:
  hard:
    cpu: "1"
    memory: 8Gi
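To save and apply the generated manifest (note that recent kubectl versions expect --dry-run=client instead of the bare --dry-run flag used above):

$ kubectl create quota testquota -n testquotaspace --hard=cpu=1,memory=8Gi --dry-run=client -o yaml > quota.yaml
$ kubectl apply -f quota.yaml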
I want to dynamically create PersistentVolumes and mount them into my pod using PVCs, so I am following the Kubernetes dynamic provisioning concept. I am creating PersistentVolumeClaims using Kubernetes StorageClasses.
I am creating the PVC using a StorageClass like this:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  namespace: test
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: test-sc
  resources:
    requests:
      storage: 100M
Now, I want to put a restriction on the StorageClass test-sc to limit storage usage. In any case, the sum of storage used by PVCs created with StorageClass test-sc across all namespaces should not exceed 150M.
I am able to limit the storage usage of PVCs created with a given StorageClass for a single namespace as follows:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-limit-sc
  namespace: test
spec:
  hard:
    secure-maprfs.storageclass.storage.k8s.io/requests.storage: 150Mi
How do I put this limitation at the cluster level, i.e., on the StorageClass itself?
This is applicable per namespace only; there is no native cluster-wide quota on a StorageClass.
You will have to define quotas for all your namespaces, for example with a script like the one below.
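A minimal sketch, assuming the quota above is saved as quota-limit-sc.yaml without the hard-coded namespace; note that per-namespace quotas cap each namespace individually and do not enforce a single sum across all of them:

#!/bin/bash
# Apply the same storage-class quota to every namespace,
# since ResourceQuota is a namespace-scoped resource.
for ns in $(kubectl get namespaces -o=jsonpath='{range .items[*]}{.metadata.name} {end}'); do
  kubectl apply -f quota-limit-sc.yaml --namespace "$ns"
done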