How can my containerized app determine its own current resource utilization, as well as the maximum limit allocated by Kubernetes? Is there an API to get this info from cAdvisor and/or Kubelet?
For example, my container is allowed to use maximum 1 core, and it's currently consuming 800 millicores. In this situation, I want to drop/reject all incoming requests that are marked as "low priority".
How can I see my resource utilization & limit from within my container?
Note that this assumes auto-scaling is not available, e.g. when cluster resources are exhausted, or our app is not allowed to auto-scale (further).
You can use the Kubernetes Downward API to fetch the limits and requests. The syntax is:
volumes:
  - name: podinfo
    downwardAPI:
      items:
        - path: "cpu_limit"
          resourceFieldRef:
            containerName: client-container
            resource: limits.cpu
            divisor: 1m
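For the container to read that file, the Pod also needs to mount the podinfo volume. A minimal sketch, assuming the mount path /etc/podinfo (an arbitrary choice for illustration):

containers:
  - name: client-container
    image: busybox
    # With divisor: 1m, a 1-core limit is rendered as "1000" in this file.
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit; sleep 3600"]
    volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo

Note that the Downward API exposes the configured limits and requests, not live utilization. For current usage, the process can read its own cgroup accounting (e.g. /sys/fs/cgroup/cpuacct/cpuacct.usage on cgroup v1, cumulative CPU time in nanoseconds) or query the Kubelet's Summary API (/stats/summary) if it is reachable from the Pod.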
Is there a way to check the history of different Kubernetes resource types? It could be an additional plugin.
Use case:
For example, we currently have a StatefulSet on a 5-node cluster:

name: X
replicas: 3
resources:
  memory:
    limit: 2Gi
    request: 1Gi
Currently replica 1 is on node_1, replica 2 is on node_2, and replica 3 is on node_3.
I am curious about the state of the resources at any given time.
Let's say I want to check what the resource limits were one month ago, how many replicas we had, and on which nodes they were allocated.
To directly answer your question: you can't do that with out-of-the-box functionality.
You need an existing monitoring solution for Kubernetes that is capable of exposing the metrics you need. Two that come to mind:
kube-state-metrics server + Prometheus
kube-state-metrics server + Metricbeat
For instance, kube-state-metrics exposes a metric for a Pod container's resource limits (kube_pod_container_resource_limits), and Metricbeat can ingest that metric and help visualize it.
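For example, if Prometheus scrapes kube-state-metrics, a query along these lines returns the memory limit per container (the namespace value is illustrative, and the exact label set varies with the kube-state-metrics version):

kube_pod_container_resource_limits{namespace="default", resource="memory"}

Because Prometheus keeps time series, you can look back in time with an offset, e.g. appending offset 30d, as long as it falls within your retention window.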
Collecting Kubernetes state metrics and events
A single instance is deployed to collect Kubernetes metrics. It is integrated with the kube-state-metrics API to monitor state changes of objects managed by Kubernetes. This is the section of the config that defines state_metrics collection.
$HOME/k8s-o11y-workshop/Metricbeat/Metricbeat.yml:
kubernetes.yml: |-
  - module: kubernetes
    metricsets:
      - state_node
      - state_deployment
      - state_replicaset
      - state_pod
      - state_container
      # Uncomment this to get k8s events:
      #- event
    period: 10s
    host: ${NODE_NAME}
    hosts: ["kube-state-metrics:8080"]
I have deployed an app on Kubernetes and would like to test HPA.
With the kubectl top nodes command, I noticed that CPU and memory usage increased without stressing the app.
Does that make sense?
Also, while stressing the deployment with Apache Bench, CPU and memory don't increase enough to pass the target and create a replica.
My Deployment YAML file is too big to provide in full. This is one of my containers:
- name: web
  image: php_apache:1.0
  imagePullPolicy: Always
  resources:
    requests:
      memory: 50Mi
      cpu: 80m
    limits:
      memory: 100Mi
      cpu: 120m
  volumeMounts:
    - name: shared-data
      mountPath: /var/www/html
  ports:
    - containerPort: 80
The Deployment consists of 15 containers.
I have a VM that contains a cluster with 2 nodes (master, worker).
I would like to stress the deployment so that I can see it scale up.
But here I think there is a problem! Without stressing the app, the CPU/memory of the Pod has passed the target and 2 replicas have been created (without stressing it).
I know that the higher the requests I give the containers, the lower that percentage is.
But does it make sense for the memory/CPU usage to be increased from the beginning, without stressing it?
I would like the left part of the target (the memory usage of the pods) to start at 0% and increase as I stress the app, creating replicas.
But as I stress it with Apache Bench, the value increases by a maximum of 10%.
We can see the CPU usage here:

kubectl top pods
NAME                     CPU(cores)   MEMORY(bytes)
x-app-55b54b6fc8-7dqjf   76m          765Mi
59% is the pod's memory utilization, calculated as memory usage divided by the sum of memory requests. In my case, 59% = 765Mi / 1310Mi.
HPA YAML file:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 35
With the kubectl top nodes command, I noticed that CPU and memory usage increased without stressing the app. Does that make sense?
Yes, it makes sense. See the Google Cloud documentation about requests and limits:
Requests and limits are the mechanisms Kubernetes uses to control resources such as CPU and memory. Requests are what the container is guaranteed to get. If a container requests a resource, Kubernetes will only schedule it on a node that can give it that resource. Limits, on the other hand, make sure a container never goes above a certain value. The container is only allowed to go up to the limit, and then it is restricted.
But does it make sense for the memory/CPU usage to be increased from the beginning, without stressing it?
Yes. For example, your container web can start at memory: 50Mi and cpu: 80m, but it is allowed to grow to memory: 100Mi and cpu: 120m. Also, as you mentioned, you have 15 containers in total, so depending on their requests and limits, they can reach more than 35% of your memory.
In the HPA documentation, under algorithm details, you can find this information:
When a targetAverageValue or targetAverageUtilization is specified, the currentMetricValue is computed by taking the average of the given metric across all Pods in the HorizontalPodAutoscaler's scale target. Before checking the tolerance and deciding on the final values, we take pod readiness and missing metrics into consideration, however.
All Pods with a deletion timestamp set (i.e. Pods in the process of being shut down) and all failed Pods are discarded.
If a particular Pod is missing metrics, it is set aside for later; Pods with missing metrics will be used to adjust the final scaling amount.
I'm not sure about the last question:
59% is the pod's memory utilization, calculated as memory usage divided by the sum of memory requests. In my case, 59% = 765Mi / 1310Mi.
In your HPA you configured it to create another pod when averageUtilization reaches 35% of memory. It reached 59%, so it created another pod. As the HPA target is memory, the HPA does not take CPU into account at all. Also keep in mind that since this is an average, it needs about a minute to update the values.
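This matches the formula from the algorithm details quoted above, plugging in the numbers from your kubectl top output:

desiredReplicas = ceil[currentReplicas * (currentMetricValue / desiredMetricValue)]
                = ceil[1 * (59% / 35%)]
                = ceil[1.69]
                = 2

So one replica at 59% utilization against a 35% target scales out to 2 replicas.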
For a better understanding of how HPA works, please try this walkthrough.
If this was not helpful, please clarify what exactly you are asking.
Running OpenShift 3.11 with project ResourceQuotas and LimitRanges enforced, I am trying to understand how I can utilise my entire project CPU quota based on "actual current usage" rather than what I have "reserved".
As a simple example, if my question is not clear:
Say I have a project with a ResourceQuota of 2 CPU cores:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    limits.cpu: "2"
I have a number of long-running containers which are often idle, waiting for requests, so they are not actually using much CPU. When requests start appearing, I want the affected container to be able to "burst", allowing CPU usage up to the remaining CPU quota available in the project based on what is actually being used (I have no issue with the 100ms CFS resolution).
I need to enforce the maximum the project can have in total, hence the limits.cpu ResourceQuota. But I must therefore also provide limits.cpu for each container I create (explicitly or via LimitRange defaults), e.g.:
...
spec:
  containers:
    - ...
      resources:
        limits:
          cpu: "2"
        requests:
          cpu: 200m
This however will only work for the first container I create; a second container with the same settings will exceed the project quota's limits.cpu. But the container is just idle, doing almost nothing after its initial startup sequence.
Is it not possible in my scenario to have it deduct 200m from the quota for each container based on requests.cpu, and burst up to 1800m? (1600m of the 2000m quota unused + the initial 200m requested)
I have read through the following; the overcommit link seemed promising, but I am still stuck:
https://docs.openshift.com/container-platform/3.11/admin_guide/quota.html
https://docs.openshift.com/container-platform/3.11/admin_guide/limits.html
https://docs.openshift.com/container-platform/3.11/admin_guide/overcommit.html
Is what I am trying to do possible?
I am trying to understand how I can utilise my entire project CPU quota based on "actual current usage" rather than what I have "reserved"
You can't. If your quota is on limits.cpu, then the cluster admin doesn't want you to burst higher than that value.
If you can get your cluster admin to set your quota differently, with a low requests.cpu quota and a higher limits.cpu quota, you might be able to size your containers as you'd like, as sketched below.
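A minimal sketch of such a split quota (the values are illustrative, not a recommendation):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    requests.cpu: "1"   # tight budget on guaranteed CPU across the project
    limits.cpu: "8"     # looser ceiling on total burst capacity

With this shape, each container can keep a small requests.cpu while declaring a generous limits.cpu, because only the sum of requests is held to the tight budget.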
The other option is to use low limits, and a Horizontal Pod Autoscaler to scale up the number of pods for a specific service that is getting a burst in traffic.
Kubernetes on Google Cloud Platform configures a default CPU request and limit.
I make use of DaemonSets, and DaemonSet pods should use as much CPU as possible.
Manually increasing the upper limit is possible, but the upper bound must be reconfigured whenever new nodes are added, and it must be set much lower than what is available on the node so that rolling updates can still schedule pods.
This requires a lot of manual actions, and some resources are just not used most of the time. Is there a way to completely remove the default CPU limit so that pods can use all available CPUs?
GKE, by default, creates a LimitRange object named limits in the default namespace looking like this:
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
    - defaultRequest:
        cpu: 100m
      type: Container
So, if you want to change this, you can either edit it:
kubectl edit limitrange limits
Or you can delete it altogether:
kubectl delete limitrange limits
Note: the policies in the LimitRange objects are enforced by the LimitRanger admission controller which is enabled by default in GKE.
A LimitRange is a policy to constrain resources by Pod or Container in a namespace.
A limit range, defined by a LimitRange object, provides constraints that can:
Enforce minimum and maximum compute resources usage per Pod or Container in a namespace.
Enforce minimum and maximum storage request per PersistentVolumeClaim in a namespace.
Enforce a ratio between request and limit for a resource in a namespace.
Set default request/limit for compute resources in a namespace and automatically inject them to Containers at runtime.
You need to find the LimitRange resource of your namespace and remove the spec.limits.default.cpu and spec.limits.defaultRequest.cpu fields that are defined (or simply delete the LimitRange to remove all constraints).
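For example, to locate and edit the LimitRange objects (the name limits and namespace default match the GKE default mentioned above):

kubectl get limitrange --all-namespaces
kubectl describe limitrange limits -n default
kubectl edit limitrange limits -n default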
Resource limitation can be configured in 2 ways.
At the object level:
kubectl edit limitrange limits
This object is created by default and sets a default CPU request of 100m (1/10 of a CPU). Note that a container exceeding its CPU limit is throttled rather than killed; it is exceeding a memory limit that gets a container killed.
At the manifest level:
Using a StatefulSet, DaemonSet, etc., through a YAML file, configured under
spec.containers[].resources
It looks like this:

spec:
  containers:
    - name: app
      resources:
        limits:
          memory: 200Mi
        requests:
          cpu: 100m
          memory: 200Mi
As mentioned, you can modify the configuration or simply delete the objects to remove the limitations.
However, there are reasons why these limitations have been implemented.
I found a video from a Googler talking about it; take a look! [1]
On top of the LimitRange mentioned by Eduardo Baitello, you should also look out for admission controllers, which can intercept requests to the Kubernetes API and modify them (e.g. add limits and other defaults).
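As a rough illustration, the defaulting described in this thread comes from the LimitRanger plugin, which is enabled through the API server's admission-plugins flag (the plugin list below is illustrative, and on GKE the managed control plane does not let you change it):

kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,...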
Please explain the difference between ResourceQuota and LimitRange objects in Kubernetes.
LimitRange and ResourceQuota are objects used to control resource usage by a Kubernetes cluster administrator.
ResourceQuota is for limiting the total resource consumption of a namespace, for example:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
spec:
  hard:
    configmaps: "10"
    persistentvolumeclaims: "4"
    replicationcontrollers: "20"
    secrets: "10"
    services: "10"
LimitRange is for managing constraints at the pod and container level within the project:
apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "resource-limits"
spec:
  limits:
    - type: "Pod"
      max:
        cpu: "2"
        memory: "1Gi"
      min:
        cpu: "200m"
        memory: "6Mi"
    - type: "Container"
      max:
        cpu: "2"
        memory: "1Gi"
      min:
        cpu: "100m"
        memory: "4Mi"
      default:
        cpu: "300m"
        memory: "200Mi"
      defaultRequest:
        cpu: "200m"
        memory: "100Mi"
      maxLimitRequestRatio:
        cpu: "10"
An individual Pod or Container that requests resources outside of these LimitRange constraints will be rejected, whereas a ResourceQuota only applies to all of the namespace/project's objects in aggregate.
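To see how much of an aggregate quota is currently consumed, you can describe the ResourceQuota; it reports Used against Hard (the namespace name here is a placeholder):

kubectl describe resourcequota object-counts -n myproject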
Resource Quotas
When several users or teams share a cluster with a fixed number of nodes, there is a concern that one team could use more than its fair share of resources. Resource quotas are a tool for administrators to address this concern.
A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per namespace. It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace.
Resource quotas work like this:
Different teams work in different namespaces. This can be enforced with RBAC.
The administrator creates one ResourceQuota for each namespace.
Users create resources (pods, services, etc.) in the namespace, and the quota system tracks usage to ensure it does not exceed hard resource limits defined in a ResourceQuota.
If creating or updating a resource violates a quota constraint, the request will fail with HTTP status code 403 FORBIDDEN with a message explaining the constraint that would have been violated.
If quota is enabled in a namespace for compute resources like cpu and memory, users must specify requests or limits for those values; otherwise, the quota system may reject pod creation. Hint: Use the LimitRanger admission controller to force defaults for pods that make no compute resource requirements.
Limit Ranges
By default, containers run with unbounded compute resources on a Kubernetes cluster. Using Kubernetes resource quotas, administrators (also termed cluster operators) can restrict consumption and creation of cluster resources (such as CPU time, memory, and persistent storage) within a specified namespace. Within a namespace, a Pod can consume as much CPU and memory as is allowed by the ResourceQuotas that apply to that namespace. As a cluster operator, or as a namespace-level administrator, you might also be concerned about making sure that a single object cannot monopolize all available resources within a namespace.
A LimitRange is a policy to constrain the resource allocations (limits and requests) that you can specify for each applicable object kind (such as Pod or PersistentVolumeClaim) in a namespace.
A LimitRange provides constraints that can:
Enforce minimum and maximum compute resources usage per Pod or Container in a namespace.
Enforce minimum and maximum storage request per PersistentVolumeClaim in a namespace.
Enforce a ratio between request and limit for a resource in a namespace.
Set default request/limit for compute resources in a namespace and automatically inject them to Containers at runtime.
A LimitRange is enforced in a particular namespace when there is a LimitRange object in that namespace.
Constraints on resource limits and requests:
The administrator creates a LimitRange in a namespace.
Users create (or try to create) objects in that namespace, such as Pods or PersistentVolumeClaims.
First, the LimitRange admission controller applies default request and limit values for all Pods (and their containers) that do not set compute resource requirements.
Second, the LimitRange tracks usage to ensure it does not exceed resource minimum, maximum and ratio defined in any LimitRange present in the namespace.
If you attempt to create or update an object (Pod or PersistentVolumeClaim) that violates a LimitRange constraint, your request to the API server will fail with an HTTP status code 403 Forbidden and a message explaining the constraint that has been violated.
If you add a LimitRange in a namespace that applies to compute-related resources such as cpu and memory, you must specify requests or limits for those values. Otherwise, the system may reject Pod creation.
LimitRange validations occur only at the Pod admission stage, not on running Pods.
If you add or modify a LimitRange, the Pods that already exist in that namespace continue unchanged.
If two or more LimitRange objects exist in the namespace, it is not deterministic which default value will be applied.
---------------------------------------------
To summarise: ResourceQuota applies restrictions on CPU and memory for workloads, and on the number of objects that can be created, in a namespace. LimitRange defines the default, maximum, and minimum CPU and memory consumption of workloads in a namespace. If you have a quota applied in a namespace, every Pod must request resources like CPU and memory in its manifest; otherwise Pod creation will fail. But if you have a LimitRange enforced with default memory and CPU requests, that can be avoided.
Source: Kubernetes documentation