I tried to build an AngularJS application in my free OpenShift account (openshift.com).
The issue is that when I build the Angular project, the dependencies are so big that I need 1 GiB of memory for the build, and I don't know why OpenShift limits the memory to 25%.
I tried to add resources to my config, but it is still limited to 25%:
resources:
  requests:
    cpu: "100m"
    memory: "1Gi"
I hope you have some ideas about this.
Thanks
François
Setting the memory request does not have an effect on OpenShift Online Starter (the "free account on openshift.com"). The limit would default to 512 MiB and other values (requests, CPU limit) will be set by the ClusterResourceOverride admission controller that is in place. In order to have your build use up to 1 GiB memory, you should specify only the memory limit within the build configuration:
resources:
  limits:
    memory: 1Gi
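In a BuildConfig that could look like this (a sketch; the name is a placeholder):

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: angular-app          # placeholder name
spec:
  resources:
    limits:
      memory: 1Gi            # the build pod may use up to 1 GiB; leave requests
                             # to the ClusterResourceOverride admission controller
```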
1. Look at the LimitRange object in your namespace (https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/). That might be limiting your max memory.
2. Look at kubectl get -o=yaml POD_NAME for your pod: is your configuration actually taking effect?
3. If the answer to #2 is no, then maybe delete and re-apply your pod to the system.
I'm new to monitoring tools like Prometheus and Grafana, and I would like to create a dashboard that shows the current resource requests, limits, and usage for a pod. In addition, this pod has 2 containers inside.
My resources for the first container look like:
resources:
  requests:
    cpu: "3800m"
    memory: "9500Mi"
  limits:
    cpu: "6500m"
    memory: "9500Mi"
and for the second container:
resources:
  limits:
    cpu: 100m
    memory: 100Mi
  requests:
    cpu: 50m
    memory: 50Mi
When executing this query in Prometheus:
rate(container_cpu_usage_seconds_total{pod=~"MY_POD"}[5m])
I get:
To be honest, I don't know how the returned data relates to my resources. On Grafana it looks like this:
In addition, I would like to add information about requests and limits to the dashboard, but I don't know how to scale the dashboard to show all the data.
When I execute this query: kube_pod_container_resource_requests{pod=~"MY_POD"} I get:
And this looks valid compared to my resources. I get proper values for limits too, but I would like to show all of this data (usage, requests, limits) on one dashboard. Could somebody give me any tips on how to achieve this?
Simple: just add 2 more queries. If you don't know where to add them, you can find a + symbol below the Metrics Browser and Options tabs to add more queries.
The metric container_cpu_usage_seconds_total is a counter, so rate() over it gives you the CPU usage in cores.
The other two queries are kube_pod_container_resource_requests{pod=~"MY_POD"} and kube_pod_container_resource_limits{pod=~"MY_POD"}. If it's only one pod, there are no issues; but if you have multiple pods, try to use sum:
Query A: sum(rate(container_cpu_usage_seconds_total{pod=~"MY_POD"}[5m]))
Query B: sum(kube_pod_container_resource_requests{pod=~"MY_POD"})
Query C: sum(kube_pod_container_resource_limits{pod=~"MY_POD"})
This will look good without too much detail. For more detail, such as per-container data, just create three more panels for requests, limits, and usage by container, and add by (container) after every query.
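For example, the per-container variants might look like this (a sketch; it assumes a kube-state-metrics version that exposes the resource label, and MY_POD is the same placeholder as above):

```
# CPU usage in cores, per container
sum(rate(container_cpu_usage_seconds_total{pod=~"MY_POD"}[5m])) by (container)

# CPU requests and limits, per container
sum(kube_pod_container_resource_requests{pod=~"MY_POD", resource="cpu"}) by (container)
sum(kube_pod_container_resource_limits{pod=~"MY_POD", resource="cpu"}) by (container)
```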
Another approach: create variables for the pod and container so that you can select the container you want to see, and put the 3 queries in a single panel. That makes the panel more dynamic and less noisy.
I have created a deployment with the following resources:
resources:
  requests:
    memory: "128Mi"
    cpu: "0.45"
  limits:
    memory: "128Mi"
    cpu: "0.8"
Using the minikube metrics server I can see that my pod CPU usage is below the requested of 450m and is only using around 150m. Shouldn't it always use 450m as a minimum value since I requested it in my .yaml file? The CPU usage goes up only if I dramatically increase the workload of the deployment. Can I have my deployment use 450m as baseline and not go below that value?
The requested value is a hint for the scheduler, to help it place the workload well. If your application does not use all of the requested resources, that is fine.
The limit ensures no more resources are used: if more CPU is used, the container is throttled; if more RAM is used, the workload is killed (out of memory).
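CPU throttling works through the kernel's CFS bandwidth controller: Kubernetes translates the CPU limit into a time quota per scheduling period. A small sketch of that arithmetic (an illustration, not the actual kubelet code; the default 100 ms period is assumed):

```python
# Sketch: how a Kubernetes CPU limit maps onto a CFS quota.
# Kubernetes sets cpu.cfs_quota_us = limit_in_cores * cpu.cfs_period_us,
# where cpu.cfs_period_us defaults to 100 ms.
CFS_PERIOD_US = 100_000  # default cfs_period_us (100 ms)

def cfs_quota_us(cpu_limit_cores: float) -> int:
    """Microseconds of CPU time the container may use per CFS period."""
    return int(cpu_limit_cores * CFS_PERIOD_US)

# A limit of 0.8 CPU allows 80 ms of CPU time per 100 ms period;
# exceeding that gets the container throttled, not killed.
print(cfs_quota_us(0.8))
```

So a pod with a 0.8 CPU limit that tries to burn a full core simply waits out the remainder of each 100 ms period, which is why high CPU pressure shows up as latency rather than restarts.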
I have a container running in a GKE autopilot K8s cluster. I have the following in my deployment manifest (only relevant parts included):
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - resources:
            requests:
              memory: "250Mi"
              cpu: "512m"
So I've requested the minimum resources that GKE Autopilot allows for normal pods. Note that I have not specified any limits.
However, having applied the manifest and looking at the yaml I see that it does not match what's in the manifest I applied:
resources:
  limits:
    cpu: 750m
    ephemeral-storage: 1Gi
    memory: 768Mi
  requests:
    cpu: 750m
    ephemeral-storage: 1Gi
    memory: 768Mi
Any idea what's going on here? Why has GKE scaled up the resources? This is costing me more money unnecessarily.
Interestingly, it was working as intended until recently; this behaviour only seemed to start in the past few days.
If the resources that you've requested are the following:
  memory: "250Mi"
  cpu: "512m"
then they are not compliant with the minimum amount of resources that GKE Autopilot will assign. Please take a look at the documentation:
Minimum resources for normal Pods:
  CPU: 250 mCPU
  Memory: 512 MiB
  Ephemeral storage: 10 MiB (per container)
-- Cloud.google.com: Kubernetes Engine: Docs: Concepts: Autopilot overview: Allowable resource ranges
As you can see, the amount of memory you've requested was too small, and that's why you saw the following message (and why the manifest was modified to increase the requests/limits):
Warning: Autopilot increased resource requests for Deployment default/XYZ to meet requirements. See http://g.co/gke/autopilot-resources.
To fix that, you will need to assign resources that are within the ranges in the documentation I've linked above.
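A request block that is within those ranges might look like this (a sketch; notably, the values in the question look like the CPU and memory minimums swapped):

```yaml
resources:
  requests:
    cpu: "250m"                  # Autopilot minimum CPU for normal Pods
    memory: "512Mi"              # Autopilot minimum memory
    ephemeral-storage: "10Mi"    # per-container minimum
```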
In kubernetes I can currently limit the CPU and Memory of the containers, but what about the hard disk size of the containers.
For example, how could I prevent someone from running a container on my k8s worker node that stores .jpg files internally, making the container grow and grow over time?
I'm not talking about Persistent Volumes and Persistent Volume Claims. I mean that if someone makes an error in the container and writes inside the container filesystem, I want to control it.
Is there a way to limit the amount of disk used by the containers?
Thank you.
There is some support for this; the tracking issues are #361, #362 and #363. You can define requests and/or limits on the resource called ephemeral-storage, like so (for a Pod/PodTemplate):
spec:
  containers:
    - name: foo
      resources:
        requests:
          ephemeral-storage: 50Mi
        limits:
          ephemeral-storage: 50Mi
The page Reserve Compute Resources for System Daemons has some additional information on this feature.
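If you want to enforce this for every container in a namespace, rather than relying on each Pod spec, a LimitRange can set defaults and a maximum (a sketch; the name and values are placeholders):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: ephemeral-storage-limit   # placeholder name
spec:
  limits:
    - type: Container
      defaultRequest:
        ephemeral-storage: 50Mi   # applied when a container sets no request
      default:
        ephemeral-storage: 50Mi   # applied when a container sets no limit
      max:
        ephemeral-storage: 1Gi    # upper bound any container may ask for
```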
I'm running a small node in gcloud with 2 pods running. The Google Cloud console shows all resource utilization:
- <40% CPU utilization
- about 8k network bytes
- about 64 disk bytes
When adding the next pod, it fails with the error below:
FailedScheduling:Failed for reason PodExceedsFreeCPU and possibly others
Based on the numbers I see in the Google console, ~60% CPU is available. Is there any way to get more logs? Am I missing something obvious here?
Thanks in advance !
Kubernetes reserves capacity based on requests, so you should check the capacity allocated on the cluster instead of the utilization:
kubectl describe nodes
You can find a deeper description of node capacity at: http://kubernetes.io/docs/user-guide/compute-resources/
In your Helm chart or Kubernetes YAML, check the resources section. Even if you have free capacity, if your request would put the cluster over its allocatable total, the pod will fail to schedule, even if it wouldn't actually use that much. The request is asking for a reservation of capacity. E.g.:
spec:
  serviceAccountName: xxx
  containers:
    - name: xxx
      image: xxx
      command:
        - cat
      tty: true
      resources:
        requests:
          memory: "256Mi"
          cpu: "250m"
        limits:
          memory: "256Mi"
          cpu: "250m"
If the cpu value there would make the cluster oversubscribed, the pod won't schedule. So make sure your requests reflect actual typical usage; if they already do, then you need more capacity.
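The scheduler's check can be sketched like this (an illustration of the idea, not the actual scheduler code):

```python
# Sketch: the scheduler fits pods by summing *requests* against the node's
# allocatable capacity; live utilization is never consulted.
def fits(allocatable_mcpu: int, existing_requests_mcpu: list[int],
         new_request_mcpu: int) -> bool:
    """True if the new pod's CPU request fits on the node."""
    return sum(existing_requests_mcpu) + new_request_mcpu <= allocatable_mcpu

# A node with 1000m allocatable and two pods requesting 400m each: a 250m pod
# is rejected (1050m > 1000m), even if actual CPU usage is only ~40%.
print(fits(1000, [400, 400], 250))
```

This is why the console can show 60% free CPU while scheduling still fails: the free 60% is measured usage, not unreserved requests.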