K8S persistent volume resizing dynamically using rules

Read this excellent article: https://kubernetes.io/blog/2018/08/02/dynamically-expand-volume-with-csi-and-kubernetes/
Does k8s support dynamic auto-resize based on rules, i.e. if the volume is 90% full, automatically resize it up to a configured max limit? From the above doc, dynamic resize is supported, but I don't see a way to define the rule (e.g. > 90% PV disk usage) or to set the max limit (e.g. don't resize beyond 300GB). It looks like the resize command needs to be executed manually. Is my understanding correct?
I know we could implement this by monitoring the kubelet_volume_* metrics, defining an alert based on our threshold, and having the alert event trigger the PV resize API.
But it would be cool if K8s supported this out of the box. It is very similar to auto-scaling containers based on the load rules we define; the same concept just needs to be extended to volumes.
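A minimal sketch of that alert-driven approach, assuming the kubelet_volume_stats_* metrics are scraped by Prometheus and the PVC's StorageClass has allowVolumeExpansion: true; the group name, alert name, threshold, and sizes below are illustrative, not anything built into Kubernetes:

groups:
- name: pvc-capacity                       # hypothetical rule group
  rules:
  - alert: PersistentVolumeAlmostFull
    expr: 100 * kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes > 90
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "PVC {{ $labels.persistentvolumeclaim }} is over 90% full"

Whatever reacts to that alert then has to request the expansion itself, e.g. by patching the claim; the "don't grow past 300GB" cap would also live in that tooling, since Kubernetes only expands to whatever size the PVC asks for:

# hypothetical PVC patch applied by the alert handler
spec:
  resources:
    requests:
      storage: 150Gi   # new, larger size, capped by your own max-limit logic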

Related

Prometheus alerting rules that include each Kubernetes volume's storage utilization

I would like to create a Prometheus alert that notifies me via Alertmanager when my Kubernetes volumes are, for example, 80% or 90% full. Currently I am using the following expression:
100 - 100 * kubelet_volume_stats_available_bytes{persistentvolumeclaim="pvc-name"} / kubelet_volume_stats_capacity_bytes{persistentvolumeclaim="pvc-name"} > 80
The problem, however, is that I have to create the same alert again in a slightly modified form for each claim. If, in addition, the name of a claim changes, I have to adjust the rules as well.
Question: Is there a clever way to create an alert that takes all existing claims into account, so that there is no need to change the alert when a claim changes?
Kind regards!
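One possible approach (a sketch, not a battle-tested rule): drop the persistentvolumeclaim matcher entirely, so the expression yields one series per claim and the alert fires separately for whichever claim crosses the threshold; the labels on each series then identify the claim in Alertmanager. The group name, alert name, and threshold below are made up:

groups:
- name: pvc-usage                          # hypothetical rule group
  rules:
  - alert: VolumeAlmostFull
    expr: |
      100 * (kubelet_volume_stats_capacity_bytes - kubelet_volume_stats_available_bytes)
        / kubelet_volume_stats_capacity_bytes > 80
    for: 10m
    annotations:
      summary: "Volume {{ $labels.persistentvolumeclaim }} in {{ $labels.namespace }} is over 80% full"

New claims are picked up automatically, and renamed claims simply show up as new label values, so the rule itself never needs to change.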

Kubernetes API custom image metadata

I am trying to use the Kubernetes API to read metadata from container images via annotations. The metadata applies to every instance of the respective image and is needed in order to run any resulting container properly. According to this SO question, it is not possible to read Docker image labels from the Kubernetes API directly.
My next thought was to use custom annotations added to the image manifest, although this seems to be a pretty hacky solution for such a "simple" task. In any case, if I add the annotations to the manifest using docker, I see no way to read them from the Kubernetes API.
I think I am on the completely wrong track here. This seems to be a rather simple task which other people have likely implemented already... anyway, I cannot find any further information on it. Is it really that hard to read image metadata via Kubernetes before deploying a container of that image?
Thanks in advance for any help!
Edit:
The reason I am asking is that I want to grant containers of specific images access to specific serial USB devices (e.g. FTDI232) on diverse host systems. Since I have no idea which path (e.g. /dev/ttyUSB0) will be assigned to the USB devices, I wrote a program that monitors USB devices and, when an appropriate device is or gets plugged in, creates the container and passes it the corresponding path. From inside the container I want to access the serial device via a static, non-changing path (e.g. /dev/FTDI232).
Yes. The K8s API is limited when it comes to this; I believe the abstractions for container image metadata sit at a lower level and were probably left out for a reason. You can always look at the CRI spec to see what's supported (note that the doc is out of date, so you might have to look at the code).
If the end goal is to use Kubernetes to run your workloads, it sounds like the more feasible route here is to write a script that reads the image manifest outside Kubernetes, generates the manifest files you need to run your workloads (based on that metadata), and then applies them to your cluster.
If you are using a common container image registry, you could also write something that pulls images from that registry just to pick up metadata and metadata changes.
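For the USB case specifically, a rough sketch of what such a generated manifest could look like, assuming the external watcher discovered /dev/ttyUSB0 and a hostPath volume is used to expose it at a fixed in-container path; the pod name, image name, device paths, and the use of privileged mode are assumptions for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: ftdi-reader                        # hypothetical name
spec:
  containers:
  - name: reader
    image: registry.example.com/ftdi-reader:latest   # hypothetical image
    securityContext:
      privileged: true                     # so the container may open the device node
    volumeMounts:
    - name: serial-dev
      mountPath: /dev/FTDI232              # fixed path seen inside the container
  volumes:
  - name: serial-dev
    hostPath:
      path: /dev/ttyUSB0                   # path discovered by the external watcher
      type: CharDevice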

How to apply an upload limit for a Google Storage bucket per day/month/etc

Is there a way to apply an upload limit for a Google Storage bucket per day/month/year?
Is there a way to apply a limit on the amount of network traffic?
Is there a way to apply a limit on Class A operations?
Is there a way to apply a limit on Class B operations?
I found only "Queries per 100 seconds per user" and "Queries per day" using the
https://cloud.google.com/docs/quota instructions, but these are JSON API quotas
(I am not even sure which kind of API is used inside the StorageClient C# client class).
To define quotas (and, by the way, SLOs), you need SLIs: service level indicators. That means having metrics on what you want to observe.
Here, that's not the case. Cloud Storage has no indicator for the volume of data per day. Thus, you don't have a built-in indicator or metrics... and therefore no quotas.
If you want this, you have to build something on your own: wrap all the Cloud Storage calls in a service that counts the volume of blobs per day, and then you will be able to apply your own rules to this custom indicator.
Of course, to prevent any bypass, you have to deny direct access to the buckets and grant only your "indicator service" access to them. The same goes for bucket creation, so that new buckets get registered in your service.
Not an easy task...

How to display scheduling policy in kube-scheduler

My question is how to display the current predicates and priority policies used by kube-scheduler.
Note that I didn't change or override them, so I am wondering how I can display them; or can anyone tell me what the default policies used for filtering and scoring are?

Stackdriver custom metric aggregate alerts

I'm using Kubernetes on Google Compute Engine and Stackdriver. The Kubernetes metrics show up in Stackdriver as custom metrics. I successfully set up a dashboard with charts that show a few custom metrics such as "node cpu reservation". I can even set up an aggregate mean of all node cpu reservations to see if my total Kubernetes cluster CPU reservation is getting too high. See screenshot.
My problem is, I can't seem to set up an alert on the mean of a custom metric. I can set up an alert on each node, but that isn't what I want. I can also set up "Group Aggregate Threshold Condition", but custom metrics don't seem to work for that. Notice how "Custom Metric" is not in the dropdown.
Is there a way to set an alert for an aggregate of a custom metric? If not, is there some way I can alert when my Kubernetes cluster is getting too high on CPU reservation?
Alerting on an aggregation of custom metrics is currently not available in Stackdriver. We are considering various solutions to the problem you're facing.
Note that sometimes it's possible to alert directly on symptoms of the problem rather than monitoring the underlying resources. For example, if you're concerned about CPU because X happens, users notice, and X is bad, you could consider alerting on symptoms of X instead of alerting on CPU.