Scale the replica set using Labels - kubernetes

I was able to scale a replica set using the following API:
/apis/apps/v1/namespaces/{namespace}/deployments/{deployment}/scale
Is there a way to scale based on a specific label instead of the namespace and deployment name?
I found a way to get deployments by label:
/apis/extensions/v1beta1/deployments?labelSelector={labelKey}={labelValue}
but I couldn't find a way to scale using a label.
Any help is appreciated.

You can scale Deployments, ReplicaSets, ReplicationControllers and StatefulSets using the appropriate API:
/apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale
/api/v1/namespaces/{namespace}/replicationcontrollers/{name}/scale
/apis/apps/v1/namespaces/{namespace}/replicasets/{name}/scale
/apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/scale
The idea is to find the Deployments with the required labels using the API /apis/extensions/v1beta1/deployments?labelSelector={labelKey}={labelValue},
and then use the API /apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale to scale each of them.
You can apply the same logic to ReplicaSets, ReplicationControllers and StatefulSets. But remember: if you use a Deployment, you need to scale the Deployment itself, not the ReplicaSet it creates.
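A minimal sketch of that two-step flow with curl (the label app=web, the namespace, the deployment name, and the $TOKEN / $APISERVER variables are all placeholders; adjust to your cluster):

# 1. List deployments (here across all namespaces) carrying the label app=web
curl -sS -H "Authorization: Bearer $TOKEN" \
  "https://$APISERVER/apis/apps/v1/deployments?labelSelector=app%3Dweb"

# 2. For each deployment returned (namespace + name), patch its scale subresource
curl -sS -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"spec":{"replicas":3}}' \
  "https://$APISERVER/apis/apps/v1/namespaces/default/deployments/my-deployment/scale"

If kubectl is an option instead of the raw API, it can do both steps at once: kubectl scale deployment -l app=web --replicas=3 -n <namespace>.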

Related

How to scale down/up all deployments in a namespace except for one with a specific name on Azure Pipelines

I need to know a way to scale down all the deployments in a Kubernetes namespace except for one with a specific string in its name, since it has dependencies. This is in an Azure CLI task inside an Azure pipeline. Any ideas?
Something like:
If the name contains "randomname", then do not scale the service up/down.
I did try some exceptions, but it's still not working.
You can add a label to the one you want to exclude, and then use label selectors to apply the scaling operation to every other resource in the namespace.
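As a rough sketch (the label name scale-exempt is just an example), the Azure CLI task could run something like:

# label the deployment that must keep running (one-time)
kubectl label deployment <name-with-dependencies> scale-exempt=true -n <namespace>

# scale down every deployment in the namespace that does NOT carry that label
kubectl scale deployment -n <namespace> -l 'scale-exempt!=true' --replicas=0

The != selector matches deployments where the label is absent or has a different value, so only the labelled one is left untouched.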

Group prometheus targets by kubernetes labels or annotations?

I have two questions about Prometheus.
I use this helm chart.
https://artifacthub.io/packages/helm/prometheus-community/prometheus?modal=values
There is a job "kubernetes.pods". Everything works, but I want to make a few changes.
Is it possible to somehow group targets, for example by label or annotation? For example, if the label exists, add the target to the group; if it does not exist, don't add it.
Right now Prometheus groups them by job_name, but I don't want all pods to be displayed in one section. I want to separate these pods by label or annotation.
Is it possible?
How do I use the value from the label for relabel_configs?
I tried the following, but it doesn't work:
__meta_kubernetes_pod_label_vp
$__meta_kubernetes_pod_label_vp
${__meta_kubernetes_pod_label_vp}
Thanks.
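For reference, the usual way to use a pod label in a scrape config is through relabel_configs applied to the discovery metadata, not through $-style substitution. A rough sketch, assuming the pod label is literally called vp (adjust the names to your chart's values file):

relabel_configs:
  # keep only pods that actually have the "vp" label
  - source_labels: [__meta_kubernetes_pod_label_vp]
    regex: .+
    action: keep
  # copy the label's value onto the scraped series as a "vp" target label
  - source_labels: [__meta_kubernetes_pod_label_vp]
    target_label: vp
    action: replace

You could then define one scrape job per group, or simply filter dashboards and alerts on the resulting vp label.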

scale stateful set with shared volume per az

I would like to scale a Kubernetes StatefulSet in EKS which serves static data. The data should be shared where possible, which means by availability zone.
Can I specify a volume claim template to make this possible? Or is there another mechanism?
I also need to initialize the volume (in an init container) when the first node joins. Do I need to provide some external locking mechanism, rather than just checking whether the volume is empty?
If you want your pods to share static data, then you can:
Put the data into a PersistentVolume
Mount the same volume into the Pods (with ReadOnlyMany)
A few comments:
The Kubernetes documentation lists the available volume types, so you can choose the one that fits your needs
Since every pod serves the same data, you may use a Deployment instead of a StatefulSet; StatefulSets are for when each of your pods is different
If you need the first pod to initialize the data, you can use an initContainer (and then use ReadWriteMany instead of ReadOnlyMany). Depending on what exactly you are trying to do, you could also initialize the data first and only then start your Deployment (and Pods); then you wouldn't need any locking
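A minimal sketch of that setup (all names are placeholders; whether ReadOnlyMany/ReadWriteMany is available depends on your storage backend, e.g. EFS on EKS):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-data
spec:
  accessModes: ["ReadOnlyMany"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-server
spec:
  replicas: 3
  selector:
    matchLabels: {app: static-server}
  template:
    metadata:
      labels: {app: static-server}
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            # every replica mounts the same claim read-only
            - name: data
              mountPath: /usr/share/nginx/html
              readOnly: true
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: static-data
            readOnly: true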

How to differentiate between equally-named Prometheus metrics from dynamically discovered micro-services in Kubernetes

I’m looking for a way to differentiate between Prometheus metrics gathered from different dynamically discovered services running in a Kubernetes cluster (we’re using https://github.com/coreos/prometheus-operator). E.g. for the metrics written into the db, I would like to understand from which service they actually came.
I guess you can do this via a label from within the respective services, however, swagger-stats (http://swaggerstats.io/) which we’re using does not yet offer this functionality (to enhance this, there is an issue open: https://github.com/slanatech/swagger-stats/issues/50).
Is there a way to implement this over Prometheus itself, e.g. that Prometheus adds a service-specific label per time series after a scrape?
Appreciate your feedback!
Is there a way to implement this over Prometheus itself, e.g. that Prometheus adds a service-specific label per time series after a scrape?
This is how Prometheus is designed to be used: a target doesn't know how the monitoring system views it, and prefixing metric names makes cross-service analysis harder. Both setting labels across an entire target from within the application and prefixing metric names are considered anti-patterns.
What you want is called a target label; these usually come from relabelling rules applied to metadata from service discovery.
When using the Prometheus Operator, you can specify targetLabels as a list of labels to copy from the Kubernetes Service to the Prometheus targets.
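For example, a ServiceMonitor along these lines (a sketch; the app label, the monitored selector and the metrics port name are assumptions) copies the Service's app label onto every series scraped from its endpoints:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-servicemonitor
spec:
  selector:
    matchLabels:
      monitored: "true"
  # labels copied from the Kubernetes Service onto the Prometheus target
  targetLabels:
    - app
  endpoints:
    - port: metrics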

Kubernetes: send custom parameters per pod

I'm testing with a Kubernetes cluster, and it's been marvelous to work with. But I have the following scenario:
I need to pass to each pod a custom value (or values) just for that pod.
Let's say I have deployment 1 and I define some env vars on that deployment; the env vars will go to each pod, and that's good. But what I need is to send custom values that go to one specific pod (like "to the third pod that I create, send this").
This is what I got now:
Then, what I need is something like this:
Is there any artifact/feature I could use? It does not have to be an env var; it may be a ConfigMap value, or anything. Thanks in advance.
Pods in a Deployment are homogeneous. If you want a set of pods that are distinct from one another, you might want to use a StatefulSet, which gives each pod an index you can use within the pod to select the relevant config params.
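A sketch of that approach (names are placeholders): each StatefulSet pod gets a stable name ending in its ordinal (app-0, app-1, ...), which you can expose via the Downward API and use inside the container to pick its own config:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
spec:
  serviceName: app
  replicas: 3
  selector:
    matchLabels: {app: app}
  template:
    metadata:
      labels: {app: app}
    spec:
      containers:
        - name: main
          image: busybox
          # expose the pod name, then strip everything before the last "-" to get the ordinal
          command: ["sh", "-c", "ORDINAL=${POD_NAME##*-}; echo using config for pod $ORDINAL; sleep 3600"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name

The ordinal can then index into a ConfigMap, a mounted file, or whatever holds the per-pod values.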
The real question here is how you know what you want to put in a particular pod in the first place. You could probably achieve something like this by writing a custom initializer for your pods. You could also have an init container prefetch the information from a central coordinator. To propose a solution, you need to figure this out in a "not a snowflake" way.