Group Prometheus targets by Kubernetes labels or annotations?

I have two questions about Prometheus.
I use this Helm chart:
https://artifacthub.io/packages/helm/prometheus-community/prometheus?modal=values
There is a job "kubernetes.pods". Everything works, but I want to make a few changes.
1. Is it possible to group targets somehow, for example by label or annotation? Say, if the label exists, add the target to the group; if it does not exist, don't add it. Right now Prometheus groups targets by job_name, but I don't want all pods displayed in one section; I want to separate these pods by label or annotation. Is that possible?
2. How do I use the value from a label in relabel_configs? I tried the following, but none of them work:
__meta_kubernetes_pod_label_vp
$__meta_kubernetes_pod_label_vp
${__meta_kubernetes_pod_label_vp}
Thanks.
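
For reference: relabel_configs doesn't do shell-style interpolation, so the $... and ${...} forms won't work. Label values are read through source_labels and regex capture groups, then referenced as $1, $2, ... in replacement. A minimal sketch, assuming a pod label named "vp" as in the question (the "group" target label name is an illustrative choice). Note the targets page groups by job_name, so getting one section per label value means one scrape config per value, each with a keep rule like the first one below:

relabel_configs:
  # Keep only pods that have the "vp" label at all; everything else is dropped.
  - source_labels: [__meta_kubernetes_pod_label_vp]
    regex: .+
    action: keep
  # Copy the label's value into a target label (the default regex (.*) and
  # default replacement $1 make this a straight copy of the value).
  - source_labels: [__meta_kubernetes_pod_label_vp]
    target_label: group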

Related

How to scale down/up all deployments in a namespace except for one with a specific name on Azure Pipelines

I need a way to scale down all the deployments in a Kubernetes namespace except for one that has a specific string in its name, since it has dependencies. This runs in an AzureCLI task inside an Azure pipeline. Any ideas?
Something like:
If the name contains "randomname", then do not scale the service up/down.
I tried some exceptions, but it's still not working.
You can add a label to the one you want to exclude, and then use label selectors to apply operations to the selected set of resources.
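
A minimal sketch of that approach with kubectl (the namespace, deployment, and label names are hypothetical):

# Mark the deployment that must not be touched:
kubectl label deployment critical-db scale-exempt=true -n my-namespace
# Scale down everything in the namespace that does NOT carry the marker;
# kubectl scale accepts a label selector (-l):
kubectl scale deployment -n my-namespace -l 'scale-exempt!=true' --replicas=0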

How to force cAdvisor not to emit empty labels in Prometheus in a Kubernetes cluster?

We have a problem: cAdvisor is installed as a DaemonSet with a hostPort set up. When we request metrics at, for example, worker5:31194/metrics, the request takes a very long time, about 40 seconds. As I understand it, the problem is that cAdvisor emits a lot of extra empty labels.
It looks like this:
container_cpu_cfs_periods_total{container_label_annotation_cni_projectcalico_org_containerID="",container_label_annotation_cni_projectcalico_org_podIP="",container_label_annotation_cni_projectcalico_org_podIPs="",container_label_annotation_io_kubernetes_container_hash="",container_label_annotation_io_kubernetes_container_ports="",container_label_annotation_io_kubernetes_container_preStopHandler="",container_label_annotation_io_kubernetes_container_restartCount="",container_label_annotation_io_kubernetes_container_terminationMessagePath="",container_label_annotation_io_kubernetes_container_terminationMessagePolicy="",container_label_annotation_io_kubernetes_pod_terminationGracePeriod="",container_label_annotation_kubernetes_io_config_seen="",container_label_annotation_kubernetes_io_config_source="",container_label_app="",container_label_app_kubernetes_io_component="",container_label_app_kubernetes_io_instance="",container_label_app_kubernetes_io_name="",container_label_app_kubernetes_io_version="",container_label_architecture="",container_label_build_date="",container_label_build_id="",container_label_com_redhat_build_host="",container_label_com_redhat_component="",container_label_com_redhat_license_terms="",container_label_control_plane="",container_label_controller_revision_hash="",container_label_description="",container_label_distribution_scope="",container_label_git_commit="",container_label_io_k8s_description="",container_label_io_k8s_display_name="",container_label_io_kubernetes_container_logpath="",container_label_io_kubernetes_container_name="",container_label_io_kubernetes_docker_type="",container_label_io_kubernetes_pod_name="",container_label_io_kubernetes_pod_namespace="",container_label_io_kubernetes_pod_uid="",container_label_io_kubernetes_sandbox_id="",container_label_io_openshift_expose_services="",container_label_io_openshift_tags="",container_label_io_rancher_rke_container_name="",container_label_k8s_app="",container_label_license="",container_label_maintainer="",container_label_name="",container_label_org_label_schema_build_date="",container_label_org_label_schema_license="",container_label_org_label_schema_name="",container_label_org_label_schema_schema_version="",container_label_org_label_schema_url="",container_label_org_label_schema_vcs_ref="",container_label_org_label_schema_vcs_url="",container_label_org_label_schema_vendor="",container_label_org_label_schema_version="",container_label_org_opencontainers_image_created="",container_label_org_opencontainers_image_description="",container_label_org_opencontainers_image_documentation="",container_label_org_opencontainers_image_licenses="",container_label_org_opencontainers_image_revision="",container_label_org_opencontainers_image_source="",container_label_org_opencontainers_image_title="",container_label_org_opencontainers_image_url="",container_label_org_opencontainers_image_vendor="",container_label_org_opencontainers_image_version="",container_label_pod_template_generation="",container_label_pod_template_hash="",container_label_release="",container_label_summary="",container_label_url="",container_label_vcs_ref="",container_label_vcs_type="",container_label_vendor="",container_label_version="",id="/kubepods/burstable/pod080e6da8-7f00-403d-a8de-3f93db373776",image="",name=""} 3.572708e+06
Is there any way to drop the empty labels, or to remove these labels altogether?
I found two parameters. The first one suited me, but you never know who might need the second, and there is little information about them, so I decided to post the answer:
-store_container_labels
convert container labels and environment variables into labels on prometheus
metrics for each container. If flag set to false, then only metrics exported are
container name, first alias, and image name (default true)
-whitelisted_container_labels string
comma separated list of container labels to be converted to labels on prometheus
metrics for each container. store_container_labels must be set to false for this to
take effect.
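
As a sketch, these flags would be passed as container args in the cAdvisor DaemonSet manifest (the image tag and the whitelist values here are illustrative):

containers:
- name: cadvisor
  image: gcr.io/cadvisor/cadvisor:v0.47.0
  args:
  # Stop converting container labels and env vars into metric labels:
  - --store_container_labels=false
  # Or keep a chosen subset instead (only honored when the flag above is false):
  - --whitelisted_container_labels=io.kubernetes.pod.name,io.kubernetes.pod.namespace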

Can't create multiple labels with the same key in Kubernetes 1.19

Is there a way to have the same label key with different values on a pod? For example, can a pod have both "app=db" and "app=web" as labels? I tried the kubectl label command, but it keeps only one of the labels.
Labels are a map[string]string, so you are correct: this is not possible.
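
To illustrate (the role-* keys are an illustrative workaround, not part of the original answer): labels are a map, so a duplicate key is rejected or silently overwritten, and the usual workaround is to give each value its own key:

metadata:
  labels:
    app: db          # adding a second "app:" entry would only overwrite this one
    # one key per value instead of duplicate keys:
    role-db: "true"
    role-web: "true"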

Scale a replica set using labels

I was able to scale the replica set using the following:
/apis/apps/v1/namespaces/{namespace}/deployments/{deployment}/scale
Is there a way to scale based on a specific label instead of the namespace and deployment name?
I did find a way to get deployments based on a label:
/apis/extensions/v1beta1/deployments?labelSelector={labelKey}={labelValue}
But I couldn't find a way to scale using a label.
Any help is appreciated.
You can scale Deployments, ReplicaSets, ReplicationControllers, and StatefulSets using the scale subresource of the appropriate API:
/apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale
/api/v1/namespaces/{namespace}/replicationcontrollers/{name}/scale (ReplicationControllers live in the core API group, hence /api/v1)
/apis/apps/v1/namespaces/{namespace}/replicasets/{name}/scale
/apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/scale
The idea is to first find the Deployments with the required labels using the API /apis/apps/v1/deployments?labelSelector={labelKey}={labelValue} (the /apis/extensions/v1beta1 path from the question is deprecated and was removed in Kubernetes 1.16),
and then use the API /apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale to scale each one.
You can apply the same logic to ReplicaSets, ReplicationControllers, and StatefulSets. But remember: if you use a Deployment, scale the Deployment itself, not the ReplicaSet it creates.
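
A sketch of that flow with curl, assuming kubectl proxy is running on localhost:8001 (the label, namespace, deployment name, and replica count are all illustrative):

# 1. List deployments matching the label, across all namespaces:
curl 'http://localhost:8001/apis/apps/v1/deployments?labelSelector=tier%3Dbackend'
# 2. Scale each match through its scale subresource:
curl -X PATCH \
  -H 'Content-Type: application/merge-patch+json' \
  -d '{"spec": {"replicas": 5}}' \
  'http://localhost:8001/apis/apps/v1/namespaces/default/deployments/my-app/scale'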

Kubernetes: send custom parameters per pod

I'm testing with a Kubernetes cluster, and it's been marvelous to work with. But I have the following scenario:
I need to pass each pod a custom value (or values) just for that pod.
Let's say I have deployment 1, and I define some env vars on that deployment. The env vars go to every pod, and that's fine, but what I need is to send custom values to a specific pod (like "to the third pod that I create, send this").
Is there any artifact/feature I could use? It does not have to be an env var; it could be a ConfigMap value, or anything. Thanks in advance.
Pods in a Deployment are homogeneous. If you want a set of pods that are distinct from one another, you might want to use a StatefulSet, which gives each pod an index that you can use from inside the pod to select the relevant config params.
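
A minimal sketch of that idea (all names are illustrative): StatefulSet pods are named <name>-0, <name>-1, ..., so you can inject the pod's own name via the downward API and derive the ordinal from it inside the container:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: worker
spec:
  serviceName: worker
  replicas: 3
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: busybox:1.36
        env:
        # Downward API: the pod's own name (worker-0, worker-1, ...).
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        # Strip everything up to the last "-" to get the ordinal,
        # then use it to pick a per-pod parameter (here just echoed).
        command: ["sh", "-c", "ORDINAL=${POD_NAME##*-}; echo \"pod ordinal: $ORDINAL\"; sleep 3600"]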
The real question here is how you know what you want to put in a particular pod in the first place. You could probably achieve something like this by writing a custom initializer for your pods. You could also have an init container prefetch information from a central coordinator. To propose a solution, you need to figure that out in a "not a snowflake" way.