Kubernetes: send custom parameters per pod

I'm testing with a Kubernetes cluster, and it's been marvelous to work with. But I've run into the following scenario:
I need to pass each pod a custom value (or values) meant just for that pod.
Let's say I have deployment 1 and I define some env vars on it. Those env vars go to every pod, which is fine, but what I also need is to send custom values targeted at a specific pod (like "to the third pod that I may create, send this").
Is there any artifact/feature I could use? It does not have to be an env var; it could be a ConfigMap value or anything else. Thanks in advance.

Pods in a Deployment are homogeneous. If you want a set of pods that are distinct from one another, you might want to use a StatefulSet, which gives each pod a stable ordinal index you can use inside the pod to select the relevant config params.
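For example, a minimal sketch of a container spec fragment that derives the pod's ordinal from its own name via the downward API (the config path and server command are illustrative assumptions):

env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
command: ["sh", "-c"]
args:
- |
  # StatefulSet pod names end in the ordinal, e.g. my-app-2 -> 2
  ORDINAL="${POD_NAME##*-}"
  exec /app/server --config "/config/pod-${ORDINAL}.yaml"

Each replica then picks up its own config file (or env value) based on that index.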

The real question here is how you know what you want to put into a particular pod in the first place. You could probably achieve something like this by writing a custom initializer for your pods. You could also have an init container prefetch the information from a central coordinator. To propose a solution, you need to figure this out in a "not a snowflake" way.
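For example, a minimal sketch of the init-container approach; the coordinator service, URL path, and shared volume are purely illustrative assumptions:

initContainers:
- name: fetch-config
  image: curlimages/curl
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  command: ["sh", "-c"]
  args:
  - curl -fsS "http://config-coordinator/config/${POD_NAME}" -o /work/app-config.yaml
  volumeMounts:
  - name: work
    mountPath: /work

The main container mounts the same emptyDir volume and reads the per-pod config the init container fetched.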

Related

How to scale down/up all deployments in a namespace except for one with a specific name on Azure Pipelines

I need a way to scale down all the deployments in a Kubernetes namespace except for one whose name contains a specific string, since it has dependencies. This runs in an AzureCLI task inside an Azure pipeline. Any ideas?
Something like:
If name contains "randomname" then do not scale up/down the service.
I did try some exceptions, but it's still not working.
You can add a label to the one you want to exclude, and then use label selectors to apply the scale operation only to the matching set of resources.
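For example, a minimal sketch using kubectl (the label key scale-exempt and the namespace are illustrative):

# Label the deployment that must not be touched.
kubectl label deployment my-critical-deployment scale-exempt=true -n my-namespace

# Scale every deployment in the namespace that does NOT carry that label.
kubectl scale deployments -n my-namespace -l 'scale-exempt!=true' --replicas=0

The same two commands can be run from the AzureCLI task once the pipeline is authenticated against the cluster.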

How to force cAdvisor not to emit empty labels in Prometheus in a Kubernetes cluster?

We have a problem: we have cAdvisor installed as a DaemonSet with a hostPort setup. We request metrics at, for example, worker5:31194/metrics, and the request takes a very long time, about 40 seconds. As I understand it, the problem is related to the fact that cAdvisor emits lots of extra empty labels.
A sample metric looks like this:
container_cpu_cfs_periods_total{container_label_annotation_cni_projectcalico_org_containerID="",container_label_annotation_cni_projectcalico_org_podIP="",container_label_annotation_cni_projectcalico_org_podIPs="",container_label_annotation_io_kubernetes_container_hash="",container_label_annotation_io_kubernetes_container_ports="",container_label_annotation_io_kubernetes_container_preStopHandler="",container_label_annotation_io_kubernetes_container_restartCount="",container_label_annotation_io_kubernetes_container_terminationMessagePath="",container_label_annotation_io_kubernetes_container_terminationMessagePolicy="",container_label_annotation_io_kubernetes_pod_terminationGracePeriod="",container_label_annotation_kubernetes_io_config_seen="",container_label_annotation_kubernetes_io_config_source="",container_label_app="",container_label_app_kubernetes_io_component="",container_label_app_kubernetes_io_instance="",container_label_app_kubernetes_io_name="",container_label_app_kubernetes_io_version="",container_label_architecture="",container_label_build_date="",container_label_build_id="",container_label_com_redhat_build_host="",container_label_com_redhat_component="",container_label_com_redhat_license_terms="",container_label_control_plane="",container_label_controller_revision_hash="",container_label_description="",container_label_distribution_scope="",container_label_git_commit="",container_label_io_k8s_description="",container_label_io_k8s_display_name="",container_label_io_kubernetes_container_logpath="",container_label_io_kubernetes_container_name="",container_label_io_kubernetes_docker_type="",container_label_io_kubernetes_pod_name="",container_label_io_kubernetes_pod_namespace="",container_label_io_kubernetes_pod_uid="",container_label_io_kubernetes_sandbox_id="",container_label_io_openshift_expose_services="",container_label_io_openshift_tags="",container_label_io_rancher_rke_container_name="",container_label_k8s_app="",container_label_license="",container_label_maintainer="",container_label_name="",container_label_org_label_schema_build_date="",container_label_org_label_schema_license="",container_label_org_label_schema_name="",container_label_org_label_schema_schema_version="",container_label_org_label_schema_url="",container_label_org_label_schema_vcs_ref="",container_label_org_label_schema_vcs_url="",container_label_org_label_schema_vendor="",container_label_org_label_schema_version="",container_label_org_opencontainers_image_created="",container_label_org_opencontainers_image_description="",container_label_org_opencontainers_image_documentation="",container_label_org_opencontainers_image_licenses="",container_label_org_opencontainers_image_revision="",container_label_org_opencontainers_image_source="",container_label_org_opencontainers_image_title="",container_label_org_opencontainers_image_url="",container_label_org_opencontainers_image_vendor="",container_label_org_opencontainers_image_version="",container_label_pod_template_generation="",container_label_pod_template_hash="",container_label_release="",container_label_summary="",container_label_url="",container_label_vcs_ref="",container_label_vcs_type="",container_label_vendor="",container_label_version="",id="/kubepods/burstable/pod080e6da8-7f00-403d-a8de-3f93db373776",image="",name=""} 3.572708e+06
Is there any solution to remove the empty labels, or to remove these labels altogether?
I found two parameters. The first one suited me, but you never know who will need the second; there is little information about them, so I decided to post the answer:
-store_container_labels
convert container labels and environment variables into labels on prometheus
metrics for each container. If flag set to false, then only metrics exported are
container name, first alias, and image name (default true)
-whitelisted_container_labels string
comma separated list of container labels to be converted to labels on prometheus
metrics for each container. store_container_labels must be set to false for this to
take effect.
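For example, a minimal sketch of how the flags could be wired into the cAdvisor DaemonSet container spec (the image tag and the whitelisted label names are illustrative):

containers:
- name: cadvisor
  image: gcr.io/cadvisor/cadvisor:v0.47.0
  args:
  - --store_container_labels=false
  - --whitelisted_container_labels=io.kubernetes.container.name,io.kubernetes.pod.name,io.kubernetes.pod.namespace

With -store_container_labels=false, only the whitelisted labels (plus container name, first alias, and image name) are converted into Prometheus labels, which removes the empty ones shown above.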

Group Prometheus targets by Kubernetes labels or annotations?

I have two questions about Prometheus.
I use this helm chart.
https://artifacthub.io/packages/helm/prometheus-community/prometheus?modal=values
There is a job "kubernetes.pods". Everything works, but I want to make a few changes.
Is it possible to somehow group targets, for example by label or annotation? E.g. if the label exists, add the target to a group; if it does not exist, do not add it.
Right now Prometheus groups them by job_name, but I don't want all pods to be displayed in one section; I want to separate these pods by label or annotation.
Is that possible?
How do I use the value of a label in relabel_configs?
I tried the following, but it doesn't work:
__meta_kubernetes_pod_label_vp
$__meta_kubernetes_pod_label_vp
${__meta_kubernetes_pod_label_vp}
Thanks.
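A minimal sketch of what the relabeling could look like in that chart's pod scrape job; in relabel_configs a label value is read via source_labels (the $-style interpolation tried above is not supported there), and the label name vp plus the group/job names are illustrative:

relabel_configs:
  # Only scrape pods that carry the grouping label at all.
  - source_labels: [__meta_kubernetes_pod_label_vp]
    regex: .+
    action: keep
  # Copy the label value onto the series so dashboards can group by it.
  - source_labels: [__meta_kubernetes_pod_label_vp]
    target_label: vp_group
  # Or fold the value into the job name so each group shows up as its own job.
  - source_labels: [__meta_kubernetes_pod_label_vp]
    regex: (.+)
    target_label: job
    replacement: kubernetes-pods-$1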

Scale a StatefulSet with a shared volume per AZ

I would like to scale a Kubernetes StatefulSet in EKS that serves static data. The data should be shared where possible, which means per availability zone.
Can I specify a volume claim template to make this possible? Or is there another mechanism?
I also need to initialize the volume (in an init container) when the first node joins. Do I need to provide some external locking mechanism, rather than just checking whether the volume is empty?
If you want your pods to share static data, then you can:
Put the data into the Persistent Volume
Mount the same volume into the Pods (with ReadOnlyMany); see the sketch after this answer
A few comments:
The Kubernetes documentation has the list of volume types, so you can choose the one that fits your needs
Since every pod serves the same data, you may use a Deployment instead of a StatefulSet; StatefulSets are for when each of your pods is different
If you need the first pod to initialize the data, you can use an initContainer (and then use ReadWriteMany instead of ReadOnlyMany). Depending on what exactly you are trying to do, you may also be able to initialize the data first and only then start your Deployment (and Pods); then you would not need to lock anything
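A minimal sketch of the shared read-only volume, assuming a storage class that supports multi-node access (on EKS that is typically EFS via the EFS CSI driver); all names are illustrative:

# The PVC requests ReadWriteMany so an init step can write the data once;
# the serving pods below still mount it read-only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-data
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: efs-sc
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-server
spec:
  replicas: 3
  selector:
    matchLabels: {app: static-server}
  template:
    metadata:
      labels: {app: static-server}
    spec:
      containers:
      - name: server
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
          readOnly: true
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: static-data
          readOnly: true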

Is there a way to generate a unique identifier value only once (if not already set) that cannot be edited by the user?

I want to generate a unique identifier of a specific length and use this value across multiple pods internally. Since the length must be specific, and I'd prefer this to be handled internally rather than be adjustable by a user, I'd like to create the unique identifier on install/upgrade (only once, if it has not been set already) and have it not be changeable afterwards.
I want to use the identifiers internally as part of a naming schema for objects created within a specific deployment. I want to share these objects across other deployments, and need the identifier to determine if a given object belongs to a given deployment.
I was looking into setting a value in Secrets using randAlphaNum. Some problems I face with using Secrets are:
Related to this issue: https://github.com/helm/helm/issues/3053
It looks like the Secret value will be overwritten on upgrade. There is an open PR for a possible fix: https://github.com/helm/helm/pull/5290
But I don't have the ability to upgrade Helm/Kubernetes at the moment.
The Secret value is base64-encoded, and I want to pass the value to various pods as a decoded environment variable. It doesn't really matter if the user knows the unique identifier, so maybe I don't need a Secret? But, again, I don't want the user to be able to edit the value, and the value should never change for a given deployment.
Any help or suggestions are appreciated! Thanks
You may then try to use a ConfigMap instead; it seems it doesn't get regenerated on helm upgrade. You can then pass the value from the ConfigMap to the pods as environment variables (the Kubernetes docs cover configuring a pod to use a ConfigMap).
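A minimal sketch of what the chart template could look like, assuming Helm 3.2+ where the lookup function is available (it returns nothing under helm template or --dry-run, so the value is only stable against a real cluster); the ConfigMap name, key, and length are illustrative:

{{- /* Reuse the identifier if the ConfigMap already exists in the cluster,
       otherwise generate a fresh 16-character value on first install. */}}
{{- $existing := lookup "v1" "ConfigMap" .Release.Namespace "my-app-identity" }}
{{- $id := randAlphaNum 16 }}
{{- if $existing }}
{{- $id = index $existing.data "deployment-id" }}
{{- end }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-identity
data:
  deployment-id: {{ $id | quote }}

The pods can then read the value as a plain environment variable via configMapKeyRef, with no base64 decoding involved.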