EKS Node Group Terraform - Add label to specific node - kubernetes

I'm provisioning EKS with managed nodes through Terraform. No issues there, it's all working fine.
My problem is that I want to add a label to one of my nodes to use as a nodeSelector in one of my deployments. I have an app that is backed by an EBS persistent volume which obviously is only available in a single AZ, so I want my pod to schedule there.
I can add a label pretty easily with:
kubectl label nodes <my node> <key>=<value>
And this works fine, until you do something like update the node group to the next version. The labels don't persist, which makes sense, as they are not managed by Amazon.
Is there a way, either through Terraform or something else, to set these labels so that they persist? I notice that the EKS provider for Terraform has a labels option, but it seems that will add the label to all nodes in the node group, and that's not what I want. I've looked around, but can't find anything.

You may not need to add a label to a specific node to solve your problem. Amazon as a cloud provider adds some Kubernetes labels to each node in a managed node group. Example:
labels:
  failure-domain.beta.kubernetes.io/region: us-east-1
  failure-domain.beta.kubernetes.io/zone: us-east-1a
  kubernetes.io/hostname: ip-10-10-10-10.ec2.internal...
  kubernetes.io/os: linux
  topology.ebs.csi.aws.com/zone: us-east-1a
  topology.kubernetes.io/region: us-east-1
  topology.kubernetes.io/zone: us-east-1a
The exact labels available to you will depend on the version of Kubernetes you are running. Try running kubectl get nodes -o json | jq '.items[].metadata.labels' to see the labels set on each node in your cluster.
I recommend using topology.kubernetes.io/zone to match the availability zone containing your EBS volume. According to the Kubernetes documentation, both nodes and persistent volumes should have this label populated by the cloud provider.
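For example, here is a minimal sketch of a Deployment pinned to the zone that holds the EBS volume, assuming the volume lives in us-east-1a (as in the label dump above); the names my-app, my-ebs-claim and the image are illustrative:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Schedule only onto nodes in the AZ that holds the EBS volume.
      nodeSelector:
        topology.kubernetes.io/zone: us-east-1a
      containers:
        - name: my-app
          image: my-app:latest
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-ebs-claim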
Hope this helps. Let me know if you still have questions.

You can easily achieve that with Terraform:
resource "aws_eks_node_group" "example" {
...
labels = {
label_key = "label_value"
}
}
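On the Kubernetes side, a pod can then target that node group through the same label via a nodeSelector. A minimal sketch, with label_key/label_value taken from the Terraform example above and the pod name and image purely illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  # Only nodes carrying the node group label are eligible.
  nodeSelector:
    label_key: label_value
  containers:
    - name: app
      image: nginx:1.25
Keep in mind that labels set this way apply to every node in the group, so this only pins the pod to specific nodes if the group itself is constrained (for example, to a single subnet/AZ).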

Add a second node group (with the desired node info) and label that node group.

Related

List all kubernetes nodes within a specific nodepool. How can this be done?

I am looking for a way to list all nodes in a specific node pool, but am not able to find any examples.
Is there a way to do this by querying the node metadata as JSON? Or with an awk filter?
The following will list all nodes in the cluster
kubectl get nodes
Nodes have a label with their node group.
To print a list of all of your nodes and their node group, run:
> kubectl get node -o=custom-columns='node_name:metadata.name, node_group:metadata.labels.cloud\.google\.com/gke-nodepool'
node_name node_group
gke-ml-prd-default-node-pool-03d09fca-jj3x default-node-pool
gke-ml-prd-default-node-pool-4649f5b7-j1qc default-node-pool
gke-ml-prd-default-node-pool-4a9ff740-2my4 default-node-pool
gke-ml-prd-default-node-pool-9a199b2d-q573 default-node-pool
Of course, you need to change the label according to your cloud provider.
Take a look at the labels of one of your nodes with kubectl describe node <node_name> and find the relevant label name.
These are the label names used by the popular cloud providers:
GKE: cloud.google.com/gke-nodepool
EKS: eks.amazonaws.com/nodegroup
AKS: kubernetes.azure.com/agentpool
You can also filter for a specific node group with:
kubectl get node --selector='cloud.google.com/gke-nodepool=default-node-pool'

How to configure an Ingress to access all pods from a DaemonSet?

I'm using hardware-dependent pods; in my K8s cluster, I instantiate my pods with a DaemonSet.
Now I want to access those pods with a URL like https://domain/{pod-hostname}/
My use case is a bit more involved than this one: my pods' names are not predefined.
Moreover, I also need a REST entry point to list my pods' names or hostnames.
I published a Docker image to solve my issue: urielch/dyn-ingress
My YAML configuration is in the Docker doc.
This container adds a label to each pod, then uses this label to create a Service per pod, and then updates an existing Ingress to reach each node on its own path.
Feel free to test it.
The source code is here.
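Conceptually, the generated resources look roughly like the sketch below: one Service selecting a single pod via a per-pod label, plus an Ingress path routing to it. All names, the label key dyn-ingress/pod, and the ports are illustrative, not the exact ones dyn-ingress uses:
apiVersion: v1
kind: Service
metadata:
  name: my-daemon-pod-abc12                # one Service per pod (name is illustrative)
spec:
  selector:
    dyn-ingress/pod: my-daemon-pod-abc12   # hypothetical per-pod label added by the controller
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dyn-ingress
spec:
  rules:
    - host: example.com                    # replace with your domain
      http:
        paths:
          - path: /my-daemon-pod-abc12/    # one path per pod
            pathType: Prefix
            backend:
              service:
                name: my-daemon-pod-abc12
                port:
                  number: 80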

Auto-assign predefined env vars / mounts to every pod (including future ones) on a cluster

Problem:
I want every pod created in my cluster to hold/point to the same data.
E.g., let's say I want all of them to have an env var like OWNER=MYNAME.
There are multiple users in my cluster and I don't want them to have to change their YAMLs and manually add OWNER=MYNAME to env.
Is there a way to have all current/future pods automatically assigned a predefined value, or a ConfigMap mounted, so that the same information is available in every single pod?
Can this be done at the cluster level? Namespace level?
I want it to be transparent to the user, meaning a user would apply whatever pod to the cluster, and the info would be available to them without even asking.
Thanks, everyone!
Pod Preset might help you here to partially achieve what you need. The PodPreset resource allows injecting additional runtime requirements into a Pod at creation time. You use label selectors to specify the Pods to which a given PodPreset applies.
Check this to learn how Pod Preset works.
First, you need to enable Pod Preset in your cluster.
You can use a PodPreset to inject env variables or volumes into your pods.
You can also inject a ConfigMap into your pods.
Put some common label on all the pods that should share the config, and use that common label in your PodPreset resource, as sketched below.
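A minimal sketch of such a PodPreset (note the alpha API group settings.k8s.io/v1alpha1; the name owner-preset and the label inject-owner are illustrative):
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: owner-preset
  namespace: default
spec:
  # Applies to every pod in this namespace carrying this label.
  selector:
    matchLabels:
      inject-owner: "true"
  env:
    - name: OWNER
      value: MYNAME
Any pod created in that namespace with the label inject-owner: "true" gets the OWNER variable injected at admission time, without the pod authors changing their own YAML.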
Unfortunately, there are plans to remove Pod Presets altogether in upcoming releases, but you can still use them with current releases. There are also other implementations similar to Pod Presets that you can try.

Having an issue while creating a custom dashboard in Grafana (data source is Prometheus)

I have set up Prometheus and Grafana for monitoring my Kubernetes cluster and everything works fine. I then created a custom dashboard in Grafana for my application. The metric available in Prometheus is as follows, and I have added the same query in Grafana:
sum(irate(container_cpu_usage_seconds_total{namespace="test", pod_name="my-app-65c7d6576b-5pgjq", container_name!="POD"}[1m])) by (container_name)
The issue is that my application is running as a pod in Kubernetes, so when the pod is deleted or recreated, its name changes and no longer matches the pod name specified in the query above ("my-app-65c7d6576b-5pgjq"). The query then stops returning data, and I have to add a new query in Grafana. Please let me know how I can overcome this situation.
Answer was provided by manu thankachan:
I have done it. I made some changes in the query as follows:
sum(irate(container_cpu_usage_seconds_total{namespace="test", container_name="my-app", container_name!="POD"}[1m])) by (container_name)
If the pod is created directly (not as part of a Deployment), then the pod name is exactly the one you specified.
If the pod is part of a Deployment, its name includes a unique string from the ReplicaSet and also ends with 5 random characters to keep the name unique.
So always try to use the container_name label, or, if your Kubernetes version is v1.16.0 or later, the container label.

How to set label to Kubernetes node at creation time?

I am following guide [1] to create a multi-node K8s cluster which has 1 master and 2 nodes. Also, a label needs to be set on each node respectively:
Node 1 - label name=orders
Node 2 - label name=payment
I know that the above can be achieved by running kubectl commands:
kubectl get nodes
kubectl label nodes <node-name> <label-key>=<label-value>
But I would like to know how to set the label when creating a node. Node creation guidance is in [2].
Appreciate your input.
[1] https://coreos.com/kubernetes/docs/latest/getting-started.html
[2] https://coreos.com/kubernetes/docs/latest/deploy-workers.html
In fact, there has been a trivial way to achieve that since version 1.3 or so.
What is responsible for registering your node is the kubelet process launched on it; all you need to do is pass it a flag like --node-labels 'role=kubemaster'. This is how I differentiate nodes between different autoscaling groups in my AWS k8s cluster.
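As one illustration of passing that flag: if you happen to bootstrap nodes with kubeadm (which is not what the linked CoreOS guide uses, so treat this purely as a sketch), the label can be set at join time via kubeletExtraArgs; the endpoint and token values below are placeholders:
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
nodeRegistration:
  # Passed straight through to the kubelet as --node-labels.
  kubeletExtraArgs:
    node-labels: "name=orders"
discovery:
  bootstrapToken:
    apiServerEndpoint: "10.0.0.1:6443"      # illustrative values
    token: "abcdef.0123456789abcdef"
    unsafeSkipCAVerification: true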
This answer is now incorrect (and has been for several versions of Kubernetes). Please see the correct answer by Radek 'Goblin' Pieczonka
There are a few options available to you. The easiest IMHO would be to use a systemd unit to install and configure kubectl, then run the kubectl label command. Alternatively, you could just use curl to update the labels in the node's metadata directly.
That being said, while I don't know your exact use case, the way you are using labels on the nodes seems to be an effort to bypass some of Kubernetes' key features, like dynamic scheduling of components across nodes. Rather than working on labeling the nodes automatically, I would suggest addressing why you need to identify specific nodes in the first place.
I know this isn't at creation time, but the following is pretty easy (labels follow the pattern key=value):
kubectl label node minikube gpu.nvidia.com/model=Quadro_RTX_4000 node.coreweave.cloud/cpu=intel-xeon-v2
node/minikube labeled