I am following guide [1] to create a multi-node K8s cluster with 1 master and 2 nodes. A label also needs to be set on each node:
Node 1 - label name=orders
Node 2 - label name=payment
I know that the above can be achieved by running kubectl commands:
kubectl get nodes
kubectl label nodes <node-name> <label-key>=<label-value>
But I would like to know how to set the label when creating a node. Node creation guidance is in [2].
Appreciate your input.
[1] https://coreos.com/kubernetes/docs/latest/getting-started.html
[2] https://coreos.com/kubernetes/docs/latest/deploy-workers.html
In fact there is a trivial way to achieve that, since Kubernetes 1.3 or thereabouts.
What is responsible for registering your node is the kubelet process launched on it; all you need to do is pass it a flag like --node-labels 'role=kubemaster'. This is how I differentiate nodes between different autoscaling groups in my AWS k8s cluster.
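For example, a minimal sketch (where exactly the kubelet flags live depends on how your workers are deployed, so treat the invocation below as illustrative):
## pass the label at registration time; the kubelet applies it when the node joins the cluster
$ kubelet --node-labels=name=orders ...your-other-kubelet-flags...
## verify once the node has registered
$ kubectl get nodes --show-labels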
This answer is now incorrect (and has been for several versions of Kubernetes). Please see the correct answer by Radek 'Goblin' Pieczonka.
There are a few options available to you. The easiest, IMHO, would be to use a systemd unit to install and configure kubectl, then run the kubectl label command. Alternatively, you could just use curl to update the node's labels via the API directly.
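For the curl route, a rough sketch (assuming you run kubectl proxy locally and that "node-1" stands in for your real node name):
## proxy the API server locally, then PATCH the node's labels
$ kubectl proxy --port=8001 &
$ curl --request PATCH \
    --header "Content-Type: application/json-patch+json" \
    --data '[{"op":"add","path":"/metadata/labels/name","value":"orders"}]' \
    http://127.0.0.1:8001/api/v1/nodes/node-1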
That being said, while I don't know your exact use case, the way you are using labels on the nodes seems to be an effort to bypass some of Kubernetes' key features, like dynamic scheduling of components across nodes. Rather than working on labeling the nodes automatically, I would suggest addressing why you need to identify the nodes in the first place.
I know this isn't at creation time, but the following is pretty easy (labels follow the pattern key=value):
k label node minikube gpu.nvidia.com/model=Quadro_RTX_4000 node.coreweave.cloud/cpu=intel-xeon-v2
node/minikube labeled
Is there any way to get a node's labels from within a container, for use as an environment variable?
It's similar to this https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/, but I need to inject a label from the node into the container instead.
Thanks in advance!
You will not be able to get node labels without sending requests to the k8s API server. You could do that, but it would mean every pod needs read access to the API, and that's not great security-wise.
How about an alternative solution: if you need to make sure the pod runs on nodes with specific labels, you can use taints and tolerations to achieve that more easily.
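For example, a minimal sketch (the node name, taint key, and pod/image names are made up for illustration):
## taint the special nodes so that only pods which tolerate the taint can land there
$ kubectl taint nodes gpu-node-1 hardware=gpu:NoSchedule
## then give the pod a matching toleration in its spec
$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload           # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx               # placeholder image
  tolerations:
  - key: "hardware"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
EOF
Note that a toleration only allows the pod onto the tainted nodes; to force it there you would still pair this with a nodeSelector or node affinity.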
I'm trying to figure out a way that a container or pod can know some specific information about the node that it's being scheduled to. For example, my container might have to know if a GPU is present or not on that node in order to decide whether or not to enable GPU acceleration. Another example would be knowing the $DISPLAY variable of the node to know what X server to output graphics to.
What's the best approach to this?
Thanks
Update: If I could get the node-name from within the container, I could do a lookup against an external service to get the information I need. Is there a way to do this?
OP here. I've found a somewhat decent way of accomplishing this.
When setting the node up in my cluster, I can install a script that sources environment variables into a file, then volume-mount that file into the container.
Alternatively, I could store config for each node in a separate service and inject the nodeName to look up properties of a specific node, as described here:
https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables
Then, based on the name, my container can look up (via a service or config map) a mapping of nodeName to whatever information I need from the node. All I have to do is keep this service/config map up to date with the node's information.
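Here is a rough sketch of the downward-API part (the pod name and image are placeholders):
## expose the node name to the container as an env var, then look it up in your own service/config map
$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: node-aware-pod         # hypothetical pod name
spec:
  containers:
  - name: app
    image: busybox             # placeholder image
    command: ["sh", "-c", "echo running on $NODE_NAME && sleep 3600"]
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
EOF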
Taints and Tolerations were designed for that.
Hi, I know there's a way I can pull a problematic node out of a load balancer to troubleshoot it. But how can I pull a pod out of a Service to troubleshoot? What tools or commands can do it?
Change its labels so they no longer match the selector: in the Service; we used to do that all the time. You can even put it back into rotation if you want to test a hypothesis. I don't recall exactly how quickly it takes effect, but I would guess "real quick" is a good approximation. :-)
## for example, remove the label entirely (a trailing "-" deletes a label):
$ kubectl label pod $the_pod app.kubernetes.io/name-
## or, change it to a non-matching value (--overwrite is needed to change an existing label):
$ kubectl label pod $the_pod app.kubernetes.io/name=i-am-debugging-this-pod --overwrite
As mentioned in O'Reilly's "Kubernetes Recipes: Maintenance and Troubleshooting" page here:
Removing a Pod from a Service
Problem
You have a well-defined service (see not available) backed by several
pods. But one of the pods is misbehaving, and you would like to take
it out of the list of endpoints to examine it at a later time.
Solution
Relabel the pod using the --overwrite option—this will allow you to
change the value of the run label on the pod. By overwriting this
label, you can ensure that it will not be selected by the service
selector (not available) and will be removed from the list of
endpoints. At the same time, the replica set watching over your pods
will see that a pod has disappeared and will start a new replica.
To see this in action, start with a straightforward deployment
generated with kubectl run (see not available):
For the exact commands, check the recipes page mentioned above. There is also a section on "Debugging Pods" which will be helpful.
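For reference, the flow boils down to something like this (the pod name and the "run" label value are placeholders):
## list the pods currently backing the service
$ kubectl get pods -l run=nginx
## overwrite the label so the service selector no longer matches; the pod stays up for debugging
$ kubectl label pod nginx-abc123 run=debug --overwrite
## confirm it has dropped out of the endpoints (the replica set will start a replacement)
$ kubectl get endpoints nginx
## put it back into rotation when you are done
$ kubectl label pod nginx-abc123 run=nginx --overwrite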
I have 3 nodes in a k8s cluster and I need exactly 2 pods to be scheduled on each node, so I would end up with 3 nodes running 2 pods each (6 replicas).
I found that k8s has a Pod Affinity/Anti-Affinity feature, and that seems to be the correct way of doing it.
My problem is: I want to run 2 pods per node, but I often use kubectl apply to upgrade my Docker image version, and in this case k8s should be able to schedule 2 new pods on each node before terminating the old ones. Will the newer pods be scheduled if I use Pod Affinity/Anti-Affinity to allow only 2 pods per node?
How can I do this in my deployment configuration? I cannot get it to work.
I believe this is part of the kubelet's settings, so you would have to look into the kubelet's --max-pods flag, depending on your cluster configuration.
The following links could be useful:
https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#kubelet
and
https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/
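As a sketch, assuming you can change the kubelet flags (or its config file) on each node; the cap value is a placeholder:
## check what the node currently allows
$ kubectl describe node <node-name> | grep -i pods
## start the kubelet with a lower cap; note that system pods (CNI, kube-proxy, etc.) also count toward it
$ kubelet --max-pods=<desired-cap> ...your-other-kubelet-flags...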
I am new to Kubernetes and I have some functionality that I need to implement.
I need to set an env variable for only one Docker container in a service.
For example, if I have 3 user containers, then 1 of them needs to have an env variable named master.
I did this with Nomad: Nomad sets an env variable named NOMAD_ALLOC_INDEX that gives me the index of the container, so I could check that if the container index was 0, then it is the master.
I tried to find out whether Kubernetes has a similar variable, but didn't find one anywhere.
I also tried to find an alternative solution on Google, but ended up with nothing.
Any ideas on how I can achieve this?
If you want sequential indexes, StatefulSet is your solution. Otherwise, look up Kubernetes leader election; there are ways to solve it with, e.g., a sidecar container performing leader election and exposing its status via an HTTP call, so you can curl localhost:port and see whether the pod is the master or not.
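A sketch of the StatefulSet route (the set name "users" and the ROLE variable are assumptions; adapt them to your app):
## pods of a StatefulSet get stable names of the form <set-name>-<ordinal>: users-0, users-1, users-2, ...
## so an entrypoint script can derive the role from the hostname at startup
$ cat <<'EOF' > entrypoint.sh
#!/bin/sh
ORDINAL="${HOSTNAME##*-}"        # strip everything up to the last "-"
if [ "$ORDINAL" = "0" ]; then
  export ROLE=master             # hypothetical variable your app reads
else
  export ROLE=worker
fi
exec "$@"                        # hand off to the real command
EOF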