Where should I look for information on how to force deployments from a particular namespace to run on specific nodes in a k3s cluster?
I found this: How to assign a namespace to certain nodes?, which describes essentially the issue I have.
But I had no luck finding this PodNodeSelector plugin in my k3s cluster. Can you point me to where I should look? I tried the documentation and didn't find what I was looking for.
Desired state:
testing namespace - able to do
nodes tagged test - able to do
kubeconfig scoped to the namespace - able to do
allow deployments to run only on the tagged nodes - asking (see the sketch below)
Thanks for your suggestions and have a nice day.
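For later readers, here is a minimal sketch of the PodNodeSelector approach as it might apply to k3s. The namespace name and label come from the question; the node name is a placeholder, and the k3s flag syntax is an assumption you should verify against your k3s version:

# Enable the PodNodeSelector admission plugin on the k3s server
# (k3s forwards extra kube-apiserver flags via --kube-apiserver-arg)
k3s server --kube-apiserver-arg=enable-admission-plugins=NodeRestriction,PodNodeSelector

# Tag the nodes that should run the workloads (placeholder node name)
kubectl label node my-test-node test=true

# Annotate the namespace; every pod created in it then inherits this node selector
kubectl annotate namespace testing scheduler.alpha.kubernetes.io/node-selector=test=true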
Related
I have deployed a k8s service; however, it's not showing any pods. This is what I see:
kubectl get deployments
It should be created in the default namespace.
kubectl get nodes (this shows me nothing)
How do I troubleshoot a failed deployment? The test-control-plane node is the one deployed by kind, which is the k8s distribution I'm using.
kubectl get nodes
If the above command shows nothing, it means there are no nodes in your cluster, so where will your workload run?
You need at least one worker node in a K8s cluster so the deployment can schedule pods on it and run the application.
You can check the worker nodes using the same command:
kubectl get nodes
You can debug further and check the cause of the issue using:
kubectl describe deployment <name of your deployment>
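Since the question mentions kind: a default kind cluster has a single control-plane node. A minimal config sketch that adds dedicated worker nodes (the file name is illustrative; check the apiVersion against your kind release):

# kind-config.yaml - one control plane plus two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

kind create cluster --config kind-config.yaml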
To find out what really went wrong, first follow the steps described by Harsh Manvar in his answer. Perhaps obtaining that information will help you find the problem. If not, check the logs of your deployment: list your pods, see which ones did not start properly, then check their logs.
You can also use kubectl describe on the pods to see in more detail what went wrong. Since you are using kind, I include a list of known errors for you.
You can also see this visual guide on troubleshooting Kubernetes deployments and 5 Tips for Troubleshooting Kubernetes Deployments.
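Putting those suggestions together, a typical first pass might look like this (the pod name is a placeholder):

# List pods and look for states such as Pending, CrashLoopBackOff or ImagePullBackOff
kubectl get pods
# Inspect the events of a problematic pod (placeholder name)
kubectl describe pod my-deployment-5d59d67564-abcde
# Read the container logs of that pod
kubectl logs my-deployment-5d59d67564-abcde
# Check recent cluster events for scheduling failures
kubectl get events --sort-by=.metadata.creationTimestamp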
Summarize the problem:
Is there any way to add an ENV variable to an existing pod, or to new pods, in Kubernetes?
For example, I want to add HTTP_PROXY to many pods, and to the new pods that Kubeflow 1.4 will generate, so these pods can access the internet.
Describe what you’ve tried:
I searched and found that Istio might be able to do that, but it's too complex for me.
Second, there are too many YAMLs in Kubeflow, so I cannot modify them one by one to use a ConfigMap or add the ENV directly.
So does anyone have a good, simple way to do this? Something like doing it in the Kubernetes configuration.
Use the PodPreset object to inject common environment variables and other parameters into all matching pods.
Please follow the article below:
https://v1-19.docs.kubernetes.io/docs/tasks/inject-data-application/podpreset/
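For clusters old enough to still ship it (PodPreset was alpha and required enabling the settings.k8s.io API and its admission controller), a minimal sketch might look like this; the label and proxy address are placeholders:

apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: inject-proxy
  namespace: kubeflow
spec:
  selector:
    matchLabels:
      inject-proxy: enabled   # placeholder; put this label on pods that should get the env
  env:
  - name: HTTP_PROXY
    value: http://proxy.example.com:3128   # placeholder proxy address
  - name: HTTPS_PROXY
    value: http://proxy.example.com:3128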
PodPreset was indeed removed in v1.20, so on newer clusters you will need a mutating admission webhook.
You will have to run an additional service in your cluster that modifies the configuration of pods as they are created.
Here is an example on which I based my own webhook for changing pod configuration in the cluster. In this example the author adds a sidecar to the pod, but you can adapt the logic to inject the required ENV instead:
https://github.com/morvencao/kube-mutating-webhook-tutorial/blob/master/medium-article.md
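To give a sense of the wiring, this is roughly the MutatingWebhookConfiguration that points pod creation at such a service. The webhook name, service name, namespace, path, and caBundle are placeholders; the webhook server itself is what the linked tutorial implements:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: env-injector
webhooks:
- name: env-injector.example.com         # placeholder webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: env-injector                 # placeholder Service fronting your webhook server
      namespace: kube-system
      path: /mutate
    caBundle: <base64-encoded-CA-cert>   # placeholder
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]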
I'm new to Kubernetes (K8s). It's my understanding that in order to "do things" in a Kubernetes cluster, we interact with a Kubernetes REST API endpoint and create/update/delete objects. When these objects are created/updated/deleted, K8s sees those changes and takes steps to bring the system in line with the state of your objects.
In other words, you tell K8s you want a "deployment object" with container image foo/bar and 10 replicas and K8s will create 10 running pods with the foo/bar image. If you update the deployment to say you want 20 replicas, K8s will start more pods.
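For concreteness, the object being described might look like this; foo/bar comes from the example above, and the rest is a minimal skeleton:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-bar
spec:
  replicas: 10              # K8s keeps 10 pods running; bump to 20 and it starts more
  selector:
    matchLabels:
      app: foo-bar
  template:
    metadata:
      labels:
        app: foo-bar
    spec:
      containers:
      - name: foo-bar
        image: foo/bar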
My Question: Is there a canonical description of all the possible configuration fields for these objects? That is, tutorials like this one do a good job of describing the simplest possible configuration to get an object like a deployment working, but now I'm curious what else is possible with deployments beyond these hello-world examples.
Is there a canonical description of all the possible configuration fields for these objects?
Yes, there is: the Kubernetes API reference, e.g. for Deployment.
But when developing, the easiest way is to use kubectl explain <resource> and navigate deeper, e.g.:
kubectl explain Deployment.spec
and then deeper, e.g.:
kubectl explain Deployment.spec.template
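If you want the whole field tree at once instead of navigating level by level, kubectl explain also has a recursive mode:

kubectl explain deployment --recursive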
Kubernetes Kubeflow scaling is not working
I have installed Kubernetes, kubectl, and ksonnet as suggested.
I have created the kubeflow namespace and deployed the Kubeflow core components.
Then I created a ksonnet app, a namespace, and the h2o3-scaling component.
Then I tried to run some examples. Everything is working fine.
I have followed all the steps provided at this URL: https://github.com/h2oai/h2o-kubeflow
But horizontal scaling is not working as expected.
Thanks in advance. Can anyone help solve this problem?
I'm not sure about H2O3, but Kubeflow itself doesn't really support autoscaling. There are a few components:
Tf-operator - it doesn't run training itself; it runs pods that run the training, and you specify the number of replicas in the TFJob definition, so no autoscaling (see the sketch after this list).
Tf-serving - could potentially autoscale, but we don't do that right now; again, you specify replicas.
Jupyterhub - same as tf-operator: it spawns pods and doesn't autoscale.
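To illustrate the TFJob point: the replica count is a fixed field in the job spec rather than something an autoscaler manages. A minimal sketch (the apiVersion and image are assumptions that vary across Kubeflow releases):

apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: example-training
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 3                        # fixed at submission time; no autoscaling
      template:
        spec:
          containers:
          - name: tensorflow             # tf-operator expects the container named "tensorflow"
            image: example/training:latest   # placeholder image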
What is the exact use case you're aiming for?
I am following guide [1] to create a multi-node K8s cluster with 1 master and 2 nodes. Also, a label needs to be set on each node:
Node 1 - label name=orders
Node 2 - label name=payment
I know that the above can be achieved by running kubectl commands:
kubectl get nodes
kubectl label nodes <node-name> <label-key>=<label-value>
But I would like to know how to set a label when creating a node. Node creation guidance is in [2].
Appreciate your input.
[1] https://coreos.com/kubernetes/docs/latest/getting-started.html
[2] https://coreos.com/kubernetes/docs/latest/deploy-workers.html
In fact, there has been a trivial way to achieve this since around Kubernetes 1.3.
What registers your node is the kubelet process launched on it; all you need to do is pass it a flag like --node-labels 'role=kubemaster'. This is how I differentiate nodes between different autoscaling groups in my AWS k8s cluster.
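Applied to the two nodes in the question, the kubelet launch flags would look something like this (set in whatever unit file or cloud-config launches the kubelet):

# On node 1 (append to the existing kubelet flags):
kubelet --node-labels=name=orders
# On node 2:
kubelet --node-labels=name=payment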
This answer is now incorrect (and has been for several versions of Kubernetes). Please see the correct answer by Radek 'Goblin' Pieczonka.
There are a few options available to you. The easiest IMHO would be to use a systemd unit to install and configure kubectl, then run the kubectl label command. Alternatively, you could just use curl to update the labels in the node's metadata directly.
That being said, while I don't know your exact use case, the way you are using node labels seems to be an effort to bypass one of Kubernetes' key features: dynamic scheduling of components across nodes. Rather than working on labeling the nodes automatically, I would suggest addressing why you need to identify the nodes in the first place.
I know this isn't at creation time, but the following is pretty easy (labels follow the key=value pattern):
k label node minikube gpu.nvidia.com/model=Quadro_RTX_4000 node.coreweave.cloud/cpu=intel-xeon-v2
node/minikube labeled