Why am I not seeing the worker node in my cluster? - kubernetes

I'm running a cluster with kind - one worker node.
However, when I do kubectl get nodes I can't see the worker node; instead I see 'kind-control-plane', which makes no sense to me - the control plane is a node??
The worker node must be running, because I can do kubectl exec --stdin --tty <name of the pod> /bin/sh and see inside of the container that's running my app.
Is this some weird WSL2 interaction? Or am I simply doing something wrong?

control-plane is just a name. If you just run kind create cluster, its default is to create a single-node cluster with the name control-plane. From your description, everything is working properly.
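For example, on a default cluster you will see something like this (output is only illustrative; the version and age will differ on your machine):
$ kind create cluster
$ kubectl get nodes
NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   2m    v1.27.3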
One of kind's core features is the ability to run a "multi-node" cluster, but all locally in containers. If you want to test your application's behavior when, for example, you drain its pods from a node, you can run a kind cluster with one control-plane node (running etcd, the API server, and other core Kubernetes processes) and three worker nodes; let the application start up, then kubectl drain worker-1 and watch what happens. The documentation also notes that this is useful if you're developing on Kubernetes itself and need a "multi-node" control plane to test HA support.
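For illustration, a minimal kind config for that one-control-plane/three-worker layout could look like this (saved as kind-multi-node.yaml; the file name is arbitrary):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
$ kind create cluster --config kind-multi-node.yaml
$ kubectl get nodes
$ kubectl drain kind-worker --ignore-daemonsets   # kind names the workers kind-worker, kind-worker2, kind-worker3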

Related

What is minikube config specifying?

According to the minikube handbook the configuration commands are used to "Configure your cluster". But what does that mean?
If I set cpus and memory then are these the max values the cluster as a whole can ever consume?
Are these the values it will reserve on the host machine in preparation for use?
Are these the values that are handed to the control plane container/VM and now I have to specify more resources when making a worker node?
What if I want to add another machine (VM or bare metal) and add its resources in the form of a worker node to the cluster? From the looks of it I would have to delete that cluster, change the configuration, then start a new cluster with the new configuration. That doesn't seem scalable.
Thanks for the help in advance.
Answering your questions in order:
If I set cpus and memory then are these the max values the cluster as a whole can ever consume?
In short: yes. These values are the limit for the whole minikube instance (a VM, a container, etc., depending on the --driver used). They have to cover the underlying OS, the Kubernetes components and the workload that you are trying to run on it.
Are these the values it will reserve on the host machine in preparation for use?
I'd reckon this depends on the --driver you are using and how it handles resources. I personally doubt it reserves 100% of the CPU and memory you've passed to $ minikube start; I'm more inclined to think it uses as much as it needs during specific operations.
Are these the values that are handed to the control plane container/VM and now I have to specify more resources when making a worker node?
By default, when you create a minikube instance with $ minikube start, you will create a single-node cluster that acts as a control-plane node and a worker node simultaneously. You will be able to run your workloads (like an nginx Deployment) without adding an additional node.
You can add a node to your minikube ecosystem with just $ minikube node add. This will add another node marked as a worker (with no control-plane components). You can read more about it here:
Minikube.sigs.k8s.io: Docs: Tutorials: Multi node
What if I want to add another machine (VM or bare metal) and add its resources in the form of a worker node to the cluster? From the looks of it I would have to delete that cluster, change the configuration, then start a new cluster with the new configuration. That doesn't seem scalable.
As said previously, you don't need to delete the minikube cluster to add another node. You can run $ minikube node add to add a node on a minikube host. There are also options to delete/stop/start nodes.
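To illustrate, the relevant commands look roughly like this (the resource values and the <name> placeholder are just examples):
$ minikube start --cpus 4 --memory 8192   # limits for the whole minikube instance
$ minikube node add                       # add a worker node to the running cluster
$ minikube node list                      # show the nodes minikube knows about
$ minikube node delete <name>             # remove a node again
$ kubectl get nodes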
Personally speaking, if the workload that you are trying to run requires multiple nodes, I would consider a Kubernetes cluster built with other tools, such as:
Kubeadm
Kubespray
Microk8s
These would give you more flexibility over where you create your Kubernetes cluster; as far as I know, minikube works within a single host (like your laptop, for example).
A side note!
There is an answer (written more than 2 years ago) which shows a way to add a node to a minikube cluster here:
Stackoverflow.com: Answer: How do I get the minikube nodes in a local cluster
Additional resources:
Kubernetes.io: Docs: Setup: Production environment: Tools: Kubeadm: Create cluster kubeadm
Github.com: Kubernetes sigs: Kubespray
Microk8s.io

How to extract configurations and other information from a dead Kubernetes cluster?

These days I'm playing with Kubernetes and managed to install a single-node cluster (on a single computer).
I see it offers many tools to add / modify / remove configuration parts (services, pods, deployments, ...) but I was wondering what could one do if a node doesn't start anymore - i.e. the machine is fine but the configuration is broken.
Are there tools that can help in that situation? I'm talking about services, deployments, etc.
Kubeadm seems to only provide node configuration, while kubectl requires a running node to retrieve information.
kubectl talks to the cluster through the API and for this to work we need to have the kube-apiserver running.
$ kubectl get pods -A -l component=kube-apiserver
NAMESPACE     NAME                         READY   STATUS    RESTARTS   AGE
kube-system   kube-apiserver-yaki-test-1   1/1     Running   0          3d18h
There are a few ways to access your cluster, but all of them require you to have your API server running.
The best approach would be to fix whatever is causing the cluster to fail. Here you can read about ways to troubleshoot your cluster.
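If the API server itself is what's broken, about the only thing left to inspect are the files on the node; a rough sketch, assuming a kubeadm-based installation:
$ ls /etc/kubernetes/manifests/    # static pod manifests (kube-apiserver, etcd, ...)
$ cat /etc/kubernetes/admin.conf   # the admin kubeconfig kubectl would normally use
$ sudo ls /var/lib/etcd/           # etcd's data directory - the cluster state lives here
$ sudo journalctl -u kubelet -e    # kubelet logs, usually the first place to look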

Daemonset with privileged pods

I am working on a client requirement that the worker nodes need to have a specific time zone configured for their apps to run properly. We have tried things such as using the TZ environment variable and also mounting a volume on /etc/localtime that points to the right file under /usr/share/zoneinfo/ - these work to some extent, but it seems I will need to use daemonsets to modify the node configuration for some of the apps.
The concern I have is that the specific pod that needs to make this change on the nodes will have to run with host privileges, and leaving such pods running on all nodes doesn't sound good. The documentation says that pods in daemonsets must have a restart policy of Always, so I can't have them exit after making the changes either.
I believe I can address this specific concern with an init container that runs with host privileges, makes the appropriate changes on the node and exits. The other containers in the daemonset pod will run after the init container completes successfully, and finally all the other pods get scheduled on the nodes. I also believe this sequence works the same way when I add more nodes to the cluster.
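For reference, a rough sketch of the daemonset I have in mind (the image, the names and the time zone are placeholders):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-timezone
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-timezone
  template:
    metadata:
      labels:
        app: node-timezone
    spec:
      initContainers:
      - name: set-timezone
        image: busybox:1.36
        # Runs once per node: point the host's /etc/localtime at the desired
        # zone (placeholder below), then exit.
        command: ["sh", "-c", "ln -sf /usr/share/zoneinfo/Asia/Kolkata /host-etc/localtime"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: host-etc
          mountPath: /host-etc
      containers:
      # The daemonset pod has to keep running (restartPolicy: Always),
      # so the main container is just a no-op pause container.
      - name: pause
        image: registry.k8s.io/pause:3.9
      volumes:
      - name: host-etc
        hostPath:
          path: /etc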
Does that sound about right? Are there better approaches?

Kubernetes cluster recovery after linux host reboot

We are still in a design phase, moving away from a monolithic architecture towards microservices with Docker and Kubernetes. We did some basic research on Docker and Kubernetes and got some understanding. We still have a couple of open questions, considering we will be creating a K8s cluster with multiple Linux hosts (for certain reasons we can't consider the cloud right now).
Consider a scenario where we have a K8s cluster spanning multiple Linux hosts (5+).
1) If one of the Linux worker nodes crashes and we bring it back up, is enabling the kubelet via systemctl in advance sufficient to bring up the required K8s components so that the node is detected by the master again?
2) I believe that once a worker node crashes (with X pods on it), after the pod eviction timeout the master will reschedule those X pods onto some other healthy node(s). Once the node is up again it won't get those X pods redeployed, since the master has already scheduled them onto other nodes, but it will be ready to accept new workloads from the master.
Is this correct?
1) Yes, that should be the default behavior; check your cluster deployment tool.
2) Yes, Kubernetes handles these things automatically for Deployments. For StatefulSets (with local volumes) and DaemonSets, things can be node-specific and Kubernetes will wait for the node to come back.
It's best to create a test environment and test the failure scenarios for yourself.
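For 1), the essential bits look something like this (first command on the rebooted worker, the rest from an admin machine; this assumes the kubelet is installed as a systemd service):
$ sudo systemctl enable --now kubelet   # start the kubelet now and on every boot
$ kubectl get nodes -w                  # watch the node go Ready again
$ kubectl get pods -A -o wide           # see where the evicted pods were rescheduled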

How to manifest a container with /dev/console from a pod definition with Kubernetes?

We use systemd in our container to manage the processes running in the container.
We configure journald in the container so, that it sends all logs to /dev/console.
In order to have /dev/console in a container we have to use the "-t" option of Docker when we deploy the container.
I would like to ask, what the equivalent way is with Kubernetes. Where can we state in the pod manifest that we need /dev/console in the containers?
I understand that with kubectl it is possible (with "--tty" or "-t"), but we do not want to start containers with kubectl.
We do support TTY containers in kubernetes v1.1, but not a tty without input. If you want to see that, I think a GitHub issue would be appropriate.
I agree with Spencer that running systemd in a container is not "best practice" but there are valid reasons to do it, not the least of which is "that's what we know how to do". People's usage of containers will evolve over time.
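For completeness, the container spec does have stdin and tty fields, so a pod requesting a TTY can be sketched roughly like this (the image and command are only an example; whether this gives you a usable /dev/console depends on the runtime):
apiVersion: v1
kind: Pod
metadata:
  name: tty-example
spec:
  containers:
  - name: shell
    image: debian:bookworm
    command: ["/bin/bash"]
    stdin: true   # roughly docker run -i
    tty: true     # roughly docker run -t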
The kubectl --tty option only applies to kubectl exec --tty, which is for running a process inside a container that has already been deployed in a pod. So it would not help you deploy pods with /dev/console defined.
As far as I can see there's no way in current Kubernetes to cause pods to be launched with containers having /dev/console defined.
I would go further and say that the way these containers are defined, with multiple processes managed by systemd and logged by journald, is outside the usual use cases for Kubernetes. Kubernetes has value where the containers are simple, individual processes running as daemons. Kubernetes manages the launching of multiple distinct containers per pod, and/or multiple pods as replicas, including monitoring, logging, restart, etc. Having a separate launch/init and log scheme inside each container doesn't fit the usual Kubernetes use case.