Background: I have approximately 50 nodes "behind" a namespace, meaning that a given Pod in this namespace can land on any of those 50 nodes.
The task is to test whether an outbound firewall rule (in a firewall outside the cluster) has been implemented correctly. Therefore I would like to run a command on each potential node in the namespace which will tell me whether I can reach my target from that node. (I am using curl for the test, but that is beside the point for my question.)
I can create a small containerized app which will exit 0 on success. The next step would then be to execute this on each potential node and harvest the results. How do I do that?
(I don't have access to the nodes directly, only indirectly via Kubernetes/OpenShift. I only have access at the namespace level, not the cluster level.)
The underlying node firewall settings are NOT controlled by K8s network policies. To test network connectivity within a namespace you only need to run one pod in that namespace. To test the firewall settings of the node itself you would typically SSH into the node and execute a test command. While this is possible with K8s, it would require the pod to run with root privileges, which is not applicable to you since you only have access to a single namespace.
The next step would then be to execute this on each potential node and harvest the results. How do I do that?
As gohm'c answered, you cannot run commands on the nodes unless you have access to the worker nodes themselves. You need SSH access to check the firewall configuration on a node.
If you are just planning to run your container app on specific types of nodes, or on all the nodes, you can follow the approach below.
You can create a Deployment, or you can use a DaemonSet if you want to run a pod on each node.
A Deployment is useful if you plan to run on specific nodes only; in that case you have to use a node selector or affinity.
A DaemonSet will deploy and run a copy of the container on every existing node (and on nodes added later), so you can choose accordingly, e.g. as sketched below.
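A minimal sketch of the DaemonSet approach, assuming a hypothetical namespace my-namespace and target https://my-target.example.com:443 (substitute your own), and using the curlimages/curl image, which includes a shell:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fw-check
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: fw-check
  template:
    metadata:
      labels:
        app: fw-check
    spec:
      containers:
      - name: fw-check
        image: curlimages/curl
        # run the connectivity test once, print the node name and the result,
        # then idle so the pod stays Running (DaemonSet pods must use restartPolicy: Always)
        command:
        - /bin/sh
        - -c
        - |
          if curl -sS --max-time 10 -o /dev/null https://my-target.example.com:443; then
            echo "$NODE_NAME OK"
          else
            echo "$NODE_NAME FAIL"
          fi
          while true; do sleep 3600; done
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName

You could then harvest the per-node results with something like kubectl -n my-namespace logs -l app=fw-check --prefix (each line is tagged with the pod name, and kubectl get pods -o wide maps pods to nodes).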
Related
I have a k8s cluster that runs the main workload and has a lot of nodes.
I also have a node (I call it the special node) that some special containers are running on, and that node is NOT part of the cluster. The node has access to some resources that are required by those special containers.
I want to be able to manage the containers on the special node along with the cluster, and make it possible to access them from inside the cluster, so the idea is to add the node to the cluster as a worker node, taint it to prevent normal workloads from being scheduled on it, and add tolerations to the pods running the special containers (sketched below).
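A minimal sketch of that taint/toleration setup, where the node name special-node and the taint key dedicated=special are placeholders:

# taint the special node so that normal workloads are not scheduled on it
kubectl taint nodes special-node dedicated=special:NoSchedule

# excerpt of the pod spec for the special containers: tolerate the taint
# and pin the pod to that node
spec:
  nodeSelector:
    kubernetes.io/hostname: special-node
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "special"
    effect: "NoSchedule"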
The idea looks fine, but there may be a problem. There will be some other containers and non-container daemons and services running on the special node that are not managed by the cluster (they belong to other activities that have to be separated from the cluster). I'm not sure that will be a problem, but I have not seen running non-cluster containers along with pod containers on a worker node before, and I could not find a similar question on the web about that.
So please enlighten me: is it OK to have non-cluster containers and other daemon services on a worker node? Does it require some caution, or am I just worrying too much?
Ahmad, from the above description I understand that you are trying to deploy a Kubernetes cluster using kubeadm, minikube or a similar kind of solution. In this setup you have some servers, and one of those servers has some special functionality (a GPU etc.); for deploying your special pods you can use a node selector, and I hope you are already doing this.
Coming to running a separate container runtime on one of these nodes, you mainly need to consider two points:
This can be done, and if you do not integrate the extra container runtime with Kubernetes, it is just one more piece of software running on your server. Let's say you used kubeadm on all the nodes and you want to run plain Docker containers on one of them: those will run separately, provided you have drafted a proper architecture and configured a separate, isolated virtual network accordingly.
Now comes the storage part: you need to create separate storage volumes for Kubernetes and for the other container runtime, because if either piece of software fails or gets corrupted it should not affect the other, and also to provide isolation.
If you maintain proper isolation, from storage through to the network, then you can run both Kubernetes and a separate container runtime side by side; however, it is not a recommended way of implementing things for production environments.
I'm faced with a scenario where we are thinking about using Kubernetes, but I'm not sure if it is the right tool for the job:
We have multiple vehicles, each having a computer connected to our main server via cellular network. We want to deploy several applications on every vehicle, so the vehicles are our nodes. We do not need any scaling, every vehicle will have an identical set of deployed applications running in two pods. And if a vehicle's computer is shut down, we must not deploy the pods on another node. Although the set of applications are always the same, their configuration is different on each vehicle (node). For instance some vehicles have a camera and this camera can only be accessed if their serial number is provided to the application. Other vehicles have no camera at all.
The Problem:
Using DaemonSets we can probably achieve that every vehicle runs just these two pods with the same containers. But the individual configuration worries me. We thought about having environment variables on each vehicle's computer with the relevant configs, but environment variables of the host system cannot be accessed inside the containers running in pods. Is there any possibility to provide a node-unique configuration to our deployments? Is Kubernetes the right tool to use here at all?
Sorry, I wasn't able to fully follow the vehicle setup (maybe because I only read it once), but I can help with this part:
But env variables of the host system cannot be accessed inside the containers running in pods. Is there any possibility to provide a node-unique configuration to our deployments?
Yes, there are possibilities; I am not sure how you are setting up the environment on the host/K8s node.
But there is the hostPath option: you can mount a directory from the node's filesystem directly into the container. You can create a file containing the env vars you want to pass to the app when provisioning the Kubernetes node, at a fixed location, and then have your pod mount that same path as a hostPath volume, e.g. as sketched below.
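A minimal sketch of that idea, assuming a hypothetical directory /etc/vehicle on each node that your startup script populates with a config.env file (all names and the image are placeholders):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: vehicle-app
spec:
  selector:
    matchLabels:
      app: vehicle-app
  template:
    metadata:
      labels:
        app: vehicle-app
    spec:
      containers:
      - name: app
        image: registry.example.com/vehicle-app:latest   # hypothetical image
        # the app reads its node-specific settings from /config/config.env
        volumeMounts:
        - name: node-config
          mountPath: /config
          readOnly: true
      volumes:
      - name: node-config
        hostPath:
          path: /etc/vehicle   # directory on the node's filesystem
          type: Directory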
If a node gets replaced (e.g. during scaling), the new pods/containers won't have this file if you only added it manually the first time.
Keep this env file in the node's user data (startup script), so that whenever a new node is created in the node pool it spins up with the file already at the default location.
Read more: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
Add-on: if you want to use the labels of a node inside a container, see https://github.com/scottcrossen/kube-node-labels
I have a node mistakenly registered on cluster B while it is actually serving cluster A.
Here 'registered on cluster B' means I can see the node via kubectl get node on cluster B.
I want to deregister this node from cluster B, but keep the node intact.
I know the regular process to delete a node is:
kubectl drain xxx
kubectl delete node xxx
# on node
kubeadm reset
But I do not want the pods from cluster A on that node to be deleted or transferred. And I want to make sure the node does not self-register to cluster B afterwards.
To be clear: let's say cluster A has Pod A on the node and cluster B has Pod B on the node as well; I want to delete the node from cluster B but keep Pod A intact. (By the way, can I see Pod A from cluster B?)
Thank you in advance!
To deregister the node without removing any pods, run the command below:
kubectl delete node nodename
After this is done, the node will no longer appear in kubectl get nodes.
To prevent the node from self-registering again, stop the kubelet process on that node by logging into it and running the command below:
systemctl stop kubelet
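Note that stopping the process is not persistent across a reboot; assuming the kubelet that registers to cluster B runs as its own systemd unit (commonly just kubelet in a kubeadm setup; use whichever unit serves cluster B), you could additionally disable that unit:

systemctl disable kubelet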
As this case has already been clarified, I decided to publish a Community Wiki answer based on the following comment:
#mario nvm, I thought different clusters in one node affect each other, actually they do not, they just share container runtime which is more like 'read-only', and they have different kubelets of themselves listening on different port. – Li Ziyan Aug 17 at 5:29
in order to make it clear for other users what the actual issue was here and how it has been solved or simply clarified.
So if you design your infrastructure in such a way that one physical (or virtual) machine serves as a Node for more than one kubernetes cluster (which I believe is not a very common case), the infrastructure looks as follows:
Components that are shared:
physical (or virtual) node
common container runtime environment (e.g. docker)
Components that are separate:
two separate kubelets. Although they are running on the same physical/virtual node, they are configured to listen on different ports and are registered with two different master Nodes (or more specifically, with two different kube-apiservers that are part of two different kubernetes control planes)
two logically separate, independent kubernetes Nodes which, although they are backed by the same physical node/host, are part of two completely different kubernetes clusters that don't interfere with each other in any way
I hope it helps to clarify possible confusion about this question and maybe help someone in case they have similar doubts.
I am working on a client requirement that the worker nodes need to have a specific time zone configured for their apps to run properly. We have tried things such as using the TZ environment variable and also mounting a volume on /etc/localtime that points to the right file in /usr/share/zoneinfo// - these work to some extent, but it seems I will need to use DaemonSets to modify the node configuration for some of the apps.
The concern I have is that the specific pod that needs to make this change on the nodes will have to run with host privileges, and leaving such pods running on all nodes doesn't sound good. The documentation says that pods in DaemonSets must have a restart policy of Always, so I can't have them exit after making the changes either.
I believe I can address this specific concern with an init container that runs with host privileges, makes the appropriate changes on the node and exits. The other containers in the DaemonSet pod will run after the init container has completed successfully, and finally all the other pods get scheduled on the node. I also believe this sequence works the same way when I add more nodes to the cluster.
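A minimal sketch of what I have in mind, assuming a hypothetical zone Europe/Berlin and that /usr/share/zoneinfo is present on the nodes (all names are placeholders):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-tz-setup
spec:
  selector:
    matchLabels:
      app: node-tz-setup
  template:
    metadata:
      labels:
        app: node-tz-setup
    spec:
      initContainers:
      - name: set-timezone
        image: busybox
        securityContext:
          privileged: true
        # point the node's /etc/localtime at the desired zone file and exit;
        # the symlink target is resolved on the host's filesystem
        command:
        - /bin/sh
        - -c
        - ln -sf /usr/share/zoneinfo/Europe/Berlin /host-etc/localtime
        volumeMounts:
        - name: host-etc
          mountPath: /host-etc
      containers:
      - name: pause
        # unprivileged placeholder so the pod satisfies restartPolicy: Always
        image: registry.k8s.io/pause:3.9
      volumes:
      - name: host-etc
        hostPath:
          path: /etc
          type: Directory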
Does that sound about right? Are there better approaches?
I am working on a POC, and I found some strange behavior after setting up my Kubernetes cluster.
In fact, I am working on a topology of one master and two minions.
When I tried to bring up 2 pods, one on each minion, and expose a service for them, it turned out that when I request the service from the master, nothing is returned (no response from either pod), and when I request the service from a minion, only the pod deployed on that minion responds, but the other does not.
This can heavily depend on how your cluster is provisioned.
For starters, you need to validate how networking is set up and whether it works as Kubernetes expects. In short, if you launch two pods (on separate nodes), they should get IPs from their dedicated per-node ranges and be able to route between nodes. You can use some small(ish) base image (alpine/debian/ubuntu etc.) running something like sleep 1d, exec into them, and simply ping one from the other, e.g. as below. If that does not work, your network setup is broken.
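A minimal sketch of that check, assuming two nodes named node-1 and node-2 (substitute your own node names; busybox is used here because it ships both ping and sleep):

# start one test pod pinned to each node
kubectl run net-test-a --image=busybox --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"node-1"}}' -- sleep 1d
kubectl run net-test-b --image=busybox --restart=Never \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"node-2"}}' -- sleep 1d

# look up the pod IP of the second pod
kubectl get pod net-test-b -o wide

# ping it from the first pod (replace <POD_IP_OF_NET_TEST_B>)
kubectl exec net-test-a -- ping -c 3 <POD_IP_OF_NET_TEST_B>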
Make sure you test between pods, not directly from the node host OS. In some configurations the node is unable to access service IPs due to routing concerns, but pod-to-pod traffic works fine (I have seen this in some flannel configurations).
Also, your networking is probably provided by some overlay network solution like flannel, weave, calico, etc., so check their respective logs for signs of problems.