We are considering using Kubernetes for the following scenario, but I'm not sure it is the right tool for the job:
We have multiple vehicles, each having a computer connected to our main server via a cellular network. We want to deploy several applications on every vehicle, so the vehicles are our nodes. We do not need any scaling; every vehicle will have an identical set of deployed applications running in two pods, and if a vehicle's computer is shut down, the pods must not be deployed on another node. Although the set of applications is always the same, their configuration differs per vehicle (node). For instance, some vehicles have a camera that can only be accessed if its serial number is provided to the application; other vehicles have no camera at all.
The Problem:
Using DaemonSets we can probably achieve that all vehicles run just these two pods with the same containers. But the individual configuration worries me. We thought about setting environment variables on each vehicle's computer with the relevant configs, but environment variables of the host system cannot be accessed inside the containers running in pods. Is there any possibility to provide a node-unique configuration to our deployments? Is Kubernetes the right tool to use here at all?
Sorry, I wasn't able to follow all the vehicle details (maybe because I only read it once), but I can help with this part:
But env variables of the host system cannot be accessed inside the containers running in pods. Is there any possibility to provide a node-unique configuration to our deployments?
Yes, there are possibilities, though I'm not sure how you are setting up the environment on the host/Kubernetes node.
One option is a hostPath volume: you can mount a directory from the node directly into the container. When provisioning each node, create a file at a fixed location containing the env vars you want to pass to the app, then have your pod mount that path as a hostPath volume.
Note that if a node gets replaced during scaling, the new pods won't find this file if you created it manually the first time. Keep the env file in the node's user data (startup script) so that any node created in the node pool spins up with the file already at the default location.
Read more : https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
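A minimal sketch of such a DaemonSet, assuming the config file lives at a fixed location on each node; the image name and host path are placeholders:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: vehicle-apps
spec:
  selector:
    matchLabels:
      app: vehicle-apps
  template:
    metadata:
      labels:
        app: vehicle-apps
    spec:
      containers:
        - name: app
          image: registry.example.com/vehicle-app:latest   # hypothetical image
          volumeMounts:
            - name: node-config
              mountPath: /etc/vehicle-config               # where the app reads its config
              readOnly: true
      volumes:
        - name: node-config
          hostPath:
            path: /opt/vehicle-config                      # hypothetical fixed location on each node
            type: Directory
```

The app inside the container can then source or parse the file(s) under /etc/vehicle-config at startup.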
Add-on: if you want to use a node's labels inside a container, see https://github.com/scottcrossen/kube-node-labels
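As an alternative sketch (not covered by the link above), the Downward API can expose the node's name as an environment variable, which the app could then use to look up its per-node configuration; the image name here is a placeholder:

```yaml
containers:
  - name: app
    image: registry.example.com/vehicle-app:latest   # hypothetical image
    env:
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName                 # the name of the node the pod landed on
```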
I have a k8s cluster that runs the main workload and has a lot of nodes.
I also have a node (I call it the special node) on which some special containers are running, and that is NOT part of the cluster. This node has access to some resources that are required by those special containers.
I want to be able to manage the containers on the special node along with the cluster, and make it possible to access them from inside the cluster. So the idea is to add the node to the cluster as a worker node, taint it to prevent normal workloads from being scheduled on it, and add tolerations to the pods running the special containers.
The idea looks fine, but there may be a problem. There will be some other containers and non-container daemons and services running on the special node that are not managed by the cluster (they belong to other activities that have to be separated from the cluster). I'm not sure whether that will be a problem, but I have not seen non-cluster containers running alongside pod containers on a worker node before, and I could not find a similar question on the web.
So please enlighten me: is it OK to have non-cluster containers and other daemon services on a worker node? Does it require some caution, or am I just worrying too much?
Ahmad, from the above description I understand that you are deploying a Kubernetes cluster using kubeadm, minikube, or a similar solution, and that among your servers one has some special functionality (GPU, etc.). For deploying your special pods onto it you can use a node selector, which I assume you are already doing.
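For illustration, a pod pinned to the special node might look like this; the label, the taint key/value, and the image are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: special-pod
spec:
  nodeSelector:
    node-role/special: "true"     # hypothetical label applied to the special node
  tolerations:
    - key: "dedicated"            # matches a taint such as:
      operator: "Equal"           #   kubectl taint nodes <node> dedicated=special:NoSchedule
      value: "special"
      effect: "NoSchedule"
  containers:
    - name: special-app
      image: registry.example.com/special-app:latest   # hypothetical image
```

Normal workloads without the toleration will avoid the tainted node, while this pod both tolerates the taint and selects the node.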
Coming to running a separate container runtime on one of these nodes, you mainly need to consider two points:
This can be done. If you don't integrate the extra container runtime with Kubernetes, it is simply one more piece of software running on your server. Say you used kubeadm on all the nodes and you also want to run plain Docker containers: these will stay separate, provided you have drafted a proper architecture and configured a separate, isolated virtual network accordingly.
Now comes the storage part: you need to create separate storage volumes for Kubernetes and for the other container runtime, so that if either one fails or gets corrupted it does not affect the other, and also to provide isolation.
If you maintain proper isolation from storage to network, you can run Kubernetes and a separate container runtime side by side; however, this is not a recommended setup for production environments.
Let's say I deployed 2 pods to Kubernetes and they both have the same underlying image which includes some read-only file system.
By default, do the pods share this file system? Or each pod copies the file system and hence has a separate copy of it?
I would appreciate any answer and especially would love to see some documentation or resources I can read in order to delve deeper into this issue.
Thanks in advance!
In short, it depends on where the pods are running. If they are running on the same node, then yes, they share the same read-only copy of the image; if on separate nodes, then each has its own read-only copy of the image. Keep reading if you are interested in more technical details.
Inside Kubernetes Pods
A pod can be viewed as a set of containers bound together. It is a construct provided by Kubernetes to be able to have certain benefits out of the box. We can understand your question better if we zoom into a single node that is part of a Kubernetes cluster.
This node will have a kubelet binary running on it, which receives certain "instructions" from the api-server about running pods. These instructions are passed, via the CRI (Container Runtime Interface), to the container runtime on your node (let's assume it is the Docker engine). The runtime is responsible for actually running the needed containers and reporting back to the kubelet, which reports back to the api-server, ultimately informing the pod-controller that the pod's containers are Running.
Now, the question becomes, do multiple pods share the same image? I said the answer is yes for pods on the same node and this is how it works.
Say you run the first pod: the Docker daemon running on your k8s node pulls the image from the configured registry and stores it in the node's local cache, then starts a container using it. Note that a running container uses the image simply as a read-only file-system; depending on the storage driver configured in Docker, a "writeable layer" is placed on top of this read-only filesystem to allow you to read/write on the container's file-system. This writeable layer is temporary and vanishes when you delete the container.
When you run the second pod, the daemon finds that the image is already available locally, and simply creates the small writeable layer for your container, on top of an existing image from the cache and provides this as a "writeable file system" to your container. This speeds things up.
Now, in case of docker, these read-only layers of the image (as one 'file-system') are shared across all containers running on the host. This makes sense since there is no need to copy a read-only file system and sharing it with multiple containers is safe. And each container can maintain its uniqueness by storing its data in the thin writeable layer that it has.
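On a Docker host you can observe this sharing yourself. A rough illustration (requires a running Docker daemon; the image is just an example, and the exact numbers will vary by host):

```shell
# Run two containers from the same image; the image layers are pulled once and shared.
docker run -d --name c1 nginx:alpine
docker run -d --name c2 nginx:alpine

# The SIZE column shows each container's thin writeable layer,
# while the "(virtual ...)" part includes the shared read-only image,
# which is stored only once on disk.
docker ps --size

# Inspect the read-only layers that make up the image itself.
docker history nginx:alpine

# Clean up.
docker rm -f c1 c2
```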
References
For further reading, you can use:
Read about storage drivers in docker. It explains how multiple containers share the r/o layer of the image.
Read details about different storage driver types to see how this "thin writeable layer" is implemented in practice by docker.
Read about container runtimes in Kubernetes to understand that docker isn't the only supported runtime. There are others but more or less, the same will hold true for them as well, as it makes sense to cache images locally and re-use the read-only image file system for multiple containers.
Read more about the kubelet component of Kubernetes to understand how it can support multiple run-times and how it helps the pod-controller setup and manage containers.
And of course, finally you can find more details about pods here. This will make a lot more sense after you've read the material above.
Hope this helps!
Background: we have approx. 50 nodes "behind" a namespace, meaning that a given Pod in this namespace can land on any of those 50 nodes.
The task is to test whether an outbound firewall rule (in a FW outside the cluster) has been implemented correctly. Therefore I would like to run a command on each potential node in the namespace that tells me whether I can reach my target from that node. (I am using curl for the test, but that is beside the point of my question.)
I can create a small containerized app which exits 0 on success. The next step would then be to execute it on each potential node and harvest the results. How do I do that?
(I don't have access to the nodes directly, only indirectly via Kubernetes/OpenShift. I only have access to the namespace-level, not the cluster-level.)
The underlying node firewall settings are NOT controlled by K8s network policies. To test network connectivity from a namespace you only need to run one pod in that namespace. To test the firewall settings of a node you would typically SSH into the node and run a test command; while this is possible with K8s, it would require the pod to run with root privileges, which is not applicable to you since you only have access to a single namespace.
Then next step would be execute this on each potential node and harvest the result. How to do that?
As gohm'c answered, you cannot run a command on the nodes themselves unless you have access to the worker nodes; you need SSH access to check the firewall on a node.
If you are planning to just run your container app on specific types of nodes, or on all the nodes, you can follow the approach below.
You can create a Deployment, or use a DaemonSet if you want to run on every node.
A Deployment is useful if you plan to run on specific nodes; in that case you have to use a node selector or affinity.
A DaemonSet will deploy and run a container on every existing node. Choose accordingly.
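A minimal sketch of such a DaemonSet for the connectivity check; the target URL is a placeholder (curlimages/curl is a public curl image):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fw-check
spec:
  selector:
    matchLabels:
      app: fw-check
  template:
    metadata:
      labels:
        app: fw-check
    spec:
      containers:
        - name: curl
          image: curlimages/curl:latest
          # Probe the external target, then sleep so the pod stays Running
          # (DaemonSet pods must not exit) and the result can be read
          # with `kubectl logs`.
          command: ["/bin/sh", "-c"]
          args:
            - 'if curl -fsS --max-time 5 https://target.example.com >/dev/null;
               then echo "OK from $NODE_NAME"; else echo "FAIL from $NODE_NAME"; fi;
               sleep 3600'
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName   # so the log names the node it ran on
```

Harvest the results with something like `kubectl logs -l app=fw-check --prefix` in your namespace.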
I am working on a client requirement that the worker nodes needs to have a specific time zone configured for their apps to run properly. We have tried things such as using the TZ environment and also mounting a volume on /etc/localtime that points to the right file in /usr/share/zoneinfo// - these work to some extent but it seems I will need to use daemonsets to modify the node configuration for some of the apps.
The concern I have is that the specific pod that needs to make this change on the nodes will have to run with host privileges, and leaving such pods running on all nodes doesn't sound good. The documentation says that pods in DaemonSets must have a restart policy of Always, so I can't have them exit after making the changes either.
I believe I can address this specific concern with an init container that runs with host privileges, makes the appropriate changes on the node, and exits. The other containers in the DaemonSet pod will run after the init container completes successfully, and finally all the other pods get scheduled on the node. I also believe this sequence works the same way when I add another node to the cluster.
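A minimal sketch of what I have in mind; the time zone and images are placeholders:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: tz-setter
spec:
  selector:
    matchLabels:
      app: tz-setter
  template:
    metadata:
      labels:
        app: tz-setter
    spec:
      initContainers:
        - name: set-timezone
          image: alpine:3.19
          securityContext:
            privileged: true          # needed to modify the host filesystem
          volumeMounts:
            - name: host-etc
              mountPath: /host/etc
          # Point the node's /etc/localtime at the desired zone, then exit.
          # (The symlink target is resolved on the host; the zone is a placeholder.)
          command: ["/bin/sh", "-c",
                    "ln -sf /usr/share/zoneinfo/Europe/Berlin /host/etc/localtime"]
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9   # unprivileged no-op to keep the pod Running
      volumes:
        - name: host-etc
          hostPath:
            path: /etc
```

Only the init container is privileged; the long-running container is an inert pause image.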
Does that sound about right? Are there better approaches?
I am in the process of learning Kubernetes with a view to setting up a simple cluster with Citus DB and I'm having a little trouble with getting things going, so would be grateful for any help.
I have a docker image containing my base Debian image configured with Citus for the project. At this point I want to set it up with one master that mounts a GCP disk holding a Postgres DB, which I'll then distribute among the other containers, each mounted with an individual separate disk holding empty tables (configured with the Citus extension) to receive what gets distributed to it. I'd like to automate this further at some point, but for now I'm aiming for just a master container and eight nodes. My plan is to create a deployment that opens ports 5432 and 80 on each node, and I thought I could create two pods, one to hold the master and one to hold the eight nodes. Ideally I'd want to mount all the disks and then run a post-mount script on the master that finds all the node containers (by IP or hostname??), adds them as Citus nodes, then runs create_distributed_table to distribute the data.
My confusion at present is about how to label all the individual nodes so they will keep their internal address or hostname and so in the case of one going down it will be replaced and resume with the data on the PD. I've read about ConfigMaps and setting hostname aliases but I'm still unclear about how to proceed. Is this possible, or is this the wrong way to approach this kind of setup?
You are looking for a StatefulSet. That lets you have a known number of pod replicas, with attached storage (PersistentVolumes) and consistent DNS names. In the pod spec I would launch only a single copy of the server, and use the StatefulSet's replica count to control the number of "nodes" (also a Kubernetes term); if the replica is #0 then it's the master.
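A rough sketch of such a StatefulSet, assuming a headless Service named citus already exists; the image, storage size, and replica count are placeholders:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: citus
spec:
  serviceName: citus      # headless Service gives stable DNS: citus-0.citus, citus-1.citus, ...
  replicas: 9             # citus-0 as the master, citus-1 .. citus-8 as workers
  selector:
    matchLabels:
      app: citus
  template:
    metadata:
      labels:
        app: citus
    spec:
      containers:
        - name: citus
          image: registry.example.com/citus-debian:latest   # hypothetical project image
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:     # one PersistentVolume per replica, re-attached on restart
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi
```

Because each replica keeps its name and volume across restarts, a master-side script can register the workers by their stable DNS names (citus-1.citus, etc.) rather than by pod IP.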