I have a k8s cluster that runs the main workload and has a lot of nodes.
I also have a node (I'll call it the special node) that is NOT part of the cluster and runs some special containers. This node has access to resources that those special containers require.
I want to be able to manage containers on the special node along with the cluster, and make them accessible from inside the cluster. The idea is to add the node to the cluster as a worker node, taint it to prevent normal workloads from being scheduled on it, and add tolerations to the pods running the special containers.
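Roughly, I imagine the setup like this (the node name, taint key, and image are placeholders, not my real setup):

```yaml
# Taint on the special node, expressed on the Node object
# (equivalent to: kubectl taint nodes special-node dedicated=special:NoSchedule)
apiVersion: v1
kind: Node
metadata:
  name: special-node          # placeholder node name
  labels:
    dedicated: special        # label so the special pods can also target the node
spec:
  taints:
  - key: dedicated
    value: special
    effect: NoSchedule
---
# A special pod that tolerates the taint and targets the node
apiVersion: v1
kind: Pod
metadata:
  name: special-workload
spec:
  nodeSelector:
    dedicated: special
  tolerations:
  - key: dedicated
    operator: "Equal"
    value: special
    effect: NoSchedule
  containers:
  - name: app
    image: registry.example.com/special-app:latest   # placeholder image
```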
The idea looks fine, but there may be a problem. There will be some other containers and non-container daemons and services running on the special node that are not managed by the cluster (they belong to other activities that have to be kept separate from the cluster). I'm not sure whether that will be a problem; I have never seen non-cluster containers running alongside pod containers on a worker node before, and I could not find a similar question on the web.
So please enlighten me: is it OK to have non-cluster containers and other daemon services on a worker node? Does it require any special precautions, or am I just worrying too much?
Ahmad, from the description above I understand that you are deploying a Kubernetes cluster using kubeadm, minikube, or a similar solution, and that one of your servers has some special functionality such as a GPU. For deploying your special pods you can use a node selector, and I hope you are already doing this.
As for running a separate container runtime on one of these nodes, there are mainly two points to consider:
Networking: this can be done, and if you haven't integrated the extra container runtime with Kubernetes it is simply one more piece of software running on your server. Say you used kubeadm on all the nodes and you also want to run standalone Docker containers: they will stay separate, provided you have drafted a proper architecture and configured a separate, isolated virtual network for them.
Storage: you need to create separate storage volumes for Kubernetes and for the other container runtime, so that if either piece of software fails or gets corrupted it does not affect the other, and to provide isolation.
If you maintain proper isolation, from storage through networking, you can run Kubernetes and a separate container runtime side by side. However, it is not a recommended setup for production environments.
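Purely as an illustration (all names and paths here are made up), the non-cluster containers could be kept on their own bridge network and named volume, away from anything the kubelet manages, e.g. with a compose file like:

```yaml
# docker-compose.yml for the non-cluster containers (illustrative sketch).
# A dedicated bridge network and a named volume keep them isolated from the
# CNI-managed pod network and from kubelet's directories.
services:
  legacy-daemon:
    image: registry.example.com/legacy-daemon:stable   # placeholder image
    networks:
      - standalone
    volumes:
      - legacy-data:/var/lib/legacy
networks:
  standalone:
    driver: bridge        # its own bridge, separate from the pod network
volumes:
  legacy-data: {}
```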
The Kubernetes documentation describes a pod as a wrapper around one or more containers. Containers running inside a pod share a set of namespaces (e.g. network), which makes me think namespaces are nested (I kind of doubt that). What is the wrapper here from the container runtime's perspective?
Since containers are just processes constrained by namespaces, cgroups, etc., perhaps a pod is just the first container launched by the kubelet, with the rest of the containers started inside and grouped by its namespaces?
The main difference is networking, the network namespace is shared by all containers in the same Pod. Optionally, the process (pid) namespace can also be shared. That means containers in the same Pod all see the same localhost network (which is otherwise hidden from everything else, like normal for localhost) and optionally can send signals to processes in other containers.
The idea is that Pods are groups of related containers: not really a wrapper per se, but a set of containers that should always deploy together for whatever reason. Usually that's a primary container plus some sidecars providing support services (mesh routing, log collection, etc.).
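For example, a minimal Pod that demonstrates the shared namespaces (the images are just common public ones, used here for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  shareProcessNamespace: true    # optional: also share the PID namespace
  containers:
  - name: web
    image: nginx:alpine
  - name: sidecar
    image: curlimages/curl:latest
    # "localhost" here reaches the nginx container, because both
    # containers live in the same Pod network namespace.
    command: ["sh", "-c", "sleep 5; curl -s http://localhost:80; sleep 3600"]
```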
A Pod is just a co-located group of containers, and a Kubernetes object in its own right.
Instead of deploying the containers separately, you can deploy them as a pod. Best practice is not to run multiple processes in a single container, and this is where the pod idea comes in: by running pods you group containers together and orchestrate them as a single object.
Containers in a pod share the same network namespace (IP address and port space), so you have to be careful that two processes don't use the same port.
The filesystem is different: each container's filesystem comes from its image, so the filesystems are isolated unless the containers share a Volume.
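For example, a sketch of two containers sharing a single emptyDir volume while the rest of their filesystems stay isolated:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
  - name: shared
    emptyDir: {}               # lives as long as the Pod does
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg; sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  - name: reader
    image: busybox
    # Only /data is shared; the rest of each container's fs comes from its image.
    command: ["sh", "-c", "sleep 5; cat /data/msg; sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
```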
I am searching for a solution that enables me to set up a single-node K8s cluster and, if needed, add nodes to it later.
I am aware of solutions such as minikube and microk8s, but they are not expandable. I am trying k3s at the moment precisely because it offers this feature, but I have some problems with storage and other things that I am still working on.
Now my questions:
What other solutions for this exist?
What are the disadvantages if I untaint the master node and run everything there (for a long period, not just for testing)?
You can use kubeadm to set up a single-node "cluster", then use the join command to add more nodes later.
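Roughly like this (an untested sketch; on older kubeadm versions the control-plane taint key is node-role.kubernetes.io/master, newer ones use node-role.kubernetes.io/control-plane):

```sh
# On the first (and initially only) node:
kubeadm init

# Install a pod network add-on of your choice, then allow workloads
# on the control-plane node by removing its taint:
kubectl taint nodes --all node-role.kubernetes.io/master-

# Later, when you want to grow the cluster, print a join command
# on the first node and run it on each new node:
kubeadm token create --print-join-command
```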
You can expand a k3s cluster via k3sup join. Here is a guide.
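If I remember the k3sup flags correctly, the flow is roughly this (the IPs and user are placeholders):

```sh
# Install k3s on the initial server node:
k3sup install --ip 192.168.0.10 --user ubuntu

# Later, join an additional agent node to that server:
k3sup join --ip 192.168.0.11 --server-ip 192.168.0.10 --user ubuntu
```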
Key Kubernetes services such as kube-apiserver and kube-scheduler should be available and running smoothly at all times on the master nodes. Therefore, it is essential to give the master nodes dedicated resources and avoid having other non-critical workloads interfere with the functioning of the master services.
What are the disadvantages if I untaint the master node and run everything there (for a long period, not just for testing)?
Failure of the worker will of course bring down your applications. When you recover it or spin up another one, K8s will recover your apps for you.
Failure of the master will not directly take down your applications, only the cluster's ability to manage itself and to self-heal (which will affect uptime at some point).
I am searching for a solution that enables me to set up a single-node K8s cluster and, if needed, add nodes to it later.
To the best of my knowledge, there is no such thing as a single-node, production-ready k8s cluster.
For something small and simple you can check Rancher.
What other solutions for this exist?
kubeadm allows you to install everything on a single node: install kubeadm on the node, run "kubeadm init", install a pod network, then remove the master taint.
Another solution you may be interested in is Kubespray.
Some "honorable mentions" are:
Charmed Kubernetes by Canonical allows you to do everything on one node; however, it should be quite a big node, so it may not fit your case here (but it is still worth mentioning).
If you don't really require all the k8s power (with only one small node), then Nomad could be an alternative.
Let me know if that helps.
I have played around a little with Docker and Kubernetes, and I need some advice: is it a good idea to have one Pod on a VM with all of the following deployed as multiple (hybrid) containers?
This is our POC plan:
Customers access the services through an nginx reverse proxy with a public API endpoint, e.g. abc.xyz.com or def.xyz.com.
List of containers that we need:
Identity server, connected to SQL Server
Our API server with Hangfire, connected to SQL Server
The API server that connects to the Redis server
Redis, which in turn has 3 Hangfire agents, load-balanced (scalable in the future)
Should we set up 1 or 2 VMs?
Is a combination of Windows and Linux containers advisable?
How many Pods per VM? How many containers per Pod?
Should we attach volumes for the DBs?
Thank you for your help
Cluster size can differ depending on the Kubernetes platform you want to use. With managed solutions like GKE/EKS/AKS you don't need to create a master node yourself, but you have less control over your cluster and you may not be able to use the latest Kubernetes version.
It is safer to have at least 2 worker nodes (more is better): in case of a node failure, pods will be rescheduled onto another healthy node.
I'd say Linux containers are more lightweight and have less overhead, but it's up to you to decide what to use.
The number of pods per VM is decided during scheduling by the kube-scheduler and depends on the pods' requested resources and the amount of resources available on the cluster nodes.
All data inside the running containers of a Pod is lost after a pod restart/deletion. You can import/restore DB content during pod startup using init containers (or DB replication), or configure volumes to persist data across pod restarts.
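A rough sketch of the volumes + init-container approach (the PVC name, the tools image, and the restore command are all placeholders; only the structure is the point):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  volumes:
  - name: db-data
    persistentVolumeClaim:
      claimName: db-data-pvc        # assumes this PVC already exists
  initContainers:
  - name: restore
    image: registry.example.com/db-tools:latest   # placeholder image
    # Hypothetical restore step; it runs to completion before the DB starts.
    command: ["sh", "-c", "test -f /var/lib/db/seeded || restore-backup /var/lib/db"]
    volumeMounts:
    - name: db-data
      mountPath: /var/lib/db
  containers:
  - name: database
    image: postgres:15              # example DB image
    volumeMounts:
    - name: db-data
      mountPath: /var/lib/postgresql/data
```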
You can easily decide which container you need to put in the same Pod if you look at your application set from the perspective of scaling, updating and availability.
If you can benefit from scaling and updating application parts independently, and from having several replicas of crucial parts, it's better to put them in separate Deployments. If the application parts must always run on the same node and it's fine to restart them all at once, you can put them in one Pod.
If I have 10 different services, each of which is independent of the others and runs in its own container, can I get Kubernetes to run all of those services on, say, 1 host?
This is unclear in the Kubernetes documentation. It states that you can force it to schedule containers from the same pod onto one host using a "multi-container pod", but it doesn't seem to address whether you can have multiple pods running on one host.
In fact kubernetes will do exactly what you want by default. It is capable of running dozens if not hundreds of containers on a single host (depending on its specs).
If you want very advanced control over scheduling pods, there is an alpha feature for that which introduces the concept of node/pod (anti-)affinity. But I would say that is a rather advanced k8s topic at the moment, so for most use cases you are probably fine with what is in stable/beta.
Honorable mention: there is a nasty trick that lets you ensure pods are not collocated on the same node, and that is when they both declare the same hostPort in their ports section. It can be useful in some cases, but be aware that it affects, e.g., how rolling deployments happen in some situations.
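The trick itself is just a hostPort declaration in each pod, something like this (the name, image, and port are arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: port-pinned
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    ports:
    - containerPort: 8080
      hostPort: 8080    # two pods declaring the same hostPort cannot share a node
```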
You can use node selectors to assign each of the pods to the same node/host:
http://kubernetes.io/docs/user-guide/node-selection/
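For example, pinning a pod by hostname (the hostname value and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: node-1   # placeholder; use the target node's hostname label
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
```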
Having said that, the whole point of Kubernetes is to manage a cluster of nodes onto which you can deploy apps/pods.
This is a general question about GKE compared to GCE. If one is running a lightweight service on a single small GCE VM, is it a reasonable thing to do to try running that same service from a single GKE container on the same size instance? Or does the overhead of cluster management make this unfeasible?
Specifics: I'm serving a low-traffic website from a tiny (f1-micro) GCE VM. For various reasons I thought I'd try moving it to serve from an apache/nginx container, with the same hardware underneath. In practice though, I find that GKE won't even let you create a cluster of f1-micro instances unless it has at least 3 nodes - the release notes say this is so there will be enough memory to manage pods.
I'd supposed that the same service would take up similar resources whether in a VM or a container, but the GKE 3-node restriction makes it sound like simply managing the cluster eats more memory than serving my site does in the first place. Is that the case, or is the restriction meant for much heavier services than mine? (For reference, you can actually create a 3-node cluster of f1-micro instances and then resize it to 1 node, and it seems to run normally, but I haven't tried actually running a service this way.)
Thanks!
GKE enables logging and monitoring by default, which runs Fluentd and Heapster pods in your cluster. These eat up a good chunk of memory. Even if you disable logging/monitoring, you still have to run Docker, Kubelet, and the DNS pod. That chews through the f1-micro's 600MB pretty quickly.
I'd suggest a 1-node g1-small cluster over a 3-node (or 1-node) f1-micro cluster. The per-node cluster-management overhead is relatively smaller, so your service would still be able to run in the same (or a larger) footprint. But if the resize-to-1 workaround is working for you, it seems fine to just roll with that.
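If it helps, creating such a cluster looks roughly like this (the cluster name and zone are placeholders):

```sh
gcloud container clusters create my-small-cluster \
  --machine-type=g1-small \
  --num-nodes=1 \
  --zone=us-central1-a
```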