Why use pods and not containers directly in an OpenShift V3 environment?

Kubernetes is an orchestration tool for the management of containers.
Kubernetes creates pods that contain containers, instead of managing containers directly.
I read this about pods.
I'm working with OpenShift V3, which uses pods. But in my apps, in all the demos, and in all the examples I see, one pod contains one container (it's possible to contain more, and that could be an advantage of using pods). In an OpenShift environment I don't see the advantage of these pods.
Can someone explain to me why OpenShift V3 uses Kubernetes with pods and containers, instead of an orchestration tool that works with containers directly (without pods)?

There are many cases where our users want to run pods with multiple containers within OpenShift. A common use-case for running multiple containers is where a pod has a 'primary' container that does some job, and a 'side-car' container that does something like write logs to a logging agent.
The motivation for pods is twofold -- to make it easier to share resources between containers, and to enable deploying and replicating groups of containers that share resources. You can read more about them in the user-guide.
The reason we still use a Pod even when there is only a single container is that containers do not have all the notions that are attached to pods. For example, pods have IP addresses. Containers do not -- they share the IP address associated with the pod's network namespace.
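To make that concrete, here is a minimal sketch of such a pod (the names, images, and paths below are placeholders, not from any particular setup): both containers share the pod's IP address and an emptyDir volume that the side-car reads the primary container's logs from.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-logger             # hypothetical name
spec:
  volumes:
    - name: logs
      emptyDir: {}                  # scratch volume shared by both containers
  containers:
    - name: app                     # the 'primary' container doing the actual job
      image: example/app:latest     # placeholder image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper             # the 'side-car' shipping logs to a logging agent
      image: example/log-agent:latest   # placeholder image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true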
Hope that helps. Let me know if you'd like more clarification, or we can discuss on slack.


Running other non-cluster containers on k8s node

I have a k8s cluster that runs the main workload and has a lot of nodes.
I also have a node (I call it the special node) that is NOT part of the cluster, on which some special containers are running. The node has access to some resources that are required for those special containers.
I want to be able to manage the containers on the special node along with the cluster, and make it possible to access them from inside the cluster, so the idea is to add the node to the cluster as a worker node, taint it to prevent normal workloads from being scheduled on it, and add tolerations to the pods running the special containers.
The idea looks fine, but there may be a problem. There will be some other containers and non-container daemons and services running on the special node that are not managed by the cluster (they belong to other activities that have to be kept separate from the cluster). I'm not sure whether that will be a problem, but I have not seen non-cluster containers running alongside pod containers on a worker node before, and I could not find a similar question on the web about it.
So please enlighten me: is it OK to have non-cluster containers and other daemon services on a worker node? Does it require some precautions, or am I just worrying too much?
Ahmad, from the above description I understand that you are trying to deploy a Kubernetes cluster using kubeadm, minikube, or another similar kind of solution. In this setup you have some servers, and one of those servers has some special functionality (a GPU, for example); for deploying your special pods you can use a node selector, and I hope you are already doing this.
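As a rough sketch of the taint/toleration idea from your question (the special=true key/value, names, and image are made-up examples): after tainting the special node with, say, special=true:NoSchedule and labelling it special=true, the special pods would carry a matching toleration and node selector.

apiVersion: v1
kind: Pod
metadata:
  name: special-workload              # hypothetical name
spec:
  nodeSelector:
    special: "true"                   # assumes the special node carries the label special=true
  tolerations:
    - key: "special"                  # matches a taint like special=true:NoSchedule on the node
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  containers:
    - name: special-container
      image: example/special:latest   # placeholder image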
Coming to running a separate container runtime on one of these nodes, you need to consider two points mainly.
First, the networking part: this can be done, and if you did not integrate that container runtime with Kubernetes it will simply be one more piece of software running on your server. Say you used kubeadm on all the nodes and you also want to run plain Docker containers; those will stay separate, provided you have drafted a proper architecture and configured a separate, isolated virtual network accordingly.
Second, the storage part: you need to create separate storage volumes for Kubernetes and for the standalone container runtime, because if either piece of software fails or gets corrupted it should not affect the other one, and also to provide isolation.
If you maintain proper isolation, from storage through to the network, then you can run both Kubernetes and a separate container runtime on the same node; however, it is not a suggested way of implementation for production environments.

Can you make a kubernetes container deployment conditional on whether a configmap variable is set?

If I have a k8s deployment file for a service with multiple containers like api and worker1, can I make it so that there is a configmap with a variable worker1_enabled, such that if my service is restarted, container worker1 only runs if worker1_enabled=true in the configmap?
The short answer is No.
According to k8s docs, Pods in a Kubernetes cluster are used in two main ways:
Pods that run a single container. The "one-container-per-Pod" model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container; Kubernetes manages Pods rather than managing the containers directly.
Pods that run multiple containers that need to work together. A Pod can encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers form a single cohesive unit of service—for example, one container serving data stored in a shared volume to the public, while a separate sidecar container refreshes or updates those files. The Pod wraps these containers, storage resources, and an ephemeral network identity together as a single unit.
Unless your application requires it, it is better to separate the worker and api containers into their own pods. So you may have one deployment for worker and one for api.
As for deploying worker when worker1_enabled=true, that can be done with helm. You have to create a chart such that when the value of worker1_enabled=true is set, worker is deployed.
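A minimal sketch of such a chart, assuming a values flag named worker1_enabled and a placeholder image (in a file like templates/worker1-deployment.yaml):

{{- if .Values.worker1_enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker1
  template:
    metadata:
      labels:
        app: worker1
    spec:
      containers:
        - name: worker1
          image: example/worker1:latest   # placeholder image
{{- end }}

With worker1_enabled: false in values.yaml (or passed via helm upgrade --set worker1_enabled=false), the template renders nothing and the worker1 Deployment is not created; setting it to true brings it back on the next release upgrade.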
Last note, a service in kubernetes is an abstract way to expose an application running on a set of Pods as a network service.
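For reference, a minimal sketch of such a Service (the name, label, and ports are assumptions), routing traffic to whichever pods carry the app: api label:

apiVersion: v1
kind: Service
metadata:
  name: api                 # hypothetical name
spec:
  selector:
    app: api                # forwards traffic to pods labelled app=api
  ports:
    - port: 80              # port exposed by the Service
      targetPort: 8080      # port the api container is assumed to listen on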

What's the difference between pod and container from container runtime's perspective?

Kubernetes documentation describes a pod as a wrapper around one or more containers. Containers running inside a pod share a set of namespaces (e.g. network), which makes me think namespaces are nested (I kind of doubt that). What is the wrapper here from the container runtime's perspective?
Containers are just processes constrained by namespaces, cgroups, etc. Perhaps a pod is just the first container launched by the kubelet, and the rest of the containers are started and grouped by its namespaces?
The main difference is networking, the network namespace is shared by all containers in the same Pod. Optionally, the process (pid) namespace can also be shared. That means containers in the same Pod all see the same localhost network (which is otherwise hidden from everything else, like normal for localhost) and optionally can send signals to processes in other containers.
The idea is the Pods are groups of related containers, not really a wrapper per se but a set of containers that should always deploy together for whatever reason. Usually that's a primary container and then some sidecars providing support services (mesh routing, log collection, etc).
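For illustration, a minimal sketch of a pod that also opts into the shared pid namespace (names and images are placeholders); the network namespace is shared regardless, while shareProcessNamespace: true additionally lets the containers see and signal each other's processes:

apiVersion: v1
kind: Pod
metadata:
  name: shared-ns-demo            # hypothetical name
spec:
  shareProcessNamespace: true     # opt-in: one pid namespace for all containers in the pod
  containers:
    - name: main
      image: nginx                # placeholder image
    - name: helper
      image: busybox              # placeholder image
      command: ["sleep", "3600"]  # keeps the helper running so it can inspect /proc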
A Pod is just a co-located group of containers and a Kubernetes object.
Instead of deploying the containers separately, you can deploy a pod of containers.
Best practice is that you should not actually run multiple processes in a single container, and this is where the pod idea comes into play. So by running pods you are grouping containers together and orchestrating them as a single object.
Containers in a pod run in the same network namespace (IP address and port space), so you have to be careful not to have two processes using the same port.
This differs, for example, when it comes to the filesystem, since each container's filesystem comes from its image. The filesystems are isolated unless the containers share a Volume.

Kubernetes: one pod, more containers on more nodes

Could somebody please help me create a YAML config file for Kubernetes for a situation like this: one pod with 3 containers (for example), where these containers have to be deployed on 3 nodes of a cluster (Google GCE)?
|P| |Cont1| ----> |Node1|
|O| ---> |Cont2| ----> |Node2| <----> GCE cluster
|D| |Cont3| ----> |Node3|
Thanks
From Kubernetes Concepts,
Pods in a Kubernetes cluster can be used in two main ways: Pods that run a single container. The "one-container-per-Pod" model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container, and Kubernetes manages the Pods rather than the containers directly. Pods that run multiple containers that need to work together. A Pod might encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers might form a single cohesive unit of service -- one container serving files from a shared volume to the public, while a separate "sidecar" container refreshes or updates those files. The Pod wraps these containers and storage resources together as a single manageable entity.
In short, most likely you should place each container in its own Pod to truly benefit from the microservices architecture, as opposed to the monolithic architecture commonly deployed in VMs. However, there are some cases where you want to consider co-locating containers. Namely, as described in this article (Patterns for Composite Containers), some of the composite container applications are:
Sidecar containers: extend and enhance the "main" container
Ambassador containers: proxy a local connection to the world
Adapter containers: standardize and normalize output
Once you define and run the Deployments, the Scheduler will be responsible for selecting the most suitable placement for your Pods, unless you manually assign Nodes by defining labels in the Deployment's YAML (not recommended unless you know what you're doing).
You can assign multiple containers to a single pod. You can assign pods to a specific node-pool. But I am not sure it is possible to spread the containers inside a single pod across multiple nodes (as far as I know, a pod is always scheduled onto a single node).
What you can do here is to assign each container to a different pod (3 containers --> 3 pods) and then assign each pod to a different node-pool by adding this code to your deployment's .yaml file.
nodeSelector:
  nodeclass: pool1
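For context, a minimal sketch of where that selector sits in one of the three Deployments (assuming the nodes in pool1 carry a nodeclass: pool1 label; the names and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cont1                      # hypothetical name; one Deployment per container
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cont1
  template:
    metadata:
      labels:
        app: cont1
    spec:
      nodeSelector:
        nodeclass: pool1           # schedules these pods only onto nodes labelled nodeclass=pool1
      containers:
        - name: cont1
          image: nginx             # placeholder image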

Steps involved in creating a pod in kubernetes

How does Kubernetes create Pods?
I.e. what are the sequential steps involved in creating a Pod, and how is it implemented in Kubernetes?
Any code reference in Kubernetes repo would also be helpful.
A Pod is described in a definition file and run as a set of Docker containers on a given host that is part of the Kubernetes cluster, much like docker-compose does, but with several differences.
Precisely, a Pod always contains multiple Docker containers, even though only the containers defined by the user are usually visible through the API: a Pod has one extra container that is a placeholder generated by Kubernetes and that holds the IP for the Pod (so that when a Pod is restarted, it's actually the client containers that are restarted, while the placeholder container remains and keeps the same IP, unlike in straight Docker or docker-compose, where recreating a composition or container changes the IP).
How Pods are scheduled, created, started, restarted if needed, re-scheduled etc. is a much longer story and a very broad question.