What do you call the application running in a pod in Kubernetes?

In a typical theoretical system, I would call the different applications that make up the system nodes. However, this is confusing in a Kubernetes cluster for two reasons:
"Node.js" is often shortened to "node", and not all of the nodes in my system are "Node.js" processes.
Kubernetes uses the word "node" to refer to physical components in the cluster.
So the question is, what terminology is used to describe the thing that you would run in a pod? Are they projects? Processes? Application nodes? Applications?
None of the above sound right to me.

Pods contain one (or, more rarely, several) applications:
A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled.
Source
In the Kubernetes universe, Nodes are the physical or virtual machines that your cluster is running on. Pods run on the Nodes. I suggest avoiding the term Node for applications.
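For reference, a minimal Pod manifest wrapping a single application container might look like this (the name and image are just placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # placeholder name
spec:
  containers:
  - name: my-app          # the application container the Pod wraps
    image: nginx:1.25     # any application image
    ports:
    - containerPort: 80
```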

Usually it would be a "service" (although the term clashes with the k8s Service resource) or a "microservice".

Related

Running other non-cluster containers on k8s node

I have a k8s cluster that runs the main workload and has a lot of nodes.
I also have a node (I call it the special node) that is NOT part of the cluster and that runs some special containers. The node has access to some resources that are required for those special containers.
I want to be able to manage the containers on the special node along with the cluster, and make it possible to access them from inside the cluster, so the idea is to add the node to the cluster as a worker node, taint it to prevent normal workloads from being scheduled on it, and add tolerations to the pods running the special containers.
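A rough sketch of that setup, assuming a made-up node name special-node and taint key dedicated=special, could be:

```yaml
# Label and taint the special node (commands shown as comments):
#   kubectl label nodes special-node dedicated=special
#   kubectl taint nodes special-node dedicated=special:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: special-workload            # hypothetical pod name
spec:
  nodeSelector:
    dedicated: special              # attract the pod to the special node
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "special"
    effect: "NoSchedule"            # tolerate the taint that repels normal workloads
  containers:
  - name: special-container
    image: example/special:1.0      # placeholder image
```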
The idea looks fine, but there may be a problem. There will be some other containers and non-container daemons and services running on the special node that are not managed by the cluster (they belong to other activities that have to be kept separate from the cluster). I'm not sure whether that will be a problem, but I have not seen non-cluster containers running alongside pod containers on a worker node before, and I could not find a similar question on the web about this.
So please enlighten me: is it OK to have non-cluster containers and other daemon services on a worker node? Does it require some precautions, or am I just worrying too much?
Ahmad, from the above description I understand that you are trying to deploy a Kubernetes cluster using kubeadm, minikube, or a similar kind of solution. In this setup you have some servers, and one of them has some special functionality (a GPU, for example); for deploying your special pods you can use a node selector, and I hope you are already doing this.
Coming to running a separate container runtime on one of these nodes, you mainly need to consider two points:
This can be done, and if you didn't integrate the container runtime with Kubernetes it is just one more piece of software running on your server. Say you used kubeadm on all the nodes and you want to run Docker containers as well: these will stay separate, provided you have drafted a proper architecture and configured a separate, isolated virtual network accordingly.
Now comes the storage part: you need to create separate storage volumes for Kubernetes and for the other container runtime, so that if either piece of software fails or gets corrupted it does not affect the other, and also to provide isolation.
If you maintain proper isolation, from storage to network, then you can run Kubernetes and another container runtime side by side; however, this is not a suggested way of implementation for production environments.

Kubernetes parallel computing

I want to know: does Kubernetes have any parallel computing implementation?
A long time ago I used OpenHPC or OpenMosix for parallel computation cluster systems.
Can Kubernetes replace these services?
If your answer is NO, then what does the word cluster mean when you talk about Kubernetes?
Kubernetes and HPC / HTC are not yet integrated, but some attempts can be observed.
In the Kubernetes, Containers and HPC article you can find a comparison between HPC and Kubernetes, with similarities and differences.
The main differences are the workload types they focus on. While HPC workload managers are focused on running distributed memory jobs and support high-throughput scenarios, Kubernetes is primarily built for orchestrating containerized microservice applications.
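That said, Kubernetes does ship a basic batch primitive, the Job, which can run several pods of the same task in parallel. A minimal sketch (name, image and counts are placeholders) looks like this, although it is still far from an HPC workload manager:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-compute                 # hypothetical job name
spec:
  completions: 10                        # run 10 pods to completion in total
  parallelism: 4                         # run at most 4 of them at a time
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: example/compute-task:1.0  # placeholder compute image
```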
If you are eager to find more information, you can read some specialist books like Seamlessly Managing HPC Workloads Through Kubernetes.
Regarding the second part:
If your answer is NO, then what does the word cluster mean when you talk about Kubernetes?
You can find many definitions on the internet; one of the easiest to understand is in the Red Hat documentation.
A Kubernetes cluster is a set of node machines for running containerized applications. If you’re running Kubernetes, you’re running a cluster.
At a minimum, a cluster contains a control plane and one or more compute machines, or nodes. The control plane is responsible for maintaining the desired state of the cluster, such as which applications are running and which container images they use. Nodes actually run the applications and workloads.
The cluster is the heart of Kubernetes’ key advantage: the ability to schedule and run containers across a group of machines, be they physical or virtual, on premises or in the cloud. Kubernetes containers aren’t tied to individual machines. Rather, they’re abstracted across the cluster.
In addition, you can also find useful information in Official Kubernetes Documentation like What is Kubernetes? and Kubernetes Concepts.

In a Kubernetes cluster, does the master node always need to run alone on a cluster node?

I am aware that it is possible to enable the master node to execute pods, and that is my concern, since the default configuration does not allow the master to run pods. Should I change it? What is the reason for the default configuration being the way it is?
If the change can be performed in some situations, I would like to ask whether my cluster is one of them. It has only three nodes with exactly the same hardware, and more nodes are probably not going to be added in the foreseeable future. In my opinion, as I have three equal nodes, it would be a waste of resources to use 1/3 of my cluster's computational power to run the Kubernetes master. Am I right?
[Edit1]
I have found the following reason in the Kubernetes documentation.
Is security the only reason?
Technically, it doesn't need to run on a dedicated node. But for your Kubernetes cluster to run, you need your masters to work properly. And one of the ways to ensure they stay secure, stable and performant is to use a separate node which runs only the master components and no regular pods. If you share the node with other pods, there are several ways they can impact the master. For example:
The other pods will impact the performance of the masters (network or disk latencies, CPU cache, etc.)
They might be a security risk (if someone manages to hack from some other pod into the master node)
A badly written application can cause stability issues on the node
While it can be seen as wasting resources, you can also see it as the price to pay for the stability of your master / Kubernetes cluster. However, it doesn't have to be a waste of 1/3 of your resources. Depending on how you deploy your Kubernetes cluster, you can use different hosts for different nodes. So, for example, you can use a small host for the master and bigger hosts for the workers.
No, this is not required, but strongly recommended. Security is one aspect, but performance is another. Etcd is usually run on those control plane nodes and it tends to chug if it runs out of IOPS. So a rogue pod running application code could destabilize the control plane, which then reduces your ability to fix the problem.
When running small clusters for testing purposes, it is common to run everything (control plane and workloads) on a single node specifically to save money/complexity.
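If you do decide to let ordinary workloads onto a control-plane node in a small cluster like yours, the usual mechanism is the control-plane taint that kubeadm sets: either remove it or add a matching toleration to selected pods. A minimal sketch (the pod name and image are placeholders; older clusters use the key node-role.kubernetes.io/master instead):

```yaml
# Option 1: allow everything on the master by removing the taint:
#   kubectl taint nodes <master-node> node-role.kubernetes.io/control-plane:NoSchedule-
# Option 2: tolerate the taint only on selected pods:
apiVersion: v1
kind: Pod
metadata:
  name: runs-on-master              # hypothetical pod name
spec:
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx:1.25               # placeholder image
```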

Avoid scheduling multiple pod instances on the same physical host when running VMs as nodes

This is not a current practical issue, but a theoretical question. Assume K8S worker nodes N1, N2, N3...Nn are actually virtual machines on physical hosts H1, H2, H3 such that N1 and N2 are currently on H1. When I want to schedule 5 instances of my pod P1, is there any awareness in K8S about the underlying physical host H1? Or is it possible that all 5 instances could be scheduled on N1 and N2, resulting in all 5 instances of P1 being on H1?
First of all, let's say "no". In the case of bare metal (or VMs on top of bare metal), Kubernetes "knows" only about its nodes (etcd, master, worker) and doesn't know anything about the physical hosts where the VMs are located. But you can label your nodes with key/value pairs that record which physical host each VM belongs to.
Secondly, let's say "yes". How pods are scheduled to nodes is the k8s scheduler's task, and the scheduler is part of the master's control plane. The default scheduler has an algorithm for finding the most appropriate node for a deployment. So, theoretically, H1 could host all 5 instances of your application.
The good news is that it's really unlikely. Moreover, Kubernetes gives you the ability to create your own custom scheduler with your own scheduling logic and use it only for specific deployments. That's why it's hard to believe there will be a scheduling case you could not resolve.
P.S.: k8s scheduling is too large a subject to describe in a nutshell; sufficient time should be allocated to examining your cases. Try starting with "Advanced Scheduling in Kubernetes":
https://kubernetes.io/blog/2017/03/advanced-scheduling-in-kubernetes/
It will help you move forward. Good luck!
There is a built-in mechanism in Kubernetes called topologySpreadConstraints.
Documentation - https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
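A sketch for the physical-host scenario in the question, assuming each VM node has been labeled with a made-up physical-host label as the previous answer suggests:

```yaml
# e.g. kubectl label nodes n1 physical-host=h1   (repeat for each VM node)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: p1
spec:
  replicas: 5
  selector:
    matchLabels:
      app: p1
  template:
    metadata:
      labels:
        app: p1
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                       # keep replicas per physical host as even as possible
        topologyKey: physical-host       # the custom node label
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: p1
      containers:
      - name: p1
        image: example/p1:1.0            # placeholder image
```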

Can kubernetes schedule multiple unrelated pods on one host?

If I have 10 different services, each of which are independent from each other and run from their own container, can I get kubernetes to run all of those services on, say, 1 host?
This is unclear in the kubernetes documentation. It states that you can force it to schedule containers from the same pod onto one host, using a "multi-container pod", but it doesn't seem to approach the subject of whether you can have multiple pods running on one host.
In fact, Kubernetes will do exactly what you want by default. It is capable of running dozens if not hundreds of containers on a single host (depending on its specs).
If you want very advanced control over scheduling pods, there is an alpha feature for that which introduces the concept of node/pod (anti-)affinity. But I would say it is a rather advanced k8s topic at the moment, so you are probably good with what is in stable/beta for most use cases.
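As an illustration of that feature (affinity/anti-affinity has long since graduated to stable), a pod can require to be co-located on the same node as another service's pods; the names here are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: service-b                            # hypothetical second service
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: kubernetes.io/hostname  # "same node" granularity
        labelSelector:
          matchLabels:
            app: service-a                   # co-locate with pods of service-a
  containers:
  - name: service-b
    image: example/service-b:1.0             # placeholder image
```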
Honorable mention: there is a nasty trick that lets you ensure two pods cannot be collocated on the same node, and that is when they both declare the same hostPort in their ports section. It can be useful in some cases, but be aware that it affects, for example, how rolling deployments happen in some situations.
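The trick in question is simply two pods both declaring the same hostPort, roughly like this (image and port are placeholders):

```yaml
# Two pods that both declare hostPort 8080 can never land on the same node,
# because the host port can only be bound once per node.
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
spec:
  containers:
  - name: app
    image: nginx:1.25       # placeholder image
    ports:
    - containerPort: 80
      hostPort: 8080        # claims port 8080 on the node itself
```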
You can use node selectors to assign each of the pods to the same node / host:
http://kubernetes.io/docs/user-guide/node-selection/
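A minimal nodeSelector sketch (the node name, label and image are assumptions):

```yaml
# kubectl label nodes worker-1 role=single-host   (hypothetical node and label)
apiVersion: v1
kind: Pod
metadata:
  name: service-1
spec:
  nodeSelector:
    role: single-host              # every pod carrying this selector lands on the labeled node
  containers:
  - name: service-1
    image: example/service-1:1.0   # placeholder image
```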
Having said that, the whole point of Kubernetes is to manage a cluster of nodes so that you can deploy apps / pods across them.