How to get a shell inside a Pod instead of a container running inside the Pod - Kubernetes

Is it possible to get a shell inside a running Pod instead of a container running inside the Pod?
kubectl exec or rsh opens a shell inside a running container, not the Pod.

No, you cannot. A Pod is a Kubernetes concept and does not provide such an ability. A Pod can consist of one or more containers, and the Pod describes what is to be run in those containers and how.

No, what you are trying to do is not possible. A Pod exists only to manage containers, and a Pod must contain at least one container.

My predecessors are right; you cannot get a shell inside a running Pod. I suggest getting familiar with the official documentation regarding Pods in order to get a better understanding of this particular concept:
Pods are the smallest deployable units of computing that you can
create and manage in Kubernetes.
A Pod (as in a pod of whales or pea pod) is a group of one or more
containers, with shared storage/network resources, and a specification
for how to run the containers.
You can find more info there about using and working with Pods, as well as other details like resource sharing and communication.
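As a practical note, the closest you can get is a shell inside one specific container of the Pod via kubectl exec; in the sketch below the Pod name my-pod and the container name sidecar are placeholders for your own names.

# Open an interactive shell in one specific container of a multi-container Pod
kubectl exec -it my-pod -c sidecar -- /bin/sh

# List the container names of a Pod to see what -c accepts
kubectl get pod my-pod -o jsonpath='{.spec.containers[*].name}'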

Related

Run different replica count for different containers within same pod

I have a Pod with 2 closely related services running as containers. I am running it as a StatefulSet and have set replicas to 5, so 5 Pods are created, each having both containers.
Now my requirement is to have the second container run in only 1 Pod; I don't want it to run in all 5 Pods. But my first service should still run in 5 Pods.
Is there a way to define this in the deployment YAML file for Kubernetes? Please help.
a "pod" is the smallest entity that is managed by kubernetes, and one pod can contain multiple containers, but you can only specify one pod per deployment/statefulset, so there is no way to accomplish what you are asking for with only one deployment/statefulset.
however, if you want to be able to scale them independently of each other, you can create two deployments/statefulsets to accomplish this. this is imo the only way to do so.
see https://kubernetes.io/docs/concepts/workloads/pods/ for more information.
Containers are like processes,
Pods are like VMs,
and StatefulSets/Deployments are like the supervisor program controlling the VMs' horizontal scaling.
The only way to handle your scenario is to define the second container in a new Deployment's Pod template and set its replicas to 1, while keeping the old StatefulSet with 5 replicas, as sketched below.
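A minimal sketch of that split, assuming hypothetical names main-app and support-app and placeholder images; the labels, images, and the headless Service backing the StatefulSet need to be adapted to your setup:

# StatefulSet keeps the primary container running in 5 Pods
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: main-app
spec:
  serviceName: main-app          # assumes a headless Service of this name exists
  replicas: 5
  selector:
    matchLabels:
      app: main-app
  template:
    metadata:
      labels:
        app: main-app
    spec:
      containers:
      - name: main
        image: main-app:latest       # placeholder image
---
# Separate Deployment runs the second container in exactly 1 Pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: support-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: support-app
  template:
    metadata:
      labels:
        app: support-app
    spec:
      containers:
      - name: support
        image: support-app:latest    # placeholder image

With two separate objects, each one can be scaled on its own, e.g. kubectl scale statefulset main-app --replicas=5 and kubectl scale deployment support-app --replicas=1.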
Here are some definitions from the documentation (links in the references):
Containers are technologies that allow you to package and isolate applications with their entire runtime environment—all of the files necessary to run. This makes it easy to move the contained application between environments (dev, test, production, etc.) while retaining full functionality. [1]
Pods are the smallest, most basic deployable objects in Kubernetes. A Pod represents a single instance of a running process in your cluster. Pods contain one or more containers. When a Pod runs multiple containers, the containers are managed as a single entity and share the Pod's resources. [2]
A deployment provides declarative updates for Pods and ReplicaSets. [3]
StatefulSet is the workload API object used to manage stateful applications. Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods. [4]
Based on all that information, it is impossible to meet your requirements using one Deployment/StatefulSet.
I advise you to try the idea @David Maze mentioned in a comment under your question:
If it's possible to have 4 of the main application container not having a matching same-pod support container, then they're not so "closely related" they need to run in the same pod. Run the second container in a separate Deployment/StatefulSet (also with a separate Service) and you can independently control the replica counts.
References:
[1] Documentation about Containers
[2] Documentation about Pods
[3] Documentation about Deployments
[4] Documentation about StatefulSet

How to know how long it takes to create a pod in Kubernetes? Is there any command for that?

I have my Kubernetes cluster and I need to know how long it takes to create a Pod. Is there any Kubernetes command that shows me that?
Thanks in advance
There is no built-in command that reports this directly.
I think you should first understand the Pod Overview.
A Pod is the basic building block of Kubernetes–the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.
A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.
While a Pod is being deployed, it goes through the following phases:
Pending
The Pod has been accepted by the Kubernetes system, but one or more of the Container images has not been created. This includes time before being scheduled as well as time spent downloading images over the network, which could take a while.
Running
The Pod has been bound to a node, and all of the Containers have been created. At least one Container is still running, or is in the process of starting or restarting.
Succeeded
All Containers in the Pod have terminated in success, and will not be restarted.
Failed
All Containers in the Pod have terminated, and at least one Container has terminated in failure. That is, the Container either exited with non-zero status or was terminated by the system.
Unknown
For some reason the state of the Pod could not be obtained, typically due to an error in communicating with the host of the Pod.
As for Pod Conditions, each one has a type, which can have one of the following values:
PodScheduled: the Pod has been scheduled to a node;
Ready: the Pod is able to serve requests and should be added to the load balancing pools of all matching Services;
Initialized: all init containers have started successfully;
Unschedulable: the scheduler cannot schedule the Pod right now, for example due to lack of resources or other constraints;
ContainersReady: all containers in the Pod are ready.
Please refer to the documentation regarding Pod Lifecycle for more information.
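If you want to check where a Pod currently is in that lifecycle, and get a rough idea of how long it took to come up, you can read the phase, conditions and timestamps from the Pod status; the Pod name my-pod below is a placeholder:

# Current phase (Pending, Running, Succeeded, Failed, Unknown)
kubectl get pod my-pod -o jsonpath='{.status.phase}{"\n"}'

# Creation time of the Pod object vs. the time its Ready condition last became true;
# the difference between the two is a rough estimate of how long the Pod took to start
kubectl get pod my-pod -o jsonpath='{.metadata.creationTimestamp}{"\n"}'
kubectl get pod my-pod -o jsonpath='{.status.conditions[?(@.type=="Ready")].lastTransitionTime}{"\n"}'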
When you are deploying your Pod, you have to consider how many containers will be running in it.
Each image has to be downloaded, and depending on its size this might take longer. Also, the default pull policy is IfNotPresent, which means that Kubernetes will skip the image pull if the image already exists on the node.
You can find more about Updating Images here.
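For illustration, this is roughly how the pull policy is set per container in a Pod manifest; the Pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: app
    image: nginx:1.25               # placeholder image
    imagePullPolicy: IfNotPresent   # skip the pull if the image is already on the node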
You also need to consider how many resources your master and nodes have.

Steps involved in creating a pod in kubernetes

How does Kubernetes create Pods?
I.e., what are the sequential steps involved in creating a Pod, and how is this implemented in Kubernetes?
Any code reference in Kubernetes repo would also be helpful.
A Pod is described in a definition file and run as a set of Docker containers on a given host that is part of the Kubernetes cluster, much like with docker-compose, but with several differences.
More precisely, a Pod always contains multiple Docker containers, even though only the containers defined by the user are usually visible through the API: a Pod has one placeholder container generated by Kubernetes that holds the IP for the Pod (so that when a Pod is restarted, it is actually the client containers that are restarted, while the placeholder container remains and keeps the same IP, unlike in plain Docker or docker-compose, where recreating a composition or container changes the IP).
How Pods are scheduled, created, started, restarted if needed, re-scheduled and so on is a much longer story and a very broad question.
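To see the Pod-level IP mentioned above in practice (the Pod name my-pod is a placeholder), you can ask the API server for it; the IP belongs to the Pod as a whole rather than to any single application container:

# Shows the Pod IP and the node the Pod was scheduled to
kubectl get pod my-pod -o wide

# Just the Pod IP from the Pod's status
kubectl get pod my-pod -o jsonpath='{.status.podIP}{"\n"}'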

Why use pods and not containers directly in an OpenShift V3 environment

Kubernetes is an orchestration tool for the management of containers.
Kubernetes creates Pods, which contain containers, instead of managing containers directly.
I read this about pods
I'm working with OpenShift V3, which uses Pods. But in my apps, in all demos and all examples I see, one Pod contains one container (it's possible for it to contain more, and that could be an advantage of using Pods). In an OpenShift environment I don't see the advantage of these Pods.
Can someone explain to me why OpenShift V3 uses Kubernetes with Pods and containers instead of an orchestration tool which works with containers directly (without Pods)?
There are many cases where our users want to run pods with multiple containers within OpenShift. A common use-case for running multiple containers is where a pod has a 'primary' container that does some job, and a 'side-car' container that does something like write logs to a logging agent.
The motivation for pods is twofold -- to make it easier to share resources between containers, and to enable deploying and replicating groups of containers that share resources. You can read more about them in the user-guide.
The reason we still use a Pod when there is only a single container is that containers do not have all the notions that are attached to Pods. For example, Pods have IP addresses; containers do not, as they share the IP address associated with the Pod's network namespace.
Hope that helps. Let me know if you'd like more clarification, or we can discuss on Slack.
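A minimal sketch of the 'primary plus side-car' pattern described above, with hypothetical names and placeholder images; both containers share an emptyDir volume so the side-car can pick up the primary's log file:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}              # scratch space shared by both containers
  containers:
  - name: primary             # the 'primary' container doing the actual job
    image: my-app:latest      # placeholder image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-sidecar         # the 'side-car' that tails or ships the logs
    image: busybox:1.36
    command: ["sh", "-c", "touch /var/log/app/app.log && tail -F /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app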

How to manifest a container with /dev/console from a pod definition with Kubernetes?

We use systemd in our container to manage the processes running in the container.
We configure journald in the container so that it sends all logs to /dev/console.
In order to have /dev/console in a container we have to use the "-t" option of Docker when we deploy the container.
I would like to ask what the equivalent way is with Kubernetes. Where can we state in the Pod manifest that we need /dev/console in the containers?
I understand that with kubectl this is possible (with "--tty" or "-t"), but we do not want to start containers with kubectl.
We do support TTY containers in Kubernetes v1.1, but not a TTY without input. If you want to see that, I think a GitHub issue would be appropriate.
I agree with Spencer that running systemd in a container is not "best practice", but there are valid reasons to do it, not the least of which is "that's what we know how to do". People's usage of containers will evolve over time.
The kubectl --tty option only applies to kubectl exec --tty, which is for running a process inside a container that has already been deployed in a Pod, so it would not help you deploy Pods with /dev/console defined.
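For reference, this is the exec-time TTY that option applies to; the Pod name and shell are placeholders, and the command only attaches a terminal to an already-running container rather than changing how the Pod is created:

# Allocate a TTY (-t) and attach stdin (-i) to a process in an existing container
kubectl exec -it my-pod -- /bin/bash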
As far as I can see, there's no way in current Kubernetes to cause Pods to be launched with containers having /dev/console defined.
I would go further and say that the way these containers are defined, with multiple processes managed by systemd and logged by journald, is outside the usual use cases for Kubernetes. Kubernetes has value where the containers are simple, individual processes running as daemons. Kubernetes manages the launching of multiple distinct containers per Pod, and/or multiple Pods as replicas, including monitoring, logging, restarts, etc. Having a separate launch/init and logging scheme inside each container doesn't fit the usual Kubernetes use case.