How to log in / enter a Kubernetes pod

I have Kubernetes pods running, as shown by the commands "kubectl get all -A" and "kubectl get pod -A".
I want to enter/log in to any of these pods (all are in the Running state). What is the command to do that?

Kubernetes Pods are not virtual machines, so they are not something you typically "log in" to.
But you might be able to execute a command in a container, e.g. with:
kubectl exec <pod-name> -- <command>
Note that your container needs to contain the binary for <command>, otherwise this will fail.
See also Getting a shell to a container.
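For illustration, assuming a pod named my-pod whose image ships a shell and basic utilities (the pod name is a placeholder):
kubectl exec my-pod -- ls /
kubectl exec -it my-pod -- /bin/sh
The first form runs a single command and prints its output; the second gives you an interactive shell, provided /bin/sh exists in the image.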

In addition to Jonas' answer above:
If your pod is not in the default namespace, you need to specify the namespace it is running in, e.g. kubectl exec -n <namespace> -it <pod-name> -- /bin/sh
After successfully accessing your pod, you can go ahead and navigate through your container.

Related

How can I enter a Kubernetes managed container faster?

Currently, if I want to inspect my container, I have to do three steps:
1) kubectl get all -n {NameSpace}
2) kubectl describe pod {Podname from step 1} -n {NameSpace}, then find the node host and the container ID (my eyes are complaining!)
3) Switch to the host and execute "docker exec -ti -u root {Container ID} bash"
I am so mad about it right now. I wish somebody could offer some help to me and those who may share the same issue.
Pods are the smallest deployable units of computing that you can
create and manage in Kubernetes.
So, if you want to "enter" a container, you just need to "exec" into the pod in a particular namespace. Kubernetes will get you the shell/command for that pod.
kubectl -n somenamespace exec -it podname -- bash
There is no need to mention the node here as Kubernetes internally knows on which node the pod is scheduled.
If a Pod has more than one container, use --container or -c to specify
a container in the kubectl exec command. For example, suppose you have
a Pod named my-pod, and the Pod has two containers named main-app and
helper-app. The following command would open a shell to the main-app
container.
kubectl exec -it my-pod -c main-app -- /bin/bash
https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/
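If you are not sure which containers a Pod has, you can list their names first (the pod and namespace names below are placeholders):
kubectl get pod my-pod -n my-namespace -o jsonpath='{.spec.containers[*].name}'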

Check whether a service is working in Kubernetes

I created a pod to test my service in Kubernetes, but I didn't get anything back. Here are my commands:
kubectl run --generator=run-pod/v1 nginx-resolver --image=nginx
kubectl expose pod nginx-resolver --name=nginx-resolver-service --port=80 --target-port=80 --type=ClusterIP
kubectl run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 --rm -it -- nslookup nginx-resolver-service
Please help me understand why. Thanks.
Run the following command to see what you are getting wrong about this command:
$ kubectl run --help
Create and run a particular image, possibly replicated.
Creates a deployment or job to manage the created container(s).
Examples:
# Start a single instance of nginx.
kubectl run nginx --image=nginx
# Start a single instance of hazelcast and let the container expose port 5701 .
kubectl run hazelcast --image=hazelcast --port=5701
...
So, the kubectl run command creates a deployment or a job.
If it is a deployment, it creates (a) a ReplicaSet first, and (b) then the Pod(s).
If it is a job, it creates a Pod.
But you are trying to expose a Pod whose name is not the correct one. You can see the name of the Pod(s) actually created by kubectl run:
$ kubectl get pods --namespace=<namespace> | grep "nginx-resolver"
$ kubectl get pods --namespace=<namespace> | grep "test-nslookup"
Then use those names to expose the Pod, as shown below.
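For example, assuming the first grep above returns a single running Pod (substitute the generated name you actually see for the placeholder):
$ kubectl expose pod <generated-pod-name> --name=nginx-resolver-service --port=80 --target-port=80 --type=ClusterIP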
You can optionally expose your Deployment instead. To do so, see the help of kubectl expose deployment:
$ kubectl expose deployment --help
Expose a resource as a new Kubernetes service.
Looks up a deployment, service, replica set, replication controller or pod by name and uses the selector for that
resource as the selector for a new service on the specified port. A deployment or replica set will be exposed as a
service only if its selector is convertible to a selector that service supports, i.e. when the selector contains only
the matchLabels component. Note that if no port is specified via --port and the exposed resource has multiple ports, all
will be re-used by the new service. Also if no labels are specified, the new service will re-use the labels from the
resource it exposes.
Possible resources include (case insensitive):
pod (po), service (svc), replicationcontroller (rc), deployment (deploy), replicaset (rs)
Examples:
...
# Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000.
kubectl expose deployment nginx --port=80 --target-port=8000
...
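Applied to this case, assuming your kubectl run actually created a Deployment named nginx-resolver on your cluster version, the equivalent would be something like:
$ kubectl expose deployment nginx-resolver --name=nginx-resolver-service --port=80 --target-port=80 --type=ClusterIP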
If you want to see the log interactively, you need to set the --restart option of your test-nslookup pod to Never or OnFailure. Otherwise, kubernetes will just restart your pod indefinitely and you won't see anything.
So your last command should be :
kubectl run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 -it --restart=OnFailure -- nslookup nginx-resolver-service
Why?
Probably because of this issue.
It seems there is a delay of about 5s before kubectl run actually prints something.
So, in order to do it without changing the restart option, you'll need to change your command like this (beware of the sleep 7: you'll have to wait 7 seconds before seeing the logs):
kubectl run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 -it --rm -- sh -c 'sleep 7; nslookup nginx-resolver-service'

How to access kube-apiserver on command line?

Looking at the documentation, installing Knative requires a Kubernetes cluster v1.11 or newer with the MutatingAdmissionWebhook admission controller enabled. Checking the documentation for this, I see the following command:
kube-apiserver -h | grep enable-admission-plugins
However, kube-apiserver is running inside a docker container on the master. Logging in to the master as admin, I do not see it on the command line after the install. What steps do I need to take to run this command? It's probably a basic docker question, but I don't see it documented anywhere in the Kubernetes documentation.
So what I really need to know is whether this command line is the best way to set these plugins, and also how exactly to enter the container to execute it.
Where is kube-apiserver located?
Should I enter the container? What is the name of the container, and how do I enter it to execute the command?
I think the answer from @embik that you've pointed to in the initial question is quite decent, but I'll try to shed light on some aspects that can be useful for you.
As @embik mentioned in his answer, the kube-apiserver binary actually resides in a particular container within the K8s api-server Pod, therefore you are free to check it; just execute /bin/sh on that Pod:
kubectl exec -it $(kubectl get pods -n kube-system| grep kube-apiserver|awk '{print $1}') -n kube-system -- /bin/sh
You might be able to propagate the desired enable-admission-plugins through the kube-apiserver command inside this Pod; however, any modification will disappear once the api-server Pod re-spawns (e.g. on a master node reboot).
The essential api-server config is located in /etc/kubernetes/manifests/kube-apiserver.yaml. The node agent kubelet controls the kube-apiserver Pod at runtime, and each time its health checks fail, kubelet re-creates the affected Pod from the primary kube-apiserver.yaml file.
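If you want the change to persist, it is usually made by editing that manifest. A minimal, illustrative excerpt of a kubeadm-style kube-apiserver.yaml (the plugin list here is a placeholder, not your cluster's actual configuration):
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,MutatingAdmissionWebhook
    # ... other flags ...
The kubelet watches /etc/kubernetes/manifests and re-creates the static Pod automatically after the file is saved.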
This is old, but it may still benefit someone in need. @Nick_Kh's answer is good enough; I just want to extend it.
In case the api-server pod fails to give you the shell access, you may directly execute the command using kubectl exec like this:
kubectl exec -it kube-apiserver-rhino -n kube-system -- kube-apiserver -h | grep enable-admission-plugins
In this case, I wanted to know which admission plugins are enabled by default, and every time I tried accessing the pod's shell (bash, sh, etc.), I ended up with an error like this:
[root@rhino]# kubectl exec -it kube-apiserver-rhino -n kube-system -- /bin/sh
OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
command terminated with exit code 126

Restart container within pod

I have a pod test-1495806908-xn5jn with 2 containers. I'd like to restart one of them called container-test. Is it possible to restart a single container within a pod and how? If not, how do I restart the pod?
The pod was created using a deployment.yaml with:
kubectl create -f deployment.yaml
Is it possible to restart a single container
Not through kubectl, although depending on the setup of your cluster you can "cheat" and docker kill the-sha-goes-here, which will cause kubelet to restart the "failed" container (assuming, of course, the restart policy for the Pod says that is what it should do)
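For example, on the node where the Pod is scheduled (assuming a Docker-based container runtime; the container name is a placeholder):
docker ps | grep container-test
docker kill <container-id-from-the-first-column>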
how do I restart the pod
That depends on how the Pod was created, but based on the Pod name you provided, it appears to be under the oversight of a ReplicaSet, so you can just kubectl delete pod test-1495806908-xn5jn and kubernetes will create a new one in its place (the new Pod will have a different name, so do not expect kubectl get pods to return test-1495806908-xn5jn ever again)
There are cases when you want to restart a specific container instead of deleting the pod and letting Kubernetes recreate it.
Doing a kubectl exec POD_NAME -c CONTAINER_NAME -- /sbin/killall5 worked for me.
(I changed the command from reboot to /sbin/killall5 based on the recommendations below.)
Both pods and containers are ephemeral. Try the following command to stop the specific container, and the k8s cluster will start a new one:
kubectl exec -it [POD_NAME] -c [CONTAINER_NAME] -- /bin/sh -c "kill 1"
This will send a SIGTERM signal to process 1, which is the main process running in the container. All other processes will be children of process 1, and will be terminated after process 1 exits. See the kill manpage for other signals you can send.
I'm using
kubectl rollout restart deployment [deployment_name]
or
kubectl delete pod [pod_name]
The whole reason for having Kubernetes is so that it manages the containers for you and you don't have to care so much about the lifecycle of the containers in the pod.
Since you have a Deployment setup that uses a ReplicaSet, you can delete the pod using kubectl delete pod test-1495806908-xn5jn and Kubernetes will manage the creation of a new pod with the 2 containers without any downtime. Trying to manually restart single containers in pods negates the whole benefit of Kubernetes.
All the above answers have mentioned deleting the pod... but if you have many pods of the same service, it would be tedious to delete each one of them.
Therefore, I propose the following solution to restart them:
1) Set scale to zero :
kubectl scale deployment <<name>> --replicas=0 -n service
The above command will terminate all the pods of the deployment <<name>>.
2) To start the pod again, set the replicas to more than 0
kubectl scale deployment <<name>> --replicas=2 -n service
The above command will start your pods again with 2 replicas.
We use a pretty convenient command line to force the re-deployment of fresh images on our integration pods.
We noticed that our alpine containers all run their "sustaining" command on PID 5. Therefore, sending it a SIGTERM signal takes the container down. With imagePullPolicy set to Always, the kubelet re-pulls the latest image when it brings the container back.
kubectl exec -i [pod name] -c [container-name] -- kill -15 5
There was an issue with a coredns pod; I deleted that pod with
kubectl delete pod -n=kube-system coredns-fb8b8dccf-8ggcf
and it was re-created automatically.
kubectl exec -it POD_NAME -c CONTAINER_NAME -- bash, then run kill 1.
This assumes the container is run as root, which is not recommended.
In my case, when I changed the application config, I had to reboot the container, which was used in a sidecar pattern; I would kill the PID of the Spring Boot application, which is owned by the docker user.
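A sketch of that approach, assuming the container image ships a shell and ps, and that the application is a Java process (the pod, container and PID values are placeholders):
kubectl exec -it <pod-name> -c <container-name> -- sh -c 'ps aux | grep [j]ava'
kubectl exec -it <pod-name> -c <container-name> -- kill <pid-from-previous-output>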
I realize this question is old and already answered, but I thought I'd chip in with my method.
Whenever I want to do this, I just make a minor change to the pod's container's image field, which causes kubernetes to restart just the container.
If you can't switch between 2 different but equivalent tags (like :latest / :1.2.3 where latest is actually version 1.2.3), then you can always just switch it quickly to an invalid tag (I put an X at the end, like :latestX or something) and then re-edit it and remove the X straight away afterwards. This does cause the container to fail to start with an image pull error for a few seconds, though.
So for example:
kubectl edit po my-pod-name
Find the spec.containers[].name you want to kill, then find its image:
apiVersion: v1
kind: Pod
metadata:
  # ...
spec:
  containers:
  - name: main-container
    # ...
  - name: container-to-restart
    image: container/image:tag
# ...
You would search for your container-to-restart and then update its image to something different, which will force Kubernetes to do a controlled restart for you.
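If you prefer not to edit the Pod interactively, the same image swap can likely be done with kubectl set image (the pod, container and image names below are placeholders):
kubectl set image pod/my-pod-name container-to-restart=container/image:tagX
kubectl set image pod/my-pod-name container-to-restart=container/image:tag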
Killing the process specified in the Dockerfile's CMD / ENTRYPOINT works for me. (The container restarts automatically)
Rebooting was not allowed in my container, so I had to use this workaround.
The correct, but likely less popular, answer is that if you need to restart one container in a pod, then it shouldn't be in the same pod. You can't restart single containers in a pod by design. Just move the container out into its own pod. From the docs:
Pods that run a single container. The "one-container-per-Pod" model is
the most common Kubernetes use case; in this case, you can think of a
Pod as a wrapper around a single container; Kubernetes manages Pods
rather than managing the containers directly.
Note: Grouping multiple co-located and co-managed containers in a
single Pod is a relatively advanced use case. You should use this
pattern only in specific instances in which your containers are
tightly coupled.
https://kubernetes.io/docs/concepts/workloads/pods/
I was playing around with ways to restart a container. What I found for me was this solution:
Dockerfile:
...
ENTRYPOINT [ "/app/bootstrap.sh" ]
/app/bootstrap.sh:
#!/bin/bash
/app/startWhatEverYouActuallyWantToStart.sh &
tail -f /dev/null
Whenever I want to restart the container, I kill the tail -f /dev/null process, which I find with
kill -TERM `ps --ppid 1 | grep tail | grep -v -e grep | awk '{print $1}'`
Following that command, the tail process is killed, bootstrap.sh (PID 1) exits, the container is restarted, and the entrypoint, in my case bootstrap.sh, is executed again.
That's the "restart" part, which is not really a restart but does what you want in the end. For the part about limiting the restart to the container named container-test, you could pass the container name into the container in question (as the container name would otherwise not be available inside the container) and then decide whether to do the above kill.
That would be something like this in your deployment.yaml:
env:
- name: YOUR_CONTAINER_NAME
value: container-test
/app/startWhatEverYouActuallyWantToStart.sh:
#!/bin/bash
...
CONDITION_TO_RESTART=0
...
if [ "$YOUR_CONTAINER_NAME" == "container-test" -a $CONDITION_TO_RESTART -eq 1 ]; then
kill -TERM `ps --ppid 1 | grep tail | grep -v -e grep | awk '{print $1}'`
fi
Sometimes no one knows which OS the pod is running, and the pod might not have sudo or reboot at all.
The safer option is to take a snapshot and re-create the pod:
kubectl get pod <pod-name> -o yaml > pod-to-be-restarted.yaml;
kubectl delete po <pod-name>;
kubectl create -f pod-to-be-restarted.yaml
kubectl delete pods POD_NAME
This command will delete the pod, and a replacement will be started automatically (assuming the pod is managed by a controller such as a Deployment or ReplicaSet).

How to list Kubernetes recently deleted pods?

Is there a way to get some details about a Kubernetes pod that was deleted (stopped, replaced by a new version)?
I am investigating a bug. I have logs with my pod name. That pod does not exist anymore; it was replaced by another one (with a different configuration). The new pod resides in the same namespace, replication controller and service as the old one.
Commands like
kubectl get pods
kubectl get pod <pod-name>
work only with current pods (live or stopped).
How could I get more details about old pods? I would like to see:
when they were created
which environment variables they had when created
why and when they were stopped
As of today, kubectl get pods -a is deprecated, and as a result you cannot get deleted pods.
What you can do though, is to get a list of recently deleted pod names - up to 1 hour in the past unless you changed the ttl for kubernetes events - by running:
kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1
You can then investigate further issues within your logging pipeline if you have one in place.
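If you already know the name of the deleted pod from your logs, you can also narrow the events down to that object, for example (the pod name is a placeholder):
kubectl get event --field-selector involvedObject.name=<pod-name>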
As far as I know, you cannot get the Pod details once the Pod is deleted. May I know what the use case is?
Examples:
If a Pod is created using kubectl run busybox-test-pod-status --image=busybox --restart=Never -- /bin/false,
you will have a Pod with status terminated:Error.
If a Pod is created using kubectl run busybox-test-pod-status --image=busybox --restart=Never -- /bin/true,
you will have a Pod with status terminated:Completed.
If a container in a Pod restarts, the Pod stays alive and you can get the logs of the previous container (only the previous container) using
kubectl logs --container <container name> --previous=true <pod name>
If you are doing an upgrade of your app and you are creating Pods using Deployments, then on an update of the Deployment (say, a new image) the Pod will be terminated and a new Pod will be created. You can get the Pod details from the Deployment's YAML; if you want the details of a previous Pod, look at the "spec" section of the previous revision of the Deployment (see the sketch below).
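A sketch of how to inspect a previous revision, assuming the default rollout history is kept for the Deployment (the deployment name and revision number are placeholders):
kubectl rollout history deployment <deployment-name>
kubectl rollout history deployment <deployment-name> --revision=<revision-number>
The second command prints the Pod template (image, environment variables, etc.) that the old Pods were created from.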
You can try kubectl logs --previous to list the logs of a previously stopped pod
http://kubernetes.io/docs/user-guide/kubectl/kubectl_logs/
You may also want to check out these debugging tips
http://kubernetes.io/docs/user-guide/debugging-pods-and-replication-controllers/
If you want to find out why pods were deleted and who deleted them:
The only way to find out anything is to set the ttl for k8s events to be greater than the default 1h and search through the events:
kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1
If your container has previously crashed, you can access the previous container’s crash log with:
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
There is this flag:
-a, --show-all=false: When printing, show all resources (default hide terminated pods.)
But this may not help in all cases of old pods.
kubectl get pods -a
This will give you the list of running pods and the terminated pods, in case that is what you are searching for.
If you are trying to fetch details about previously deleted pods, start with:
kubectl get pods
This gives you the details of the current pods; every service has one or more pods, and each pod has a unique IP address.
Here you can check the lifecycle of pods and the phases a pod goes through:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle
And you can see the logs of a pod's previous container by typing:
kubectl logs --previous <pod-name>