Restart server running inside Kubernetes Node - kubernetes

I have an IBM Cloud powered Kubernetes cluster. That cluster currently has only 1 node.
I verified this by running the command kubectl get nodes.
There are a few servers running on that node. I want to restart one of those servers.
How can I get into the node and perform a restart for the required server?
I tried ssh, but this link says it cannot be done directly.

Seems like your main questions are:
"how to restart a pod", "how to ssh to a entity in which my service is running" and "how to see if I deleted a Pod".
First of all, most of this questions are already answered on StackOverflow. Second of all you need to get familiar with Kubernetes basic terminology and how things work in here. You can do that in any Kubernetes introduction or in documentation.
Answering the questions:
1) About restarting, you can find information here. Or, if you have a running Deployment, deleting a pod will result in pod recreation.
2) you can use kubectl exec as described here:
kubectl exec -ti pod_name -- sh (or bash)
3) to see your pods, run kubectl get pods. After you run kubectl delete pod <name> -n <namespace>, you can run kubectl get pods -w to watch the status of the deleted pod change and a new one being spawned. Or you will notice that there is a new pod running, but with a different NAME.
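For illustration, a minimal sketch of that delete-and-watch flow, assuming a Deployment-managed pod in the default namespace (the pod name my-app-6d4cf56db6-abcde is hypothetical):
$ kubectl get pods -n default                               # find the pod you want to restart
$ kubectl delete pod my-app-6d4cf56db6-abcde -n default     # the Deployment controller recreates it
$ kubectl get pods -n default -w                            # watch the old pod terminate and the new one start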

Related

deployed a service on k8s but not showing any pods even when it failed

I have deployed a k8s service, however it's not showing any pods. This is what I see:
kubectl get deployments
It should be created in the default namespace.
kubectl get nodes (this shows me nothing)
How do I troubleshoot a failed deployment? The test-control-plane is the one deployed by kind; that is the k8s distribution I'm using.
kubectl get nodes
If the above command is not showing anything, that means there are no Nodes in your cluster, so where will your workload run?
You need to have at least one worker node in the K8s cluster so the Deployment can schedule the Pod on it and run the application.
You can check the worker nodes using the same command:
kubectl get nodes
You can debug further and check the reason for the issue using:
kubectl describe deployment <name of your deployment>
To find out what really went wrong, first follow the steps described by Harsh Manvar in his answer. Perhaps obtaining that information can help you find the problem. If not, check the logs of your deployment. Try to list your pods and see which ones did not start properly, then check their logs.
You can also use kubectl describe on the pods to see in more detail what went wrong. Since you are using kind, I'm including a list of known errors for you.
You can also see this visual guide on troubleshooting Kubernetes deployments and 5 Tips for Troubleshooting Kubernetes Deployments.
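Putting both answers together, a rough troubleshooting sequence could look like this (the deployment name my-deploy and the namespace are hypothetical; substitute your own):
$ kubectl get nodes                                   # is there at least one Ready worker node?
$ kubectl get deployments -n default                  # does the Deployment exist, and how many replicas are ready?
$ kubectl describe deployment my-deploy -n default    # conditions and events for the Deployment
$ kubectl get pods -n default                         # were any pods created at all?
$ kubectl describe pod <pod-name> -n default          # scheduling or image-pull errors show up here
$ kubectl logs <pod-name> -n default                  # application errors if the container actually started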

How to debug a kubernetes cluster?

As the question shows, I have very little knowledge about Kubernetes. Following a tutorial, I made a Kubernetes cluster to run a web app on a local server using Minikube. I have applied the Kubernetes components and they are running, but the Web-Server does not respond to HTTP requests. My problem is that the whole system I have created is like a black box for me, and I have literally no idea how to open it and see where the problem is. Can you explain how I can debug such implementations in a wise way? Thanks.
use a tool like https://github.com/kubernetes/kubernetes-dashboard
You can install kubectl and kubernetes-dashboard in a k8s cluster (https://kubernetes.io/docs/tasks/tools/install-kubectl/), and then use the kubectl command to query information about a pod or container, or use the kubernetes-dashboard web UI to query information about the cluster.
For more information, please refer to https://kubernetes.io/
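As a sketch, installing and opening the dashboard usually looks roughly like this (the manifest URL and version below are an example; check the dashboard project for the current one):
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
$ kubectl proxy                                       # the UI is then served through the API server proxy
# e.g. at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/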
kubectl get pods
will show you all your pods and their status. It's a quick check to make sure that everything is at least running.
If there are pods that are unhealthy, then
kubectl describe pod <pod name>
will give some more information, e.g. image not found, etc.
kubectl logs <pod name> --all-containers
is often the next step; use -f to follow the logs as you exercise your API.
It is possible to hook up images running in a pod to most IDE debuggers, but the instructions will differ depending on the language and IDE used...
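To open the "black box" of a web server that is not answering HTTP, you can also check the Service and talk to it directly; a sketch, assuming a Service called web-service (hypothetical name):
$ kubectl get svc web-service                    # does the Service exist and expose the expected port?
$ kubectl get endpoints web-service              # are there pod IPs behind it? (empty usually means a selector mismatch)
$ kubectl port-forward svc/web-service 8080:80   # forward a local port straight to the Service
$ curl http://localhost:8080/                    # test the app directly, bypassing NodePort/ingress
$ minikube service web-service --url             # with Minikube, print the externally reachable URL of the Service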

Check failed pods logs in a Kubernetes cluster

I have a Kubernetes cluster, in which different pods are running in different namespaces. How do I know if any pod failed?
Is there any single command to check the failed pod list or the restarted pod list?
And the reason for the restart (logs)?
It depends on whether you want detailed information or just want to check the last few failed pods.
I would recommend you to read about Logging Architecture.
In case you would like to have this detailed information, you should use 3rd party software, as described in the Kubernetes Documentation - Logging Using Elasticsearch and Kibana, or another option, Fluentd.
If you are using a cloud environment, you can use the integrated cloud logging tools (e.g. in Google Cloud Platform you can use Stackdriver).
In case you want to check logs to find the reason why a pod failed, it's well described in the K8s docs under Debug Running Pods.
If you want to get logs from a specific pod:
$ kubectl logs ${POD_NAME} -n ${NAMESPACE}
First, look at the logs of the affected container:
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}
If your container has previously crashed, you can access the previous container's crash log with:
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
You can obtain additional information using
$ kubectl get events -o wide --all-namespaces | grep <your condition>
A similar question was posted in this SO thread, you can check it for more details.
This'll work: kubectl get pods --all-namespaces | grep -Ev '([0-9]+)/\1'
Also, Lens is pretty good in these situations.
Most of the time, the reason for the app failure is printed in the last logs of the previous pod. You can see them by simply adding the --previous flag to your kubectl logs ... cmd.
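There is no single built-in "show me everything that failed" command, but a few one-liners come close (the pod and namespace names below are hypothetical):
$ kubectl get pods --all-namespaces --field-selector=status.phase=Failed                     # pods stuck in a Failed phase
$ kubectl get pods --all-namespaces --sort-by='.status.containerStatuses[0].restartCount'    # spot crash-looping pods by restart count
$ kubectl logs my-pod -n my-namespace --previous                                             # reason for the last crash/restart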

calico-policy-container on the worker node is on a restart loop. how can i check why?

I have two CoreOS stable machines (with the latest stable version installed) to test Kubernetes. I installed Kubernetes 1.5.1 using the script from https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/generic and patched it with https://github.com/kfirufk/coreos-kubernetes-multi-node-generic-install-script.
I installed the controller script on one and the worker script on the other. kubectl get nodes shows both servers.
kubectl get pods --namespace=kube-system shows that calico-policy-controller-2j5dn restarts a lot. On the worker server I do see that calico-policy-controller restarts a lot. Any idea how to investigate this issue further?
How can I check why it restarts? Are there any logs for this container?
kubectl logs --previous $id --namespace=kube-system
I added --previous because when the controller restarts it gets different random characters appended to its name.
In my case, the kube-policy-controller was started on one server and requested the etcd2 certificates that were generated on a different server.
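A short checklist for a pod in a restart loop like this one (the pod name below is the one from the question; adjust it to whatever kubectl get pods currently shows):
$ kubectl get pods --namespace=kube-system | grep calico-policy                      # current pod name and restart count
$ kubectl describe pod calico-policy-controller-2j5dn --namespace=kube-system        # "Last State" shows the exit reason (Error, OOMKilled, ...)
$ kubectl logs --previous calico-policy-controller-2j5dn --namespace=kube-system     # logs of the crashed container instance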

kubernetes pods spawn across all servers but kubectl only shows 1 running and 1 pending

I have a new setup of Kubernetes and I created a replication controller with 2 replicas. However, what I see when I do kubectl get pods is that one is "running" and another is "pending". Yet when I go to my 7 test nodes and do docker ps, I see that all of them are running.
What I think is happening is that I had to change the default insecure port from 8080 to 7080 (the docker app actually runs on 8080), however I don't know how to tell if I am right, or where else to look.
Along the same vein, is there any way to set up a config for kubectl where I can specify the port? Doing kubectl --server="" is a bit annoying (yes, I know I can alias this).
If you changed the API port, did you also update the nodes to point them at the new port?
For the kubectl --server=... question, you can use kubectl config set-cluster to set cluster info in your ~/.kube/config file to avoid having to use --server all the time. See the following docs for details:
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config.html
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config_set-cluster.html
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config_set-context.html
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config_use-context.html
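For example, to bake the non-default port into kubectl's config so --server is no longer needed (the cluster/context names and the address below are illustrative):
$ kubectl config set-cluster local --server=http://<master-ip>:7080   # register the API server address and port
$ kubectl config set-context local --cluster=local                    # create a context pointing at that cluster
$ kubectl config use-context local                                    # make it the default for subsequent commands
$ kubectl get pods                                                    # now talks to <master-ip>:7080 without --server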