Ensure services exist - postgresql

I am going to deploy Keycloak on my K8S cluster, and as the database I have chosen PostgreSQL.
To meet the business requirements, we have to add additional features to Keycloak, for example a custom theme. That means that for every change to Keycloak we trigger a CI/CD pipeline. We use Drone for CI and ArgoCD for CD.
In the pipeline, before it hits the CD part, we would like to ensure that PostgreSQL is up and running.
The question is: is there a tool for K8S that lets us validate whether particular services are up and running?

"Up and running" != "Exists"
1: To check if a service exists, just do a kubectl get service <svc>
2: To check if it has active endpoints do kubectl get endpoints <svc>
3: You can also check whether the backing pods are in a Ready state.
2 & 3 require a readiness probe to be properly configured on the pod/deployment
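For a CI pipeline, checks 2 & 3 can be turned into a small gate script. A sketch, assuming the Service is named keycloak-postgresql (a placeholder -- adjust to your deployment):

```shell
# Wait until the Service has at least one ready endpoint, then let the
# pipeline proceed. Service/namespace names below are assumptions.
wait_for_endpoints() {
  svc="$1"; ns="${2:-default}"; retries="${3:-30}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    # Ready pod IPs behind the Service; empty output means nothing is ready yet
    addrs=$(kubectl get endpoints "$svc" -n "$ns" \
      -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)
    if [ -n "$addrs" ]; then
      echo "Service $svc is ready: $addrs"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "Timed out waiting for endpoints of $svc" >&2
  return 1
}

# Example: wait_for_endpoints keycloak-postgresql default 30
```

Because the Endpoints object only lists pods that pass their readiness probe, this only works if the probe is configured as noted above.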

Radek is right in his answer, but I would like to expand on it with the help of the official docs. To make sure that the Service exists and is working properly, you need to:
Make sure that Pods are actually running and serving: kubectl get pods -o go-template='{{range .items}}{{.status.podIP}}{{"\n"}}{{end}}'
Check if Service exists: kubectl get svc
Check if Endpoints exist: kubectl get endpoints
If needed, check if the Service is working by DNS name: nslookup hostnames (from a Pod in the same Namespace) or nslookup hostnames.<namespace> (if it is in a different one)
If needed, check if the Service is working by IP: for i in $(seq 1 3); do
wget -qO- <IP:port>
done
Make sure that the Service is defined correctly: kubectl get service <service name> -o json
Check if kube-proxy is working: ps auxw | grep kube-proxy
If any of the above is causing a problem, you can find the troubleshooting steps in the link above.
Regarding your question in the comments: I don't think there is an easier way, considering that you need to make sure that everything is working fine. You can skip some of the steps, but that would depend on your use case.
I hope it helps.
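For a pipeline, the steps above can be chained into one script. A sketch (the service name and check list are placeholders; the DNS and kube-proxy steps are left out since they need to run inside a pod or on a node):

```shell
# Run each verification step in order and stop at the first failure.
check() {
  desc="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo "ok: $desc"
  else
    echo "FAILED: $desc" >&2
    return 1
  fi
}

check_service() {
  svc="$1"
  check "Service $svc exists"        kubectl get svc "$svc" &&
  check "Endpoints for $svc exist"   kubectl get endpoints "$svc" &&
  check "Spec of $svc is readable"   kubectl get service "$svc" -o json
}

# Example: check_service mongo-svc && echo "safe to deploy"
```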

Related

How to get a pod external endpoint in Kubernetes?

Let's suppose I have pods deployed in a Kubernetes cluster and I have exposed them via a NodePort Service. Is there a way to get a pod's external endpoint in one command?
For example:
kubectl <cmd>
Response: <IP_OF_NODE_HOSTING_THE_POD>:30120 (30120 being the NodePort external port)
The requirement is a complex one and needs a query against list objects. I am going to explain with assumptions. Additionally, if you need the internal address, you can use the Endpoints object (ep), because target resolution is done at the endpoint level.
Assumption: 1 Pod and 1 NodePort Service (32320->80), both named nginx
The following command will work with the stated assumption and I hope this will give you an idea for the best approach to follow for your requirement.
Note: This answer is valid based on the assumption stated above. However, for a more generalized solution I recommend -o jsonpath='{range.. style queries for this type of complex query. For now, the following command will work.
command:
kubectl get pods,ep,svc nginx -o jsonpath=' External:http://{..status.hostIP}{":"}{..nodePort}{"\n Internal:http://"}{..subsets..ip}{":"}{..spec.ports..port}{"\n"}'
Output:
External:http://192.168.5.21:32320
Internal:http://10.44.0.21:80
If the NodePort Service's name is known, then something like kubectl get svc <svc_name> -o=jsonpath='{.spec.clusterIP}:{.spec.ports[0].nodePort}' should work.
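For the external endpoint specifically, the node's address has to be paired with the Service's nodePort. A sketch under the same single-node/single-service assumption as above (the service name nginx is taken from that assumption):

```shell
# Print <node-internal-ip>:<nodePort> for a NodePort Service.
external_endpoint() {
  svc="$1"
  # First node's InternalIP (single-node assumption, as in the answer above)
  node_ip=$(kubectl get nodes \
    -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
  node_port=$(kubectl get svc "$svc" -o jsonpath='{.spec.ports[0].nodePort}')
  echo "${node_ip}:${node_port}"
}

# Example: external_endpoint nginx
```

On a multi-node cluster you would instead look up the node hosting the pod via the pod's .status.hostIP, as the jsonpath command above does.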

How to debug a kubernetes cluster?

As the question shows, I have very little knowledge about Kubernetes. Following a tutorial, I made a Kubernetes cluster to run a web app on a local server using Minikube. I have applied the Kubernetes components and they are running, but the web server does not respond to HTTP requests. My problem is that the whole system I have created is like a black box for me, and I have literally no idea how to open it and see where the problem is. Can you explain how I can debug such implementations in a wise way? Thanks.
use a tool like https://github.com/kubernetes/kubernetes-dashboard
You can install kubectl and kubernetes-dashboard in a k8s cluster (https://kubernetes.io/docs/tasks/tools/install-kubectl/), and then use the kubectl command to query information about a pod or container, or use the kubernetes-dashboard web UI to query information about the cluster.
For more information, please refer to https://kubernetes.io/
kubectl get pods
will show you all your pods and their status. A quick check to make sure that all is at least running.
If there are pods that are unhealthy, then
kubectl describe pod <pod name>
will give some more information, e.g. image not found, etc.
kubectl logs <pod name> --all-containers
is often the next step; use -f to follow the logs as you exercise your API.
It is possible to attach most IDE debuggers to images running in a pod, but instructions will differ depending on the language and IDE used...
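The get/describe/logs loop above can be wrapped into a quick triage pass. A sketch (the namespace argument is an assumption; it defaults to "default"):

```shell
# For every pod that is not in the Running phase, dump its events and logs.
triage() {
  ns="${1:-default}"
  for pod in $(kubectl get pods -n "$ns" --no-headers \
      | awk '$3 != "Running" {print $1}'); do
    echo "--- $pod ---"
    kubectl describe pod "$pod" -n "$ns"
    kubectl logs "$pod" -n "$ns" --all-containers
  done
}

# Example: triage default
```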

Restart server running inside Kubernetes Node

I have an IBM Cloud-powered Kubernetes cluster. That cluster currently has only 1 node.
I verified this by running the command kubectl get nodes.
There are a few servers running on that node. I want to restart one of those servers.
How can I get into the node and perform a restart of the required server?
I tried ssh, but this link says it cannot be done directly.
Seems like your main questions are:
"how to restart a pod", "how to ssh to an entity in which my service is running" and "how to see if I deleted a Pod".
First of all, most of these questions are already answered on Stack Overflow. Second of all, you need to get familiar with basic Kubernetes terminology and how things work here. You can do that in any Kubernetes introduction or in the documentation.
Answering the questions:
1) You can find information about restarting here. Alternatively, if you have a running Deployment, deleting a pod will result in pod recreation.
2) you can use kubectl exec as described here:
kubectl exec -ti <pod_name> -- sh (or bash)
3) to see your pods, run kubectl get pods. After you run kubectl delete pod <name> -n <namespace>, you can run kubectl get pods -w to watch the changing status of the deleted pod and the new one being spawned. Or you will notice that there is a new pod running, but with a different NAME.
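Putting (1) and (3) together, a restart-and-wait step might look like this sketch (it assumes the pod is managed by a Deployment/ReplicaSet, so a replacement is created automatically; names are placeholders):

```shell
# Delete a pod and block until all pods in the namespace report Ready again.
restart_pod() {
  pod="$1"; ns="${2:-default}"
  kubectl delete pod "$pod" -n "$ns"
  kubectl wait --for=condition=Ready pod --all -n "$ns" --timeout=120s
}

# Example: restart_pod my-server-7d9f8-abcde default
```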

Reload Kubernetes ReplicationController to get newly created Service

Is there a way to reload currently running pods created by a ReplicationController so they pick up a newly created Service?
Example:
I have running pods created from a ReplicationController config file. I deleted a Service called mongo-svc and recreated it using a different port. Is there a way for the pods' environment to be updated with the new IP and port from the new mongo-svc?
You can restart pods by simply deleting them: if they are managed by a ReplicationController, the RC will take care of restarting them.
kubectl delete pod <your-pod-name>
if you have a couple of pods, it's easy enough to copy/paste the pod names, but if you have many pods it can become cumbersome.
So another way to delete pods and restart them is to scale the RC down to 0 instances and back up to the number you need.
kubectl scale --replicas=0 rc <your-rc>
kubectl scale --replicas=<n> rc <your-rc>
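The two scale commands can be wrapped so the original replica count is restored automatically. A sketch (the RC name is a placeholder):

```shell
# Bounce all pods of a ReplicationController: record the current replica
# count, scale to zero, then scale back up to the recorded count.
bounce_rc() {
  rc="$1"
  n=$(kubectl get rc "$rc" -o jsonpath='{.spec.replicas}')
  kubectl scale --replicas=0 rc "$rc"
  kubectl scale --replicas="$n" rc "$rc"
}

# Example: bounce_rc my-rc
```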
By the way, you may also want to look at rolling updates to do this in a more production-friendly manner, but that implies updating the RC config.
If you want the same pod to pick up the new Service, the clean answer is no. You could (I strongly suggest not to do this) run kubectl exec <pod-name> -c <container> -- export <service env var name>=<service env var value>. But your best bet is to run kubectl delete pod <pod-name> and let your replication controller handle the work.
I've run into a similar issue for services running outside of Kubernetes, say a DB for instance. To address this I've been building https://github.com/cpg1111/kubongo, which updates the service's endpoint without deleting the pods. The same idea can be applied to other pods in Kubernetes to automate the service update. Basically it watches a specific service, and when its IP changes for whatever reason, it updates all the pods without deleting them. It uses the same code as kubectl exec, but it is automated, sanitizes input, and ensures the export is executed on all pods.
What do you mean by 'reapply'?
The pods to which a Service points are generally selected based on labels. In other words, you can add / remove labels on the pods to include / exclude them from a Service.
Read here for more information about defining services: http://kubernetes.io/v1.1/docs/user-guide/services.html#defining-a-service
And here for more information about labels: http://kubernetes.io/v1.1/docs/user-guide/labels.html
Hope it helps!
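To illustrate the label approach above: removing the label a Service's selector matches takes a pod out of the Service, and re-adding it puts the pod back. A sketch, where the label key app, value mongo, and the pod name are all assumptions:

```shell
# Toggle a pod's membership in a Service by editing the selector label.
exclude_from_service() { kubectl label pod "$1" app-; }       # trailing "-" removes the label
include_in_service()  { kubectl label pod "$1" app=mongo; }   # re-add to rejoin the Service

# Example:
#   exclude_from_service mongo-pod-0
#   include_in_service mongo-pod-0
```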

kubernetes pods spawn across all servers but kubectl only shows 1 running and 1 pending

I have a new setup of Kubernetes and I created a replication controller with 2 replicas. However, what I see when I do "kubectl get pods" is that one is running and another is "pending". Yet when I go to my 7 test nodes and do docker ps, I see that all of them are running.
What I think is happening is that I had to change the default insecure port from 8080 to 7080 (the Docker app actually runs on 8080), but I don't know how to tell if I am right, or where else to look.
Along the same vein, is there any way to set up config for kubectl where I can specify the port? Doing kubectl --server="" is a bit annoying (yes, I know I can alias this).
If you changed the API port, did you also update the nodes to point them at the new port?
For the kubectl --server=... question, you can use kubectl config set-cluster to set cluster info in your ~/.kube/config file to avoid having to use --server all the time. See the following docs for details:
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config.html
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config_set-cluster.html
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config_set-context.html
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config_use-context.html
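Concretely, the set-cluster/set-context flow from those docs might look like this sketch (cluster/context names and the address are placeholders; 7080 is the custom port from the question):

```shell
# Record the custom API server address in ~/.kube/config once,
# so --server no longer needs to be passed on every kubectl call.
setup_context() {
  kubectl config set-cluster local --server="http://127.0.0.1:${1:-7080}"
  kubectl config set-context local --cluster=local
  kubectl config use-context local
}

# Example: setup_context 7080
```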