Determine the status of Kubernetes port forwarding - kubernetes

I am trying to find a way to determine the status of the "kubectl port-forward" command.
There is a way to determine the readiness of a pod, a node, and so on, e.g. "kubectl get pods".
Is there a way to determine whether the kubectl port-forward command has finished setting up and is ready to forward traffic?
Thank you.

I have the same understanding as you @VKR.
The way I chose to solve this was to run a loop with a curl every second to check the status of the forwarded port. It works, but I had hoped for a prebaked solution to this.
# retry curl against the forwarded port once a second, for at most 100 seconds
timer=0
while ! curl --silent --output /dev/null 127.0.0.1:6379 && [ "$timer" -lt 100 ]; do
  timer=$((timer + 1))
  sleep 1
done
Thank you @Nicola and @David, I will keep those in mind when I get past development testing.

Answering directly to your question - no, you cannot determine the status of the kubectl port-forward command.
The only way of determining what is going on in the background is to inspect the output of this command.
The output will be something like:
Forwarding from 127.0.0.1:6379 -> 6379
Forwarding from [::1]:6379 -> 6379
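If you script around port-forward, one possible workaround is to run it in the background, redirect its output to a file, and wait until that "Forwarding from" line appears. A rough sketch, where the pod name redis-master and the log path are placeholders:
kubectl port-forward redis-master 6379:6379 > /tmp/pf.log 2>&1 &
# block until kubectl reports that the listener is up
until grep -q "Forwarding from" /tmp/pf.log; do
  sleep 1
done
# at this point the local port should accept connections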
As an alternative, I would suggest using a Service of type NodePort instead of port-forward.
Using a NodePort, you are able to expose your app as a Service and access it from outside the Kubernetes cluster.
For more examples, see the Kubernetes documentation on Services.
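If you want to try that route, a minimal sketch (the Deployment name redis and port 6379 are assumptions):
kubectl expose deployment redis --type=NodePort --port=6379
kubectl get svc redis   # the assigned node port shows up in the PORT(S) column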

How to get a pod external endpoint in Kubernetes?

Let's suppose I have pods deployed in a Kubernetes cluster and I have exposed them via a NodePort service. Is there a way to get the pod external endpoint in one command?
For example:
kubectl <cmd>
Response : <IP_OF_NODE_HOSTING_THE_POD>:30120 ( 30120 being the nodeport external port )
The requirement is a complex one and requires querying list objects. I am going to explain with assumptions. Additionally, if you need the internal address, you can use the Endpoints object (ep), because target resolution is done at the endpoint level.
Assumption: 1 Pod and 1 NodePort Service (32320->80), both named nginx.
The following command will work with the stated assumption, and I hope it will give you an idea of the best approach to follow for your requirement.
Note: this answer is valid based on the assumption stated above. However, for a more generalized solution I recommend using -o jsonpath='{range.. for this type of complex query. For now, the following command will work.
command:
kubectl get pods,ep,svc nginx -o jsonpath=' External:http://{..status.hostIP}{":"}{..nodePort}{"\n Internal:http://"}{..subsets..ip}{":"}{..spec.ports..port}{"\n"}'
Output:
External:http://192.168.5.21:32320
Internal:http://10.44.0.21:80
If the NodePort Service name is known, then something like kubectl get svc <svc_name> -o=jsonpath='{.spec.clusterIP}:{.spec.ports[0].nodePort}' should work.
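For the more general range-based form mentioned above, a sketch along these lines lists every Service together with its node ports (untested against your setup):
kubectl get svc -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.ports[*].nodePort}{"\n"}{end}'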

Ensure services exist

I am going to deploy Keycloak on my K8S cluster and as a database I have chosen PostgreSQL.
To meet the business requirements, we have to add additional features to Keycloak, for example a custom theme, etc. That means that for every change to Keycloak we are going to trigger the CI/CD pipeline. We use Drone for CI and ArgoCD for CD.
In the pipeline, before it hits the CD part, we would like to ensure that PostgreSQL is up and running.
The question is: is there a tool for K8S with which we can validate that particular services are up and running?
"Up and running" != "Exists"
1: To check if a service exists, just do a kubectl get service <svc>
2: To check if it has active endpoints, do kubectl get endpoints <svc>
3: You can also check if the backing pods are in a Ready state.
2 & 3 require a readiness probe to be properly configured on the pod/deployment
Radek is right in his answer but I would like to expand on it with the help of the official docs. To make sure that the service exists and is working properly you need to:
Make sure that Pods are actually running and serving: kubectl get pods -o go-template='{{range .items}}{{.status.podIP}}{{"\n"}}{{end}}'
Check if Service exists: kubectl get svc
Check if Endpoints exist: kubectl get endpoints
If needed, check if the Service is working by DNS name: nslookup hostnames (from a Pod in the same Namespace) or nslookup hostnames.<namespace> (if it is in a different one)
If needed, check if the Service is working by IP: for i in $(seq 1 3); do
wget -qO- <IP:port>
done
Make sure that the Service is defined correctly: kubectl get service <service name> -o json
Check if kube-proxy is working: ps auxw | grep kube-proxy
If any of the above is causing a problem, you can find the troubleshooting steps in the official debugging documentation.
Regarding your question in the comments: I don't think there is an easier way, considering that you need to make sure that everything is working fine. You can skip some of the steps, but that would depend on your use case.
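If you want to wire the Endpoints check into a script that fails fast, a rough sketch (the Service name postgresql is a placeholder):
# exit non-zero when the Service has no ready endpoint addresses
ADDRS=$(kubectl get endpoints postgresql -o jsonpath='{.subsets[*].addresses[*].ip}')
if [ -z "$ADDRS" ]; then
  echo "Service postgresql has no ready endpoints" >&2
  exit 1
fi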
I hope it helps.

Use of kubectl logs in a readiness probe

I have a server which is running inside of a kubernetes pod.
Its log output can be retrieved using "kubectl logs".
The application goes through some start up before it is ready to process incoming messages.
It indicates its readiness through a log message.
The "kubectl logs" command is not available from within the pod. I think it would be insecure to even try to install it.
Is there a way of either:
getting the log from within the container? or
running a readiness probe that is executed outside of the container?
(rather than as a docker exec)
Here are some options I've considered:
Redirecting the output to a log file loses it from "kubectl logs".
Teeing it to a log file avoids that limitation but creates an unnecessary duplicate log.
stdout and stderr of the application are anonymous pipes (to kubernetes) so eavesdropping on /proc/1/fd/1 or /proc/1/fd/2 will not work.
A better option may be to use the HTTP API, for example as described in this related question:
kubectl proxy --port=8080
And from within the container:
curl -XGET http://127.0.0.1:8080/api
However, I get an error:
Starting to serve on 127.0.0.1:8080
I0121 17:05:38.928590 49105 log.go:172] http: Accept error: accept tcp 127.0.0.1:8080: accept4: too many open files; retrying in 5ms
2020/01/21 17:05:38 http: proxy error: dial tcp 127.0.0.1:8080: socket: too many open files
Does anyone have a solution or a better idea?
You can actually do what you want. Create a Kubernetes "serviceaccount" object with permissions limited to just what you need, use that account for your health check pod, and run kubectl logs as you described. You install kubectl, but limit the permissions available to it.
However, there's a reason you don't find examples of that: it's not a great way of building this. Is there really no way that you can add a health check endpoint to your app? That would be so much more convenient for your purposes.
Finally, if the answer to that really is "NO", could you have your app write a ready file? Instead of printing "READY", do touch /app/readyfile. Then your health check can just check whether that file exists. (To make this work, you would have to create a volume and mount it at /app in both your app container and the health check container so they can both see the generated file.)
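A rough shell illustration of that ready-file idea (the /app/readyfile path is just a placeholder):
# in the application container, once startup is complete:
touch /app/readyfile
# the readiness check (run over the shared volume) then only needs something like:
test -f /app/readyfile   # exit code 0 means ready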
The "too many open files" error was because I did not run kubectl with sudo.
So the log can be retrieved via the http API with:
sudo kubectl proxy --port 8080
And then from inside the app:
curl -XGET http://127.0.0.1:8080/api/v1/namespaces/default/pods/mypodnamehere/log
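Building on that, a possible readiness check is to poll the log through the proxy until the readiness message appears (the pod name, namespace, and the READY marker are placeholders for whatever your application prints):
until curl -s http://127.0.0.1:8080/api/v1/namespaces/default/pods/mypodnamehere/log | grep -q "READY"; do
  sleep 1
done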
That said, I agree with @Paul Becotte that having the application create a ready file would be a better design.

Is there a method in the Kubernetes client replacing the "kubectl port-forward" command?

I need to forward a port of one of my Kubernetes pods. One possible way is to execute a kubectl command like below:
kubectl port-forward podm-resource-manager-56b9ccd59c-8pmdn 8080
Is there a way to achieve the same using python (for example python kubernetes-client)?
The method connect_get_namespaced_pod_portforward is available in the Python kubernetes-client to do a port forward.

kubernetes pods spawn across all servers but kubectl only shows 1 running and 1 pending

I have a new setup of Kubernetes and I created replication with 2 replicas. However, what I see when I do "kubectl get pods" is that one is running and another is "pending". Yet when I go to my 7 test nodes and do docker ps I see that containers are running on all of them.
What I think is happening is that I had to change the default insecure port from 8080 to 7080 (the docker app actually runs on 8080), but I don't know how to tell if I am right, or where else to look.
Along the same vein, is there any way to set up a config for kubectl where I can specify the port? Doing kubectl --server="" is a bit annoying (yes, I know I can alias this).
If you changed the API port, did you also update the nodes to point them at the new port?
For the kubectl --server=... question, you can use kubectl config set-cluster to set cluster info in your ~/.kube/config file to avoid having to use --server all the time. See the following docs for details:
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config.html
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config_set-cluster.html
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config_set-context.html
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config_use-context.html
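For example, a minimal sketch (the cluster name, context name, and server address are placeholders):
kubectl config set-cluster local --server=http://localhost:7080
kubectl config set-context local --cluster=local
kubectl config use-context local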