Use of kubectl logs in a readiness probe - Kubernetes

I have a server which is running inside of a kubernetes pod.
Its log output can be retrieved using "kubectl logs".
The application goes through some start up before it is ready to process incoming messages.
It indicates its readiness through a log message.
The "kubectl logs" command is not available from within the pod. I think it would be insecure to even try to install it.
Is there a way of either:
getting the log from within the container? or
running a readiness probe that is executed outside of the container?
(rather than as a docker exec)
Here are some options I've considered:
Redirecting the output to a log file removes it from "kubectl logs".
Teeing it to a log file avoids that limitation but creates an unnecessary duplicate of the log.
stdout and stderr of the application are anonymous pipes (to Kubernetes), so eavesdropping on /proc/1/fd/1 or /proc/1/fd/2 will not work.
A better option may be to use the HTTP API, as suggested in this question:
kubectl proxy --port=8080
And from within the container:
curl -XGET http://127.0.0.1:8080/api
However I get an error:
Starting to serve on 127.0.0.1:8080
I0121 17:05:38.928590 49105 log.go:172] http: Accept error: accept tcp 127.0.0.1:8080: accept4: too many open files; retrying in 5ms
2020/01/21 17:05:38 http: proxy error: dial tcp 127.0.0.1:8080: socket: too many open files
Does anyone have a solution or a better idea?

You can actually do what you want. Create a Kubernetes ServiceAccount object with permissions limited to just what you need, use that account for your health check pod, and run kubectl logs as you described. You do install kubectl, but you limit the permissions available to it.
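A minimal sketch of that RBAC setup, assuming the app runs in the default namespace (all names here are placeholders, not anything from the question):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: log-reader
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-log-reader
  namespace: default
rules:
  # Only what "kubectl logs" needs: read access to pods and their logs
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pod-logs
  namespace: default
subjects:
  - kind: ServiceAccount
    name: log-reader
    namespace: default
roleRef:
  kind: Role
  name: pod-log-reader
  apiGroup: rbac.authorization.k8s.io

The health check pod then sets serviceAccountName: log-reader, and its kubectl can only read logs in that namespace.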
However, there's a reason you don't find examples of that: it's not a great way to build this. Is there really no way you can add a health check endpoint to your app? That would be so much more convenient for your purposes.
Finally, if the answer to that really is "NO", could you have your app write a ready file? Instead of printing "READY", do touch /app/readyfile. Then your health check can just check whether that file exists. (To make this work, you would have to create a volume and mount it at /app in both your app container and the health check container so they can both see the generated file.)
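A rough sketch of that last idea, assuming the app touches /app/readyfile when it is ready (here the probe runs in the app container itself, so the shared volume is only needed if a separate container does the checking):

readinessProbe:
  exec:
    # Succeeds (exit 0) only once the app has created its ready file
    command: ["test", "-f", "/app/readyfile"]
  initialDelaySeconds: 5
  periodSeconds: 5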

The "too many open files" error occurred because I did not run kubectl with sudo.
So the log can be retrieved via the HTTP API with:
sudo kubectl proxy --port 8080
And then from inside the app:
curl -XGET http://127.0.0.1:8080/api/v1/namespaces/default/pods/mypodnamehere/log
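To wire that into a readiness probe, the check could look something like this (a sketch only; it assumes the proxy really is reachable at that address from inside the pod, that "READY" is the startup marker mentioned in the question, and that the pod name placeholder above is filled in):

readinessProbe:
  exec:
    # Succeeds once the readiness message appears in the pod's own log
    command:
      - /bin/sh
      - -c
      - "curl -s http://127.0.0.1:8080/api/v1/namespaces/default/pods/mypodnamehere/log | grep -q READY"
  periodSeconds: 10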
That said, I agree with @Paul Becotte that having the application create a ready file would be a better design.

Related

How to use proxy with Kubernetes

I'm new to Kubernetes and just starting out. My Kubernetes server is running at 127.0.0.1:3000 and I want it to run at 0.0.0.0:3000. I tried to use
kube proxy --bind-address"0.0.0.0"
but I'm getting a
kube: command not found
error.
I've also tried to use
kubectl proxy --address="0.0.0.0"
although it says:
Starting to serve on [::]:8001
but then I'm unable to write any commands in that terminal. Is there any way to use "0.0.0.0" as my IP address and still be able to run commands after binding to it? Can I change something in my YAML file or kubeconfig file, or add a new file for this purpose, that enables me to do so?
Use the --port argument to change the port:
kubectl proxy --address=0.0.0.0 --port=8001
Starting to serve on [::]:8001
Open another terminal to run commands against ip:8001
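For example, from another terminal on the same machine (note that reaching the proxy from a different host may additionally require relaxing kubectl proxy's --accept-hosts filter):

curl http://127.0.0.1:8001/api/v1/namespaces/default/pods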
Another mistake is issuing the "kube" command; you probably meant "kubectl".
As @confused genius said above, you have to use:
kubectl proxy --address=0.0.0.0 --port=3000
Starting to serve on [::]:3000

Determine the status of Kubernetes port forwarding

I am trying to find a way to determine the status of the "kubectl port-forward" command.
There are ways to determine the readiness of a pod, a node, and so on (e.g. "kubectl get pods").
Is there a way to determine whether the kubectl port-forward command has completed its setup and is ready to work?
Thank you.
I have the same understanding as you, @VKR.
The way I chose to solve this was a loop with a curl every second to check the status of the forwarded port. It works, but I had hoped for a prebaked solution to this. Roughly:
# Retry the curl once a second until it succeeds, or give up after ~100 tries
timer=0
until curl --silent --output /dev/null http://127.0.0.1:6379; do
  timer=$((timer + 1))
  [ "$timer" -ge 100 ] && break
  sleep 1
done
Thank you @Nicola and @David, I will keep those in mind when I get past development testing.
Answering your question directly: no, you cannot determine the status of the kubectl port-forward command.
The only way to determine what is going on in the background is to inspect the command's output.
The output will be something like:
Forwarding from 127.0.0.1:6379 -> 6379
Forwarding from [::1]:6379 -> 6379
I would suggest using a Service of type NodePort instead of port-forward.
Using a NodePort, you can expose your app as a Service and access it from outside the Kubernetes cluster.
For more examples, see this URL.
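A minimal sketch of such a NodePort Service, assuming the app behind the port-forward listens on 6379 (names and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 6379
      targetPort: 6379
      # Must fall in the cluster's NodePort range (30000-32767 by default)
      nodePort: 30079

The app is then reachable at <any-node-ip>:30079 without keeping a port-forward session alive.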

How can I access an internal HTTP port of a Kubernetes node in Google Cloud Platform

I have a load-balanced service running in a Kubernetes cluster on the Google Cloud Platform. The individual servers expose some debugging information via a particular URL path. I would like to be able to access those individual server URLs; otherwise I just get whichever server the load balancer sends the request to.
What is the easiest way to get access to those internal nodes? Ideally, I'd like to be able to access them via a browser, but if I can only access via a command line (e.g. via ssh or Google Cloud Shell) I'm willing to run curl to get the debugging info.
I think the simplest tool for you would be kubectl proxy, or perhaps even simpler, kubectl port-forward. With the first, you use a single endpoint and the API server's ability to proxy to a particular pod by providing the appropriate URL.
kubectl proxy
After running kubectl proxy you should be able to open http://127.0.0.1:8001/ in your local browser and see the paths available on the API server. From there you can proceed with a URL like http://127.0.0.1:8001/api/v1/namespaces/default/pods/my-pod-name:80/proxy/, which will proxy to port 80 of that particular pod.
kubectl port-forward
This does something similar, but forwards directly to a port on your pod: kubectl port-forward my-pod-name 8081:80. At that point, any request to 127.0.0.1:8081 will be forwarded to your pod's port 80.
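Combined with the debugging path from the question, that might look like this (the /debug path is just a placeholder for whatever path your servers expose):

kubectl port-forward my-pod-name 8081:80
curl http://127.0.0.1:8081/debug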
Port forwarding can be used as described in the answer from Radek, and it has many advantages. The disadvantage is that it is quite slow, and if you have a script making many calls, there is another option for you.
kubectl run curl-mark-friedman --image=radial/busyboxplus:curl -i --tty --rm
This will create a new POD on your network with a busybox image that includes the curl command. You can now use interactive mode in that POD to execute curl commands against other PODS from within the network.
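For example, from the prompt inside that POD (the pod IP is a placeholder; kubectl get pods -o wide shows the real ones, and /debug stands in for whatever debugging path your servers expose):

curl http://10.8.0.15:80/debug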
You can find many images on Docker Hub with the tools you like included. If you, for example, need jq, there is an image for that:
kubectl run curl-jq-mark-friedman --image=devorbitus/ubuntu-bash-jq-curl -i --tty --rm
The --rm option is used to remove the POD when you are done with it. If you want the POD to stay alive, just remove that option. You may then attach to that POD again using:
kubectl get pods | grep curl-mark-friedman   # get your <POD ID> from here
kubectl attach <POD ID> -c curl-mark-friedman -i -t

Monitor and take action based on pod log event

I have deployed PagerBot https://github.com/stripe-contrib/pagerbot to our internal k8s cluster as a learning opportunity. I had fun writing a helm chart for it!
The bot appears to disconnect from Slack at an unknown time and never reconnects. When I kill the pod, the deployment recreates it and it connects again (we are using the Slack RTM option).
The pod logs the following entry when it disconnects:
2018-02-24 02:31:14.382590 I [9:34765020] PagerBot::SlackRTMAdapter -- Closed connection to chat. --
I want to learn a method of monitoring for this log entry and taking action. Initially I thought a liveness probe would be the way to go, using a command that returns non-zero when this entry is logged. But the logs aren't stored inside the container (as far as I can see).
How do you monitor and take action based on logs that can be seen using kubectl logs pod-name?
Can I achieve this in our Prometheus test deployment? Should I be using a known k8s feature?
I would argue the best course of action is to extend pagerbot to surface more than just the string literal pong in its /ping endpoint, then use that as its livenessProbe, with a close second being to teach the thing to just reconnect, as that's almost certainly cheaper than tearing down the Pod.
Having said that, one approach you may consider is a sidecar container that uses the Pod's service account credentials to monitor the sibling container (akin to an if kubectl logs -f -c pagerbot $my_pod_name | grep "Closed connection to chat"; then kill -9 $pagerbot_pid; fi type of deal). That is a little awkward, but I can't immediately think of why it wouldn't work.
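A rough sketch of that sidecar idea, assuming an image that ships kubectl (bitnami/kubectl is one), a ServiceAccount allowed to read pods/log, and shareProcessNamespace: true on the pod so the watcher can actually signal the bot; everything below is illustrative rather than taken from the question:

# Extra container added to the PagerBot pod spec (illustrative only)
- name: log-watcher
  image: bitnami/kubectl:latest
  command:
    - /bin/sh
    - -c
    - |
      # Block until the disconnect message shows up in the sibling's log.
      kubectl logs -f -c pagerbot "$POD_NAME" | grep -q "Closed connection to chat"
      # With shareProcessNamespace enabled, the bot's process is visible here;
      # how you find its PID depends on the image (pidof/pgrep may need installing).
      kill -9 "$(pidof ruby)"
  env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name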
I ended up landing on a liveness probe to solve my problem. I've added the following to my PagerBot deployment:
livenessProbe:
  exec:
    command:
      - bash
      - -c
      - "ss -an | grep -q 'EST.*:443 *$'"
  initialDelaySeconds: 120
  periodSeconds: 60
Basically, it tests whether a connection is established on port 443, which we noticed goes away when the bot disconnects.

k8s API server is down due to misconfiguration, how to bring it up again?

I was trying to add a command-line flag to the API server. In my setup, it was running as a daemon set inside the k8s cluster, so I got the daemon set manifest using kubectl, updated it, and executed kubectl apply -f apiserver.yaml (I know, this was not a good idea).
Of course, the new YAML file I wrote had an error, so the API server is not starting anymore and I can't use kubectl to update it. I have an SSH connection to the node where it was running, and I can see the kubelet trying to run the apiserver pod every few seconds with the malformed command. I am trying to configure the kubelet service to use the correct api-server command but have not been able to do so.
Any ideas?
The API server definition usually lives in /etc/kubernetes/manifests on the control-plane node. Edit the manifest there rather than through the API; the kubelet watches that directory and will recreate the static pod once you save a corrected file.
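For example, on a kubeadm-style control-plane node (paths can differ in other setups):

sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml   # fix the bad flag and save
sudo journalctl -u kubelet -f                           # watch the kubelet recreate the pod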