Kubectl wait for service to get external ip - kubernetes

I'm trying to use kubectl to wait for a service to get an external ip assigned. I've been trying to use the below just to get started
kubectl wait --for='jsonpath={.spec.externalTrafficPolicy==Cluster}' --timeout=30s --namespace cloud-endpoints svc/esp-echo
But I keep getting the below error message
error: unrecognized condition: "jsonpath={.spec.externalTrafficPolicy==Cluster}"

It is not possible to pass an arbitrary JSONPath expression to kubectl wait, and there is already a feature request for this.
However, you can use a bash script with a sleep loop and monitor the service using other kubectl commands:
kubectl get --namespace cloud-endpoints svc/esp-echo --template="{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}"
The above command will return the external IP of the LoadBalancer service, for example.
You can write a simple bash file using the above as:
#!/bin/bash
ip=""
while [ -z "$ip" ]; do
  echo "Waiting for external IP"
  ip=$(kubectl get svc "$1" --namespace cloud-endpoints --template="{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}")
  [ -z "$ip" ] && sleep 10
done
echo "Found external IP: $ip"

Update to Krishna Chaurasia's answer: kubectl has now implemented the feature to wait on an arbitrary JSONPath value.
However, the catch is that the value must be a primitive; nested values (map[string]interface{} or []interface{}) are not supported.
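For example, with kubectl v1.23 or newer, something along these lines should work; the JSONPath expression and the expected value are separated by = (the service and namespace names are taken from the question):
# waits until .spec.externalTrafficPolicy equals "Cluster"
kubectl wait --namespace cloud-endpoints svc/esp-echo --timeout=30s --for=jsonpath='{.spec.externalTrafficPolicy}'=Cluster
For the external IP itself you usually don't know the value in advance; newer kubectl releases reportedly also accept a JSONPath without a value (waiting only for the field to exist), but check the documentation of your kubectl version before relying on that.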

Right now kubectl wait isn't suitable for this problem. From Kubernetes v1.23 on (see this merged PR) it will most likely be possible to use JSONPath together with kubectl wait for this. Until then you can do this:
until kubectl get svc/esp-echo --namespace cloud-endpoints --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do : ; done
or even enhance the command using timeout (brew install coreutils on a Mac) to prevent the command from running infinitely:
timeout 10s bash -c 'until kubectl get service/tekton-dashboard-external-svc-manual -n tekton-pipelines --output=jsonpath='{.status.loadBalancer}' | grep "ingress"; do : ; done'
We had the same problem on AWS EKS, where an AWS Elastic Load Balancer (ELB) gets provisioned when one creates a Service of type LoadBalancer. That's most likely comparable to other cloud providers' behaviour, since it's part of the official kubernetes.io docs:
On cloud providers which support external load balancers, setting the
type field to LoadBalancer provisions a load balancer for your
Service. The actual creation of the load balancer happens
asynchronously, and information about the provisioned balancer is
published in the Service's .status.loadBalancer field.
Krishna Chaurasia's answer also uses this field, but we wanted a one-liner that could be used without an extra bash script, just like kubectl wait. So our solution incorporates the approach described on Server Fault of "watching" the output of a command until a particular string is observed and then exiting, using the until loop.
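One caveat from our EKS setup: the ELB publishes a hostname rather than an IP, so a variation of the loop that waits for the hostname instead might look like this (the service name from the question is reused just as an example):
until kubectl get svc/esp-echo --namespace cloud-endpoints --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}' | grep -q . ; do sleep 2; done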

You need to provide a condition. Like:
kubectl -n foobar wait --for=condition=complete --timeout=32s foo/bar
Here is a good article that explains that: https://mrkaran.dev/posts/kubectl-wait/
In your case, you might use one of the k8s probes.
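For instance, if the pods behind the service have a readiness probe configured, you can wait for them to report Ready (the label selector here is just a placeholder):
kubectl wait --namespace cloud-endpoints --for=condition=ready pod -l app=esp-echo --timeout=90s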

Related

Prometheus - Monitoring command output in container

I need to monitor a lot of legacy containers in my EKS cluster that have an NFS mount path. To map the NFS directory into the container I use the nfs-client Helm chart.
I need to monitor when my mount path is lost for some reason, and the only way I have found to do that is to exec a command in the container.
#!/bin/bash
df -h | grep ip_of_my_nfs_server | wc -l
If the output above returns 1, I know that my NFS mount path is OK.
Does anybody know a way to monitor the output of a script executed in a container with Prometheus?
Thanks!
As Matt has pointed out in the comments: the first order of business should be to see if you can simply cover your monitoring requirement with node_exporter.
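If node_exporter already runs on your nodes, a minimal sketch using its textfile collector could be a script run from cron like the one below; the directory is an assumption and has to match whatever --collector.textfile.directory is set to:
#!/bin/bash
# Assumption: node_exporter runs with --collector.textfile.directory=/var/lib/node_exporter/textfile_collector
TEXTFILE_DIR=/var/lib/node_exporter/textfile_collector
# 1 if the NFS mount is visible, 0 otherwise
count=$(df -h | grep -c ip_of_my_nfs_server)
# write to a temp file and rename so node_exporter never reads a half-written file
echo "nfs_mount_up ${count}" > "${TEXTFILE_DIR}/nfs_mount.prom.$$"
mv "${TEXTFILE_DIR}/nfs_mount.prom.$$" "${TEXTFILE_DIR}/nfs_mount.prom"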
Below is a more generic answer on collecting metrics from arbitrary shell commands.
Prometheus is a pull-based monitoring system. You configure it with "scrape targets": these are effectively just HTTP endpoints that expose metrics in a specific format. Some target needs to be alive for long enough to allow it to be scraped.
The two most obvious options you have are:
Wrap your logic in a long-running process that exposes this metric on an HTTP endpoint, and configure it as a scrape target
Spin up an instance of pushgateway, configure it as a scrape target, and have your command push its metrics there
Based on the little information you provided, the latter option seems like the most sane one. Important and relevant note from the README:
The Prometheus Pushgateway exists to allow ephemeral and batch jobs to expose their metrics to Prometheus. Since these kinds of jobs may not exist long enough to be scraped, they can instead push their metrics to a Pushgateway. The Pushgateway then exposes these metrics to Prometheus.
Your command would look something like:
#!/bin/bash
printf "mount_path_up %d\n" "$(df -h | grep ip_of_my_nfs_server | wc -l)" | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job_name

Check failed pods logs in a Kubernetes cluster

I have a Kubernetes cluster in which different pods are running in different namespaces. How do I know if any pod has failed?
Is there any single command to check the list of failed or restarted pods?
And the reason for the restart (logs)?
It depends on whether you want detailed information or just want to check the last few failed pods.
I would recommend you to read about Logging Architecture.
In case you would like to have this detailed information, you should use third-party software, as described in the Kubernetes documentation - Logging Using Elasticsearch and Kibana - or another tool such as Fluentd.
If you are using a cloud environment you can use its integrated cloud logging tools (e.g. on Google Cloud Platform you can use Stackdriver).
In case you want to check the logs to find the reason why a pod failed, this is well described in the K8s docs Debug Running Pods.
If you want to get logs from a specific pod:
$ kubectl logs ${POD_NAME} -n ${NAMESPACE}
First, look at the logs of the affected container:
$ kubectl logs ${POD_NAME} ${CONTAINER_NAME}
If your container has previously crashed, you can access the previous container's crash log with:
$ kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
You can obtain additional information using:
$ kubectl get events -o wide --all-namespaces | grep <your condition>
A similar question was posted in this SO thread, you can check it for more details.
This'll work: kubectl get pods --all-namespaces | grep -Ev '([0-9]+)/\1'
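A couple of related one-liners, in case you also want to filter by phase or sort by restart count (a sketch using standard kubectl selectors; adjust to the columns you care about):
kubectl get pods --all-namespaces --field-selector=status.phase=Failed
kubectl get pods --all-namespaces --sort-by='.status.containerStatuses[0].restartCount'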
Also, Lens is pretty good in these situations.
Most of the time, the reason for an app failure is printed in the last logs of the previous pod. You can see them by simply adding the --previous flag to your kubectl logs ... command.

Ensure services exist

I am going to deploy Keycloak on my K8S cluster and as a database I have chosen PostgreSQL.
To meet the business requirements, we have to add additional features to Keycloak, for example a custom theme, etc. That means that for every change to Keycloak we are going to trigger the CI/CD pipeline. We use Drone for CI and ArgoCD for CD.
In the pipeline, before it hits the CD part, we would like to ensure that PostgreSQL is up and running.
The question is: does a tool exist for K8s with which we can validate that particular services are up and running?
"Up and running" != "Exists"
1: To check if a service exists, just do a kubectl get service <svc>
2: To check if it has active endpoints, do kubectl get endpoints <svc>
3: You can also check if the backing pods are in the Ready state.
2 & 3 require a readiness probe to be properly configured on the pod/deployment. A small script combining these checks is sketched below.
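As a sketch for a pipeline step (the service name postgresql and the namespace keycloak are assumptions), that could look like:
#!/bin/bash
set -e
# 1: fails if the Service does not exist
kubectl get service postgresql --namespace keycloak
# 2: fails if the Service has no ready endpoint addresses
addresses=$(kubectl get endpoints postgresql --namespace keycloak -o jsonpath='{.subsets[*].addresses[*].ip}')
if [ -z "$addresses" ]; then
  echo "No ready endpoints for postgresql" >&2
  exit 1
fi
echo "postgresql is up: $addresses"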
Radek is right in his answer but I would like to expand on it with the help of the official docs. To make sure that the service exists and is working properly you need to:
Make sure that Pods are actually running and serving: kubectl get pods -o go-template='{{range .items}}{{.status.podIP}}{{"\n"}}{{end}}'
Check if Service exists: kubectl get svc
Check if Endpoints exist: kubectl get endpoints
If needed, check if the Service is working by DNS name: nslookup hostnames (from a Pod in the same Namespace) or nslookup hostnames.<namespace> (if it is in a different one)
If needed, check if the Service is working by IP: for i in $(seq 1 3); do
wget -qO- <IP:port>
done
Make sure that the Service is defined correctly: kubectl get service <service name> -o json
Check if kube-proxy is working: ps auxw | grep kube-proxy
If any of the above is causing a problem, you can find the troubleshooting steps in the link above.
Regarding your question in the comments: I don't think there is an easier way, considering that you need to make sure that everything is working fine. You can skip some of the steps, but that would depend on your use case.
I hope it helps.
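For the CI use case specifically, a compact gate before the CD step could be something like the following; the resource name postgresql is an assumption, and which command fits depends on how PostgreSQL is deployed:
# wait for a StatefulSet rollout to finish (fails after the timeout)
kubectl rollout status statefulset/postgresql --namespace keycloak --timeout=120s
# or wait for the backing pods to become Ready
kubectl wait --for=condition=ready pod -l app=postgresql --namespace keycloak --timeout=120s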

Script that checks I'm deploying to the correct kubernetes cluster

I have a script that deploys my application to my kubernetes cluster. However, if my current kubectl context is pointing at the wrong cluster, I can easily end up deploying my application to a cluster that I did not intend to deploy it to. What is a good way to check (from inside a script) that I'm deploying to the right cluster?
I don't really want to hardcode a specific kubectl context name, since different developers on my team have different conventions for how to name their kubectl contexts.
Instead, I'd like something more like if $(kubectl get cluster-name) != "expected-cluster-name" then error.
#!/bin/bash
if [ "$(kubectl config current-context)" != "your-cluster-name" ]
then
  echo "Do some error!!!"
  exit 1
fi
echo "Do some kubectl command"
The above script gets the current context name and matches it against your desired cluster name. If they don't match, it raises an error; otherwise it runs the desired kubectl command.
For each cluster run kubectl cluster-info once to see what the IP/host for master is - that should be stable for the cluster and not vary with the name in the kubectl context (which developers might be setting differently). Then capture that in the script with export MASTERA=<HOST/IP> where that's the master for cluster A. Then the script can do:
kubectl cluster-info | grep -q $MASTERA && echo 'on MASTERA'
Or use an if-else:
if kubectl cluster-info | grep -q $MASTERA; then
echo "on $MASTERA"
else
exit 1
fi
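Another variant that avoids parsing cluster-info output is to read the API server URL straight from the active kubeconfig; the expected URL below is of course a placeholder:
#!/bin/bash
expected="https://1.2.3.4"
actual=$(kubectl config view --minify --output jsonpath='{.clusters[0].cluster.server}')
if [ "$actual" != "$expected" ]; then
  echo "Refusing to deploy: current context points at $actual" >&2
  exit 1
fi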

How can I access an internal HTTP port of a Kubernetes node in Google Cloud Platform

I have a load-balanced service running in a Kubernetes cluster on the Google Cloud Platform. The individual servers expose some debugging information via a particular URL path. I would like to be able to access those individual server URLs, otherwise I just get whichever server the load balancer sends the request to.
What is the easiest way to get access to those internal nodes? Ideally, I'd like to be able to access them via a browser, but if I can only access via a command line (e.g. via ssh or Google Cloud Shell) I'm willing to run curl to get the debugging info.
I think the simplest tool for you would be kubectl proxy or, maybe even simpler, kubectl port-forward. With the first you can use one endpoint and the API server's ability to proxy to a particular pod by providing an appropriate URL.
kubectl proxy
After running kubectl proxy you should be able to open http://127.0.0.1:8001/ in your local browser and see a bunch of paths available on the API server. From there you can proceed with a URL like e.g. http://127.0.0.1:8001/api/v1/namespaces/default/pods/my-pod-name:80/proxy/, which will proxy to port 80 of your particular pod.
kubectl port-forward
This does something similar, but forwards directly to a port on your pod: kubectl port-forward my-pod-name 8081:80. At that point any request to 127.0.0.1:8081 will be forwarded to your pod's port 80.
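As a usage sketch, hitting a hypothetical debug path on one specific pod could look like this (pod name and path are placeholders):
kubectl port-forward my-pod-name 8081:80 &
PF_PID=$!
sleep 2                                # give the port-forward a moment to establish
curl http://127.0.0.1:8081/debug/vars  # placeholder debug path
kill $PF_PID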
Port-forwarding can be used as described in the answer from Radek, and it has many advantages. The disadvantage is that it is quite slow, and if you have a script doing many calls, there is another option for you.
kubectl run curl-mark-friedman --image=radial/busyboxplus:curl -i --tty --rm
This will create a new POD on your network with a busybox image that includes the curl command. You can now use interactive mode in that POD to execute curl commands against other PODs from within the network.
You can find many images on Docker Hub with the tools you like included. If you for example need jq, there is an image for that:
kubectl run curl-jq-mark-friedman --image=devorbitus/ubuntu-bash-jq-curl -i --tty --rm
The --rm option is used to remove the POD when you are done with it. If you want the POD to stay alive, just remove that option. You may then attach to that POD again using:
kubectl get pods | grep curl-mark-friedman <- get your <POD ID> from here.
kubectl attach <POD ID> -c curl-mark-friedman -i -t
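If you only need a single request rather than an interactive shell, a one-shot variant (target IP/port and path are placeholders) could be:
kubectl run curl-once --image=radial/busyboxplus:curl --rm -i --restart=Never --command -- curl -s http://POD_IP:80/debug/vars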