I'm trying to figure out how to check cluster health using the Kubernetes Python API.
I am using:
kubectl get --raw='/readyz?verbose'
kubectl get --raw='/readyz?verbose' probes the /readyz endpoint of the Kubernetes apiserver, which reports whether the apiserver itself is ready to serve traffic.
If you want to check the current running and ready state of a Pod and its containers, look at the Pod's status.conditions and status.containerStatuses fields.
If you want to probe the health endpoints of a container directly, you can use kubectl proxy or kubectl port-forward to open a connection into the Pod, then hit the health endpoint through that tunnel.
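Putting that together with the Python API the question asks about: here is a minimal sketch using the official kubernetes Python client (pip install kubernetes). The pod name and namespace are placeholders; the raw-path call uses the client's generic call_api method.

from kubernetes import client, config

# Load credentials the same way kubectl does (~/.kube/config); use
# config.load_incluster_config() when running inside a pod.
config.load_kube_config()

# 1) Probe the apiserver's /readyz endpoint -- the Python equivalent of
#    `kubectl get --raw='/readyz?verbose'`.
api = client.ApiClient()
data, status, _ = api.call_api(
    '/readyz', 'GET',
    query_params=[('verbose', 'true')],
    auth_settings=['BearerToken'],
    response_type='str',
    _preload_content=True,
)
print(status, data)  # 200 plus a per-check breakdown when healthy

# 2) Check a Pod's running/ready state via its status fields.
v1 = client.CoreV1Api()
pod = v1.read_namespaced_pod('my-pod', 'default')  # placeholder name/namespace
print(pod.status.phase)
for cond in pod.status.conditions or []:
    print(cond.type, cond.status)
for cs in pod.status.container_statuses or []:
    print(cs.name, 'ready:', cs.ready)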
Related
I am deploying a pod to GKE, but my backend services end up in an unhealthy state.
The deployment went through via helm install, but the ingress reports a warning that says Some backend services are in UNHEALTHY state. I have looked at the logs but don't know exactly what to look for. I already have liveness and readiness probes configured.
What can I do to bring the ingress back to a healthy state? Thanks
(Screenshot of the warning in the GKE UI.)
Without more details it is hard to determine the exact cause.
First, note that your error message is Some backend services are in UNHEALTHY state, not All backend services are in UNHEALTHY state. It indicates that only some of your backends are affected.
There can be many reasons: whether you are using GKE Ingress or Nginx Ingress, your externalTrafficPolicy setting, whether you are using preemptible nodes, your livenessProbe and readinessProbe configuration, the load balancer health checks, etc.
Since only a few backends are affected, the best I can do with the current information is suggest some debugging steps:
Using $ kubectl get po -n <namespace>, check that all your pods are working correctly: all containers within the pods are Ready and the pod status is Running. If needed, check the logs of a suspicious pod with $ kubectl logs <podname> -c <containerName>. In general you should check all pods the load balancer is pointing to,
Confirm that livenessProbe and readinessProbe are configured properly and that they respond with HTTP 200 (see the probe sketch after this list),
Describe your ingress with $ kubectl describe ingress <yourIngressName> and check the backends,
Check if you've configured your health checks properly according to the GKE Ingress for HTTP(S) Load Balancing - Health Checks guide.
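For the probe check, here is a minimal sketch of a working probe setup, expressed with the official Kubernetes Python client; the /healthz path, port 8080, and the Deployment/container names are placeholders you'd replace with your app's actual values.

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# An HTTP probe; GCP load balancer health checks require the probed
# path to answer 200. Path and port are placeholders.
probe = {
    'httpGet': {'path': '/healthz', 'port': 8080},
    'initialDelaySeconds': 10,
    'periodSeconds': 10,
    'failureThreshold': 3,
}

# Strategic merge patch: containers are merged by name, so only the
# probes of the named container change. Names are placeholders.
patch = {'spec': {'template': {'spec': {'containers': [{
    'name': 'my-app',
    'readinessProbe': probe,
    'livenessProbe': probe,
}]}}}}
apps.patch_namespaced_deployment('my-app', 'default', patch)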
If you still can't solve the issue with the debugging steps above, please provide more details about your environment, including logs (with private information removed).
Useful links:
kubernetes unhealthy ingress backend
GKE Ingress shows unhealthy backend services
In GKE you can define a BackendConfig to set up custom health checks. You can configure one using the guide below to bring the ingress backends to a HEALTHY state.
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#direct_health
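As a sketch of what that configuration could look like via the Python client (the BackendConfig CRD, cloud.google.com/v1, exists on GKE clusters; all names, paths, and ports below are placeholders):

from kubernetes import client, config

config.load_kube_config()

# Create a BackendConfig with a custom HTTP health check.
custom = client.CustomObjectsApi()
custom.create_namespaced_custom_object(
    group='cloud.google.com', version='v1',
    namespace='default', plural='backendconfigs',
    body={
        'apiVersion': 'cloud.google.com/v1',
        'kind': 'BackendConfig',
        'metadata': {'name': 'my-backendconfig'},  # placeholder
        'spec': {'healthCheck': {
            'type': 'HTTP',
            'requestPath': '/healthz',  # placeholder; must answer 200
            'port': 8080,               # placeholder
        }},
    },
)

# Point the Service the Ingress routes to at the BackendConfig.
v1 = client.CoreV1Api()
v1.patch_namespaced_service('my-service', 'default', {  # placeholder name
    'metadata': {'annotations': {
        'cloud.google.com/backend-config': '{"default": "my-backendconfig"}',
    }},
})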
If you have kubectl access to your pods, you can run kubectl get pod, and then kubectl logs -f <pod-name>.
Review the logs and find the error(s).
I am deploying the Kubernetes UI (dashboard) using this command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
And it responds with "Unable to connect to the server: dial tcp 185.199.110.133:443: i/o timeout".
I am behind a proxy; how can I fix it?
None of the services deployed via the supplied URL have a service type specified, which means they use the default type, ClusterIP.
Services of type ClusterIP are only accessible from inside your Kubernetes cluster.
If you want the Dashboard to be accessible from outside your cluster, you need a Service of type NodePort. A NodePort Service assigns a random high port on all your nodes, on which your application, in this case the k8s dashboard, will be accessible via ${ip-of-any-node}:${assigned-nodeport}.
For more information, please take a look at the official k8s documentation.
If your cluster is behind a proxy, also make sure that you can reach your cluster nodes' external IPs from wherever you are sending the request.
To find out which port number has been assigned to your NodePort Service, use kubectl describe service ${servicename} or kubectl get service ${servicename} -o yaml.
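As a sketch with the official Python client, you could switch the dashboard Service to NodePort and read back the assigned port; the Service name and namespace below match the v2.2.0 recommended.yaml, but verify them in your cluster first.

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Switch the dashboard Service to NodePort and read the assigned port.
svc = v1.patch_namespaced_service(
    'kubernetes-dashboard', 'kubernetes-dashboard',
    {'spec': {'type': 'NodePort'}},
)
for port in svc.spec.ports:
    print(f'port {port.port} -> nodePort {port.node_port}')
# The dashboard is then reachable at https://<any-node-ip>:<nodePort>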
Hi, I'm a newbie to k8s and I was wondering where and how kubectl sends requests to the kube api-server.
For example, if I run "kubectl get pods --all-namespaces" (and my default Kubernetes endpoint is "192.168.64.2:8443"), my understanding is that this translates to an HTTPS request such as "https://192.168.64.2:8443/api/v1/pods..." and that kubectl uses the authentication stored in the .kube/config file. Am I right?
I also have a metrics-server up and running on endpoint "172.17.0.8:4443", but how does kubectl know to use this IP when I run "kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes/<NODE_NAME> | jq"? Are all kubectl commands directed to one IP?
Thanks in advance.
Authentication is one of the steps kubectl performs. You can see exactly what happens during a command by raising the verbosity, for example:
kubectl get pods -v9 --all-namespaces
Kubernetes knows the resource definitions and which component implements them; you can check the resource types with:
kubectl api-resources
So the Kubernetes api-server knows which resources belong to the metrics-server and how to call it.
All kubectl requests go to the api-server. The api-server either answers by itself or delegates to other components, for example an extension API server (this is how the metrics API is served).
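To make that concrete, here is a small sketch with the official Python client showing that a core API path and the aggregated metrics API path are both requested from the same apiserver endpoint (metrics-server must be installed for the second call to succeed):

from kubernetes import client, config

config.load_kube_config()  # kubectl's config, i.e. the 192.168.64.2:8443 endpoint
api = client.ApiClient()

# Both paths are requested from the same apiserver; the aggregation
# layer forwards the metrics path to metrics-server internally.
for path in ('/api/v1/pods', '/apis/metrics.k8s.io/v1beta1/nodes'):
    data, status, _ = api.call_api(
        path, 'GET',
        auth_settings=['BearerToken'],
        response_type='str',
        _preload_content=True,
    )
    print(path, '->', status)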
I would like to know whether port-forward blocks other requests to the pod, e.g. for redis, mongo, etc.
If I run kubectl port-forward with the mongo port, for example, will the mongo server stop receiving data from other clients?
As per your comment, kubectl port-forward does not block other traffic to the target pod, so opening a port-forward won't affect other clients.
What port-forward does is simply make a specific request to the API server, which streams the connection through to the pod (see the docs).
Going further, I don't think port-forward makes a pod more vulnerable; however, it is generally meant for debugging, to reach into the pod, not to expose a service running in it. Use a NodePort Service for production.
Also, port-forwarded connections are subject to a default idle timeout configured in the kubelet (--streaming-connection-idle-timeout).
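To illustrate that a port-forward is just an API request, here is a sketch using the official Python client's portforward stream helper, following the client's pod_portforward example; the pod name, namespace, and mongo port are placeholders.

from kubernetes import client, config
from kubernetes.stream import portforward

config.load_kube_config()
v1 = client.CoreV1Api()

# Open a tunnel to the pod's mongo port. This is a streaming request
# through the API server and kubelet; traffic from other clients
# reaching the pod over its Service is unaffected.
pf = portforward(v1.connect_get_namespaced_pod_portforward,
                 'mongo-0', 'default', ports='27017')  # placeholders
sock = pf.socket(27017)
sock.setblocking(True)
# ... speak the MongoDB wire protocol over `sock`, then:
sock.close()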