I am having real trouble understanding how I am supposed to debug my current situation. I have followed the setup instructions from https://docs.substra.org/en/stable/contributing/getting-started.html#
There is a backend service which was created as a ClusterIP and therefore cannot be accessed from the host.
To work around this, I created a load balancer using the command
kubectl expose deployment deployment_name --port=8000 --target-port=8000 \
--name=lb_service --type=LoadBalancer
However, accessing the backend service through the LoadBalancer Ingress IP and NodePort port fails with a connection timeout. I would like to see the relevant logs to check where the problem occurred, but apparently kubectl logs only shows logs for pods, whereas the load balancer, according to the kubectl expose command, is attached to the deployment. Therefore, I am not able to see any logs related either to the load balancer service or to the deployment component.
When I looked at the pod which is supposed to be hosting the deployment, the log showed no error.
Can someone point out where to look for logs that can help debug this failed connectivity?
You probably need to look at the ingress logs, see this page from the documentation: https://kubernetes.github.io/ingress-nginx/troubleshooting/.
It is true that you can only get logs from pods. However, that is sufficient to see the relevant error messages.
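For example, you could find the pods that actually back the exposed service from the question and pull their logs. This is only a sketch; lb_service is the service name used above, and the pod name comes from the command output:

kubectl describe service lb_service      # shows the selector, endpoints and node port
kubectl get endpoints lb_service         # pod IPs the service actually forwards to
kubectl get pods -o wide                 # map those IPs back to pod names
kubectl logs <pod-name>                  # logs of the pod behind the service
kubectl get events --sort-by=.metadata.creationTimestamp   # recent cluster events, e.g. failed probes

If the endpoints list is empty, the service selector does not match any pod, which would also explain a connection timeout.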
Related
I am working on deploying a pod to GKE, but my backend services are in an unhealthy state.
The deployment went through via helm install, but the ingress reports a warning that says Some backend services are in UNHEALTHY state. I have looked at the logs but do not know exactly what to look out for. Also, I already have liveness and readiness probes running.
What could I do to make the ingress come back to a healthy state? Thanks
Picture of warning error on GKE UI
Without more details it is hard to determine the exact cause.
As a first point I want to mention that your error message is Some backend services are in UNHEALTHY state, not All backend services are in UNHEALTHY state. It indicates that only some of your backends are affected.
There might be tons of reasons: whether you are using GCP Ingress or Nginx Ingress, your externalTrafficPolicy configuration, whether you are using preemptible nodes, your livenessProbe and readinessProbe, health checks, etc.
Since in your scenario only a few backends are affected, with the current information the only thing I can do is suggest some debugging options.
Using $ kubectl get po -n <namespace>, check that all your pods are working correctly, that all containers within the pods are Ready and that the pod status is Running. If needed, check the logs of a suspicious pod with $ kubectl logs <podname> -c <containerName>. In general you should check all pods the load balancer is pointing to,
Confirm that livenessProbe and readinessProbe are configured properly and that they return HTTP 200 (see the probe sketch after this list),
Describe your ingress $ kubectl describe ingress <yourIngressName> and check backends,
Check if you've configured your health checks properly according to GKE Ingress for HTTP(S) Load Balancing - Health Checks guide.
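As a minimal sketch of what such probes can look like, assuming the application serves HTTP 200 on /healthz on port 8080 (both the path and the port are placeholders, adjust them to your app). GKE can infer the ingress health check from the readinessProbe, so the probe endpoint returning 200 matters for the backend state. This snippet goes under the container spec of the deployment:

# hypothetical probe configuration; /healthz and 8080 are assumptions
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20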
If you still cannot solve this issue with the above debugging options, please provide more details about your environment, with logs (without private information).
Useful links:
kubernetes unhealthy ingress backend
GKE Ingress shows unhealthy backend services
In GKE you can define a BackendConfig to set up custom health checks. You can configure this, as described in the link below, to bring the ingress backend into a HEALTHY state.
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#direct_health
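A minimal sketch of that setup, assuming the app answers with HTTP 200 on /healthz on port 8080; the names my-backendconfig, my-service, my-app and the path/port are placeholders to adapt:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz    # endpoint that must return HTTP 200
    port: 8080
    checkIntervalSec: 15
    timeoutSec: 5
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'   # attach the BackendConfig
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080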
If you have kubectl access to your pods, you can run kubectl get pod, and then kubectl logs -f <pod-name>.
Review the logs and find the error(s).
I'm goofing around with K8s for my master's thesis at the moment. For this I'm spinning up a K8s cluster with the help of KinD. I have also developed a small Flask REST API which echoes an ENV var.
Now I'm starting 3 services which hold a number of pods of the Flask app, and they are calling each other. For better understanding I have a hello svc, a world svc and a world2 svc.
So far so good.
I have successfully deployed them and now I want to port-forward the hello svc.
kubectl --namespace test port-forward svc/hello 30000
This works fine, but as soon as I start my JMeter application to test the load balancing features, something odd happens.
As you can see in the Grafana dashboard, the other services are happily load balancing the traffic, but the svc which is port-forwarded is sending all of its traffic to one hello pod.
This is my deployment:
deployment.yml
Am I missing something? Or did I deploy my application wrong?
Thanks in advance!
Port-forward allows the use of services for convenience purposes only. Behind the scenes it connects to a single pod directly, and the connection will be dropped should this pod die. There is no load balancing in port-forward: one pod selected by the service is chosen and all traffic is forwarded there for the entire lifetime of the port-forward command. I would suggest using a NodePort type service if you need to test load balancing via JMeter from outside the Kubernetes cluster.
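As a rough sketch of such a NodePort service for the hello deployment; the selector app: hello and port 5000 are assumptions based on a typical Flask setup, adjust them to your manifests:

apiVersion: v1
kind: Service
metadata:
  name: hello-nodeport
  namespace: test
spec:
  type: NodePort
  selector:
    app: hello            # must match the labels of the hello pods
  ports:
  - port: 5000            # service port inside the cluster
    targetPort: 5000      # container port of the Flask app
    nodePort: 30000       # opened on every node, must be in the 30000-32767 range

With KinD you additionally need an extraPortMappings entry in the cluster config so that the node port is reachable from the host where JMeter runs.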
For all those who are interested, I found a workaround which is also closer to production.
First of all I installed MetalLB: https://mauilion.dev/posts/kind-metallb/
With this load balancer I declared an IP range which is the same as the one of my nodes.
The service which I am exposing also received type: LoadBalancer; with this, Grafana is now showing an equal distribution of requests.
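Roughly, the setup can look like the sketch below. The address range is only an example that happens to match the default kind Docker network (check yours with docker network inspect kind), and newer MetalLB releases use IPAddressPool resources instead of this ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.255.200-172.18.255.250   # assumption: adjust to your kind network

kubectl --namespace test patch svc hello -p '{"spec": {"type": "LoadBalancer"}}'
kubectl --namespace test get svc hello     # EXTERNAL-IP is now assigned from the MetalLB pool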
I am new to K8s and I am trying to migrate my service (which currently utilizes docker-compose.yml) to k8s. My service
deploys zipkin and elasticsearch
and these can be accessed at 'localhost:9411' and 'localhost:9200' respectively.
The most commonly used solution I found online was 'kompose', and I tried to run:
1. kompose up
2. kompose convert
kubectl apply -f *****-deployment.yaml, ****-service.yaml
Once I finish this, I run kubectl get pods and I can see my deployments, but elasticsearch and zipkin are no longer responsive on their respective localhost ports.
Output of 'kubectl get pods'
Output of 'docker ps'
Output of curl http://localhost:9200
Can someone tell me why this is happening and how to debug?
It is solved now; all I had to do was port forwarding.
kubectl port-forward zipkin-774cc77659-g929n 9411:9411
Thanks,
By default your service is exposed as ClusterIP; in this case your service will be accessible only from within your cluster.
You can use port forwarding ("With this connection in place you can use your local workstation to debug your application that is running in the pod"), as described in the answer above.
Another approach is to use other "service types" like NodePort.
You can find more information here Publishing services (ServiceTypes)
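As a sketch, and assuming kompose created services named zipkin and elasticsearch (check with kubectl get svc), you could switch them to NodePort and then reach them on <node-ip>:<node-port> instead of localhost:

kubectl patch svc zipkin -p '{"spec": {"type": "NodePort"}}'
kubectl patch svc elasticsearch -p '{"spec": {"type": "NodePort"}}'
kubectl get svc        # the PORT(S) column shows the allocated node ports, e.g. 9411:31xxx/TCP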
I've been learning Kubernetes recently and just came across this small issue. For some sanity checks, here is the functionality of my grpc app running locally:
> docker run -p 8080:8080 -it olamai/simulation:0.0.1
< omitted logs >
> curl localhost:8080/v1/todo/all
{"api":"v1","toDos":[{}]}
So it works! All I want to do now is deploy it in Minikube and expose the port so I can make calls to it. My end goal is to deploy it to a GKE or Azure cluster and make calls to it from there (again, just to learn and get the hang of everything.)
Here is the yaml I'm using to deploy to minikube
And this is what I run to deploy it on minikube
> kubectl create -f deployment.yaml
I then run this to get the url
> minikube service sim-service --url
http://192.168.99.100:30588
But this is what happens when I make a call to it
> curl http://192.168.99.100:30588/v1/todo/all
curl: (7) Failed to connect to 192.168.99.100 port 30588: Connection refused
What am I doing wrong here?
EDIT: I figured it out, and you should be able to see the update in the linked file. I had pull policy set to Never so it was out of date 🤦
I have a new question now... I'm now able to just create the deployment in minikube (no NodePort) and still make calls to the api... shouldn't the deployment need a NodePort service to expose ports?
I checked your yaml file and it works just fine. But I noticed that you put 2 types for your service, LoadBalancer and also NodePort, which is not needed.
If you check the definition of LoadBalancer in this documentation, you will see:
LoadBalancer: Exposes the service externally using a cloud provider’s
load balancer. NodePort and ClusterIP services, to which the external
load balancer will route, are automatically created.
As an answer to your next question: you probably put type: LoadBalancer in your deployment yaml file, and that's why you are able to see a NodePort anyway.
If you put type: ClusterIP in your yaml, then the service will be exposed only within the cluster, and you won't be able to reach your service from outside the cluster.
From same documentation:
ClusterIP: Exposes the service on a cluster-internal IP. Choosing this
value makes the service only reachable from within the cluster. This
is the default ServiceType
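To illustrate the point, a service manifest only needs a single type; with type: LoadBalancer the node port is allocated automatically. The name sim-service and port 8080 are taken from the question, while the selector app: sim is a placeholder that must match your deployment's pod labels:

apiVersion: v1
kind: Service
metadata:
  name: sim-service
spec:
  type: LoadBalancer     # a NodePort and ClusterIP are created behind this automatically
  selector:
    app: sim             # placeholder: must match the pod labels of your deployment
  ports:
  - port: 8080
    targetPort: 8080

On Minikube the LoadBalancer external IP stays pending unless you run minikube tunnel, but the automatically allocated node port still works, which is why minikube service sim-service --url returns an address.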
I have started with Kubernetes and followed this link to get the response they mention. I followed the exact steps, but when I try to open the port I get the following error:
How can I solve this issue? I have tried adding the IP address and port in the browser proxy.
Can anyone help me on this?
Here is the service image: my service image
List of pods: Kubectl Pods
List of kubectl deployments: Deployment List
I believe you're using bare metal (a simple laptop) to deploy your service.
If you look at my-service, it is in a pending state and it is of type LoadBalancer. The LoadBalancer type is supported only by cloud providers like AWS, Azure and Google Cloud. Hence you are not able to access anything.
I suggest you follow this tutorial here, which allows you to deploy nginx as a pod, deploy a service around it, and expose that service as a NodePort (without a load balancer) so it can be accessed from outside.
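A rough sketch of that approach, using the usual nginx example (replace the image and port with your own application as needed):

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=NodePort --port=80
kubectl get svc nginx               # note the allocated node port, e.g. 80:31xxx/TCP
curl http://<node-ip>:<node-port>   # node IP from `kubectl get nodes -o wide`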