Kubernetes: The proxy server is refusing connections

I have just started with Kubernetes and followed this link, expecting the response it describes. I followed the exact steps, but when I try to open the port I get the following error:
How can I solve this issue? I have already tried adding the IP address and port to the browser's proxy settings.
Can anyone help me with this?
Here is the service screenshot: my service image
List of pods: Kubectl Pods
List of kubectl deployments: Deployment List

I believe you are using bare metal (a simple laptop) to deploy your service.
If you look at my-service, it is in the Pending state and is of type LoadBalancer. The LoadBalancer type is only supported on cloud providers such as AWS, Azure, and Google Cloud, which is why you are not able to access anything.
I suggest you follow this tutorial, which shows how to deploy nginx as a pod, create a service around it, and expose that service as a NodePort (without a load balancer) so that it can be accessed from outside.
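For example, a minimal sketch of that NodePort approach (the nginx deployment name and the placeholder node IP/port are mine, not from your setup):

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get service nginx                  # note the assigned node port, e.g. 80:3xxxx/TCP
curl http://<node-ip>:<node-port>          # reachable from the laptop without a cloud load balancer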

Related

kubernetes logs for service or deployment

I am having real trouble understanding how I am supposed to debug my current situation. I have followed the setup instructions from https://docs.substra.org/en/stable/contributing/getting-started.html#
There is a backend service which was created as a ClusterIP and therefore cannot be accessed from the host.
I created a load balancer for this purpose, using the command:
kubectl expose deployment deployment_name --port=8000 --target-port=8000 \
--name=lb_service --type=LoadBalancer
However, the attempt to access the backend service using the LoadBalancer Ingress IP and the NodePort port fails with a connection timeout. I would like to see the relevant logs to check where the problem occurred. However, kubectl logs apparently only shows logs for pods, whereas the load balancer, according to the kubectl expose command, is attached to the deployment. Therefore, I am not able to see any logs related either to the load balancer service or to the deployment.
When I looked at the pod which is supposed to be hosting the deployment, the log showed no errors.
Can someone point out where I should look for logs that can help debug this failed connectivity?
You probably need to look at the ingress logs; see this page from the documentation: https://kubernetes.github.io/ingress-nginx/troubleshooting/.
It is true that you can only get logs from pods. However, that is sufficient to see the relevant error messages: services do not run containers of their own, so everything the application logs ends up in the pods backing the deployment.
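For instance, a rough sketch of the commands to do that (the deployment name, label, and service name are placeholders; take the real ones from kubectl describe deployment):

kubectl get pods -l app=backend                          # pods backing the deployment (label is an assumption)
kubectl logs deployment/backend --tail=100               # logs from a pod owned by the deployment
kubectl logs -l app=backend --all-containers --tail=100  # logs from all matching pods
kubectl describe service lb-service                      # if Endpoints is empty, the selector matches no pods, which also causes timeouts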

services “kubernetes-dashboard”, can't access Kubernetes UI

I am deploying the Kubernetes UI using this command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
And it responds with "Unable to connect to the server: dial tcp 185.199.110.133:443: i/o timeout".
I am behind a proxy; how can I fix it?
None of the services that you deployed via the supplied URL have a type specified, which means they use the default service type, ClusterIP.
Services of type ClusterIP are only accessible from inside your Kubernetes cluster.
If you want the Dashboard to be accessible from outside your cluster, you will need a service of type NodePort. A NodePort service assigns a random high-numbered port on all your nodes, and your application, in this case the k8s dashboard, becomes reachable via ${ip-of-any-node}:${assigned-nodeport}.
For more information, please take a look at the official k8s documentation.
If your cluster is behind a proxy, also make sure that you can reach your cluster nodes' external IPs from wherever you are sending the request.
To find out which port number has been assigned to your NodePort service, use kubectl describe service ${servicename} or kubectl get service ${servicename} -o yaml.
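For example, a minimal sketch of switching the dashboard service to NodePort and reading back the assigned port (the namespace and service name below assume the standard recommended.yaml manifest):

kubectl -n kubernetes-dashboard patch service kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
kubectl -n kubernetes-dashboard get service kubernetes-dashboard
# PORT(S) shows something like 443:3xxxx/TCP; the dashboard is then at https://<ip-of-any-node>:<that-port>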

Kubernetes service with port-forward does not load balance

I'm playing around with K8s for my master's thesis at the moment. For this I'm spinning up a K8s cluster with the help of KinD. I have also developed a small Flask REST API which echoes an ENV var.
Now I'm starting 3 services, each holding a number of pods of the Flask app, and they call each other. For better understanding, I have a hello svc, a world svc, and a world2 svc.
So far so good.
I have successfully deployed them, and now I want to port-forward the hello svc:
kubectl --namespace test port-forward svc/hello 30000
This works fine, but as soon as I start my JMeter application to test the load-balancing features, something odd happens.
As you can see in the Grafana dashboard, the other services are happily load balancing the traffic, but the port-forwarded svc is sending all of its traffic to a single hello pod.
This is my deployment:
deployment.yml
Am I missing something? Or did I deploy my application wrong?
Thanks in advance!
Port-forward accepts a service name for convenience purposes only; behind the scenes it connects directly to a single pod. The connection will be dropped should this pod die. There is no load balancing in port-forward: one pod selected by the service is chosen, and all traffic is forwarded to it for the entire lifetime of the port-forward command. I would suggest using a NodePort-type service if you need to test load balancing via JMeter from outside the Kubernetes cluster.
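A minimal sketch of such a NodePort service as an alternative to the port-forward (the app: hello selector and port 5000 are assumptions and must match the labels and containerPort in your deployment.yml):

kubectl --namespace test apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: hello-nodeport
spec:
  type: NodePort
  selector:
    app: hello          # assumption: must match the pod labels in deployment.yml
  ports:
  - port: 5000          # assumption: the Flask container port
    targetPort: 5000
    nodePort: 30000     # optional; must fall in the 30000-32767 range
EOF

Traffic hitting any node on port 30000 then goes through kube-proxy, which spreads requests across all hello pods instead of pinning them to one. With KinD you may additionally need an extraPortMappings entry in the cluster config to reach that node port from the host.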
For all those who are interested: I found a workaround which is also closer to production.
First of all, I installed MetalLB: https://mauilion.dev/posts/kind-metallb/
With this load balancer I declared an IP range on the same network as my nodes.
The service I am exposing was also given type: LoadBalancer, and with this, Grafana now shows an equal distribution of requests.
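A rough sketch of the MetalLB layer-2 configuration that tutorial uses (the address range is an assumption; it has to be a free slice of your KinD docker network, which you can find with docker network inspect kind):

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.255.200-172.18.255.250   # assumption: adjust to your kind network
EOF

Changing the hello service to type: LoadBalancer then gives it an external IP from that pool, and requests to that IP are balanced across the pods. Note that newer MetalLB releases configure the same thing with IPAddressPool and L2Advertisement custom resources instead of this ConfigMap.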

Why can't access my gRPC REST service that is running in Minikube?

I've been learning Kubernetes recently and just came across this small issue. For some sanity checks, here is the functionality of my grpc app running locally:
> docker run -p 8080:8080 -it olamai/simulation:0.0.1
< omitted logs >
> curl localhost:8080/v1/todo/all
{"api":"v1","toDos":[{}]}
So it works! All I want to do now is deploy it in Minikube and expose the port so I can make calls to it. My end goal is to deploy it to a GKE or Azure cluster and make calls to it from there (again, just to learn and get the hang of everything.)
Here is the YAML I'm using to deploy to Minikube:
And this is what I run to deploy it on Minikube:
> kubectl create -f deployment.yaml
I then run this to get the URL:
> minikube service sim-service --url
http://192.168.99.100:30588
But this is what happens when I make a call to it
> curl http://192.168.99.100:30588/v1/todo/all
curl: (7) Failed to connect to 192.168.99.100 port 30588: Connection refused
What am I doing wrong here?
EDIT: I figured it out, and you should be able to see the update in the linked file. I had the image pull policy set to Never, so it was out of date 🤦
I have a new question now... I'm now able to just create the deployment in Minikube (no NodePort) and still make calls to the API... shouldn't the deployment need a NodePort service to expose ports?
I checked your yaml file and it works just fine. But I noticed that you put two types on your service, LoadBalancer and also NodePort, which is not needed.
If you check the definition of LoadBalancer in this documentation, you will see:
LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
As for your next question: you probably put type: LoadBalancer in your yaml file, which is why a NodePort is created and exposed for you anyway.
If you put type: ClusterIP in your yaml, the service will be exposed only within the cluster, and you won't be able to reach your service from outside the cluster.
From the same documentation:
ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
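As an illustration, a minimal sketch of a single LoadBalancer service for this case (the name, selector, and ports are assumptions based on the question, not the actual linked yaml):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: sim-service
spec:
  type: LoadBalancer    # a NodePort is created automatically; no second service type is needed
  selector:
    app: simulation     # assumption: must match the deployment's pod labels
  ports:
  - port: 8080
    targetPort: 8080
EOF

On Minikube there is no cloud load balancer, so the external IP stays pending, but the automatically created node port is exactly what minikube service sim-service --url returns.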