Unable to connect to external load balancer even after exposing service in kubernetes

I have the following deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: family-tree-deployment
  labels:
    app: familytree
spec:
  replicas: 1
  selector:
    matchLabels:
      app: familytree
  template:
    metadata:
      labels:
        app: familytree
    spec:
      containers:
      - name: familytree
        image: index.docker.io/koustubh/familytree:v1.0
        ports:
        - containerPort: 8080
I could successfully create the deployment using kubectl create -f deploy.yml
Now, I simply exposed this deployment with the following command
kubectl expose deployment family-tree-deployment --type=LoadBalancer --name=familytree-service
The service was successfully created.
The output is
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
familytree-service LoadBalancer 10.51.244.161 35.221.113.235 8080:30505/TCP 1h
$ kubectl describe svc familytree-service
Name: familytree-service
Namespace: default
Labels: app=familytree
Annotations: <none>
Selector: app=familytree
Type: LoadBalancer
IP: 10.51.244.161
LoadBalancer Ingress: 35.221.113.235
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30505/TCP
Endpoints: 10.48.4.7:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I could log in to the pod and I made sure the service is working.
However, when I use the external IP of the load balancer and query my API, the connection times out.
I have made sure the firewall allows port 8080.
My application is running on port 8080.

The generated Service object looks perfectly valid, so we can rule out a label issue or a missing public IP address. Besides, you can access your Service internally, which means the firewall rule was most likely applied incorrectly.
Please ensure you allow incoming traffic as follows:
from the internet to the load balancer on TCP port 8080
from the load balancer to all Kubernetes nodes on TCP port 30505
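If this is a GKE cluster, for example, a firewall rule for the NodePort could look roughly like the following. This is only a sketch; the rule name, network, and source range are placeholders for illustration and need to be adapted to your project.
# Hypothetical GKE/GCP example: allow traffic to the service's NodePort on the nodes.
# Rule name, network, and source range are placeholders; adapt them to your setup.
gcloud compute firewall-rules create allow-familytree-nodeport \
    --network default \
    --allow tcp:30505 \
    --source-ranges 0.0.0.0/0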

Related

Kubernetes service URL not responding to API call

I've been following multiple tutorials on how to deploy my (Spring Boot) api on Minikube. I already got it (user-service running on 8081) working in a docker container with an api gateway (port 8080) and eureka (port 8087), but for starters I just want it to run without those. Steps I took:
Push the docker container or image (I don't know the proper term) to Docker Hub.
Create a deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kwetter-service
spec:
  type: LoadBalancer
  selector:
    app: kwetter
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8081
      nodePort: 30070
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kwetter-deployment
  labels:
    app: kwetter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kwetter
  template:
    metadata:
      labels:
        app: kwetter
    spec:
      containers:
        - name: user-api
          image: cazhero/s6-kwetter-backend_user:latest
          ports:
            - containerPort: 8081 # the port it runs on when I manually start it up
kubectl apply -f deployment.yaml
minikube service kwetter-service
It takes me to an empty site with URL http://192.168.49.2:30070, which I thought I could use to make API calls to, but apparently not. How do I make API calls to my application running on minikube?
Get svc returns:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d4h
kwetter-service LoadBalancer 10.106.42.56 <pending> 8080:30070/TCP 4d
describe svc kwetter-service:
Name: kwetter-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=kwetter
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.106.42.56
IPs: 10.106.42.56
Port: <unset> 8080/TCP
TargetPort: 8081/TCP
NodePort: <unset> 30070/TCP
Endpoints: 172.17.0.4:8081
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 6s service-controller LoadBalancer -> NodePort
Made an Ingress in the yaml, used kubectl get ing:
NAME CLASS HOSTS ADDRESS PORTS AGE
kwetter-ingress <none> * 80 49m
To make some things clear:
You need to have pushed your docker image cazhero/s6-kwetter-backend_user:latest to docker hub, check that at https://hub.docker.com/, in your personal repository.
What's the output of minikube service kwetter-service, does it print the URL http://192.168.49.2:30070?
Make sure your pod is running correctly with the following minikube commands:
# check pod status
minikube kubectl -- get pods
# if the pod is running, check its container logs
minikube kubectl -- logs po/kwetter-deployment-xxxx-xxxx
I see that you are using a LoadBalancer service. A LoadBalancer service is the standard way to expose a service to the internet; with this method, each service gets its own IP address.
Check external IP
kubectl get svc
Use the external IP and the port number in this format to access the application:
http://REPLACE_WITH_EXTERNAL_IP:8080
If you want to access the application using the NodePort (30070), use a NodePort service instead of a LoadBalancer service.
Refer to this documentation for more information on accessing applications through NodePort and LoadBalancer services.
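For example, with the node IP and NodePort from your output above (the IP and port shown come from your setup and may differ):
# node IP of the minikube cluster
minikube ip
# call the API on the NodePort reported by kubectl get svc
curl http://192.168.49.2:30070/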

Unable to forward traffic using NodePort

I have an application running inside minikube K8 cluster. It’s a simple REST endpoint. The issue is after deployment I am unable to access that application from my local computer.
Using http://{node ip}:{node port} endpoint.
However, if I do:
kubectl port-forward (actual pod name) 8000:8000
The application becomes accessible at: 127.0.0.1:8000 from my local desktop.
Is this the right way?
I believe this isn't the right way, as I am directly forwarding my traffic to the pod, and this port forwarding won't remain once the pod is deleted.
What am I missing here and what is the right way to resolve this?
I have also configured a NodePort service, which should handle this but I am afraid it doesn’t seem to be working:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rest-api
  name: rest-api-np
  namespace: rest-api-namespace
spec:
  type: NodePort
  ports:
    - port: 8000
  selector:
    app: rest-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: rest-api
  name: rest-api-deployment
  namespace: rest-api-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-api
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: rest-api
    spec:
      containers:
        - image: oneImage:latest
          name: rest-api
You are having issues because your service is placed in the default namespace while your deployment is in the rest-api-namespace namespace.
I have deployed your yaml files, and when describing the service there were no endpoints:
➜ k describe svc rest-api-np
Name: rest-api-np
Namespace: default
Labels: app=rest-api
Annotations: <none>
Selector: app=rest-api
Type: NodePort
IP: 10.100.111.228
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
NodePort: <unset> 31668/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
The solution is to create the service in the same namespace. Once you do that, an IP address and port will appear in the Endpoints field:
➜ k describe svc -n rest-api-namespace rest-api-np
Name: rest-api-np
Namespace: rest-api-namespace
Labels: app=rest-api
Annotations: <none>
Selector: app=rest-api
Type: NodePort
IP: 10.99.49.24
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
NodePort: <unset> 32116/TCP
Endpoints: 172.18.0.3:8000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
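To double-check, you can also query the Endpoints object directly (a quick verification, assuming the service was recreated in rest-api-namespace as above):
# the Endpoints object should now list the pod IP and port
kubectl get endpoints rest-api-np -n rest-api-namespace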
An alternative way is to add the endpoints manually:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service # please note that the Endpoints object and the Service need to have the same name
subsets:
  - addresses:
      - ip: 192.0.2.42 # ip of the pod
    ports:
      - port: 8000
Since you can do port forwarding, the rest api service is properly connected to your deployment. In that case the service can be reached in the following way.
First find out the minikube IP:
minikube ip
Then find the node port of your service:
kubectl get service rest-api-np -n rest-api-namespace
Once you have these two details, just open http://(minikube-ip):(node-port)
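Putting the two together, it looks roughly like this (the IP and NodePort below are example values taken from the output above; yours will differ):
minikube ip
# e.g. prints 192.168.49.2
curl http://192.168.49.2:32116/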

How do I create a LoadBalancer service over Pods created by a ReplicaSet/Deployment

I'm using a ReplicaSet to manage my pods and I try to expose these pods with a service. The Pods created by a ReplicaSet have randomized names.
NAME READY STATUS RESTARTS AGE
master 2/2 Running 0 20m
worker-4szkz 2/2 Running 0 21m
worker-hwnzt 2/2 Running 0 21m
I try to expose these Pods with a Service, since some policies restrict me from using hostNetwork=true. I'm able to expose them by creating a NodePort service for each Pod with kubectl expose pod worker-xxxxx --type=NodePort.
This is clearly not a flexible way. I wonder how to create a Service (LoadBalancer type maybe?) to access to all the replicas dynamically in my ReplicaSet. If that comes with a Deployment that would be perfect too.
Thanks for any help and advice!
Edit:
I put a label on my ReplicaSet and a NodePort type Service called worker selecting that label. But I'm not able to ping worker in any of my pods. What's the correct way of doing this?
Below is what kubectl describe service worker gives. As the Endpoints show, the pods are picked up.
Name: worker
Namespace: default
Annotations: <none>
Selector: tag=worker
Type: NodePort
IP: 10.106.45.174
Port: port1 29999/TCP
TargetPort: 29999/TCP
NodePort: port1 31934/TCP
Endpoints: 10.32.0.3:29999,10.40.0.2:29999
Port: port2 29996/TCP
TargetPort: 29996/TCP
NodePort: port2 31881/TCP
Endpoints: 10.32.0.3:29996,10.40.0.2:29996
Port: port3 30001/TCP
TargetPort: 30001/TCP
NodePort: port3 31877/TCP
Endpoints: 10.32.0.3:30001,10.40.0.2:30001
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I believe you can improve this a bit by using a Deployment instead of a ReplicaSet (this is now the standard way), i.e. you could have a Deployment as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Then your service to match this would be:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  # This is the important part as this is what is used to route to
  # the pods created by your deployment
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
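Since the question asks for a LoadBalancer, the same Service can be exposed externally by adding a type field. This is a minimal sketch; on a cluster without a cloud load-balancer integration the external IP will simply stay pending.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer   # ask the cloud provider for an external load balancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80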

Why is my kubectl loadbalancer targeting to a random port?

I have service and deployment kube config files like below.
Now, when I apply these two files, it creates a load balancer, but it targets a random port and not port 80.
I'm a newbie to EKS and tried different kube config files, but it still targets a random port.
service file:
apiVersion: v1
kind: Service
metadata:
  name: runners-test
  labels:
    app: runners-test
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: runners-test
  type: LoadBalancer
deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: runners-test
  labels:
    app: runners-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: runners-test
  template:
    metadata:
      labels:
        app: runners-test
    spec:
      containers:
      - name: runners-test
        image: mylocaldockerimage
        ports:
        - containerPort: 80
C02X67GOKL:terraform$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 443/TCP 8d
runners-test LoadBalancer 10.100.246.180 af3884a05ad7811e99b0e06a70e73221-192467907.us-west-2.elb.amazonaws.com 80:31038/TCP 43m
It's targeting a random port, 31038. When I connect to my pod and run ps -ef, I can see that the service is running on port 80.
As mentioned in the Kubernetes Service documentation, setting this type makes the underlying cloud provider assign a public IP address (or hostname) to your service and route traffic from your exposed port (which is 80 in your case) to the NodePort (31038) at the Kubernetes cluster level.
On cloud providers which support external load balancers, setting the type field to LoadBalancer provisions a load balancer for your Service.
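In other words, the random port is just the NodePort the ELB forwards to; clients talk to the load balancer on port 80. A quick check, using the ELB hostname from the kubectl get svc output above:
# the ELB listens on the service port (80), not on the NodePort (31038)
curl http://af3884a05ad7811e99b0e06a70e73221-192467907.us-west-2.elb.amazonaws.com/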

digitalocean kubernetes loadbalancer

I have deployed my app on the limited-availability Kubernetes cluster on DigitalOcean.
I have a Spring Boot app with a service exposed externally on port 31744 using a NodePort service config.
I created a LoadBalancer using the yaml config per the DO doc: https://www.digitalocean.com/docs/kubernetes/how-to/add-load-balancer/
However, I am not able to hook it up to my service. Can you advise how it can be done so I can access my service through the loadbalancer?
The following is my "kubectl get svc" output for my app service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-springboot NodePort 10.245.6.216 <none> 8080:31744/TCP 2d18h
kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 3d20h
sample-load-balancer LoadBalancer 10.245.53.168 58.183.251.550 80:30495/TCP 2m6s
The following is my loadbalancer.yaml:
apiVersion: v1
kind: Service
metadata:
  name: sample-load-balancer
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 31744
      name: http
My service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-springboot
  labels:
    app: my-springboot
    tier: backend
spec:
  type: NodePort
  ports:
    # the port that this service should serve on
    - port: 8080
  selector:
    app: my-springboot
    tier: backend
Thanks
To expose your service using a LoadBalancer instead of NodePort, you need to set the service type to LoadBalancer. So your new service config yaml will be:
apiVersion: v1
kind: Service
metadata:
  name: my-springboot
  labels:
    app: my-springboot
    tier: backend
spec:
  type: LoadBalancer
  ports:
    # the port that this service should serve on
    - port: 8080
  selector:
    app: my-springboot
    tier: backend
Once you apply the above service yaml file, you will get an external IP in kubectl get svc, which can be used to access the service from outside the Kubernetes cluster.
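For example (the EXTERNAL_IP value is a placeholder; DigitalOcean needs a minute or two to provision the load balancer before it appears):
kubectl apply -f service.yaml
kubectl get svc my-springboot
# once EXTERNAL-IP is no longer <pending>:
curl http://EXTERNAL_IP:8080/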