Unable to forward traffic using NodePort - kubernetes

I have an application running inside a minikube Kubernetes cluster. It's a simple REST endpoint. The issue is that after deployment I am unable to access the application from my local machine using the http://{node ip}:{node port} endpoint.
However, if I do:
kubectl port-forward (actual pod name) 8000:8000
The application becomes accessible at 127.0.0.1:8000 from my local desktop.
Is this the right way?
I believe it isn't, as I am forwarding traffic directly to the pod, and the port forwarding won't survive once the pod is deleted.
What am I missing here and what is the right way to resolve this?
I have also configured a NodePort service, which should handle this, but I'm afraid it doesn't seem to be working:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rest-api
  name: rest-api-np
  namespace: rest-api-namespace
spec:
  type: NodePort
  ports:
    - port: 8000
  selector:
    app: rest-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: rest-api
  name: rest-api-deployment
  namespace: rest-api-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-api
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: rest-api
    spec:
      containers:
        - image: oneImage:latest
          name: rest-api

You are having issues because your service is placed in the default namespace while your deployment is in the rest-api-namespace namespace.
I deployed your YAML files, and when I described the service there were no endpoints:
➜ k describe svc rest-api-np
Name: rest-api-np
Namespace: default
Labels: app=rest-api
Annotations: <none>
Selector: app=rest-api
Type: NodePort
IP: 10.100.111.228
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
NodePort: <unset> 31668/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
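A quick way to confirm a mismatch like this (a sketch, assuming the same kubectl context) is to list both objects across all namespaces and compare where they actually landed:
➜ k get svc,deployments --all-namespaces | grep rest-api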
The solution is to create the service in the same namespace as the deployment. Once you do that, an IP address and port will appear in the Endpoints field:
➜ k describe svc -n rest-api-namespace rest-api-np
Name: rest-api-np
Namespace: rest-api-namespace
Labels: app=rest-api
Annotations: <none>
Selector: app=rest-api
Type: NodePort
IP: 10.99.49.24
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
NodePort: <unset> 32116/TCP
Endpoints: 172.18.0.3:8000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
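For completeness, this is roughly what the corrected Service looks like; the only assumption here is that the pod actually serves on 8000, everything else comes from the original manifest:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rest-api
  name: rest-api-np
  namespace: rest-api-namespace # must match the deployment's namespace
spec:
  type: NodePort
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    app: rest-api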
An alternative way is to add the endpoints manually:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service # note that the Endpoints object and the Service need to have the same name
subsets:
  - addresses:
      - ip: 192.0.2.42 # IP of the pod
    ports:
      - port: 8000
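Note that manually managed endpoints only make sense for a Service without a selector; if the Service has one, the endpoints controller owns the Endpoints object and will overwrite manual entries. A minimal sketch of a matching selector-less Service:
apiVersion: v1
kind: Service
metadata:
  name: my-service # same name as the Endpoints object above
spec:
  ports:
    - port: 8000
      targetPort: 8000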

Since you can port-forward to the pod, the REST API is properly connected to your deployment. In that case the service can be reached the following way.
First find out the minikube IP:
minikube ip
Then find the node port of your service:
kubectl get service rest-api-np -n rest-api-namespace
Once you have these two details, just open http://(minikube-ip):(node-port)
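Alternatively, minikube can look up both values for you and print a ready-made URL (assuming the service lives in rest-api-namespace, as in the manifests above):
minikube service rest-api-np -n rest-api-namespace --url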

Related

AKS load balancer service accessible on wrong port

I have a simple ASP.NET Core Web API. It works locally. I deployed it in Azure AKS using the following YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sa-be
spec:
  selector:
    matchLabels:
      name: sa-be
  template:
    metadata:
      labels:
        name: sa-be
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: sa-be
          image: francotiveron/anapi:latest
          resources:
            limits:
              memory: "64Mi"
              cpu: "250m"
---
apiVersion: v1
kind: Service
metadata:
  name: sa-be-s
spec:
  type: LoadBalancer
  selector:
    name: sa-be
  ports:
    - port: 8080
      targetPort: 80
The result is:
> kubectl get service sa-be-s
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
sa-be-s   LoadBalancer   10.0.157.200   20.53.188.247   8080:32533/TCP   4h55m
> kubectl describe service sa-be-s
Name: sa-be-s
Namespace: default
Labels: <none>
Annotations: <none>
Selector: name=sa-be
Type: LoadBalancer
IP Families: <none>
IP: 10.0.157.200
IPs: <none>
LoadBalancer Ingress: 20.53.188.247
Port: <unset> 8080/TCP
TargetPort: 80/TCP
NodePort: <unset> 32533/TCP
Endpoints: 10.244.2.5:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I expected to reach the Web API at http://20.53.188.247:32533/; instead it is reachable only at http://20.53.188.247:8080/.
Can someone explain:
Is this the expected behaviour?
If yes, what is the use of the NodePort (32533)?
Yes, this is expected.
It is explained in Kubernetes Service Ports - please read the full article to understand what is going on in the background.
The LoadBalancer part:
apiVersion: v1
kind: Service
metadata:
  name: sa-be-s
spec:
  type: LoadBalancer
  selector:
    name: sa-be
  ports:
    - port: 8080
      targetPort: 80
port is the port the cloud load balancer will listen on (8080 in our example) and targetPort is the port the application is listening on in the Pods/containers. Kubernetes works with your cloud's APIs to create a load balancer and everything needed to get traffic hitting the load balancer on port 8080 all the way back to the Pods/containers in your cluster listening on targetPort 80.
Now the main part: behind the scenes, many implementations create NodePorts to glue the cloud load balancer to the cluster. The traffic flow is usually: client → load balancer (port 8080) → NodePort (32533) on one of the nodes → pod (targetPort 80). The NodePort is internal plumbing for the load balancer, which is why clients use port 8080 rather than 32533.
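To see the three ports side by side on the live object, you can read them straight off the Service spec (a sketch using the names from this question):
# port 8080       -> where the cloud load balancer listens (20.53.188.247:8080)
# nodePort 32533  -> where every node accepts the load balancer's traffic
# targetPort 80   -> where the container listens inside the pod
kubectl get svc sa-be-s -o jsonpath='{.spec.ports[0]}'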

Service is incorrectly selecting a Pod listening on a different port

I tried the Service definition example from here.
So, I created the Service below:
apiVersion: v1
kind: Service
metadata:
  name: service-simple-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
And then, to test the concept, I created the Pod below:
apiVersion: v1
kind: Pod
metadata:
  name: service-simple-service-pod
  labels:
    app: MyApp
spec:
  containers:
    - name: service-simple-service-pod-container-1
      image: nginx:alpine
      ports:
        - containerPort: 9376
And I can see that a new Endpoint for this Pod is created, so all good till now; below is the output:
C:\Users>kubectl describe service/service-simple-service
Name: service-simple-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=MyApp
Type: ClusterIP
IP: 10.98.246.70
Port: <unset> 80/TCP
TargetPort: 9376/TCP
Endpoints: 10.244.0.8:9376
Session Affinity: None
Events: <none>
Then, to test the negative case, I created the Pod below:
apiVersion: v1
kind: Pod
metadata:
  name: service-simple-service-pod-nouse
  labels:
    app: MyApp
spec:
  containers:
    - name: service-simple-service-pod-nouse-container-1
      image: nginx:alpine
      ports:
        - containerPort: 9378
But to my surprise this Pod was also picked up:
C:\Users>kubectl describe service/service-simple-service
Name: service-simple-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=MyApp
Type: ClusterIP
IP: 10.98.246.70
Port: <unset> 80/TCP
TargetPort: 9376/TCP
Endpoints: 10.244.0.10:9376,10.244.0.8:9376
Session Affinity: None
Events: <none>
My understanding of the Service I created above was that Kubernetes would look for any Pod having the label app: MyApp and running on port 9376, so my expectation was that since this Pod is running on port 9378, it would not be picked up. So my question is: why was "service-simple-service-pod-nouse" picked up?
If my understanding was incorrect and a Service selects Pods based on labels alone, then my question is: since the "service-simple-service-pod-nouse" Pod is listening on port 9378, how can the "service-simple-service" Service send traffic to this Pod?
A Service will pick all the pods that match its label selector. The service-simple-service service selects all pods labeled app: MyApp because that is what you specify in the service selector (app: MyApp). This is the common and expected behavior of label selectors; see the official k8s docs.
apiVersion: v1
kind: Service
metadata:
  name: service-simple-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
Update
Basically, a service receives requests and forwards the traffic to the pods that match its label selector. When a service selects a pod, it opens an endpoint for that pod; when traffic comes to the service, it is sent to one of those endpoints (which means it goes to a pod). The containerPort is simply the port inside the pod on which the container is listening. Ports play no part in selection: the service always forwards to its targetPort (9376 here), so if nothing in the selected pod listens on that port, the connection simply fails.
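You can watch this behaviour directly (a sketch; the throwaway busybox pod is only for testing). The Endpoints list shows every selected pod on port 9376, and a request through the service simply fails whenever the chosen pod has nothing listening on that port:
kubectl get endpoints service-simple-service
kubectl run tester --rm -it --image=busybox --restart=Never -- wget -qO- -T 2 http://service-simple-service:80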

How do I create a LoadBalancer service over Pods created by a ReplicaSet/Deployment

I'm using a ReplicaSet to manage my pods, and I'm trying to expose these pods with a Service. The Pods created by a ReplicaSet have randomized names:
NAME           READY   STATUS    RESTARTS   AGE
master         2/2     Running   0          20m
worker-4szkz   2/2     Running   0          21m
worker-hwnzt   2/2     Running   0          21m
I need to expose these Pods with a Service, since some policies restrict me from using hostNetwork=true. I'm able to expose them by creating a NodePort service for each Pod with kubectl expose pod worker-xxxxx --type=NodePort.
This is clearly not a flexible way. I wonder how to create a Service (maybe LoadBalancer type?) to access all the replicas in my ReplicaSet dynamically. If that comes with a Deployment, that would be perfect too.
Thanks for any help and advice!
Edit:
I put a label on my ReplicaSet and created a NodePort-type Service called worker selecting that label. But I'm not able to ping worker from any of my pods. What's the correct way of doing this?
Below is the output of kubectl describe service worker. As the Endpoints show, the pods are picked up.
Name: worker
Namespace: default
Annotations: <none>
Selector: tag=worker
Type: NodePort
IP: 10.106.45.174
Port: port1 29999/TCP
TargetPort: 29999/TCP
NodePort: port1 31934/TCP
Endpoints: 10.32.0.3:29999,10.40.0.2:29999
Port: port2 29996/TCP
TargetPort: 29996/TCP
NodePort: port2 31881/TCP
Endpoints: 10.32.0.3:29996,10.40.0.2:29996
Port: port3 30001/TCP
TargetPort: 30001/TCP
NodePort: port3 31877/TCP
Endpoints: 10.32.0.3:30001,10.40.0.2:30001
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I believe you can improve this a bit by using a Deployment instead of a ReplicaSet (this is now the standard way), i.e. you could have a deployment as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
Then your service to match this would be:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  # This is the important part as this is what is used to route to
  # the pods created by your deployment
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
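As a side note on your edit: not being able to ping the service is expected and doesn't mean it is broken. A Service's ClusterIP is a virtual IP implemented by kube-proxy and doesn't answer ICMP, so test it on the actual port instead, for example (a sketch against the nginx service above, using a throwaway busybox pod):
kubectl run tester --rm -it --image=busybox --restart=Never -- wget -qO- http://nginx-service:80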

DigitalOcean Kubernetes LoadBalancer

I have deployed my app on the limited available Kubernetes cluster on DigitalOcean.
I have a Spring Boot app with a service exposed on port 31744 for external access, using a NodePort service config.
I created a LoadBalancer using the YAML config per the DO doc: https://www.digitalocean.com/docs/kubernetes/how-to/add-load-balancer/
However, I am not able to hook it up to my service. Can you advise how this can be done so I can access my service through the load balancer?
The following is my "kubectl get svc" output for my app service:
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
my-springboot          NodePort       10.245.6.216    <none>           8080:31744/TCP   2d18h
kubernetes             ClusterIP      10.245.0.1      <none>           443/TCP          3d20h
sample-load-balancer   LoadBalancer   10.245.53.168   58.183.251.550   80:30495/TCP     2m6s
The following is my loadbalancer.yaml:
apiVersion: v1
kind: Service
metadata:
  name: sample-load-balancer
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 31744
      name: http
My service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-springboot
  labels:
    app: my-springboot
    tier: backend
spec:
  type: NodePort
  ports:
    # the port that this service should serve on
    - port: 8080
  selector:
    app: my-springboot
    tier: backend
Thanks
To expose your service using a LoadBalancer instead of a NodePort, set the service type to LoadBalancer. Your new service config YAML will be:
apiVersion: v1
kind: Service
metadata:
  name: my-springboot
  labels:
    app: my-springboot
    tier: backend
spec:
  type: LoadBalancer
  ports:
    # the port that this service should serve on
    - port: 8080
  selector:
    app: my-springboot
    tier: backend
Once you apply the above service YAML, you will get an external IP in kubectl get svc, which can be used to access the service from outside the Kubernetes cluster.
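The external IP usually takes a minute or two to move from <pending> to a real address while DigitalOcean provisions the load balancer; you can apply the file and watch for it like this:
kubectl apply -f service.yaml
kubectl get svc my-springboot --watch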

Unable to connect to external load balancer even after exposing service in kubernetes

I have the following deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: family-tree-deployment
  labels:
    app: familytree
spec:
  replicas: 1
  selector:
    matchLabels:
      app: familytree
  template:
    metadata:
      labels:
        app: familytree
    spec:
      containers:
        - name: familytree
          image: index.docker.io/koustubh/familytree:v1.0
          ports:
            - containerPort: 8080
I could successfully create the deployment using kubectl create -f deploy.yml
Now, I simply exposed this deployment with the following command
kubectl expose deployment family-tree-deployment --type=LoadBalancer --name=familytree-service
The service was successfully created.
The output is
$ kubectl get svc
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
familytree-service   LoadBalancer   10.51.244.161   35.221.113.235   8080:30505/TCP   1h
$ kubectl describe svc familytree-service
Name: familytree-service
Namespace: default
Labels: app=familytree
Annotations: <none>
Selector: app=familytree
Type: LoadBalancer
IP: 10.51.244.161
LoadBalancer Ingress: 35.221.113.235
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30505/TCP
Endpoints: 10.48.4.7:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I could log in to the pod and I made sure the service is working.
However, when I use the external IP of the load balancer and query my API, the connection times out.
I have made sure the firewall allows port 8080.
My application is running on port 8080
The generated Service object looks perfectly valid, so we can exclude a label issue or a missing public IP address. Besides, you can access your Service internally, which means the firewall rule was most likely applied incorrectly.
Please ensure you allow incoming traffic as follows:
- from the internet to the load balancer on TCP port 8080
- from the load balancer to all Kubernetes nodes on TCP port 30505
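For example, the IP ranges suggest this cluster runs on GCP/GKE; if so, a firewall rule along these lines (hypothetical rule name, a sketch only) would allow the second flow:
gcloud compute firewall-rules create allow-familytree-nodeport --allow=tcp:30505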