AKS load balancer service accessible on wrong port - kubernetes

I have a simple ASP.NET core Web API. It works locally. I deployed it in Azure AKS using the following yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sa-be
spec:
  selector:
    matchLabels:
      name: sa-be
  template:
    metadata:
      labels:
        name: sa-be
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: sa-be
          image: francotiveron/anapi:latest
          resources:
            limits:
              memory: "64Mi"
              cpu: "250m"
---
apiVersion: v1
kind: Service
metadata:
  name: sa-be-s
spec:
  type: LoadBalancer
  selector:
    name: sa-be
  ports:
    - port: 8080
      targetPort: 80
The result is:
> kubectl get service sa-be-s
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
sa-be-s   LoadBalancer   10.0.157.200   20.53.188.247   8080:32533/TCP   4h55m
> kubectl describe service sa-be-s
Name: sa-be-s
Namespace: default
Labels: <none>
Annotations: <none>
Selector: name=sa-be
Type: LoadBalancer
IP Families: <none>
IP: 10.0.157.200
IPs: <none>
LoadBalancer Ingress: 20.53.188.247
Port: <unset> 8080/TCP
TargetPort: 80/TCP
NodePort: <unset> 32533/TCP
Endpoints: 10.244.2.5:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I expected to reach the Web API at http://20.53.188.247:32533/; instead it is reachable only at http://20.53.188.247:8080/.
Can someone explain:
Is this the expected behaviour?
If yes, what is the use of the NodePort (32533)?

Yes, this is expected.
It is explained in Kubernetes Service Ports - please read the full article to understand what is going on in the background.
The LoadBalancer part:
apiVersion: v1
kind: Service
metadata:
  name: sa-be-s
spec:
  type: LoadBalancer
  selector:
    name: sa-be
  ports:
    - port: 8080
      targetPort: 80
port is the port the cloud load balancer will listen on (8080 in our
example) and targetPort is the port the application is listening on in
the Pods/containers. Kubernetes works with your cloud’s APIs to create
a load balancer and everything needed to get traffic hitting the load
balancer on port 8080 all the way back to the Pods/containers in your
cluster listening on targetPort 80.
Now the main part:
Behind the scenes, many implementations create NodePorts to glue the cloud load balancer to the cluster. The traffic flow is usually: client -> cloud load balancer (port 8080) -> NodePort (32533) on one of the cluster nodes -> Pod (targetPort 80).
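In other words, the NodePort (32533 here) is the port the Azure load balancer forwards to on every node; clients are expected to use the Service port (8080) on the external IP, not the NodePort. A rough sketch of how you could inspect each hop (the node IP below is a placeholder, and in AKS the NodePort is typically only reachable from inside the virtual network or from the load balancer itself):
# Client-facing entry point: external IP + Service port
curl http://20.53.188.247:8080/

# What the load balancer actually targets: any node's IP + the NodePort
# (placeholder node IP; usually not reachable directly from the internet)
curl http://<node-internal-ip>:32533/

# Each node then forwards on to the Pod endpoint on targetPort 80
kubectl get endpoints sa-be-s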

Related

Service is incorrectly selecting Pod listening on some different port

I tried the Service definition example from here.
So, I created the below Service:
apiVersion: v1
kind: Service
metadata:
  name: service-simple-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
And then, to test the concept, I created the below Pod:
apiVersion: v1
kind: Pod
metadata:
  name: service-simple-service-pod
  labels:
    app: MyApp
spec:
  containers:
    - name: service-simple-service-pod-container-1
      image: nginx:alpine
      ports:
        - containerPort: 9376
And I can see that a new Endpoint for this Pod is created, so all good till now; below is the output:
C:\Users>kubectl describe service/service-simple-service
Name: service-simple-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=MyApp
Type: ClusterIP
IP: 10.98.246.70
Port: <unset> 80/TCP
TargetPort: 9376/TCP
Endpoints: 10.244.0.8:9376
Session Affinity: None
Events: <none>
Then, to test the negative case, I created the below Pod:
apiVersion: v1
kind: Pod
metadata:
  name: service-simple-service-pod-nouse
  labels:
    app: MyApp
spec:
  containers:
    - name: service-simple-service-pod-nouse-container-1
      image: nginx:alpine
      ports:
        - containerPort: 9378
But to my surprise this Pod was also picked up:
C:\Users>kubectl describe service/service-simple-service
Name: service-simple-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=MyApp
Type: ClusterIP
IP: 10.98.246.70
Port: <unset> 80/TCP
TargetPort: 9376/TCP
Endpoints: 10.244.0.10:9376,10.244.0.8:9376
Session Affinity: None
Events: <none>
My understanding of the Service I created above was that Kubernetes would look for any Pod having the label app: MyApp and listening on port 9376, so my expectation was that since this Pod is listening on port 9378 it would not be picked up. So my question is: why was "service-simple-service-pod-nouse" picked up?
If someone says that my understanding was incorrect and a Service selects Pods based on labels only, then my question is: since the "service-simple-service-pod-nouse" Pod is listening on port 9378, how can the "service-simple-service" Service send traffic to this Pod?
A Service will pick up all the Pods that match its label selector. The service-simple-service Service selects every Pod labeled app: MyApp, because that is what you declare in the Service's selector (app: MyApp). This is the normal and expected behaviour of label selectors; see the official Kubernetes docs.
apiVersion: v1
kind: Service
metadata:
  name: service-simple-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
Update
Basically, a Service receives requests and forwards the traffic to the Pods that match its label selector. When a Service selects a Pod, an Endpoint is created for that Pod; when traffic comes to the Service, it is sent to one of those Endpoints (which is basically one of the Pods). The containerPort is simply the port inside the Pod on which the container is listening; it plays no role in how the Service selects Pods.
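A quick way to see this in practice (a sketch, assuming the objects above are deployed as shown) is to list the Endpoints object the Service maintains; every Pod matching the label selector appears paired with the Service's targetPort, regardless of what containerPort the Pod declared:
# Shows the Pod IPs the Service will forward to, all paired with targetPort 9376
kubectl get endpoints service-simple-service

# Inspect where each endpoint address comes from
kubectl describe endpoints service-simple-service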

Unable to forward traffic using NodePort

I have an application running inside a minikube K8s cluster. It's a simple REST endpoint. The issue is that after deployment I am unable to access the application from my local computer using the http://{node ip}:{node port} endpoint.
However, if I do:
kubectl port-forward (actual pod name) 8000:8000
The application becomes accessible at: 127.0.0.1:8000 from my local desktop.
Is this the right way? I believe it isn't, as I am forwarding traffic directly to the Pod, and this port forwarding won't survive once the Pod is deleted.
What am I missing here, and what is the right way to resolve this?
I have also configured a NodePort service, which should handle this, but I am afraid it doesn't seem to be working:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rest-api
  name: rest-api-np
  namespace: rest-api-namespace
spec:
  type: NodePort
  ports:
    - port: 8000
  selector:
    app: rest-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: rest-api
  name: rest-api-deployment
  namespace: rest-api-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-api
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: rest-api
    spec:
      containers:
        - image: oneImage:latest
          name: rest-api
You are having issues because your Service is placed in the default namespace while your Deployment is in the rest-api-namespace namespace.
I deployed your yaml files, and when I describe the Service there are no endpoints:
➜ k describe svc rest-api-np
Name: rest-api-np
Namespace: default
Labels: app=rest-api
Annotations: <none>
Selector: app=rest-api
Type: NodePort
IP: 10.100.111.228
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
NodePort: <unset> 31668/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
The solution is to create the Service in the same namespace as the Deployment. Once you do that, an IP address and port will appear in the Endpoints field:
➜ k describe svc -n rest-api-namespace rest-api-np
Name: rest-api-np
Namespace: rest-api-namespace
Labels: app=rest-api
Annotations: <none>
Selector: app=rest-api
Type: NodePort
IP: 10.99.49.24
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
NodePort: <unset> 32116/TCP
Endpoints: 172.18.0.3:8000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
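A quick check (a sketch, assuming the Service has been recreated in rest-api-namespace as above) is to list the Endpoints directly; if the selector and namespace are right, the Pod IP should show up here:
kubectl get endpoints -n rest-api-namespace rest-api-np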
An alternative way is to add the endpoints manually:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service # please note that the Endpoints object and the Service need to have the same name
subsets:
  - addresses:
      - ip: 192.0.2.42 # ip of the pod
    ports:
      - port: 8000
Since you can do port forwarding, the REST API itself is running properly; once the Service is correctly connected to your Deployment, it can be reached in the following way.
First find out the minikube IP:
minikube ip
Then the node port of your service:
kubectl get service -n rest-api-namespace rest-api-np
Once you have these two details, just open http://(minikube-ip):(node-port)
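Equivalently (assuming the Service ends up in rest-api-namespace as intended), minikube can resolve both pieces for you and print the full URL:
minikube service rest-api-np -n rest-api-namespace --url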

How to deploy custom nginx app on kubernetes?

I want to deploy a custom nginx app on my Kubernetes cluster.
I have three Raspberry Pis in a cluster. My deployment file looks as follows:
kubepodDeploy.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
        - name: my-nginx
          image: privateRepo/my-nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      protocol: TCP
      name: https
  selector:
    run: my-nginx
How can I deploy it so that I can access my app by IP address? Which service type do I need?
My service details are:
kubectl describe service my-nginx
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Annotations: Selector: run=my-nginx
Type: NodePort
IP: 10.99.107.194
Port: http 8080/TCP
TargetPort: 80/TCP
NodePort: http 30488/TCP
Endpoints: 10.32.0.4:80,10.32.0.5:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32430/TCP
Endpoints: 10.32.0.4:443,10.32.0.5:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
You cannot access the application on ipaddress:8080 without putting a proxy server in front or changing iptables rules (not a good idea). The NodePort service type always exposes the service on a port in the 30000-32767 range.
So, at any point, your service will be reachable on ipaddress:some_higher_port.
You can run a proxy in front which redirects traffic to the node port; since 8080 is your requirement, run the proxy server on port 8080 as well.
Just to add, the proxy server will not be part of the Kubernetes cluster.
If you are on a cloud provider, consider using a LoadBalancer service and accessing your app via a DNS name.
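If a stable URL is all you need, one option (a sketch, not part of the original answer) is to pin the NodePort explicitly so it does not change across redeployments; the value just has to stay inside the default 30000-32767 range:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 8080
      targetPort: 80
      nodePort: 30080   # fixed, so the app is always at http://<node-ip>:30080
      protocol: TCP
  selector:
    run: my-nginx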

How do I create a LoadBalancer service over Pods created by a ReplicaSet/Deployment

I'm using a ReplicaSet to manage my pods and I'm trying to expose these pods with a service. The Pods created by a ReplicaSet have randomized names.
NAME           READY   STATUS    RESTARTS   AGE
master         2/2     Running   0          20m
worker-4szkz   2/2     Running   0          21m
worker-hwnzt   2/2     Running   0          21m
I'm trying to expose these Pods with a Service, since some policies prevent me from using hostNetwork=true. I'm able to expose them by creating a NodePort service for each Pod with kubectl expose pod worker-xxxxx --type=NodePort.
This is clearly not a flexible way. I wonder how to create a Service (LoadBalancer type maybe?) to access all the replicas in my ReplicaSet dynamically. If that comes with a Deployment, that would be perfect too.
Thanks for any help and advice!
Edit:
I put a label on my ReplicaSet and created a NodePort-type Service called worker selecting that label, but I'm not able to ping worker from any of my pods. What's the correct way of doing this?
Below is what kubectl describe service worker gives. As the Endpoints show, the pods are picked up.
Name: worker
Namespace: default
Annotations: <none>
Selector: tag=worker
Type: NodePort
IP: 10.106.45.174
Port: port1 29999/TCP
TargetPort: 29999/TCP
NodePort: port1 31934/TCP
Endpoints: 10.32.0.3:29999,10.40.0.2:29999
Port: port2 29996/TCP
TargetPort: 29996/TCP
NodePort: port2 31881/TCP
Endpoints: 10.32.0.3:29996,10.40.0.2:29996
Port: port3 30001/TCP
TargetPort: 30001/TCP
NodePort: port3 31877/TCP
Endpoints: 10.32.0.3:30001,10.40.0.2:30001
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I believe you can optimize this a bit by using a Deployment instead of a ReplicaSet (this is now the standard way), i.e. you could have a Deployment as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
Then your service to match this would be:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  # This is the important part, as this is what is used to route to
  # the pods created by your deployment
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
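The Service above is a ClusterIP service, so it is only reachable from inside the cluster (e.g. at nginx-service:80 from other Pods). Since the question asked about external access, a minimal variation (my addition, matching the question's LoadBalancer idea rather than part of the original answer) would be to set the type explicitly; on a cloud provider this provisions an external load balancer, while NodePort works anywhere:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer   # or NodePort if no cloud load balancer is available
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80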

Access Minikube Loadbalancer Service From Host Machine

I am trying to learn how to use Kubernetes with Minikube and have the following deployment and service:
---
kind: Service
apiVersion: v1
metadata:
  name: exampleservice
spec:
  selector:
    app: myapp
  ports:
    - protocol: "TCP"
      # Port accessible inside cluster
      port: 8081
      # Port to forward to inside the pod
      targetPort: 8080
      # Port accessible outside cluster
      nodePort: 30002
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myappdeployment
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: tutum/hello-world
          ports:
            - containerPort: 8080
I expect to be able to hit this service from my local machine at
http://192.168.64.2:30002
as given by the command minikube service exampleservice --url, but when I try to access this from the browser I get a "site cannot be reached" error.
Some information that may help debugging:
kubectl get services --all-namespaces:
NAMESPACE     NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       exampleservice         LoadBalancer   10.104.248.158   <pending>     8081:30002/TCP           26m
default       kubernetes             ClusterIP      10.96.0.1        <none>        443/TCP                  2h
default       user-service-service   LoadBalancer   10.110.181.202   <pending>     8080:30001/TCP           42m
kube-system   kube-dns               ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   2h
kube-system   kubernetes-dashboard   ClusterIP      10.110.65.24     <none>        80/TCP                   2h
I am running minikube on OSX.
This is expected.
Do note that the LoadBalancer type relies on the cloud provider to create an external load balancer, such as an ALB/NLB in AWS, and something similar in GCP/Azure etc.
Update the service as shown below. Here I assume 192.168.64.2 is your minikube IP; if not, replace it with the output of minikube ip to make it work.
kind: Service
apiVersion: v1
metadata:
  name: exampleservice
spec:
  selector:
    app: myapp
  ports:
    - protocol: "TCP"
      # Port accessible inside cluster
      port: 8081
      # Port to forward to inside the pod
      targetPort: 80
      # Port accessible outside cluster
      nodePort: 30002
  type: LoadBalancer
  externalIPs:
    - 192.168.64.2
Now you can access your application at http://192.168.64.2:8081/
If you need to access the application on port 30002, you can use a NodePort service like this:
kind: Service
apiVersion: v1
metadata:
  name: exampleservice
spec:
  selector:
    app: myapp
  ports:
    - protocol: "TCP"
      # Port accessible inside cluster
      port: 8081
      # Port to forward to inside the pod
      targetPort: 80
      # Port accessible outside cluster
      nodePort: 30002
  type: NodePort
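With the NodePort variant (assuming the minikube IP is 192.168.64.2 as above), the app should then respond at the node port, for example:
curl http://$(minikube ip):30002/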
Your deployment file does not look correct to me. Delete it:
kubectl delete deploy/myappdeployment
Then use this to create it again:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myappdeployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  strategy: {}
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - image: tutum/hello-world
          name: myapp
          ports:
            - containerPort: 80
NOTE: Minikube supports LoadBalancer services (via minikube tunnel).
You can get the IP and port through which you can access the service by running:
minikube service kubia-http   #=> opens a browser with the IP and port
OR
minikube service kubia --url  #=> prints the IP and port in the terminal
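As a rough sketch of the tunnel approach (assuming the exampleservice LoadBalancer service from the question), running the tunnel in a separate terminal should make Kubernetes assign an external IP instead of leaving it <pending>:
# Terminal 1: keeps a route open so LoadBalancer services get an external IP
minikube tunnel

# Terminal 2: EXTERNAL-IP should now be populated; access the app on port 8081
kubectl get service exampleservice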