Service is incorrectly selecting Pod listening on some different port - kubernetes

I tried the Service definition example from here.
So I created the Service below:
apiVersion: v1
kind: Service
metadata:
  name: service-simple-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
Then, to test the concept, I created the Pod below:
apiVersion: v1
kind: Pod
metadata:
  name: service-simple-service-pod
  labels:
    app: MyApp
spec:
  containers:
  - name: service-simple-service-pod-container-1
    image: nginx:alpine
    ports:
    - containerPort: 9376
And I can see that a new Endpoint for this Pod was created, so all good so far. Below is the output:
C:\Users>kubectl describe service/service-simple-service
Name: service-simple-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=MyApp
Type: ClusterIP
IP: 10.98.246.70
Port: <unset> 80/TCP
TargetPort: 9376/TCP
Endpoints: 10.244.0.8:9376
Session Affinity: None
Events: <none>
Then, to test the negative case, I created the Pod below:
apiVersion: v1
kind: Pod
metadata:
  name: service-simple-service-pod-nouse
  labels:
    app: MyApp
spec:
  containers:
  - name: service-simple-service-pod-nouse-container-1
    image: nginx:alpine
    ports:
    - containerPort: 9378
But to my surprise, this Pod was also picked up:
C:\Users>kubectl describe service/service-simple-service
Name: service-simple-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=MyApp
Type: ClusterIP
IP: 10.98.246.70
Port: <unset> 80/TCP
TargetPort: 9376/TCP
Endpoints: 10.244.0.10:9376,10.244.0.8:9376
Session Affinity: None
Events: <none>
My understanding of the Service I created above was that the scheduler would look for any Pod that has the label app: MyApp and runs on port 9376, so my expectation was that since this Pod runs on port 9378, it would not be picked up. So my question is: why was "service-simple-service-pod-nouse" picked up?
If my understanding is incorrect and a Service selects Pods based only on labels, then my question is: since the "service-simple-service-pod-nouse" Pod is listening on port 9378, how can the "service-simple-service" Service send traffic to this Pod?

A Service will pick up all the Pods whose labels match the Service's label selector. The service-simple-service Service selects every Pod labeled app: MyApp, because that is what you specified in the Service selector (app: MyApp). This is the common and expected behavior of label selectors; see the official Kubernetes docs.
apiVersion: v1
kind: Service
metadata:
  name: service-simple-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
Update
Basically, a Service receives requests and forwards the traffic to the Pods matching its selector. For each matching Pod, the Service records an Endpoint: the Pod's IP combined with the Service's targetPort. When traffic reaches the Service, it is sent to one of those Endpoints, i.e. to one of the Pods. The port plays no role in Pod selection: the Service always forwards to targetPort (9376 here) regardless of which containerPort a Pod declares. So your second Pod is selected by its label, but traffic sent to it on port 9376 will simply be refused because nothing is listening there. The containerPort field is essentially informational.
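To see this in practice (a sketch, reusing the names from this question), list the Endpoints directly, then send a request to the Service from inside the cluster; requests are sent to one of the endpoints on port 9376 and are refused by any Pod that has nothing listening on that port:
kubectl get endpoints service-simple-service
# both Pod IPs appear, each mapped to targetPort 9376
kubectl run test --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://service-simple-service:80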

Related

Service Endpoint not created although container port is online

I have a simple Service that connects to a port from a container inside a pod.
All pretty straightforward.
This was working too, but out of nowhere the endpoint is no longer created for port 18080.
So I began to investigate and looked at this question, but nothing there helped.
The container is up, no errors/events, all green.
I can also make the request against the Pod's IP:18080 from an internal container, so the endpoint should be reachable for the service.
I can't see errors in:
journalctl -u snap.microk8s.daemon-*
I am using microk8s v1.20.
Where else can I debug this situation?
I am out of tools.
Service:
kind: Service
apiVersion: v1
metadata:
  name: aedi-service
spec:
  selector:
    app: server
  ports:
  - name: aedi-host-ws #-port
    port: 51056
    protocol: TCP
    targetPort: host-ws-port
  - name: aedi-http
    port: 18080
    protocol: TCP
    targetPort: fcs-http
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
  labels:
    app: server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
        srv: os-port-mapping
        name: dns-service
    spec:
      hostname: fcs
      containers:
      - name: fcs
        image: '{{$fcsImage}}'
        imagePullPolicy: {{$pullPolicy}}
        ports:
        - containerPort: 18080
Service Description:
Name: aedi-service
Namespace: fcs-only
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: fcs-only
meta.helm.sh/release-namespace: fcs-only
Selector: app=server
Type: ClusterIP
IP Families: <none>
IP: 10.152.183.247
IPs: 10.152.183.247
Port: aedi-host-ws 51056/TCP
TargetPort: host-ws-port/TCP
Endpoints: 10.1.116.70:51056
Port: aedi-http 18080/TCP
TargetPort: fcs-http/TCP
Endpoints:
Session Affinity: None
Events: <none>
Pod Info:
NAME READY STATUS RESTARTS AGE LABELS
server-deployment-76b5789754-q48xl 6/6 Running 0 23m app=server,name=dns-service,pod-template-hash=76b5789754,srv=os-port-mapping
kubectl get svc aedi-service -o wide:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
aedi-service ClusterIP 10.152.183.247 <none> 443/TCP,1884/TCP,51052/TCP,51051/TCP,51053/TCP,51056/TCP,18080/TCP,51055/TCP 34m app=server
Your Service spec refers to a port named "fcs-http", but that name is not declared in the Deployment. Try:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
...
        ports:
        - containerPort: 18080
          name: fcs-http # <-- add the name here
...
Wrong service configuration:
  - name: aedi-http
    port: 18080          # <-- this is the port the Service exposes; it is unrelated to the container port
    protocol: TCP
    targetPort: fcs-http # <-- this should be 18080, corresponding to the container port
If you still want to use a name instead of the port number, you must also define that name in the Deployment YAML, like below:
containers:
- name: fcs
  image: '{{$fcsImage}}'
  imagePullPolicy: {{$pullPolicy}}
  ports:
  - containerPort: 18080
    name: fcs-http
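As a quick check (a sketch reusing the names from this question), the 18080 port should gain an endpoint once the named container port exists:
kubectl apply -f deployment.yaml    # assuming the Deployment manifest lives in this file
kubectl get endpoints aedi-service
# the 18080 endpoint should now appear alongside the 51056 one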

Kubernetes deployment not publicly accessible

I'm trying to access a deployment on our Kubernetes cluster on Azure. This is an Azure Kubernetes Service (AKS) cluster. Here are the configuration files for the deployment and the service that should expose the deployment.
Configurations
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mira-api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mira-api
  template:
    metadata:
      labels:
        app: mira-api
    spec:
      containers:
      - name: backend
        image: registry.gitlab.com/izit/mira-backend
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
      imagePullSecrets:
      - name: regcred
apiVersion: v1
kind: Service
metadata:
  name: mira-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    run: mira-api
When I check the cluster after applying these configurations, I see the pod running correctly. The service is also created and has a public IP assigned.
After this deployment, I don't see any requests getting handled. I get an error message in my browser saying the site is inaccessible. Any ideas what I could have configured wrong?
Your service selector labels and pod labels do not match.
You have app: mira-api label in deployment's pod template but have run: mira-api in service's label selector.
Change your service selector label to match the pod label as follows.
apiVersion: v1
kind: Service
metadata:
  name: mira-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: mira-api
To check whether your service is selecting the backend pods, run the kubectl describe svc <svc name> command and see whether it has any Endpoints listed.
# kubectl describe svc postgres
Name: postgres
Namespace: default
Labels: app=postgres
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"postgres"},"name":"postgres","namespace":"default"},"s...
Selector: app=postgres
Type: ClusterIP
IP: 10.106.7.183
Port: default 5432/TCP
TargetPort: 5432/TCP
Endpoints: 10.244.2.117:5432 <------- This line
Session Affinity: None
Events: <none>
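Alternatively (a short sketch using the service name from this question), you can query the Endpoints object directly; it stays empty until the selector matches the pod labels:
kubectl get endpoints mira-api-service
# NAME               ENDPOINTS
# mira-api-service   <pod-ip>:8080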

How do I create a LoadBalancer service over Pods created by a ReplicaSet/Deployment

I'm using a ReplicaSet to manage my pods, and I'm trying to expose these pods with a service. The Pods created by a ReplicaSet have randomized names.
NAME READY STATUS RESTARTS AGE
master 2/2 Running 0 20m
worker-4szkz 2/2 Running 0 21m
worker-hwnzt 2/2 Running 0 21m
I'm trying to expose these Pods with a Service, since some policies prevent me from using hostNetwork=true. I'm able to expose them by creating a NodePort service for each Pod with kubectl expose pod worker-xxxxx --type=NodePort.
This is clearly not a flexible way. I wonder how to create a Service (LoadBalancer type, maybe?) to access all the replicas in my ReplicaSet dynamically. If that comes with a Deployment, that would be perfect too.
Thanks for any help and advice!
Edit:
I put a label on my ReplicaSet and created a NodePort-type Service called worker selecting that label. But I'm not able to ping worker from any of my pods. What's the correct way of doing this?
Below is the output of kubectl describe service worker. As the Endpoints show, the pods are picked up.
Name: worker
Namespace: default
Annotations: <none>
Selector: tag=worker
Type: NodePort
IP: 10.106.45.174
Port: port1 29999/TCP
TargetPort: 29999/TCP
NodePort: port1 31934/TCP
Endpoints: 10.32.0.3:29999,10.40.0.2:29999
Port: port2 29996/TCP
TargetPort: 29996/TCP
NodePort: port2 31881/TCP
Endpoints: 10.32.0.3:29996,10.40.0.2:29996
Port: port3 30001/TCP
TargetPort: 30001/TCP
NodePort: port3 31877/TCP
Endpoints: 10.32.0.3:30001,10.40.0.2:30001
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I believe you can improve this a bit by using a Deployment instead of a ReplicaSet (this is now the standard way), i.e. you could have a Deployment as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Then your service to match this would be:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  # This is the important part as this is what is used to route to
  # the pods created by your deployment
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
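If you want external access, as the question title asks, the same Service can simply be given type: LoadBalancer (a sketch; on clusters without a cloud load-balancer integration the external IP will stay pending). Also note that a Service's ClusterIP is virtual and does not answer ICMP ping, which is likely why pinging worker failed; test with an HTTP/TCP request against the port instead:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer   # provisioned by the cloud provider; stays <pending> elsewhere
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

# test from inside the cluster with a request, not ping:
kubectl run test --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://nginx-service:80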

Access Minikube Loadbalancer Service From Host Machine

I am trying to learn how to use Kubernetes with Minikube and have the following deployment and service:
---
kind: Service
apiVersion: v1
metadata:
  name: exampleservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: "TCP"
    # Port accessible inside cluster
    port: 8081
    # Port to forward to inside the pod
    targetPort: 8080
    # Port accessible outside cluster
    nodePort: 30002
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myappdeployment
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: tutum/hello-world
        ports:
        - containerPort: 8080
I expect to be able to hit this service from my local machine at
http://192.168.64.2:30002
as per the command minikube service exampleservice --url, but when I try to access this from the browser I get a "site cannot be reached" error.
Some information that may help debugging:
kubectl get services --all-namespaces:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default exampleservice LoadBalancer 10.104.248.158 <pending> 8081:30002/TCP 26m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h
default user-service-service LoadBalancer 10.110.181.202 <pending> 8080:30001/TCP 42m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 2h
kube-system kubernetes-dashboard ClusterIP 10.110.65.24 <none> 80/TCP 2h
I am running minikube on OSX.
This is expected.
Note that type LoadBalancer is meant for clouds to create an external load balancer, such as an ALB/NLB on AWS and the equivalents on GCP/Azure.
Update the service as shown below. Here I assume 192.168.64.2 is your minikube IP; if not, replace it with the output of minikube ip to make it work.
kind: Service
apiVersion: v1
metadata:
  name: exampleservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: "TCP"
    # Port accessible inside cluster
    port: 8081
    # Port to forward to inside the pod
    targetPort: 80
    # Port accessible outside cluster
    nodePort: 30002
  type: LoadBalancer
  externalIPs:
  - 192.168.64.2
Now you can access your application at http://192.168.64.2:8081/
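For example (assuming 192.168.64.2 really is your minikube IP; check with minikube ip):
curl http://192.168.64.2:8081/
# should return the tutum/hello-world demo page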
If you need to access the application on port 30002, you can use a NodePort service like this:
kind: Service
apiVersion: v1
metadata:
  name: exampleservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: "TCP"
    # Port accessible inside cluster
    port: 8081
    # Port to forward to inside the pod
    targetPort: 80
    # Port accessible outside cluster
    nodePort: 30002
  type: NodePort
Your deployment file does not look correct to me. Delete it:
kubectl delete deploy/myappdeployment
Then use this to create it again (note that the tutum/hello-world image listens on port 80, not 8080):
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myappdeployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  strategy: {}
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: tutum/hello-world
        name: myapp
        ports:
        - containerPort: 80
NOTE: Minikube supports LoadBalancer services (via minikube tunnel).
You can get the IP and port through which you
can access the service by running:
minikube service exampleservice        #=> opens a browser with the IP and port
OR
minikube service exampleservice --url  #=> prints the IP and port in the terminal
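A sketch of the tunnel approach, reusing the exampleservice name from this question: run minikube tunnel in a separate terminal, after which the LoadBalancer service's EXTERNAL-IP changes from <pending> to a reachable address:
minikube tunnel                  # keep this running in its own terminal
kubectl get svc exampleservice   # EXTERNAL-IP should no longer be <pending>
curl http://<EXTERNAL-IP>:8081/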

Kubernetes Load Balancer Type not responding to External IP Address

I've been trying to use the configuration below to expose my application on a public IP. This is being done on Azure. The public IP is generated, but when I browse to it I get nothing.
This is a Django app whose container runs on port 8000. The service runs on port 80 at the moment, but even if I configure the service to run on port 8000 it still doesn't work.
Is there something wrong with the way my service is defined?
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: hmweb
spec:
  ports:
  - port: 80
  selector:
    app: hmweb
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hmweb-deployment
  labels:
    app: hmweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hmweb
  template:
    metadata:
      labels:
        app: hmweb
    spec:
      containers:
      - name: hmweb
        image: nw_webimage
        envFrom:
        - configMapRef:
            name: new-config
        command: ["/bin/sh","-c"]
        args: ["gunicorn saleor.wsgi -w 2 -b 0.0.0.0:8000"]
        ports:
        - containerPort: 8000
      imagePullSecrets:
      - name: key
Output of kubectl describe service web (web being the name of the service):
Name: web
Namespace: default
Labels: app=hmweb
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"hmweb"},"name":"web","namespace":"default"},"spec":{"ports":[{"port":...
Selector: app=hmweb
Type: LoadBalancer
IP: 10.0.86.131
LoadBalancer Ingress: 13.69.127.16
Port: <unset> 80/TCP
TargetPort: 8000/TCP
NodePort: <unset> 31827/TCP
Endpoints: 10.244.0.112:8000
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 8m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 7m service-controller Ensured load balancer
The reason is that your service has two selector labels, app: hmweb and tier: frontend, while your deployment's pods carry only the single label app: hmweb. A service selects only pods that carry all of its selector labels, so when your service was created it could not find any matching pods and did not connect to any. Also, if your container runs on port 8000, you must set targetPort to the container port; otherwise targetPort defaults to the same value as port, i.e. port: 80 as defined in your service.
The correct YAML for your service is:
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: hmweb
spec:
  ports:
  - port: 80
    targetPort: 8000
    protocol: TCP
  selector:
    app: hmweb
  type: LoadBalancer
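To verify (a sketch using the names above), the Service should now report the pod as an endpoint on port 8000:
kubectl describe service web | grep -E 'Selector|TargetPort|Endpoints'
# Selector:    app=hmweb
# TargetPort:  8000/TCP
# Endpoints:   <pod-ip>:8000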
Hope this helps.