Kubernetes service port forwarding to multiple pods - kubernetes

I have configured a Kubernetes service and started 2 replicas/pods.
Here are the service settings and description.
apiVersion: v1
kind: Service
metadata:
  name: my-integration
spec:
  type: NodePort
  selector:
    camel.apache.org/integration: my-integration
  ports:
    - protocol: UDP
      port: 8444
      targetPort: 8444
      nodePort: 30006
kubectl describe service my-service
Detail:
Name: my-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: camel.apache.org/integration=xxx
Type: NodePort
IP Families: <none>
IP: 10.xxx.16.235
IPs: 10.xxx.16.235
Port: <unset> 8444/UDP
TargetPort: 8444/UDP
NodePort: <unset> 30006/UDP
Endpoints: 10.xxx.1.39:8444,10.xxx.1.40:8444
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
What I can see is that 2 endpoints are listed, but unfortunately only one pod receives events. When I kill that pod, the other one starts receiving events.
My goal is to make both pods receive events in round robin, but I don't know how to configure the traffic policies.
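A quick way to confirm that both pods really are registered behind the service and to watch which pod the events land on (a minimal sketch, assuming the pods log incoming events; --prefix just tags each log line with the pod name):
kubectl get endpoints my-integration
kubectl logs -l camel.apache.org/integration=my-integration -f --prefix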

Related

AKS load balancer service accessible on wrong port

I have a simple ASP.NET core Web API. It works locally. I deployed it in Azure AKS using the following yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sa-be
spec:
  selector:
    matchLabels:
      name: sa-be
  template:
    metadata:
      labels:
        name: sa-be
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: sa-be
          image: francotiveron/anapi:latest
          resources:
            limits:
              memory: "64Mi"
              cpu: "250m"
---
apiVersion: v1
kind: Service
metadata:
  name: sa-be-s
spec:
  type: LoadBalancer
  selector:
    name: sa-be
  ports:
    - port: 8080
      targetPort: 80
The result is:
> kubectl get service sa-be-s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
sa-be-s LoadBalancer 10.0.157.200 20.53.188.247 8080:32533/TCP 4h55m
> kubectl describe service sa-be-s
Name: sa-be-s
Namespace: default
Labels: <none>
Annotations: <none>
Selector: name=sa-be
Type: LoadBalancer
IP Families: <none>
IP: 10.0.157.200
IPs: <none>
LoadBalancer Ingress: 20.53.188.247
Port: <unset> 8080/TCP
TargetPort: 80/TCP
NodePort: <unset> 32533/TCP
Endpoints: 10.244.2.5:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I expected to reach the Web API at http://20.53.188.247:32533/; instead it is reachable only at http://20.53.188.247:8080/.
Can someone explain:
Is this the expected behaviour?
If yes, what is the use of the NodePort (32533)?
Yes, this is expected.
It is explained in Kubernetes Service Ports - please read the full article to understand what is going on in the background.
Loadbalancer part:
apiVersion: v1
kind: Service
metadata:
  name: sa-be-s
spec:
  type: LoadBalancer
  selector:
    name: sa-be
  ports:
    - port: 8080
      targetPort: 80
port is the port the cloud load balancer will listen on (8080 in our
example) and targetPort is the port the application is listening on in
the Pods/containers. Kubernetes works with your cloud’s APIs to create
a load balancer and everything needed to get traffic hitting the load
balancer on port 8080 all the way back to the Pods/containers in your
cluster listening on targetPort 80.
Now the main part:
Behind the scenes, many implementations create NodePorts to glue the cloud load balancer to the cluster. The traffic flow is usually like this: client -> cloud load balancer on port 8080 -> NodePort 32533 on one of the cluster nodes -> Service -> Pod on targetPort 80.
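A quick way to see all three port layers for yourself (a sketch, assuming kubectl access; <node-ip> is a placeholder for one of your node addresses, and on AKS the NodePort is typically not reachable from the internet anyway):
# cluster port, NodePort and targetPort of the Service
kubectl get svc sa-be-s -o jsonpath='{.spec.ports[0].port} {.spec.ports[0].nodePort} {.spec.ports[0].targetPort}{"\n"}'
# through the cloud load balancer, which listens on port 8080
curl http://20.53.188.247:8080/
# directly through the NodePort on a node
curl http://<node-ip>:32533/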

Kubernetes service is configured but get connection refused

I have a service configured on my Kubernetes cluster, but when I try to curl ip:port I get connection refused.
The following service is configured:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: production
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - name: control-center-web
      port: 8008
      protocol: TCP
      targetPort: 8008
$ kubectl describe svc frontend
Name: frontend
Namespace: production
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"frontend","namespace":"production"},"spec":{"ports":[{"name":...
Selector: app=frontend
Type: NodePort
IP: <ip>
Port: control-center-web 8008/TCP
TargetPort: 8008/TCP
NodePort: control-center-web 7851/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 14m service-controller LoadBalancer -> NodePort
Why do I keep getting connection refused for the IP and port that I took from the svc?
The Endpoints field of the service shows <none> instead of the IPs of the pods. This happens when the selector in the service (app: frontend) does not match the labels in the pod spec.
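A minimal sketch of what the pod side has to look like for the endpoints to populate (the Deployment name, image and replica count are hypothetical placeholders; the important part is that the pod template carries exactly the label app: frontend that the service selects on):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend          # must match the service's selector
    spec:
      containers:
        - name: frontend
          image: my-frontend:latest   # placeholder image
          ports:
            - containerPort: 8008
You can verify the match with kubectl get pods -n production -l app=frontend and kubectl get endpoints frontend -n production; once at least one matching pod is Ready, the Endpoints field fills in.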

Kubernetes services endpoints health check

I've created a Kubernetes Service whose backends aren't part of the cluster but are a fixed set of nodes (with fixed IPs), so I've also created an Endpoints resource with the same name:
apiVersion: v1
kind: Service
metadata:
  name: elk-svc
spec:
  ports:
    - port: 9200
      targetPort: 9200
      protocol: TCP
---
kind: Endpoints
apiVersion: v1
metadata:
  name: elk-svc
subsets:
  - addresses:
      - { ip: 172.21.0.40 }
      - { ip: 172.21.0.41 }
      - { ip: 172.21.0.42 }
    ports:
      - port: 9200
Description of Service and Endpoints:
$ kubectl describe svc elk-svc
Name: elk-svc
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"elk-svc","namespace":"default"},"spec":{"ports":[{"port":9200,"protocol":"TCP"...
Selector: <none>
Type: ClusterIP
IP: 10.233.17.18
Port: <unset> 9200/TCP
Endpoints: 172.21.0.40:9200,172.21.0.41:9200,172.21.0.42:9200
Session Affinity: None
Events: <none>
$ kubectl describe ep elk-svc
Name: elk-svc
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Endpoints","metadata":{"annotations":{},"name":"elk-svc","namespace":"default"},"subsets":[{"addresses":[{"ip":"172.21.0.40"...
Subsets:
Addresses: 172.21.0.40,172.21.0.41,172.21.0.42
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 9200 TCP
Events: <none>
My pods are able to communicate with Elasticsearch using the internal cluster IP 10.233.17.18. Everything works fine!
My question is whether there is some kind of health-check mechanism for the Service I've created, so that if one of my Elasticsearch nodes goes down (e.g. 172.21.0.40), the Service becomes aware of it and no longer routes traffic to that node, only to the others. Is that possible?
Thanks.
This is not supported in k8s.
For more clarification, refer to this issue raised about exactly this requirement:
https://github.com/kubernetes/kubernetes/issues/77738#issuecomment-491560980
For this use case, the best practice would be to use a load balancer like HAProxy.
My suggestion would be to put a reverse proxy such as nginx or HAProxy in front of the Elasticsearch nodes to do the health checking for them.
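A minimal HAProxy sketch of that idea (the section names are made up, the global/defaults sections with timeouts are omitted, and the proxy is assumed to run somewhere both the cluster and the Elasticsearch nodes can reach; the Service/Endpoints would then point at the proxy instead of the individual nodes):
frontend es_in
    bind *:9200
    mode tcp
    default_backend es_nodes
backend es_nodes
    mode tcp
    balance roundrobin
    # 'check' enables periodic TCP health checks; a failed node is taken out of rotation
    server es1 172.21.0.40:9200 check
    server es2 172.21.0.41:9200 check
    server es3 172.21.0.42:9200 check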

A and C class IP addresses in cluster

Why are there two IP address classes in my Kubernetes cluster?
kubectl describe svc cara
Name: cara
Namespace: default
Labels: app=cara
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"cara"},"name":"cara","namespace":"default"},"spec":{"por...
Selector: app=cara
Type: NodePort
IP: 10.100.35.240
Port: cara 8000/TCP
TargetPort: cara/TCP
NodePort: cara 31614/TCP
Endpoints: 192.168.41.137:8000,192.168.50.89:8000
Port: vrde 6666/TCP
TargetPort: vrde/TCP
NodePort: vrde 30666/TCP
Endpoints: 192.168.41.137:6666,192.168.50.89:6666
Port: rdp 3389/TCP
TargetPort: rdp/TCP
NodePort: rdp 31490/TCP
Endpoints: 192.168.41.137:3389,192.168.50.89:3389
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
After the master installation I did:
kubeadm init --v=0 --pod-network-cidr=192.167.0.0/16
kubectl apply --v=0 -f https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
I expected one range of IP addresses in my cluster network. Am I misunderstanding something?
There are two main CIDRs in a Kubernetes cluster - the pod network and the service network. It seems that your cluster has pod network 192.168.0.0/16 and service network 10.0.0.0/8.
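A quick way to check both ranges yourself on a kubeadm cluster (a sketch; the grep patterns assume the flags show up in the control-plane pod specs included in the dump):
# pod CIDR assigned to each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
# cluster-wide pod and service CIDRs from the control-plane configuration
kubectl cluster-info dump | grep -m 1 -- --cluster-cidr
kubectl cluster-info dump | grep -m 1 -- --service-cluster-ip-range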

Kubernetes is creating a nodeport service with incorrect port and it's not accessible

Name: abcxyz5
Namespace: diyclientapps
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"abcxyz5","namespace":"diyclientapps"},"spec":{"ports":[{"port":8080,"protocol"...
Selector: app=abcxyz5
Type: NodePort
IP: 10.97.214.209
Port: <unset> 8080/TCP
NodePort: <unset> 30097/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
Here is my service definition:
apiVersion: v1
kind: Service
spec:
  selector:
    app: abcxyz5
  ports:
    - port: 8080
      targetPort: 8123
      protocol: TCP
  type: NodePort
metadata:
  name: abcxyz5
Why is it not using the targetPort value of 8123? Nonetheless, I'm not able to access this service:
telnet 192.168.99.100 30097
Trying 192.168.99.100...
telnet: Unable to connect to remote host: Connection refused
I used the IP returned by minikube ip: 192.168.99.100.
The selector did not match any pods, so the service had no endpoints.
I modified it, and now the describe service command shows this:
Name: abcxyz7
Namespace: diyclientapps
Labels: <none>
Annotations: <none>
Selector: appId=abcxyz7
Type: NodePort
IP: 10.99.201.118
Port: http 8080/TCP
NodePort: http 30385/TCP
Endpoints: 172.17.0.11:8080
Session Affinity: None
Events: <none>
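For reference, a sketch of what the corrected Service likely looks like, reconstructed from the describe output above (the namespace, port name and selector come from that output; the targetPort of 8080 is inferred from the endpoint 172.17.0.11:8080 and is otherwise an assumption):
apiVersion: v1
kind: Service
metadata:
  name: abcxyz7
  namespace: diyclientapps
spec:
  type: NodePort
  selector:
    appId: abcxyz7        # must match the labels on the pods
  ports:
    - name: http
      port: 8080
      targetPort: 8080    # inferred from the endpoint port shown above
      protocol: TCP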