Exposing kubernetes ingress to host machine running KinD on Windows and WSL2 - kubernetes

I'm using KinD on Windows via Docker Desktop running on WSL2 and I'm trying to set up an ingress to expose a port on my host machine.
I followed the guide and created the cluster with the config shown here: https://kind.sigs.k8s.io/docs/user/ingress/, along with the Ingress NGINX controller (installed from Helm) and a custom Ingress resource redirecting to my service with the proper class name.
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 5180
    protocol: TCP
  - containerPort: 443
    hostPort: 51443
    protocol: TCP
If I kubectl port-forward to the ingress service, it works fine and I can reach the service's website.
But I'm unable to access it directly via the hostPort set in the cluster config, without port-forwarding (http://localhost:5180). KinD doesn't seem to assign an external IP to the ingress controller; it remains in <pending> state.
Any idea why and how to diagnose further?
Thanks!
pod/ingress-nginx-controller-6bf7bc7f94-2r74v 1/1 Running 0 15h
service/ingress-nginx-controller LoadBalancer 10.96.1.208 <pending> 80:30674/TCP,443:30800/TCP 15h
service/ingress-nginx-controller-admission ClusterIP 10.96.103.184 <none> 443/TCP 15h
my-ingress nginx * 80 16h

Hmm. I don't know about KinD, but as a general principle an Ingress creates a public load balancer which has an external IP (globally reachable). Also, in the Ingress configuration (vanilla/NGINX ingress) you have to configure endpoints (paths). Also check this: https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting
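On KinD specifically there is no load-balancer implementation, so the EXTERNAL-IP staying <pending> is expected; the KinD ingress guide linked above instead has the controller bind ports 80/443 on the node labelled ingress-ready=true, so the extraPortMappings from the cluster config can reach it. A minimal sketch of Helm values that should reproduce that layout (key names assume a recent ingress-nginx chart; verify against the chart's values.yaml):
controller:
  hostPort:
    enabled: true             # bind 80/443 directly on the node so extraPortMappings can reach the controller
  service:
    type: NodePort            # avoid a LoadBalancer service that can never leave <pending> on KinD
  nodeSelector:
    ingress-ready: "true"     # schedule onto the node labelled in the cluster config above
  tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
With that in place, http://localhost:5180 maps to containerPort 80 on the node and from there to the controller's hostPort, without needing an external IP at all.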

Related

ambassador service stays "pending"

Currently running a fresh "all in one VM" (stacked master/worker approach) kubernetes v1.21.1-00 on Ubuntu Server 20 LTS, using
cri-o as container runtime interface
calico for networking/security
also installed the kubernetes-dashboard (but I guess that's not important for my issue 😉). Following this guide for installing Ambassador: https://www.getambassador.io/docs/edge-stack/latest/topics/install/yaml-install/ I ran into the issue that the service is stuck in status "pending".
kubectl get svc -n ambassador prints the following:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ambassador LoadBalancer 10.97.117.249 <pending> 80:30925/TCP,443:32259/TCP 5h
ambassador-admin ClusterIP 10.101.161.169 <none> 8877/TCP,8005/TCP 5h
ambassador-redis ClusterIP 10.110.32.231 <none> 6379/TCP 5h
quote ClusterIP 10.104.150.137 <none> 80/TCP 5h
While changing the type from LoadBalancer to NodePort in the service sets it up correctly, I'm not sure of the implications. Again, I want to use Ambassador as an ingress component here; with my setup (only one machine), "real" load balancing might not be necessary.
To cover all the subdomain stuff, I set up a wildcard DNS record pointing to my machine, i.e. a CNAME for *.k8s.my-domain.com which points to this host. I don't know if this approach was that smart for setting up an ingress.
Edit: List of events, as requested below:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 116s default-scheduler Successfully assigned ambassador/ambassador-redis-584cd89b45-js5nw to dev-bvpl-099
Normal Pulled 116s kubelet Container image "redis:5.0.1" already present on machine
Normal Created 116s kubelet Created container redis
Normal Started 116s kubelet Started container redis
Additionally, here's the pending service in YAML representation (exported via kubectl get svc -n ambassador -o yaml ambassador):
apiVersion: v1
kind: Service
metadata:
  annotations:
    a8r.io/bugs: https://github.com/datawire/ambassador/issues
    a8r.io/chat: http://a8r.io/Slack
    a8r.io/dependencies: ambassador-redis.ambassador
    a8r.io/description: The Ambassador Edge Stack goes beyond traditional API Gateways
      and Ingress Controllers with the advanced edge features needed to support developer
      self-service and full-cycle development.
    a8r.io/documentation: https://www.getambassador.io/docs/edge-stack/latest/
    a8r.io/owner: Ambassador Labs
    a8r.io/repository: github.com/datawire/ambassador
    a8r.io/support: https://www.getambassador.io/about-us/support/
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"a8r.io/bugs":"https://github.com/datawire/ambassador/issues","a8r.io/chat":"http://a8r.io/Slack","a8r.io/dependencies":"ambassador-redis.ambassador","a8r.io/description":"The Ambassador Edge Stack goes beyond traditional API Gateways and Ingress Controllers with the advanced edge features needed to support developer self-service and full-cycle development.","a8r.io/documentation":"https://www.getambassador.io/docs/edge-stack/latest/","a8r.io/owner":"Ambassador Labs","a8r.io/repository":"github.com/datawire/ambassador","a8r.io/support":"https://www.getambassador.io/about-us/support/"},"labels":{"app.kubernetes.io/component":"ambassador-service","product":"aes"},"name":"ambassador","namespace":"ambassador"},"spec":{"ports":[{"name":"http","port":80,"targetPort":8080},{"name":"https","port":443,"targetPort":8443}],"selector":{"service":"ambassador"},"type":"LoadBalancer"}}
  creationTimestamp: "2021-05-22T07:18:23Z"
  labels:
    app.kubernetes.io/component: ambassador-service
    product: aes
  name: ambassador
  namespace: ambassador
  resourceVersion: "4986406"
  uid: 68e4582c-be6d-460c-909e-dfc0ad84ae7a
spec:
  clusterIP: 10.107.194.191
  clusterIPs:
  - 10.107.194.191
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    nodePort: 32542
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 32420
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    service: ambassador
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
EDIT #2: I wonder if https://stackoverflow.com/a/44112285/667183 applies to my setup as well?
The answer is pretty much here: https://serverfault.com/questions/1064313/ambassador-service-stays-pending. After installing a load balancer, the whole setup worked. I went with MetalLB (https://metallb.universe.tf/installation/#installation-by-manifest for installation) and the following configuration for a single-node Kubernetes cluster:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.16.0.99-10.16.0.99
After a few seconds the load balancer is detected and everything goes fine.
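If you ever need the Service to request a specific address from that pool explicitly (rather than taking whatever MetalLB hands out), the loadBalancerIP field works with MetalLB; a minimal sketch of the ambassador Service with the address pinned (selector and ports copied from the YAML dump above, the field itself is optional):
apiVersion: v1
kind: Service
metadata:
  name: ambassador
  namespace: ambassador
spec:
  type: LoadBalancer
  loadBalancerIP: 10.16.0.99   # request this specific address from the MetalLB pool
  selector:
    service: ambassador
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443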

Kubernetes - Curl a Cluster-IP Service

I'm following this kubernetes tutorial to create a service https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#creating-a-service
I'm using minikube in my local environment. Everything works fine but I cannot curl my cluster IP; the operation times out:
curl: (7) Failed to connect to 10.105.7.117 port 80: Operation timed out
My kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d17h
my-nginx ClusterIP 10.105.7.117 <none> 80/TCP 42h
It seems that I'm having the same issue as this person, who did not find any answer to his problem: https://github.com/kubernetes/kubernetes/issues/86471
I have tried to do the same in my gcloud console but I get the same result. I can only curl my external IP service.
If I understood correctly, I'm supposed to already be inside my minikube local cluster when I start minikube, so I should be able to curl the service as mentioned in the tutorial.
What am I doing wrong?
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec:
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster. That is why you cannot access your service via ClusterIP from outside the cluster.
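One way to verify a ClusterIP Service is therefore to curl it from inside the cluster, e.g. from a throwaway pod. A minimal sketch (pod name and image are placeholders; any image that ships curl works):
apiVersion: v1
kind: Pod
metadata:
  name: curl-test              # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: curl
    image: curlimages/curl     # any image with curl in it
    command: ["sleep", "3600"] # keep the pod alive so you can exec into it
Then kubectl exec -it curl-test -- curl http://10.105.7.117 should reach the my-nginx Service from the question, confirming that the Service itself is healthy.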
NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
kind: Service
apiVersion: v1
metadata:
  name: example
  namespace: example
spec:
  type: NodePort
  selector:
    app: example
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: ui
Then execute command:
$ kubectl get svc --namespace=example
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-ui NodePort yy.zz.xx.xx <none> 8080:30960/TCP 1d
Get minikube ip to get the nodeIP
$ minikube ip
aa.bb.cc.dd
then you can curl it:
curl http://aa.bb.cc.dd:8080
LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
kind: Service
apiVersion: v1
metadata:
  name: example
spec:
  selector:
    app: example
  ports:
  - protocol: "TCP"
    port: 8080
    targetPort: 8080
  type: LoadBalancer
  externalIPs:
  - <your minikube ip>
then you can curl it:
$ curl http://yourminikubeip:8080/
ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with that name. No proxy is used. This type requires v1.7 or higher of kube-dns. The Service itself is only exposed within the cluster, but the external-name FQDN is not handled or controlled by the cluster. It is likely a publicly accessible URL, so you can curl it from anywhere; you'll have to configure your domain in a way that restricts who can access it.
The externalName Service type is external to the cluster and really only provides a CNAME redirect from within your cluster to an external path.
See more: exposing-services-kubernetes.
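For completeness, a minimal ExternalName sketch (the name and the target FQDN are placeholders):
kind: Service
apiVersion: v1
metadata:
  name: external-db             # placeholder name
  namespace: example
spec:
  type: ExternalName
  externalName: db.example.com  # placeholder FQDN; cluster DNS answers with a CNAME to this name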
ClusterIP is only available inside the Kubernetes network.
If you want to be able to hit it from outside the cluster, use a LoadBalancer to expose a public IP that you can then access.
Or..
kubectl port-forward <pod_name> 8080:80
then curl
curl http://localhost:8080
which will route through the port-forward to port 80 of the pod.

use prometheus with external ip address

We have a k8s cluster and I have an application running there.
Now I'm trying to add https://prometheus.io/
and I use the command
helm install stable/prometheus --version 6.7.4 --name my-prometheus
this command works and I got this
NAME: my-prometheus
LAST DEPLOYED: Tue Feb 5 15:21:46 2019
NAMESPACE: default
STATUS: DEPLOYED
...
when I run command
kubectl get services
I got this
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 2d4h
my-prometheus-alertmanager ClusterIP 100.75.244.55 <none> 80/TCP 8m44s
my-prometheus-kube-state-metrics ClusterIP None <none> 80/TCP 8m43s
my-prometheus-node-exporter ClusterIP None <none> 9100/TCP 8m43s
my-prometheus-pushgateway ClusterIP 100.75.24.67 <none> 9091/TCP 8m43s
my-prometheus-server ClusterIP 100.33.26.206 <none> 80/TCP 8m43s
I didn't get any external IP.
Does someone know how to add it? Via a service? Any example for this?
Update
I've added the following YAML:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
spec:
  selector:
    app: prometheus-server
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 9090
    nodePort: 30001
which was created successfully.
Now I see the external IP when running kubectl get services:
my-prometheus-server LoadBalancer 100.33.26.206 8080:30001/TCP 80/TCP 8m43s
And when I open 100.33.26.206:30001 in the browser nothing happens. Any idea?
I think what you are trying to do is create a service of type LoadBalancer; those have an internal and an external IP.
You can create one like any other service, but you should set these two fields:
externalTrafficPolicy: Local
type: LoadBalancer
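Putting that together with the service from the question, a hedged sketch (the selector app: prometheus-server is taken from your own manifest above; the chart's actual pod labels may differ, so check with kubectl get pods --show-labels):
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
spec:
  selector:
    app: prometheus-server        # adjust to the labels actually set on the server pod
  type: LoadBalancer
  externalTrafficPolicy: Local    # preserve the client source IP; announce only from nodes running the pod
  ports:
  - port: 9090                    # Prometheus UI listens on 9090 by default
    targetPort: 9090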
Updated:
There seems to be some confusion: you don't need an external IP to monitor your apps, it is only used to access the Prometheus UI.
The UI is accessible on port 9090, and Prometheus is never accessed by the exporters, as it is Prometheus that scrapes the exporters.
Now, to access a service from the internet you need a public IP, but what you have is still an internal IP; it's in the same subnet as the other ClusterIPs, and it should not be. For now, in place of an external IP it's showing a port redirect, which is also wrong, as the Prometheus UI is on port 9090 (if you didn't modify your configuration, it should still be). You should try to remove the nodePort and leave the port redirect to Kubernetes.
The Prometheus Helm chart does support configuring the service; see the documentation.
To configure the Prometheus server on a local cluster, follow these steps:
Create values.yaml:
server:
  service:
    servicePort: 31000
    type: LoadBalancer
    loadBalancerIP: localhost
or
server:
  service:
    nodePort: 31000
    type: NodePort
Add stable repo to helm (if missing):
helm repo add stable "https://kubernetes-charts.storage.googleapis.com/"
Install Prometheus:
helm install prometheus-demo stable/prometheus --values .\values.yaml
Wait 1-2 minutes. Prometheus should then be available at http://localhost:31000/

Configuring Istio, Kubernetes and MetalLB to use an Istio LoadBalancer

I'm struggling with the last step of a configuration using MetalLB, Kubernetes and Istio on a bare-metal instance, and that is to have a web page returned from a service to the outside world via an Istio VirtualService route. I've just updated the instance to
MetalLB (version 0.7.3)
Kubernetes (version 1.12.2)
Istio (version 1.0.3)
I'll start with what does work.
All complementary services have been deployed and most are working:
Kubernetes Dashboard on http://localhost:8001
Prometheus Dashboard on http://localhost:10010 (I had something else on 9009)
Envoy Admin on http://localhost:15000
Grafana (Istio Dashboard) on http://localhost:3000
Jaeger on http://localhost:16686
I say most because since the upgrade to Istio 1.0.3 I've lost the telemetry from istio-ingressgateway in the Jaeger dashboard, and I'm not sure how to bring it back. I've deleted the pod and re-created it, to no avail.
Outside of that, MetalLB and K8S appear to be working fine and the load-balancer is configured correctly (using ARP).
kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.109.247.149 <none> 3000/TCP 9d
istio-citadel ClusterIP 10.110.129.92 <none> 8060/TCP,9093/TCP 28d
istio-egressgateway ClusterIP 10.99.39.29 <none> 80/TCP,443/TCP 28d
istio-galley ClusterIP 10.98.219.217 <none> 443/TCP,9093/TCP 28d
istio-ingressgateway LoadBalancer 10.108.175.231 192.168.1.191 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:30805/TCP,8060:32514/TCP,853:30601/TCP,15030:31159/TCP,15031:31838/TCP 28d
istio-pilot ClusterIP 10.97.248.195 <none> 15010/TCP,15011/TCP,8080/TCP,9093/TCP 28d
istio-policy ClusterIP 10.98.133.209 <none> 9091/TCP,15004/TCP,9093/TCP 28d
istio-sidecar-injector ClusterIP 10.102.158.147 <none> 443/TCP 28d
istio-telemetry ClusterIP 10.103.141.244 <none> 9091/TCP,15004/TCP,9093/TCP,42422/TCP 28d
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP,5778/TCP 27h
jaeger-collector ClusterIP 10.104.66.65 <none> 14267/TCP,14268/TCP,9411/TCP 27h
jaeger-query LoadBalancer 10.97.70.76 192.168.1.193 80:30516/TCP 27h
prometheus ClusterIP 10.105.176.245 <none> 9090/TCP 28d
zipkin ClusterIP None <none> 9411/TCP 27h
I can expose my deployment using:
kubectl expose deployment enrich-dev --type=LoadBalancer --name=enrich-expose
It all works perfectly fine and I can hit the web page from the external load-balanced IP address (I deleted the exposed service after this).
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
enrich-expose LoadBalancer 10.108.43.157 192.168.1.192 31380:30170/TCP 73s
enrich-service ClusterIP 10.98.163.217 <none> 80/TCP 57m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 36d
If I create a K8S Service in the default namespace (I've tried multiple)
apiVersion: v1
kind: Service
metadata:
  name: enrich-service
  labels:
    run: enrich-service
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
  selector:
    app: enrich
followed by a Gateway and a route (VirtualService), the only response I get is a 404 from outside the mesh. You'll see I'm using the reserved word mesh in the gateways field, but I've tried both that and naming the specific gateway. I've also tried different match prefixes for specific URIs and the port you can see below.
Gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: enrich-dev-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
VirtualService
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: enrich-virtualservice
spec:
  hosts:
  - "enrich-service.default"
  gateways:
  - mesh
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: enrich-service.default
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: enrich-destination
spec:
  host: enrich-service.default
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
  subsets:
  - name: v1
    labels:
      app: enrich
I've double-checked that it's not DNS playing up, because I can get into the shell of the ingress-gateway either via busybox or using the K8s dashboard
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/shell/istio-system/istio-ingressgateway-6bbdd58f8c-glzvx/?namespace=istio-system
and do both an
nslookup enrich-service.default
and
curl -f http://enrich-service.default/
and both work successfully, so I know the ingress-gateway pod can see those. The sidecars are set for auto-injection in both the default namespace and the istio-system namespace.
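For reference, the auto-injection mentioned above is driven by a namespace label in Istio; a minimal sketch of what such a labelled namespace looks like (only the label matters here):
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled   # tells the sidecar injector webhook to inject into new pods in this namespace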
The logs for the ingress-gateway show the 404:
[2018-11-01T03:07:54.351Z] "GET /metadata HTTP/1.1" 404 - 0 0 1 - "192.168.1.90" "curl/7.58.0" "6c1796be-0791-4a07-ac0a-5fb07bc3818c" "enrich-service.default" "-" - - 192.168.224.168:80 192.168.1.90:43500
[2018-11-01T03:26:39.339Z] "GET / HTTP/1.1" 404 - 0 0 1 - "192.168.1.90" "curl/7.58.0" "ed956af4-77b0-46e6-bd26-c153e29837d7" "enrich-service.default" "-" - - 192.168.224.168:80 192.168.1.90:53960
192.168.224.168:80 is the IP address of the gateway.
192.168.1.90:53960 is the IP address of my external client.
Any suggestions? I've tried hitting this from multiple angles for a couple of days now and I feel I'm just missing something simple. Suggested logs to look at, perhaps?
Just to close this question out with the solution to the problem in my instance: the configuration mistake started all the way back at the Kubernetes cluster initialisation. I had applied:
kubeadm init --pod-network-cidr=n.n.n.n/n --apiserver-advertise-address 0.0.0.0
with the pod-network-cidr using the same address range as the local LAN on which the Kubernetes installation was deployed, i.e. the desktop for the Ubuntu host used the same IP subnet as what I'd assigned to the container network.
For the most part everything operated fine as detailed above, until the Istio proxy tried to route packets from an external load-balancer IP address to an internal IP address which happened to be on the same subnet. Project Calico with Kubernetes seemed able to cope with it, as that's effectively Layer 3/4 policy, but Istio had a problem with it at L7 (even though it was sitting on Calico underneath).
The solution was to tear down my entire Kubernetes deployment. I was paranoid and went so far as to uninstall Kubernetes, then redeployed with a pod network in the 172 range, which had nothing to do with my local LAN. I also made the same change in the Project Calico configuration file to match the pod network. After that change, everything worked as expected.
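For anyone repeating this, the non-overlapping pod CIDR can be passed either on the command line as above or via a kubeadm config file; a minimal sketch (the 172.16.0.0/16 range and the API version are assumptions, pick whatever does not collide with your LAN and matches your kubeadm release):
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 172.16.0.0/16   # must not overlap the LAN the nodes sit on
This file is then passed to kubeadm init --config, and Calico's pool (typically CALICO_IPV4POOL_CIDR in its manifest) has to be set to the same range, as noted above.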
I suspect that a more public configuration where your cluster is directly attached to a BGP router, as opposed to using MetalLB in an L2 configuration as a subset of your LAN, wouldn't exhibit this issue either. I've documented it more in this post:
Microservices: .Net, Linux, Kubernetes and Istio make a powerful combination

Kubernetes + metallb + traefik: how to get real client ip?

traefik.toml:
defaultEntryPoints = ["http", "https"]
[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.forwardedHeaders]
    trustedIPs = ["0.0.0.0/0"]
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
    [entryPoints.https.forwardedHeaders]
    trustedIPs = ["0.0.0.0/0"]
[api]
traefik Service:
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: http
  - protocol: TCP
    port: 443
    name: https
  type: LoadBalancer
Then:
kubectl run source-ip-app --image=k8s.gcr.io/echoserver:1.4
deployment "source-ip-app" created
kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
service "clusterip" exposed
kubectl get svc clusterip
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
clusterip ClusterIP 10.5.55.102 <none> 80/TCP 2h
Create ingress for clusterip:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: clusterip-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: clusterip.staging
    http:
      paths:
      - backend:
          serviceName: clusterip
          servicePort: 80
clusterip.staging ip: 192.168.0.69
From another PC with IP 192.168.0.100:
wget -qO - clusterip.staging
and get results:
CLIENT VALUES:
client_address=10.5.65.74
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://clusterip.staging:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
accept-encoding=gzip, deflate, br
accept-language=ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3
cache-control=max-age=0
host=clusterip.staging
upgrade-insecure-requests=1
x-forwarded-for=10.5.64.0
x-forwarded-host=clusterip.staging
x-forwarded-port=443
x-forwarded-proto=https
x-forwarded-server=traefik-ingress-controller-755cc56458-t8q9k
x-real-ip=10.5.64.0
BODY:
-no body in request-
kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default clusterip NodePort 10.5.55.102 <none> 80:31169/TCP 19h
default kubernetes ClusterIP 10.5.0.1 <none> 443/TCP 22d
kube-system kube-dns ClusterIP 10.5.0.3 <none> 53/UDP,53/TCP 22d
kube-system kubernetes-dashboard ClusterIP 10.5.5.51 <none> 443/TCP 22d
kube-system traefik-ingress-service LoadBalancer 10.5.2.37 192.168.0.69 80:32745/TCP,443:30219/TCP 1d
kube-system traefik-web-ui NodePort 10.5.60.5 <none> 80:30487/TCP 7d
How do I get the real IP (192.168.0.100) in my installation? Why is x-real-ip 10.5.64.0? I could not find the answers in the documentation.
When kube-proxy uses the iptables mode, it uses NAT to send traffic to the node where the payload runs, and you lose the original source IP address in that case.
As I understand it, you use MetalLB behind the Traefik Ingress Service (because its type is LoadBalancer). That means traffic from the client to the backend goes this way:
Client -> MetalLB -> Traefik LB -> Traefik Service -> Backend pod.
Traefik works correctly and adds the x-* headers, including x-forwarded-for and x-real-ip, but they contain a masked address instead of the real one. Here's why:
From the MetalLB documentation:
MetalLB understands the service's externalTrafficPolicy option and implements different announcement modes depending on the policy and announcement protocol you select.
Layer2
This policy results in uniform traffic distribution across all pods in the service. However, kube-proxy will obscure the source IP address of the connection when it does load-balancing, so your pod logs will show that external traffic appears to be coming from the cluster's leader node.
BGP
"Cluster" traffic policy
With the default Cluster traffic policy, every node in your cluster will attract traffic for the service IP. On each node, the traffic is subjected to a second layer of load-balancing (provided by kube-proxy), which directs the traffic to individual pods.
......
The other downside of the "Cluster" policy is that kube-proxy will obscure the source IP address of the connection when it does its load-balancing, so your pod logs will show that external traffic appears to be coming from your cluster's nodes.
"Local" traffic policy
With the Local traffic policy, nodes will only attract traffic if they are running one or more of the service's pods locally. The BGP routers will load-balance incoming traffic only across those nodes that are currently hosting the service. On each node, the traffic is forwarded only to local pods by kube-proxy, there is no "horizontal" traffic flow between nodes.
This policy provides the most efficient flow of traffic to your service. Furthermore, because kube-proxy doesn't need to send traffic between cluster nodes, your pods can see the real source IP address of incoming connections.
Finally, the only way to get the real source IP address is to use the "Local" traffic policy (externalTrafficPolicy: Local).
If you set it, you will get what you want.
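A minimal sketch of the traefik-ingress-service from the question with that policy set (only the externalTrafficPolicy line is new relative to the manifest above):
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: http
  - protocol: TCP
    port: 443
    name: https
  type: LoadBalancer
  externalTrafficPolicy: Local   # keep the client source IP; traffic only lands on nodes running traefik pods
With Local in place, x-real-ip should then show 192.168.0.100 instead of a cluster-internal address.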