Kubernetes Inter-Node traffic is not working properly

Lately I have been configuring a k8s cluster composed of 3 nodes (master, worker1 and worker2) that will host a UDP application (8 replicas of it). Everything is done and the cluster is working well, but there is one problem.
Basically there is a Deployment which describes the Pod, and it looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <name>
  labels:
    app: <app_name>
spec:
  replicas: 8
  selector:
    matchLabels:
      app: <app_name>
  template:
    metadata:
      labels:
        app: <app_name>
    spec:
      containers:
      - name: <name>
        image: <image>
        ports:
        - containerPort: 6000
          protocol: UDP
There is also a Service which is used to access the UDP application:
apiVersion: v1
kind: Service
metadata:
  name: <service_name>
  labels:
    app: <app_name>
spec:
  type: NodePort
  ports:
  - port: 6000
    protocol: UDP
    nodePort: 30080
  selector:
    app: <app_name>
When I try to access the service, 2 different scenarios may occur:
The request is assigned to a Pod on the same node that received the request.
The request is assigned to a Pod on another node.
In the second case the request arrives correctly at the Pod, but with a source IP that ends in 0 (for example 10.244.1.0), so the response is never delivered correctly.
I can't figure it out; I have really tried everything, but the problem remains. For the moment, to keep the cluster working properly, I added externalTrafficPolicy: Local and internalTrafficPolicy: Local to the Service so that requests stay local: when a request is sent to worker1 it is assigned to a Pod running on worker1, and the same for worker2.
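For reference, this is roughly how the Service looks with that workaround applied (same placeholder names as above):
apiVersion: v1
kind: Service
metadata:
  name: <service_name>
  labels:
    app: <app_name>
spec:
  type: NodePort
  externalTrafficPolicy: Local   # keep external (NodePort) traffic on the node that received it
  internalTrafficPolicy: Local   # keep in-cluster traffic on the originating node
  ports:
  - port: 6000
    protocol: UDP
    nodePort: 30080
  selector:
    app: <app_name>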
Do you have any ideas about the problem?
Thanks to everyone.

Have you confirmed that the response is not delivered correctly in your second scenario? The source IP address in that case should be that of the node where the request first arrived.
I am under the impression that you are assuming that, since the IP address ends in 0, it is necessarily a network address, and that could be a wrong assumption: it depends on the netmask configured for the subnetwork where the nodes are allocated. For example, if the nodes are in the subnet 10.244.0.0/23, then the network address is 10.244.0.0, and 10.244.1.0 is just another usable address that can be assigned to a node.
Now, if your application needs to preserve the client's IP address, then that could be an issue, since by default the source IP seen in the target container is not the original source IP of the client. In this case, in addition to configuring externalTrafficPolicy as Local, you would need to configure a healthCheckNodePort as described in the Preserving the client source IP documentation.
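For illustration, a rough sketch of what that could look like on a Service of type LoadBalancer (the healthCheckNodePort value is just an assumed example, not taken from your setup):
apiVersion: v1
kind: Service
metadata:
  name: <service_name>
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  # healthCheckNodePort is only honoured for type LoadBalancer together with
  # externalTrafficPolicy: Local; 30081 here is an assumed example value
  healthCheckNodePort: 30081
  ports:
  - port: 6000
    protocol: UDP
  selector:
    app: <app_name>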

Related

HAProxy Ingress Controller Service Changed IP on GCP

I am using HAProxy as the ingress controller in my GKE clusters, and I expose the HAProxy service as an internal LoadBalancer service.
Recently I experienced an issue where the HAProxy service changed its EXTERNAL-IP and traffic stopped routing to HAProxy. This issue occurred multiple times on different days (it has stopped now). I had to manually add the new external IP to the frontend of that load balancer to allow traffic to HAProxy.
There were two pods running for HAProxy, both had been running for days, and there was nothing in their logs. I assume it was something related to the Service or the GCP load balancer and not HAProxy itself.
I am afraid that I don't have any logs related to that.
I still don't know what caused the service IP to change, as there were no recent changes; the cluster and all services had been running properly for many days when this suddenly occurred.
Has anyone faced a similar issue before? What can I do to avoid such an issue in the future?
What could have caused the IP to change?
This is how my service is configured:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: haproxy-controller
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
    cloud.google.com/network-tier: "Premium"
spec:
  selector:
    run: haproxy-ingress
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: stat
    port: 1024
    protocol: TCP
    targetPort: 1024
Found some logs:
Warning SyncLoadBalancerFailed 30m (x3570 over 13d) service-controller Error syncing load balancer: failed to ensure load balancer: googleapi: Error 409: IP_IN_USE_BY_ANOTHER_RESOURCE - IP '10.17.129.17' is already being used by another resource.
Normal EnsuringLoadBalancer 3m33s (x3576 over 13d) service-controller Ensuring load balancer
The short answer is: the external IP of the service is ephemeral.
Because the HAProxy controller pods were recreated, the HAProxy service was created with an ephemeral IP.
To avoid this issue, I would recommend using a static IP that you can reference in the loadBalancerIP field.
This can be done with the following steps:
Reserve a static IP. (link)
Use this IP to create the service. (link)
Example YAML:
apiVersion: v1
kind: Service
metadata:
  name: helloweb
  labels:
    app: hello
spec:
  selector:
    app: hello
    tier: web
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer
  loadBalancerIP: "YOUR.IP.ADDRESS.HERE"
Unfortunately, without logs it's hard to say anything for sure. You should check the audit logs that GKE ships to Cloud Logging, as those might give you some idea of what happened. One possibility is that GCP "oops"'d the GLB and GKE recreated it, thus giving it a new IP; I've never heard of that happening with LBs, though (it happens pretty often with nodes, but not LBs). A more common case would be that you ran some kubectl command that inadvertently removed the Service object, and it was then recreated by some management layer you have set up (Argo, Flux, Helm Operator, whatever); but delete+recreate again means it's a new LB with a new IP. The latter case should be visible in the audit logs, so check those out for sure.

Cannot connect to the external ip of the k8s service

I have the following service:
apiVersion: v1
kind: Service
metadata:
  name: hedgehog
  labels:
    run: hedgehog
spec:
  ports:
  - port: 3000
    protocol: TCP
    name: restful
  - port: 8982
    protocol: TCP
    name: websocket
  selector:
    run: hedgehog
  externalIPs:
  - 1.2.4.120
In it I have specified an externalIP.
I'm also seeing this IP under EXTERNAL-IP when running kubectl get services.
However, when I do curl http://1.2.4.120:3000 I get a timeout. The app is supposed to give me a response, because the jar running inside the container in the deployment does respond to requests on localhost:3000 when run locally.
If you look at the type of your service, it is probably ClusterIP; try changing the type to LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: http-service
spec:
  clusterIP: 172.30.163.110
  externalIPs:
  - 192.168.132.253
  externalTrafficPolicy: Cluster
  ports:
  - name: highport
    nodePort: 31903
    port: 30102
    protocol: TCP
    targetPort: 30102
  selector:
    app: web
  sessionAffinity: None
  type: LoadBalancer
Something like this, where type is LoadBalancer.
First of all, you have to understand that you cannot place just any random address in your externalIPs field. Those addresses are not managed by Kubernetes and are the responsibility of the cluster administrator (or you). External IP addresses specified with externalIPs are different from the external IP address assigned to a Service of type LoadBalancer by a cloud provider.
I checked the address that you mentioned in the question and it does not look like it belongs to you. That is why I suspect that you placed a random one there.
The same address appears in this article about externalIPs. As you can see there, the addresses in that case are the IP addresses of the nodes that Kubernetes runs on.
This is a potential issue in your case.
Another thing to verify is whether your application is listening on localhost or on 0.0.0.0. If it is really localhost, then this might be another problem for you. You can change where the server process listens by listening on 0.0.0.0, which means "listen on all interfaces".
Lastly, please verify that the selector and ports of the service are correct and that you have at least one endpoint backing your service.
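For example, a sketch of the Service with externalIPs pointing at node addresses (the 10.0.0.x values are assumed placeholders for your actual node IPs):
apiVersion: v1
kind: Service
metadata:
  name: hedgehog
  labels:
    run: hedgehog
spec:
  selector:
    run: hedgehog
  ports:
  - port: 3000
    protocol: TCP
    name: restful
  - port: 8982
    protocol: TCP
    name: websocket
  externalIPs:
  # assumed example values; these must be addresses that actually belong to
  # (or are routed to) your nodes
  - 10.0.0.11
  - 10.0.0.12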

How to Route to specific pod through Kubernetes Service (like a Gateway API)

I am running Kubernetes on "Docker Desktop" in Windows.
I have a LoadBalancer Service for a deployment which has 3 replicas.
I would like to access a SPECIFIC pod through some means (such as via a URL path: <serviceIP>:8090/pod1).
Is there any way to achieve this use case?
deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-service1
  labels:
    app: stream
spec:
  ports:
  - port: 8090
    targetPort: 8090
    name: port8090
  selector:
    app: stream
  # clusterIP: None
  type: LoadBalancer
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: stream-deployment
  labels:
    app: stream
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stream
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: stream
    spec:
      containers:
      - image: stream-server-mock:latest
        name: stream-server-mock
        imagePullPolicy: Never
        env:
        - name: STREAMER_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: STREAMER_ADDRESS
          value: stream-server-mock:8090
        ports:
        - containerPort: 8090
My end goal is to achieve horizontal auto-scaling of pods.
How the application is designed and works as of now (without Kubernetes):
There are 3 components: REST-Server, Stream-Server (3 instances running locally on different JVMs on different ports), and RabbitMQ.
1 - The client sends a request to the REST-Server for a stream URL.
2 - The REST-Server puts it in the RabbitMQ queue.
3 - One of the Stream-Servers picks it up, populates its IP, and sends it back to the REST-Server through RabbitMQ.
4 - The client receives the IP and establishes a direct WS connection using that IP.
The problem I face is:
1 - When the client requests a stream IP, one of the pods (let's say POD1) picks it up and sends back its URL (which is the service URL, since it comes through the LoadBalancer Service).
2 - The next time the client tries to connect (a WebSocket connection) using the Service IP, it won't be the same pod that accepted the request.
It should be the same pod that accepted the request, and it must be accessible by the client.
You can use a StatefulSet if you are not required to use a Deployment.
With 3 replicas, you will have 3 pods named:
stream-deployment-0
stream-deployment-1
stream-deployment-2
You can access each pod as $(podname).$(service name).$(namespace).svc.cluster.local
For details, check this
You may also want to set up an Ingress to point to each pod from outside the cluster.
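For illustration, a minimal sketch of that approach reusing the names from the question (turning the Deployment into a StatefulSet and making the Service headless are my assumptions, not your exact manifests):
apiVersion: v1
kind: Service
metadata:
  name: my-service1          # headless service used for per-pod DNS
spec:
  clusterIP: None
  selector:
    app: stream
  ports:
  - port: 8090
    targetPort: 8090
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: stream-deployment
spec:
  serviceName: my-service1   # must reference the headless service above
  replicas: 3
  selector:
    matchLabels:
      app: stream
  template:
    metadata:
      labels:
        app: stream
    spec:
      containers:
      - name: stream-server-mock
        image: stream-server-mock:latest
        ports:
        - containerPort: 8090
Each pod is then reachable in-cluster as stream-deployment-0.my-service1.default.svc.cluster.local, stream-deployment-1.my-service1.default.svc.cluster.local, and so on.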
As mentioned by aerokite, you can use StatefulSets. However, if you don't want to modify your deployments, you can simply use Headless Services. As specified in the documentation:
For headless Services, a cluster IP is not allocated.
For headless Services that define selectors, the endpoints controller
creates Endpoints records in the API, and modifies the DNS
configuration to return records (addresses) that point directly to the
Pods backing the Service.
This means that whenever you query the DNS name for your Service (i.e. my-svc.my-namespace.svc.cluster-domain.example), what you get is a list of all the Pod IPs (unlike regular services where you get the cluster IP). You can then select your Pods using your own mechanisms.
Regarding your new question, if that is your only issue, you can use session affinity. If you set service.spec.sessionAffinity to ClientIP, then connections from a particular client will always go to the same Pod each time. You don't need other modifications like the headless Services mentioned above.
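For example, a sketch of that change applied to the Service from the question (only the sessionAffinity field is new; everything else is as in the original):
apiVersion: v1
kind: Service
metadata:
  name: my-service1
  labels:
    app: stream
spec:
  sessionAffinity: ClientIP   # connections from a given client IP go to the same Pod
  selector:
    app: stream
  ports:
  - port: 8090
    targetPort: 8090
  type: LoadBalancer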
IMO, the only way to achieve this will be:
Instead of using a deployment with 3 replicas, use 3 deployments with 1 replica each (or just create the pods directly): deployment1 -> pod1, deployment2 -> pod2, deployment3 -> pod3.
Expose each deployment with a separate service: service1 -> deployment1, service2 -> deployment2, service3 -> deployment3.
Create an ingress resource and route to each pod using the service for each deployment, for example (a sketch of such an Ingress follows this list):
ingress-url/service1
ingress-url/service2
ingress-url/service3
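A rough sketch of such an Ingress (the service names and port are taken from the scheme above; path handling and the ingress controller specifics are assumptions):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: stream-ingress
spec:
  rules:
  - http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 8090
      - path: /service2
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 8090
      - path: /service3
        pathType: Prefix
        backend:
          service:
            name: service3
            port:
              number: 8090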

readiness probe fails with connection refused

I am trying to set up K8S to work with two Windows nodes (2019). Everything seems to be working well and the containers are working and accessible using a k8s Service. But once I introduce configuration for readiness (or liveness) probes, everything fails. The exact error is:
Readiness probe failed: Get http://10.244.1.28:80/test.txt: dial tcp 10.244.1.28:80: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
When I try the URL from the k8s master, it works well and I get a 200. However, I read that the kubelet is the one executing the probe, and indeed when trying from the Windows node it cannot be reached (which seems weird, because the container is running on that same node). Therefore I assume the problem is related to some network configuration.
I have a HyperV with External network Virtual Switch configured. K8S is configured to use flannel overlay (vxlan) as instructed here: https://learn.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/network-topologies.
Any idea how to troubleshoot and fix this?
UPDATE: providing the YAML:
apiVersion: v1
kind: Service
metadata:
  name: dummywebapplication
  labels:
    app: dummywebapplication
spec:
  ports:
  # the port that this service should serve on
  - port: 80
    targetPort: 80
  selector:
    app: dummywebapplication
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: dummywebapplication
  name: dummywebapplication
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: dummywebapplication
      name: dummywebapplication
    spec:
      containers:
      - name: dummywebapplication
        image: <my image>
        readinessProbe:
          httpGet:
            path: /test.txt
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 30
          timeoutSeconds: 60
      nodeSelector:
        beta.kubernetes.io/os: windows
And one more update. In this doc (https://kubernetes.io/docs/setup/windows/intro-windows-in-kubernetes/) it is written:
My Windows node cannot access NodePort service
Local NodePort access from the node itself fails. This is a known
limitation. NodePort access works from other nodes or external
clients.
I don't know if this is related or not, as I could not connect to the container from a different node either, as stated above. I also tried a service of type LoadBalancer, but it didn't give a different result.
The network configuration assumption was correct. It seems that for 'overlay', by default, the kubelet on the node cannot reach the IP of the container, so it keeps returning timeouts and connection refused messages.
Possible workarounds:
Insert an 'exception' into the 'OutBoundNAT' ExceptionList in C:\k\cni\config on the nodes. This is somewhat tricky if you start the node with start.ps1, because it overwrites this file every time. I had to tweak the 'Update-CNIConfig' function in c:\k\helper.psm1 to re-insert the exception, similar to the 'l2bridge' case in that file.
Use the 'l2bridge' configuration. It seems 'overlay' runs with stricter isolation, but l2bridge does not.

Redirection Stateful pod in Kubernetes

I am using a StatefulSet in Kubernetes.
I wrote an application (in Go) which has a leader and followers.
The leader handles writing and reading.
Followers are just for reading.
In the application code, I used the http.Redirect(w, r, url, 307) function to redirect writes from a follower to the leader.
If I use a jump pod to test the application (trying to access the app to read and write), my application works well; the follower can redirect to the leader:
kubectl run -i -t --rm jumpod --restart=Never --image=quay.io/mhausenblas/jump:0.2 -- sh
But when I deploy a Service (to access the app from outside):
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  selector:
    app: app-name
  ports:
  - port: 80
    targetPort: 9876
And access the application via this link:
curl -L -XPUT -T /tmp/put-name localhost:8001/api/v1/namespaces/default/services/service-name/proxy/
Because the Service randomly selects a pod each time we access the application, when it hits the leader it works well (no problem occurs), but if it hits a follower, the follower needs to redirect to the leader, and I get this error:
curl: (6) Could not resolve host: pod-name.svc-name.default.svc.cluster.local
What I tested:
I can use this link to access the app when I use the jump pod.
I accessed each pod and looked up the DNS; I can find the DNS name with the nslookup command.
I tried to hard-code the leader's IP in my code, so the follower redirects to an IP (not a domain name like above), but it still hits this error:
curl: (7) Failed to connect to 10.244.1.71 port 9876: No route to host
Does anybody know about this problem? Thank you!
In order for pod DNS to work, you must create a headless Service:
apiVersion: v1
kind: Service
metadata:
  name: svc-name-headless
spec:
  clusterIP: None
  selector:
    app: app-name
  ports:
  - port: 80
    targetPort: 9876
And then in the StatefulSet spec you must refer to this Service:
spec:
  serviceName: svc-name-headless
Read more here: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id
Alternatively, you may specify which particular pod will be selected by the Service, like this:
apiVersion: v1
kind: Service
metadata:
  name: svc-name
spec:
  selector:
    statefulset.kubernetes.io/pod-name: pod-name-0
  ports:
  - port: 80
    targetPort: 9876
When you are accessing your cluster services from outside, DNS names normally reserved for in-cluster use (e.g. pod-name.svc-name.default.svc.cluster.local) are not recognized by clients (e.g. a web browser).
Context:
You are trying to expose access to Pods controlled by a StatefulSet with a Service of type ClusterIP.
Solution:
Change ClusterIP (assumed by default when not specified) to NodePort or LoadBalancer
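For example, a sketch of the Service from the question with the type changed (the nodePort value is an assumed example):
apiVersion: v1
kind: Service
metadata:
  name: svc-name
spec:
  type: NodePort
  selector:
    app: app-name
  ports:
  - port: 80
    targetPort: 9876
    nodePort: 30080   # assumed example; must be within the 30000-32767 NodePort range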