I have set up MetalLB as the load balancer, with the NGINX Ingress controller installed on a Kubernetes cluster.
I have read about session affinity and its significance, but so far I do not have a clear picture.
How can I create a single service exposing multiple pods of the same application?
After creating the single service entry point, how do I map a specific client IP to a Pod abstracted by the service?
Is there any blog explaining how the mapping between a client IP and a Pod is done in Kubernetes?
But I do not see the client's IP anywhere in the YAML below. How, then, is this service going to map traffic from the respective clients to its endpoints? That is my question.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10000
The main concept of session affinity is to always redirect traffic from a given client to the same Pod. Please keep in mind that session affinity is a best-effort method, and there are scenarios where it will fail, for example due to Pod restarts or network errors.
There are two main types of Session Affinity:
1) Based on Client IP
This option works well for scenarios where there is only one client per IP address. With this method you don't need an Ingress/proxy between the Kubernetes service and the client.
The client IP should be static, because each time the client changes its IP it will be redirected to another pod.
To enable session affinity in Kubernetes, add the following to the service definition:
service.spec.sessionAffinity: ClientIP
Because a proper manifest for this method is already provided in the question, I will not duplicate it here.
2) Based on Cookies
This works when there are multiple clients behind the same IP, because the affinity is stored as a cookie at the web browser level. This method requires an Ingress object. Steps to apply this method, with more detailed information, can be found here under the Session affinity based on Cookie section; a sketch of such an Ingress follows the steps below.
Create NGINX controller deployment
Create NGINX service
Create Ingress
Redirect your public DNS name to the NGINX service public/external IP.
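As a minimal sketch only (the host name, cookie settings, and Ingress name here are assumptions for illustration, reusing the my-service Service from the question), a cookie-affinity Ingress for the NGINX ingress controller could look roughly like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress                                          # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"              # enable cookie-based session affinity
    nginx.ingress.kubernetes.io/session-cookie-name: "route"    # name of the affinity cookie
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  ingressClassName: nginx
  rules:
  - host: my-app.example.com          # assumed public DNS name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service          # the Service defined in the question
            port:
              number: 80
With this in place, the controller sets the route cookie on the first response and keeps subsequent requests from that browser on the same pod.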
Regarding the mapping between the client IP and the Pod, according to the documentation: kube-proxy is responsible for SessionAffinity. One of kube-proxy's jobs is writing iptables rules (more details here), and that is how the mapping is done.
Articles which might help with understanding Session Affinity:
https://sookocheff.com/post/kubernetes/building-stateful-services/
https://medium.com/#diegomrtnzg/redirect-your-users-to-the-same-pod-by-using-session-affinity-on-kubernetes-baebf6a1733b
Follow the Service reference for session affinity:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10000
I am using HAProxy as the ingress controller in my GKE clusters, and I am exposing the HAProxy service as an internal LoadBalancer service.
Recently, I experienced an issue where the HAProxy service changed its EXTERNAL-IP and traffic stopped routing to HAProxy. This issue occurred multiple times on different days (it has now stopped). I had to manually add that new external IP to the frontend of that load balancer to allow traffic to HAProxy.
There were two pods running for HAProxy, both had been running for days, and there was nothing in their logs. I assume it was something related to the Service or the GCP LB and not HAProxy itself.
I am afraid that I don't have any logs related to that.
I still don't know what caused the service IP to change, as there were no recent changes; the cluster and all services had been running properly for many days, and suddenly this occurred.
Has anyone faced a similar issue before? What can I do to avoid such issues in the future?
What could have caused the IP to change?
This is how my service is configured:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: haproxy-controller
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
    cloud.google.com/network-tier: "Premium"
spec:
  selector:
    run: haproxy-ingress
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: stat
    port: 1024
    protocol: TCP
    targetPort: 1024
Found some logs:
Warning SyncLoadBalancerFailed 30m (x3570 over 13d) service-controller Error syncing load balancer: failed to ensure load balancer: googleapi: Error 409: IP_IN_USE_BY_ANOTHER_RESOURCE - IP '10.17.129.17' is already being used by another resource.
Normal EnsuringLoadBalancer 3m33s (x3576 over 13d) service-controller Ensuring load balancer
The short answer is: the external IP for the service is ephemeral.
Because the HAProxy controller pods are recreated, the HAProxy service is created with an ephemeral IP.
To avoid this issue, I would recommend using a static IP that you can reference in the loadBalancerIP field.
This can be done with the following steps:
Reserve a static IP (link).
Use this IP to create the service (link).
Example YAML:
apiVersion: v1
kind: Service
metadata:
  name: helloweb
  labels:
    app: hello
spec:
  selector:
    app: hello
    tier: web
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer
  loadBalancerIP: "YOUR.IP.ADDRESS.HERE"
Unfortunately, without logs it's hard to say anything for sure. You should check the audit logs that GKE ships to Cloud Logging, as those might give you some idea of what happened. One possibility is that GCP "oops"'d the GLB and GKE recreated it, thus giving it a new IP; I've never heard of that happening with LBs, though (it happens pretty often with nodes, but not LBs). A more common case would be that you ran some kubectl command that inadvertently removed the Service object, and it was then recreated by some management layer you have set up (Argo, Flux, Helm Operator, whatever); delete + recreate again means it's a new LB with a new IP. The latter case should be visible in the audit logs, so check those out for sure.
I'm trying to use a load balancer to expose a service I have running on an EKS pod. My service is defined in YAML like this:
kind: Service
apiVersion: v1
metadata:
  name: mlflow-server
  namespace: default
  labels:
    app: mlflow-server
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app: mlflow-server
  ports:
  - name: http
    port: 88
    targetPort: http
  - name: https
    port: 443
    targetPort: https
This defines a service for a pod that has the MLflow server running on it. When I apply this and access the external IP generated for the service, I get a "This site can't be reached" error. Is there something I'm missing with exposing my service as a load-balanced service to access the MLflow UI?
For a basic LoadBalancer-type service you do not need the annotation service.beta.kubernetes.io/aws-load-balancer-type: nlb; that annotation creates a Network Load Balancer. If you do need an NLB, then there might be the following problems:
The NLB takes a few minutes to come up after you apply the setting. If you check it right after deploying, it will not yet be able to accept traffic. Do check whether the intended Network Load Balancer is up in your AWS EC2 console > Load Balancers tab.
The second, more likely problem is that the NLB can only be attached to certain instance types. To check that, go through the following link:
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets
So if you do not actually need a Network Load Balancer, remove the annotation (the NLB also has a higher charge). But if it is a hard requirement, check the second point above to confirm that the instances you are using on AWS are compatible with a Network Load Balancer. A sketch with the annotation removed is shown below.
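For reference, a minimal sketch of the same service with the NLB annotation removed (so a classic ELB is provisioned instead), keeping the ports from the question:
kind: Service
apiVersion: v1
metadata:
  name: mlflow-server
  namespace: default
  labels:
    app: mlflow-server
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app: mlflow-server
  ports:
  - name: http
    port: 88           # external port from the question; the pod must define a container port named "http"
    targetPort: http
  - name: https
    port: 443
    targetPort: https  # likewise, the pod must define a container port named "https"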
I want to access my backend pods using an internal Kubernetes DNS name. Instead of using http://somepodip:8080/get, I want to use http://backend:8080/get to reach my backend.
I am currently running my backend pods and have hooked them up to a service.
kind: Service
apiVersion: v1
metadata:
  name: backend
spec:
  selector:
    app: myapp-backend
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 8080
This does assign my pods to the backend service, but when I try to reach http://backend/get from a frontend pod, it does not find the resource.
Am I incorrectly configuring the service?
Your service seems to be OK. The issue could be that your frontend is not server-rendered, which means that your browser is trying to look up the name backend; in that case you cannot rely on the Kubernetes service name, as your browser cannot resolve it as a valid hostname.
If you want external access by name instead of IP, check how to set up an Ingress entry: https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
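If you go that route, a minimal sketch of such an Ingress, assuming an NGINX ingress controller and a hypothetical external host name, could look like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress            # hypothetical name
spec:
  ingressClassName: nginx          # assumes an NGINX ingress controller is installed
  rules:
  - host: backend.example.com      # hypothetical external host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend          # the Service from the question
            port:
              number: 8080
Inside the cluster, however, your frontend pods can keep using the service name directly, i.e. http://backend:8080/get.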
Suppose I have a 3-node Kafka cluster set up. How do I expose it outside the cloud using a LoadBalancer service? I have read the reference material but have a few doubts.
Say, for example, below is a service for one broker:
apiVersion: v1
kind: Service
metadata:
  name: kafka-0
  annotations:
    dns.alpha.kubernetes.io/external: kafka-0.kafka.my.company.com
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
  - port: 9092
    name: outside
    targetPort: 9092
  selector:
    app: kafka
    kafka-pod-id: "0"
What are port and targetPort?
Do I set up a LoadBalancer service for each of the brokers?
Do these multiple brokers get mapped to a single public IP address of the cloud LB?
How does a service outside the k8s cluster/cloud access an individual broker? By using public-ip:port, or by using kafka-<pod-id>.kafka.my.company.com:port? Also, which port is used here, port or targetPort?
How do I specify this configuration in the Kafka broker's advertised.listeners property, given that the port can be different for services inside the k8s cluster and outside it?
Please help.
Based on the information you provided, I will try to give you some answers and some advice.
1) port is the port number that makes a service visible to other services running within the same K8s cluster. In other words, if a service wants to invoke another service running within the same Kubernetes cluster, it can do so using the value specified as port in the service spec file.
targetPort is the port on the Pod where the application is running. Your application needs to be listening for network requests on this port for the service to work.
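As an illustration of the difference (the name and port numbers below are hypothetical), other services inside the cluster would call example-broker:9092, while the container itself must listen on 9093:
apiVersion: v1
kind: Service
metadata:
  name: example-broker     # hypothetical name
spec:
  selector:
    app: example-broker
  ports:
  - name: kafka
    port: 9092             # port other in-cluster services use: example-broker:9092
    targetPort: 9093       # port the container process actually listens on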
2/3) Each broker should be exposed as a LoadBalancer and also be covered by a headless service for internal communication. There should be one additional LoadBalancer with an external IP for external connections (a headless-service sketch follows the example below).
Example of Service
apiVersion: v1
kind: Service
metadata:
  name: kafka-0
  annotations:
    dns.alpha.kubernetes.io/external: kafka-0.kafka.my.company.com
spec:
  ports:
  - port: 9092
    name: kafka-port
    protocol: TCP
  selector:
    pod-name: kafka-0
  type: LoadBalancer
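For the internal, headless side mentioned in point 2/3, a minimal sketch (the name and selector label are assumptions) could look like this:
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless     # hypothetical name
spec:
  clusterIP: None          # "None" is what makes the service headless
  ports:
  - port: 9092
    name: kafka-internal
    protocol: TCP
  selector:
    app: kafka             # assumed label shared by all broker pods
When used as a StatefulSet's serviceName, a headless service gives each broker a stable in-cluster DNS entry (e.g. kafka-0.kafka-headless) for broker-to-broker and in-cluster client traffic.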
4) You have to use kafka-<pod-id>.kafka.my.company.com:port
5) It should be set to the external address so that clients can connect to it. This article might help with understanding; a rough sketch is shown below.
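As a rough sketch only, assuming a broker image that builds its configuration from KAFKA_* environment variables (for example a Confluent-style image; the names and ports below are illustrative), the advertised listeners could be wired up in the broker container spec like this:
# Fragment of a broker container spec (hypothetical values)
containers:
- name: kafka
  image: confluentinc/cp-kafka          # assumed image that maps KAFKA_* env vars to broker properties
  env:
  - name: KAFKA_ADVERTISED_LISTENERS
    # external clients use the per-broker DNS name and the LoadBalancer port,
    # internal clients use the headless service name
    value: "EXTERNAL://kafka-0.kafka.my.company.com:9092,INTERNAL://kafka-0.kafka-headless:9093"
  - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
    value: "EXTERNAL:PLAINTEXT,INTERNAL:PLAINTEXT"
  - name: KAFKA_INTER_BROKER_LISTENER_NAME
    value: "INTERNAL"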
A similar case was discussed on GitHub, which might also help you: https://github.com/kow3ns/kubernetes-kafka/issues/3
In addition, you could also think about Ingress: https://tothepoint.group/blog/accessing-kafka-on-google-kubernetes-engine-from-the-outside-world/
Say I am running my app in GKE, and this is a multi-tenant application.
I create multiple Pods that host my application.
Now I want:
Customers 1-1000 to use Pod1
Customers 1001-2000 to use Pod2
etc.
If I have a gcloud global IP that points to my cluster, is it possible to route a request, based on the incoming IP address/domain, to the correct Pod that contains the customer's data?
You can guarantee session affinity with Services, but not in the way you are describing. Your customers 1-1000 won't use only Pod 1; they will use all the pods (since a Service does simple load balancing), but each customer, when they come back to hit your service, will be redirected to the same pod.
Note: this only holds within the time specified in (default 10800 seconds):
service.spec.sessionAffinityConfig.clientIP.timeoutSeconds
This would be the yaml file of the service:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  sessionAffinity: ClientIP
If you want to specify the time as well, this is what needs to be added (under spec):
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10
Note that the example above works when hitting a ClusterIP-type service directly (which is quite uncommon) or with a LoadBalancer-type service, but it won't work with an Ingress behind a NodePort-type service. This is because, with an Ingress, the requests come from many, randomly chosen source IP addresses.
Not with Pods by themselves, but you should be able to with Services.
Pods are intended to be stateless and indistinguishable from one another.
But you should be able to create a Deployment per customer group, and a Service per Deployment. The NGINX Ingress can then be told to map incoming requests to the specific customer-group Services by whatever attributes are relevant (for example, host name or path); a sketch is shown below.
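A minimal sketch of such an Ingress, assuming hypothetical per-group host names and a Service per customer-group Deployment:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: customer-routing             # hypothetical name
spec:
  ingressClassName: nginx
  rules:
  - host: group1.example.com         # e.g. customers 1-1000 (hypothetical host)
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-group1      # hypothetical Service in front of the first Deployment
            port:
              number: 80
  - host: group2.example.com         # e.g. customers 1001-2000 (hypothetical host)
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-group2      # hypothetical Service in front of the second Deployment
            port:
              number: 80
Host-based (or path-based) routing like this is what a standard Ingress rule expresses most naturally.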