As per my understanding
An Ingress maintains a set of rules, e.g. which path maps to which Service.
The Ingress controller routes each request to a Service based on the rules specified in the Ingress.
A Service points to several pods via label selectors; a request is forwarded to any one of the pods behind the Service.
Now I want to add sticky sessions to one of the Services, "auth-service": I want requests from the same user to reach the same pod.
My Service is of type ClusterIP.
My Ingress controller is the AWS Load Balancer controller.
If my understanding is correct, I should add the sticky session to the Service (is that right?).
But when I google, different places say I can add it to the Ingress or to the Ingress controller.
Where should the sticky session be applied, and how do I do it?
If it is applied on the Ingress controller or the Ingress, how will the Service use that to route the request to the right pod?
Thanks in advance.
You should add the sticky-session annotations to the Ingress resource.
Once session affinity is configured on the Ingress, the Ingress controller responds to the first request with a Set-Cookie header. The generated cookie value is mapped to the pod that served the request, so all subsequent requests carrying that cookie are served by the same pod.
You also need to enable the proxy protocol on the load balancer, plus the matching annotations on the Ingress, if you want to capture the client IP.
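For the AWS Load Balancer controller specifically, stickiness is a target-group attribute. A minimal sketch, assuming the "auth-service" from the question listens on port 80 and the controller watches the alb ingress class (the path and cookie duration are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: auth-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Register pod IPs directly as targets; with "instance" targets,
    # stickiness would only pin a client to a node, not to a pod.
    alb.ingress.kubernetes.io/target-type: ip
    # Enable ALB cookie-based stickiness on the target group.
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=1800
spec:
  rules:
  - http:
      paths:
      - path: /auth
        pathType: Prefix
        backend:
          service:
            name: auth-service
            port:
              number: 80
```

With target-type: ip the load balancer holds the cookie-to-pod mapping itself, so the ClusterIP Service is only used to discover the pod endpoints; kube-proxy's own load balancing is bypassed, which is what makes the session actually stick to one pod.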
I have built two services in a k8s cluster. How can they interact with each other? If I want to make an HTTP request from one service to the other, I know I can't use localhost, but how do I know the host when I am coding?
Service objects are automatically exposed in DNS as <servicename>.<namespace>.svc.<clusterdomain>, where clusterdomain is usually cluster.local. The default resolv.conf allows relative lookups, so if the service is in the same namespace you can use just the name; otherwise use <servicename>.<namespace>.
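In your application code, the usual pattern is not to hard-code the host at all but to inject the service DNS name through configuration. A minimal sketch, with all names (frontend, backend, port 8080, the image) hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: app
        image: example/frontend:latest   # hypothetical image
        env:
        # In-cluster DNS name of the other service; application code
        # reads this variable instead of hard-coding a host.
        - name: BACKEND_URL
          value: "http://backend.default.svc.cluster.local:8080"
```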
I have a pretty strange use-case in which we need to have multiple concurrent connections to the same ReplicaSet but each one of them needs to go to a different instance.
Is there any way to set a label based on the ordinal index of an instance in the replica set?
In that way I could map different ports to different instances.
Thanks in advance
Assuming HTTP-based cookies are what you are looking for, you can use an ingress controller and so-called sticky sessions / session affinity.
The ingress controller replies to the first request with a Set-Cookie header. The value of the cookie maps to a specific pod replica. When a subsequent request comes back, the client browser attaches the cookie, and the ingress controller is therefore able to route the traffic to the same pod replica.
Basically, by setting the appropriate annotations on the Ingress object, you make sure that subsequent requests are still sent to the same pod.
The NGINX ingress controller has a good example and documentation for this.
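For reference, a minimal sketch of those NGINX ingress controller affinity annotations (the service name, path, and cookie settings are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Ask the controller to issue an affinity cookie and route on it.
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service   # placeholder
            port:
              number: 80
```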
Alternatively, you can use Traefik as the ingress controller. It also supports sticky sessions (they are defined not at the Ingress level but in Traefik's IngressRoute).
How do I reach my API running in a Kubernetes namespace other than default? Let's say I have pods running in 3 namespaces: default, dev, and prod. I have an ingress load balancer installed and the routing configured. I have no problem accessing the default namespace: https://localhost/myendpoint.... But how do I access the APIs running different image versions in the other namespaces, e.g. dev or prod? Do I need additional configuration in the Service or Ingress files?
EDIT: my pods are RESTful APIs that communicate over HTTP. All I'm asking is how to access a pod that runs in a namespace other than default. The deployments communicate with each other with no problem. Let's say I have a front-end application running and want to access it from the browser. I can do that when the pods are in the default namespace by hitting http://localhost/path..., but if I delete all the pods from the default namespace and move all the services and deployments into the dev namespace, I can no longer access it from the browser with the same URL. Is there a specific path for different namespaces, like http://localhost/dev/path? Do I need to configure it somewhere?
Hopefully that's clear enough. Thank you.
Route traffic with Ingress to Service
When you want to route requests from external clients, via an Ingress, to a Service, you should put the Ingress and Service objects in the same namespace. I recommend using different domains in your Ingress for the different environments.
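For example (hypothetical hosts, service names, and namespaces), one Ingress per environment, each next to its own Service:

```yaml
# In the dev namespace, alongside the dev Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: dev
spec:
  rules:
  - host: dev.example.com     # hypothetical per-environment domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp       # Service in the same namespace (dev)
            port:
              number: 80
---
# Same shape in prod, with its own host.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: prod
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
```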
Route traffic from Service to Service
When you want to route traffic from a pod in your cluster to a Service, possibly in another namespace, it is easiest to use service discovery with DNS, e.g. send the request to:
<service-name>.<namespace>.svc.<cluster-domain>
This is most likely:
<service-name>.<namespace>.svc.cluster.local
I am trying to deploy an application with load balancing within Kubernetes.
My intended deployment is as follows:
Ideally, the application is deployed as a set of pods using a k8s Deployment of type "backend".
Normally, user instances are stored in an archive. Upon request, an instance is restored dynamically into one of the pods, stays there for a TTL (say 30 minutes), and is then deleted and backed up to the archive.
Ideally, the load balancer is deployed as a set of pods using a k8s Deployment of type "frontend".
Ideally, the frontend is configured as layer-7 session sticky with "sticky = host", where the host equals the UID of a backend pod.
A user requests the service with a SOAP message, which contains the parameters "host" and "user" in its body.
When a SOAP message reaches the frontend, the "host" value is extracted from the message body.
If the "host" value is valid, the SOAP message is forwarded to the corresponding backend pod (whose UID equals the host value); otherwise, a random backend pod is assigned.
(Processing from here on is application specific.)
In a backend pod, the application checks for the user instance using the value of "user".
If it already exists, it is used as-is; otherwise the pod tries to restore it from the archive; if restoring fails (new user), a new user instance is created.
I searched around and did not find any similar examples, especially for layer-7 session-sticky configuration and for custom extraction of the sticky value from the incoming message body.
This sounds like a use case where you are doing authentication through the front-end load balancer. Have you looked at Istio and Ambassador? Istio and Envoy could provide the service mesh to route the requests to the pods. You would then write a custom plugin module for Ambassador to create the specific routing and authentication mechanism you are seeking.
Example of Ambassador custom authentication service: https://www.getambassador.io/user-guide/auth-tutorial
https://www.getambassador.io/user-guide/with-istio
This custom sticky-session routing can also be done using other API gateways while still using Istio for routing to the different pods. However, it would be best if the pods are defined as separate Services, to allow easier segmentation by the API gateway (Ambassador, Kong, NGINX) based on the parameters of the message body.
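Purely as an illustration of the Istio half of this: if a gateway or auth filter has already copied the SOAP "host" value into an HTTP header (x-target-host here, a made-up name), and each backend pod is exposed as its own Service as suggested above, a VirtualService could route on that header along these lines:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend-routing
spec:
  hosts:
  - backend.default.svc.cluster.local    # hypothetical umbrella Service
  http:
  # If the extracted "host" value matches a known pod, send the request
  # to that pod's dedicated Service.
  - match:
    - headers:
        x-target-host:
          exact: backend-pod-a           # hypothetical per-pod Service
    route:
    - destination:
        host: backend-pod-a.default.svc.cluster.local
  # Fallback: no valid host value, pick any backend.
  - route:
    - destination:
        host: backend.default.svc.cluster.local
```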
Inside a Kubernetes cluster I am running 1 node with 2 deployments: a React front end and a .NET Core app. I also have a LoadBalancer service for the front-end app. (All working: I can port-forward to see the backend deployment working.)
Question: I'm trying to get the front end and the API to communicate. I know I can do that with an external-facing load balancer, but is there a way to do it using the ClusterIPs, without an external IP for the back end?
The reason we are interested in this is that it adds one more layer of security: keeping the API vnet-only removes one more entry point.
If it helps, we are deploying in Azure with AKS. I know it has some deployment quirks sometimes.
Pods running in the cluster can talk to each other using a ClusterIP service, which is the default service type. You don't need a LoadBalancer service to make two pods talk to each other. According to the docs on this topic:
ClusterIP exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
As explained in the Discovery documentation, if both pods (frontend and API) are running in the same namespace, the frontend just needs to send requests to the name of the backend Service.
If they are running in different namespaces, the frontend needs to use a fully qualified domain name to be able to talk to the backend:
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
You can find more info about how DNS works in Kubernetes in the docs.
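As a concrete sketch of the docs example just quoted (the selector and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-ns
spec:
  # type defaults to ClusterIP: cluster-internal only, no external IP.
  type: ClusterIP
  selector:
    app: my-api          # hypothetical pod label
  ports:
  - port: 80             # port exposed on the cluster IP
    targetPort: 8080     # port the API container listens on
```

A pod in my-ns can then reach it as http://my-service, while a pod in any other namespace uses http://my-service.my-ns.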
The problem with this configuration is the assumption that the frontend app will be reaching out to the API via the internal cluster network. It will not: my app runs in the client's browser, and the browser cannot reach Services and pods inside my cluster.
My cluster will need something like nginx or another external load balancer to allow my client-side API calls to reach my API.
You could alternatively use your front-end app as a proxy, but that is highly inadvisable!
I'm trying to get the front end and api to communicate
By API, if you mean the Kubernetes API server: first set up a service account and token for the front-end pod to communicate with the Kubernetes API server, following the service-account steps in the Kubernetes documentation.
is there a way to do that using the clusterIPs and not have an external IP for the back end
Yes, this is possible, and it is more secure if external access is not needed for the service. A Service of type ClusterIP will not have an external IP, and the pods can talk to each other using ClusterIP:port within the cluster.