How to set session affinity between internal services inside a Kubernetes cluster

Here is my problem: I have 3 services defined in a Kubernetes YAML file:
one front-end (website)
one back-end: stateful, for user sessions
one back-end: stateless
I need session affinity on the stateful service, but not on the stateless or front-end services. I need the session affinity to be cookie-based, not ClientIP-based.
mydomain/stateful ===> Front-End Service (3 pods) ===> Stateful Service (3 pods, needs session affinity)
mydomain/stateless ===> Front-End Service (3 pods) ===> Stateless Service (3 pods, does not need session affinity)
I tried to use an Ingress, but I fail to see how I can use it as a proxy in between 2 services inside the Kubernetes cluster. All the examples I see show how to use an Ingress as a router for requests coming from outside the cluster.
Here is my poc.yaml so far:
####################################################################
######################### STATEFUL BACKEND #########################
# Deployment for pocbackend containers, listening on port 3000
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateful-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stateful-backend
      tier: backend
  template:
    metadata:
      labels:
        app: stateful-backend
        tier: backend
    spec:
      containers:
      - name: pocbackend
        image: pocbackend:2.0
        ports:
        - name: http
          containerPort: 3000
---
# Service for Stateful containers, listening on port 3000
apiVersion: v1
kind: Service
metadata:
  name: api-stateful
spec:
  selector:
    app: stateful-backend
    tier: backend
  ports:
  - protocol: TCP
    port: 3002
    targetPort: http
  #sessionAffinity: ClientIP
---
#####################################################################
######################### STATELESS BACKEND #########################
# Deployment for pocbackend containers, listening on port 3000
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateless-backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stateless-backend
      tier: backend
  template:
    metadata:
      labels:
        app: stateless-backend
        tier: backend
    spec:
      containers:
      - name: pocbackend
        image: pocbackend:2.0
        ports:
        - name: http
          containerPort: 3000
---
# Service for Stateless containers, listening on port 3000
apiVersion: v1
kind: Service
metadata:
  name: api-stateless
spec:
  selector:
    app: stateless-backend
    tier: backend
  ports:
  - protocol: TCP
    port: 3001
    targetPort: http
---
#############################################################
######################### FRONT END #########################
# Deployment of the container pocfrontend listening on port 3500
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
      tier: frontend
  template:
    metadata:
      labels:
        app: frontend
        tier: frontend
    spec:
      containers:
      - name: pocfrontend
        image: pocfrontend:2.0
        ports:
        - name: http
          containerPort: 3500
---
# Service exposing frontend on node port 85
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  type: LoadBalancer
  selector:
    app: frontend
    tier: frontend
  ports:
  - protocol: TCP
    port: 85
    targetPort: http
Do you know how to solve my problem?
Thanks!

Natively, Kubernetes itself does not provide cookie-based session affinity at the Service level.
The only way that comes to my mind is to use Istio and its DestinationRules. Taken from the Istio manual:
DestinationRule defines policies that apply to traffic intended for a service after routing has occurred. These rules specify configuration for load balancing, connection pool size from the sidecar, and outlier detection settings to detect and evict unhealthy hosts from the load balancing pool.
This document shows how to configure sticky sessions with Istio; a sketch adapted to your setup follows.
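For illustration, a minimal sketch of such a DestinationRule, adapted to the api-stateful Service from the question. The cookie name and TTL are assumptions, and the pods involved need the Istio sidecar injected:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: api-stateful-sticky
spec:
  host: api-stateful            # the in-cluster Service from the question
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:             # cookie-based affinity, as requested
          name: poc-session     # hypothetical cookie name; Istio creates it if absent
          ttl: 3600s            # assumed session lifetime

With this in place, requests from the front-end pods to api-stateful carrying the same cookie are routed to the same backend pod.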

Related

How to connect 2 webapps in 2 NodePort services?

I have a single node k8s cluster with 2 web applications running on 2 NGINX k8s pods.
nginx-deployment1 --> WEBAPP1 --> nginx-svc-app1 --> <K8s_controler_IP>:30080/webapp1
nginx-deployment2 --> WEBAPP2 --> nginx-svc-app2 --> <K8s_controler_IP>:30081/webapp2
It connects only to the respective NodePort IP, but not to <K8s_controler_IP>:30080/webapp1 and <K8s_controler_IP>:30081/webapp2. Could you please help me understand what I am missing?
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment1
  labels:
    name: nginx-app1
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx-app1
  template:
    metadata:
      labels:
        name: nginx-app1
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-app1
  labels:
    name: nginx-svc-app1
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: app1_port
  selector:
    name: nginx-app1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment2
  labels:
    name: nginx-app2
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx-app2
  template:
    metadata:
      labels:
        name: nginx-app2
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-app2
  labels:
    name: nginx-svc-app2
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30081
    name: http
  selector:
    name: nginx-app2
Your port configurations are incomplete. You need to set targetPort: 80 for both Services. With the NodePort Service type, the port field only defines the port on the Service itself. So what you are specifying means:
Receive incoming traffic on the Service endpoint on port 80.
Receive incoming traffic on node port 30080.
And you are not specifying a targetPort, which is the port that receives the traffic in the pods of the Deployment. So the traffic is coming in, but it is not being forwarded to the container port you intend.
You need to add targetPort: 80 to both Services, as sketched below.
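For example, the first Service could look like the following sketch. Only targetPort is added; the rest is taken from the question, except that the port name is changed to app1-port because Kubernetes does not allow underscores in port names:

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-app1
  labels:
    name: nginx-svc-app1
spec:
  type: NodePort
  selector:
    name: nginx-app1
  ports:
  - name: app1-port        # renamed: port names must be lowercase alphanumerics and '-'
    port: 80               # port on the Service's cluster IP
    targetPort: 80         # containerPort of the nginx pods
    nodePort: 30080        # port exposed on every node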
Additionally, you're not running 2 pods; you are running 2 deployments, each with 3 pods (replicas) in them. The traffic you send to each Service will be distributed by the Service across all the pods that match its selector, i.e. all 3 of your pods. It's important to understand how this communication takes place.
Also, k8s_controller_ip is a misleading way of describing the IP address. You should use the term k8s node IP (or k8s_master_node_ip). The IP address belongs to the node that is running the cluster, not to any controller.

How does a gateway connect to other services in Kubernetes?

I am attaching an image of my application flow. Here the Gateway and the other services are created using NestJS. The request for any API comes through the Gateway.
The Gateway pod and the API pods communicate using the TCP protocol.
After deployment, the Gateway is not able to discover any API pods.
I am attaching the YAML files for both the Gateway and the pods.
Please let me know what mistake I am making in the YAML files.
[Application diagram]
Gateway YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
      - name: gateway-container
        image: nest-api-gateway:v8
        ports:
        - containerPort: 1000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: gateway-svc
spec:
  selector:
    app: roushan-app
  ports:
  - name: gateway-svc-container
    protocol: TCP
    port: 80
    targetPort: 1000
  type: LoadBalancer
Pod YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: pod1-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
      - name: pod1-container
        image: nest-api-pod1:v2
        ports:
        - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: pod1-srv
spec:
  selector:
    app: roushan-app
  ports:
  - name: pod1-svc-container
    protocol: TCP
    port: 80
    targetPort: 4000
The gateway should be able to access the services by their DNS names, for example pod1-srv.roushan.svc.cluster.local (or just pod1-srv from within the same namespace). If this does not work, you may need to look at the Kubernetes DNS setup.
I have not used AKS; it may use a different cluster domain than svc.cluster.local.
YAML points
Ideally, you should use different selectors for each deployment.
You are using the same selectors for both deployments, the gateway and the application deployment.
A Service forwards traffic to pods based on selectors and labels, so this might redirect a pod1-srv request to a gateway pod (or vice versa). A sketch of the fix follows below.
Networking
Your gateway pods can reach an internal service by its service name alone, e.g. pod1-srv, if both are in the same namespace.
If the gateway and the application are in different namespaces, you have to call each other like http://<servicename>.<namespace>.svc.cluster.local
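A minimal sketch of the labeling fix, using hypothetical label values roushan-gateway and roushan-api1 so that each Service selects only its own pods (everything else is kept from the question):

# Service for the gateway pods only
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: gateway-svc
spec:
  type: LoadBalancer
  selector:
    app: roushan-gateway      # hypothetical label, set on the gateway Deployment's pod template
  ports:
  - name: gateway-svc-container
    protocol: TCP
    port: 80
    targetPort: 1000
---
# Service for the pod1 API pods only
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: pod1-srv
spec:
  selector:
    app: roushan-api1         # hypothetical label, set on pod1-deployment's pod template
  ports:
  - name: pod1-svc-container
    protocol: TCP
    port: 80
    targetPort: 4000

The matchLabels and pod template labels in the two Deployments would have to be updated to the same values, otherwise the Services select no pods.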

How to contact a random pod of one service (multiple instances of the same pod) on the same node in Kubernetes

I have one single node. On this node, I have 2 applications, each with multiple instances (3 pods each).
App A wants to contact App B.
My issue is: App A always contacts the same pod of App B.
I would like it to alternate pods (round-robin load balancing, for example).
For example:
first request: AppPod3 responds
second request: AppPod1 responds
third request: AppPod2 responds
How can I do that?
Thank you so much for your help ...
You can see my configuration of App B below.
I have tried to set timeoutSeconds for sessionAffinity, but it's not working ...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: AppB
spec:
  selector:
    matchLabels:
      app: AppB
  replicas: 3
  template:
    metadata:
      labels:
        app: AppB
    spec:
      containers:
      - name: AppB-container
        image: image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: AppB
  labels:
    app: svc-AppB
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: AppB
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 1
The Service should use the backend pods in round-robin fashion by default. You don't need the sessionAffinity settings if the pods are stateless; otherwise you will be redirected to the same pod based on the client IP.
Maybe you can add logging to the pods and observe when they are accessed. Subsequent calls to the Service should be redirected to the pods in round-robin fashion with minimal Service configuration.
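A sketch of the asker's Service with the affinity settings dropped (otherwise identical to the manifest in the question):

apiVersion: v1
kind: Service
metadata:
  name: AppB
  labels:
    app: svc-AppB
spec:
  selector:
    app: AppB
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  # no sessionAffinity: kube-proxy then spreads new connections across the AppB pods
  # (random or round-robin, depending on the proxy mode)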
Update: this is the deployment I am using. It balances across the pods as expected; each curl request sent to the Service's clusterIP:port ends up on a different pod. My k8s installation is on premises, v1.18.3.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: baseDeployment
  labels:
    app: baseApp
  namespace: nm-app1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: baseApp
  template:
    metadata:
      labels:
        app: baseApp
    spec:
      containers:
      - name: baseApp
        image: local-registry:5000/baseApp
        ports:
        - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: baseService
  namespace: nm-app1
spec:
  selector:
    app: baseApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
@user3683760 you will have to apply load-balancing strategies in your deployment. Kubernetes has a bunch of choices for balancing the traffic between your services. I would suggest playing around with a few patterns and seeing what best suits your needs.

How to use an Istio VirtualService between front and back services

I am totally new to Istio, and the pitch looks very exciting. However, I can't make it work, which probably means I am not using it properly.
My goal is to implement session affinity between 2 services, which is why I originally ended up using Istio. However, I am doing a very basic test, and it does not seem to work:
I have a Kubernetes demo app which has a front service, a stateful service, and a stateless service. From a browser, I access the front service, which dispatches the request to either the stateful or the stateless service, using the K8s service names as URLs: http://api-stateful or http://api-stateless.
I want to declare a VirtualService to intercept requests sent from the front service to the stateful service. I don't declare it as a Gateway, as I understood a Gateway sits at the external border of the K8s cluster.
I use Docker on Windows with Istio 1.6.
I copy my YAML file below. The basic test I want to do: reroute traffic for api-stateful to api-stateless, to validate that the VirtualService is taken into account. And it does not work. Do you see what is wrong? Is this a wrong usage of VirtualService? My Kiali console does not detect any problem in the setup.
####################################################################
######################### STATEFUL BACKEND #########################
# Deployment for pocbackend containers, listening on port 3000
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateful-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stateful-backend
      tier: backend
  template:
    metadata:
      labels:
        app: stateful-backend
        tier: backend
    spec:
      containers:
      - name: pocbackend
        image: pocbackend:2.0
        ports:
        - name: http
          containerPort: 3000
---
# Service for Stateful containers, listening on port 3000
apiVersion: v1
kind: Service
metadata:
  name: api-stateful
spec:
  selector:
    app: stateful-backend
    tier: backend
  ports:
  - protocol: TCP
    port: 3002
    targetPort: http
---
#####################################################################
######################### STATELESS BACKEND #########################
# Deployment for pocbackend containers, listening on port 3000
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateless-backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stateless-backend
      tier: backend
  template:
    metadata:
      labels:
        app: stateless-backend
        tier: backend
    spec:
      containers:
      - name: pocbackend
        image: pocbackend:2.0
        ports:
        - name: http
          containerPort: 3000
---
# Service for Stateless containers, listening on port 3000
apiVersion: v1
kind: Service
metadata:
  name: api-stateless
spec:
  selector:
    app: stateless-backend
    tier: backend
  ports:
  - protocol: TCP
    port: 3001
    targetPort: http
---
#############################################################
######################### FRONT END #########################
# Deployment of the container pocfrontend listening on port 3500
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
      tier: frontend
  template:
    metadata:
      labels:
        app: frontend
        tier: frontend
    spec:
      containers:
      - name: pocfrontend
        image: pocfrontend:2.0
        ports:
        - name: http
          containerPort: 3500
---
# Service exposing frontend on node port 30000
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  type: NodePort
  selector:
    app: frontend
    tier: frontend
  ports:
  - protocol: TCP
    port: 3500
    targetPort: http
    nodePort: 30000
---
##############################################################
############ ISTIO PROXY FOR API-STATEFUL SERVICE ############
##############################################################
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: api-stateful-proxy
spec:
  hosts:
  - api-stateful
  http:
  - route:
    - destination:
        host: api-stateless
As mentioned in the comments, this can be fixed with a DestinationRule with a sticky session configuration.
An example of that can be found in the Istio documentation:
LoadBalancerSettings
Load balancing policies to apply for a specific destination. See Envoy's load balancing documentation for more details.
For example, the following rule uses a round robin load balancing policy for all traffic going to the ratings service.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
The following example sets up sticky sessions for the same ratings service using a consistent-hashing-based load balancer with the user cookie as the hash key.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: user
          ttl: 0s

Handling long Pod response times in a Service

Given the following configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
    name: server
  selector:
    app: nginx
How would one configure the Service and Deployment here (or, if needed, an Ingress object) so that when a Pod takes more than n seconds to return an HTTP response, the Service will retry the request on another nginx-deployment Pod?
Kubernetes Services are based on simple iptables rules.
Traffic is only NAT'ed to a destination pod; there is no layer where you can adjust things such as timeouts or quality of service.