sticky sessions kubernetes service with apache and ingress - kubernetes

I have a k8s Apache server which uses mod_jk to connect to different Kubernetes services. mod_jk is mostly a pass-through here, since the workers point at k8s service URLs. I am now trying to achieve sticky sessions based on the JSESSIONID cookie. I have a Traefik ingress controller which directs all requests to the k8s Apache service, and TLS is terminated at the ingress. What is the best way to achieve sticky sessions?
I tried enabling sessionAffinity: ClientIP, but the client IP is always the same: it is the ingress controller's IP.
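For reference, this is roughly what the attempted ClientIP affinity looks like on a Service (the timeout value below is just an illustrative default, not from the question):

```yaml
# Session affinity by client IP -- ineffective behind an ingress,
# because every connection appears to come from the controller's IP.
spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800  # affinity window in seconds
```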

In the Kubernetes Ingress object you have to declare, via the annotations, which kind of ingress controller you are using; in your case, the Traefik ingress controller. Please note that sticky sessions in Traefik are defined on the Service object with annotations. Instead of a random cookie name, we set it to JSESSIONID.
Ingress object definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  labels:
    app: session-affinity
  name: session-affinity
spec:
  tls:
  - hosts:
    - <address>
    secretName:
  rules:
  - host: <address>
    http:
      paths:
      - path: /
        backend:
          serviceName: session-affinity
          servicePort: 8080
Service object definition:
apiVersion: v1
kind: Service
metadata:
  name: session-affinity
  labels:
    app: session-affinity
  annotations:
    traefik.ingress.kubernetes.io/affinity: "true"
    traefik.ingress.kubernetes.io/session-cookie-name: "JSESSIONID"
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
You can find more information in the Traefik documentation.

Related

EKS Ingress with Single ALB, multiple namespaces, and External DNS

I'm trying to configure a single ALB across multiple namespaces in AWS EKS; each namespace has its own ingress resource.
I'm trying to configure the ingress controller aws-load-balancer-controller on k8s v1.20.
The problem I'm facing is that each time I deploy a new service, it spins up a new Classic Load Balancer in addition to the shared ALB specified in the ingress config.
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/
# service-realm1-dev.yaml:
apiVersion: v1
kind: Service
metadata:
  name: sentinel
  annotations:
    external-dns.alpha.kubernetes.io/hostname: realm1.dev.sentinel.mysite.io
  namespace: realm1-dev
  labels:
    run: sentinel
spec:
  ports:
  - port: 5001
    name: ps1
    protocol: TCP
  selector:
    app: sentinel
  type: LoadBalancer
# ingress realm1-app
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: sentinel-ingress
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
    alb.ingress.kubernetes.io/success-codes: 200-300
    alb.ingress.kubernetes.io/healthy-threshold-count: "2"
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "2"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}]'
  name: sentinel-ingress-controller
  namespace: realm1-dev
spec:
  rules:
  - host: realm1.dev.sentinel.mysite.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          servicePort: use-annotation
          serviceName: sentinel
Also, I'm using ExternalDNS to create a Route 53 record set, and then I use that same DNS name to route requests to the specific EKS service. Is there any issue with this approach?
I was able to make it work using only one ALB.
#YYashwanth, using NGINX was my fallback plan. I'm trying to keep the configuration as simple as possible; maybe in the future, when we try to deploy our solution on other cloud providers, we will use the NGINX ingress controller.
1- To start, the Service type should be NodePort; using LoadBalancer will create a Classic Load Balancer.
apiVersion: v1
kind: Service
metadata:
  name: sentinel-srv
  annotations:
    external-dns.alpha.kubernetes.io/hostname: operatorv2.dev.sentinel.mysite.io
  namespace: operatorv2-dev
  labels:
    run: jsflow-sentinel
spec:
  ports:
  - port: 80
    targetPort: 80
    name: ps1
    protocol: TCP
  selector:
    app: sentinel-app
  type: NodePort
2- Second, we need to configure group.name so that the ingress controller merges all ingress configurations, using the annotation alb.ingress.kubernetes.io/group.name:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
    alb.ingress.kubernetes.io/healthy-threshold-count: "2"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80} ]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/success-codes: "200"
    alb.ingress.kubernetes.io/tags: createdBy=aws-controller
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/unhealthy-threshold-count: "2"
    external-dns.alpha.kubernetes.io/hostname: operatorv2.sentinel.mysite.io
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: sentinel-group
  name: dev-operatorv2-sentinel-ingress-controller
  namespace: operatorv2-dev
spec:
  rules:
  - host: operatorv2.dev.sentinel.mysite.io
    http:
      paths:
      - path: /*
        backend:
          servicePort: 80
          serviceName: sentinel-srv
A feature released around November 2021 in the AWS Load Balancer Controller (~v2.0) allows you to create just a single ALB for multiple Ingresses, across multiple namespaces if required, by using the annotation alb.ingress.kubernetes.io/group.name: [InsertValue]
All ingress resources created with the same group.name will share the same ALB.
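As a sketch (namespace and host names below are hypothetical), an Ingress in a second namespace reuses the same ALB simply by carrying the same group.name annotation:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sentinel-ingress-realm2
  namespace: realm2-dev                                   # a different namespace
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: sentinel-group  # same group => same ALB
spec:
  rules:
  - host: realm2.dev.sentinel.mysite.io
    http:
      paths:
      - path: /*
        backend:
          serviceName: sentinel-srv
          servicePort: 80
```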
Unfortunately, the tool being used is the wrong one for your use case. The AWS Load Balancer Controller will create a new load balancer for every ingress resource and, I think, a Network Load Balancer for every service resource.
For your use case, the best option is the NGINX ingress controller. You can deploy it in any one namespace, then create ingress resources throughout your cluster and have path/hostname-based routing across the whole cluster.
In case you have many teams/projects/applications and you want to avoid a single point of failure where all your apps depend on one ELB, you can deploy more than one NGINX ingress controller in your k8s cluster.
You just need to define an ingress-class variable in your NGINX ingress controller deployment and add that ingress-class annotation to your applications. This way, applications carrying the ingress-class: nginxA annotation will be served by the NGINX ingress controller that has ingress-class=nginxA in its deployment.
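A minimal sketch of that pairing (the class name nginxA and all resource names are hypothetical):

```yaml
# In the nginx ingress controller Deployment, the container args include:
#   --ingress-class=nginxA
# An application Ingress then selects that controller by annotation:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: team-a-app
  annotations:
    kubernetes.io/ingress.class: "nginxA"  # matches --ingress-class=nginxA
spec:
  rules:
  - host: app-a.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: team-a-svc
          servicePort: 80
```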

Blazor server through k8s ingress controller

I've written a small Blazor application which works well when containerized and accessed through k3s port forwarding, but I'm struggling to find guidance on how that application should be correctly exposed via an ingress controller. To show this:
If I run the Blazor application and access it via port forwarding, Blazor page routing works as expected:
kubectl port-forward deployment/ 8000:80
However, when I add a ClusterIP service to the deployment and connect to it through the Traefik ingress controller, the app misbehaves, and changing the route gives a 404 page not found error.
My ClusterIP Service and Ingress setup:
ClusterIP:
apiVersion: v1
kind: Service
metadata:
  name: driverpassthrough
spec:
  selector:
    app: driverpassthrough
  ports:
  - name: ui
    protocol: TCP
    port: 8010
    targetPort: 80
Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-test
  annotations:
    kubernetes.io/ingress.class: "traefik"
    traefik.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /passthrough
        backend:
          serviceName: driverpassthrough
          servicePort: 8010
So in my case, I use k3s and Traefik, and I have 3 replicas of my Blazor Server app. To make it work, I had to enable sticky sessions (via annotations) on the ClusterIP Service, like so:
Service
apiVersion: v1
kind: Service
metadata:
  name: qscale-healthcheck-service
  annotations:
    traefik.ingress.kubernetes.io/affinity: "true"
  labels:
    name: qscale-healthcheck-service
spec:
  type: ClusterIP
  selector:
    app: healthcheck
  ports:
  - name: http
    port: 80
    targetPort: 80
Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: qscale-healthcheck-service
          servicePort: 80
This is the link where I found the annotation: Traefik Doc

K8s service LB to external services w/ nginx-ingress controller

Is it possible to configure k8s nginx-ingress as a LB so that a k8s Service actively connects to an external backend hosted on external hosts/ports (where one backend is enabled at a time, connecting back to the cluster service)?
Similar to envoy proxy? This is on vanilla K8s, on-prem.
So rather than balancing load from client -> cluster -> service, I am looking for service -> nginx-ingress -> external-backend.
Define a Kubernetes Service with no selector. Then you need to define an Endpoints object, where you put the IP and port. Normally you do not define Endpoints for Services, but because the Service will not have a selector, you need to provide an Endpoints object with the same name as the Service.
Then you point the Ingress to the Service.
Here's an example that exposes an Ingress on the cluster and sends the traffic to 192.168.88.1 on TCP 8081.
apiVersion: v1
kind: Service
metadata:
  name: router
  namespace: default
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8081
---
apiVersion: v1
kind: Endpoints
metadata:
  name: router
  namespace: default
subsets:
- addresses:
  - ip: 192.168.88.1
  - ip: 192.168.88.2 # As per question below
  ports:
  - port: 8081
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: router
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: my-router.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: router
          servicePort: 80
While defining the ingress, use the nginx.ingress.kubernetes.io/configuration-snippet annotation. Also enable the proxy protocol using use-proxy-protocol: "true".
Using this annotation you can add additional configuration to the NGINX location block.
Please take a look: ingress-nginx-issue, advanced-configuration-with-annotations.
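As an illustration (the injected directive is chosen arbitrarily, and the ConfigMap name/namespace depend on how the controller was installed), the configuration-snippet annotation goes on the Ingress, while use-proxy-protocol is a key in the controller's ConfigMap:

```yaml
# On the Ingress object:
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Forwarded-Proto $scheme;
---
# In the ingress-nginx controller ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
```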

Ingress without ip address

I created an Ingress to expose my internal service.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app
        backend:
          serviceName: my-app
          servicePort: 80
But when I try to get this Ingress, it shows no IP address.
NAME          HOSTS         ADDRESS   PORTS   AGE
app-ingress   example.com             80      10h
The Service is shown below.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - name: my-app
    nodePort: 32000
    port: 3000
    targetPort: 3000
  type: NodePort
Note: I'm guessing, because of the other question you asked, that you are trying to create an ingress on a cluster created manually with kubeadm.
As described in the docs, in order for an Ingress to work, you need to install an ingress controller first. An Ingress object itself is merely a slice of configuration for the installed ingress controller.
The NGINX-based controller is one of the most popular choices. Similarly to Services, in order to get a single failover-enabled VIP for your ingress, you need to use MetalLB. Otherwise, you can deploy ingress-nginx over a NodePort: see details here.
Finally, servicePort in your Ingress object should be 3000, the same as the port of your Service.
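With that last fix applied, the backend stanza of the Ingress would read:

```yaml
backend:
  serviceName: my-app
  servicePort: 3000  # matches the Service's port, not its targetPort or nodePort
```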

Best approach to expose kubernetes ingress?

I'm currently using the NGINX ingress to expose my apps externally; my current approach is shown below. My question: is this the best way to do it? If not, what would be best practice?
nginx ingress controller service:-
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
    protocol: "TCP"
  - port: 443
    name: https
    protocol: "TCP"
  selector:
    app: nginx-ingress-lb
ingress:-
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: app-ingress
spec:
  tls:
  - hosts:
    - myservice.example.com
    secretName: sslcerts
  rules:
  - host: myservice.example.com
    http:
      paths:
      - backend:
          serviceName: myservice
          servicePort: 80
        path: /
So basically I have my NGINX ingress controller pods running, exposed via a Service of type LoadBalancer, and I have rules defined to determine the routes.
The best approach depends on the environment in which you build your cluster: a cloud provider or a bare-metal solution.
In your example, I guess you are using a cloud provider as the load balancer provisioner, which delivers an external IP address as the entry point for your NGINX Ingress Controller. Therefore, you can scale your ingress on demand with various features and options.
I found this article very useful for comparing implementations of the NGINX Ingress Controller on cloud and bare-metal environments.