EKS, ELB, Nginx Ingress - Right Combination for Sticky Sessions/Session Affinity and showing real Client IP - kubernetes

Trying to figure out what the right settings are so that the clients' real IP shows up in our logs and session affinity works.
I am not getting the client IPs in the logs now, and if I move from 1 pod to 2, I can no longer log in, etc. The nginx logs don't seem to show anything indicating a problem.
Values.yml
controller:
  config:
    use-forwarded-headers: "true"
    use-proxy-protocol: "true"
    proxy-real-ip-cidr: "172.21.0.0/16"
  replicaCount: 2
  image:
    repository: quay.io/kubernetes-ingress-controller/nginx-ingress-controller
    tag: "0.28.0"
  ingressClass: ingress-internal
  publishService:
    enabled: true
  service:
    externalTrafficPolicy: Local
    targetPorts:
      http: 80
      https: http
    loadBalancerSourceRanges: ["0.0.0.0/0"]
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:523447765480:certificate/3972f84d-c167-43da-a207-8be0b955df48"
      service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "Name=idaas2-ingress-internal,cluster=idaas2,Environment=prd,Project=idaas2,Customer=idauto"
      service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "True"
      service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
      service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-02ca93f2fe8cbc950"
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
Ingress Annotation
ingress:
  annotations:
    kubernetes.io/ingress.class: ingress-internal
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    monitor.stakater.com/enabled: "false"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
      grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
I'm not even sure where to continue searching; I can provide any additional information required.

Not sure how to fix the real client IP, but I got sticky sessions working with this in the Ingress metadata:
annotations:
  kubernetes.io/ingress.class: alb
  alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=1200
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/target-type: ip

I just increased my replicas from 1 to 2 and ran into the same situation. I really like knowing where a user of my application is connecting from (I don't want to dig into the Nginx controller logs to find their IP; I sometimes want to receive it by email).
But now everything is OK (after 24 hours of thinking).
I am using proxy protocol v2 (to get the real IP) and session affinity, both combined with Nginx.
Here is a peek at my setup:
helm upgrade nginx ingress-nginx/ingress-nginx \
  --set-string controller.config."use-gzip"="true" \
  --set-string controller.config."http-redirect-code"="301" \
  --set-string controller.config."use-proxy-protocol"="true" \
  --set-string controller.service.annotations."service\.beta\.kubernetes\.io/scw-loadbalancer-proxy-protocol-v2"="true" \
  --set-string controller.service.annotations."service\.beta\.kubernetes\.io/scw-loadbalancer-use-hostname"="true" \
  --set-string controller.service.annotations."service\.beta\.kubernetes\.io/scw-loadbalancer-sticky-sessions"="cookie" \
  --set-string controller.service.annotations."service\.beta\.kubernetes\.io/scw-loadbalancer-sticky-sessions-cookie-name"="route"
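For readability, the same settings can also be kept in a values file and applied with helm upgrade -f values.yaml. A sketch of the equivalent values (the scw-loadbalancer-* annotation keys are Scaleway-specific):

controller:
  config:
    use-gzip: "true"
    http-redirect-code: "301"
    use-proxy-protocol: "true"
  service:
    annotations:
      service.beta.kubernetes.io/scw-loadbalancer-proxy-protocol-v2: "true"
      service.beta.kubernetes.io/scw-loadbalancer-use-hostname: "true"
      service.beta.kubernetes.io/scw-loadbalancer-sticky-sessions: "cookie"
      service.beta.kubernetes.io/scw-loadbalancer-sticky-sessions-cookie-name: "route"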
Then, on the Ingress for your backend pod, use these annotations:
nginx.ingress.kubernetes.io/websocket-services: "footballdata-scaleway"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-name: "route"
nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
nginx.ingress.kubernetes.io/session-cookie-path: /
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
My cluster is on scaleway.com by the way.
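To confirm that affinity is actually being applied, you can check for the affinity cookie on the first response; a quick sketch (the hostname is a placeholder for your own):

curl -sI https://your-app.example.com/ | grep -i set-cookie
# expect something like: set-cookie: route=<hash>; Path=/; Max-Age=172800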
You only hurt yourself writing Kubernetes manifests on your own. Switch to Helm 3.

Related

Nginx ingress : combining rewrite-target and cookie sticky affinity annotations

I want to build an app with sticky sessions, in order to keep each user on a dedicated pod in my Kubernetes engine.
Using an Nginx Ingress, is it possible to use both the rewrite-target and affinity: cookie annotations at the same time?
Here is my Ingress metadata: section
metadata:
  name: front
  annotations:
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    # sticky session
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "http-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "18000"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "18000"
When calling my app I can see in the logs that I'm hitting pods randomly...
I've seen that there is a resolved issue for this problem here: https://github.com/kubernetes/ingress-nginx/issues/1232
So it should work for me, but it doesn't. Do you know why, or how to debug this issue? Thanks.

Kubernetes ingress connection types and specific connection

On Kubernetes, I want the first connection to go to the pod using the least CPU, and subsequent connections to be sticky sessions. How do I do this?
I tried the config below and sticky sessions work, but I want the first connection to go to the pod with the least connections, least bandwidth, or something similar.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "stickounet"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-svc
            port:
              number: 80
Using a load balancer like nginx or Traefik, your request will automatically be routed to the pod or node with the lowest resource utilization, and this document describes the process of configuring sticky connections in a step-by-step procedure.
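If you also want to influence how the controller picks an upstream for that first request, ingress-nginx exposes a load-balance key in its ConfigMap. A minimal sketch (the ConfigMap name and namespace assume a default install; recent controller versions accept round_robin and ewma, so check which algorithms your version supports):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # "ewma" weights upstreams by observed latency; the default is "round_robin"
  load-balance: "ewma"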

Problem with SSL passthrough in Nginx Controller

I'm forced to use the Nginx Ingress, but it's complicated and doesn't fulfill our requirements.
I tried to route traffic from the Nginx ingress to Traefik, but it seems that the redirect from HTTP to HTTPS doesn't work.
Here is what my Ingress looks like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
  name: some-ingress
spec:
  tls:
  - hosts:
    - "example.com"
    secretName: some-secret
  rules:
  - host: "example.com"
    http:
      paths:
      - backend:
          serviceName: traefik
          servicePort: 443
I don't know how to fix that; I have tried different ways. Maybe there is another way to route HTTP traffic on port 80 to the traefik service on port 80, and 443 to the traefik service on port 443?
Unfortunately, I'm not able to use any external load balancer because none is provided. I'm aware that there is something called MetalLB, but I'm not able to fulfill all the requirements with it.
Thank you in advance!
Did you try to add the backend-protocol annotation?
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
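Also note that the ssl-passthrough annotation is only honored when the controller itself is started with the --enable-ssl-passthrough flag. A sketch of the relevant part of the controller Deployment (excerpt from spec.template.spec; other default args are kept as-is):

containers:
- name: controller
  args:
  - /nginx-ingress-controller
  - --enable-ssl-passthrough   # required for nginx.ingress.kubernetes.io/ssl-passthrough to take effect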

GKE Nginx Ingress - Assigning static ip

I have an ingress controller in a GKE cluster with ingress.class:
kubernetes.io/ingress.class: "nginx"
I wish to assign a static IP to this ingress controller. I followed this tutorial for creating and assigning the static IP:
https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip
Basically I reserved a static IP and tried to assign it to the ingress using:
kubernetes.io/ingress.global-static-ip-name: "my-ingress-static-ip"
The Problem
The ingress IP address did not change to the newly assigned static IP.
How should I assign this static IP to the ingress?
My Configuration
Controller deployed using:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.40.2/deploy/static/provider/cloud/deploy.yaml
My Ingress yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  namespace: development
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    kubernetes.io/ingress.class: "nginx"
    # Disallow http - Allowed only with gce controller
    # kubernetes.io/ingress.allow-http: "false"
    # Enable client certificate authentication
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    # Create the secret containing the trusted ca certificates
    nginx.ingress.kubernetes.io/auth-tls-secret: "development/api-ingress-ca-secret"
    # Specify the verification depth in the client certificates chain
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    # Automatically redirect http to https
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    # Use regex in paths
    nginx.ingress.kubernetes.io/use-regex: "true"
    # For notifications we add the proxy headers
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
    # Set a static ip for the ingress
    kubernetes.io/ingress.global-static-ip-name: "my-ingress-static-ip"
spec:
  tls:
  - hosts:
    - my-host.com
    secretName: api-tls-certificate
  rules:
  - host: my-host.com
    http:
      paths:
      - path: /(v[0-9]/.*)
        backend:
          serviceName: my-service
          servicePort: 443
Deleting the ingress or the controller did not fix the problem.
That tutorial is only for the GCE ingress controller.
Note: This tutorial does not apply to the NGINX Ingress Controller.
To set the IP address, you need to specify the actual IP address in the spec: section of the controller's LoadBalancer Service.
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  loadBalancerIP: ACTUAL.IP.ADDRESS.HERE
  ports:
As a note, make sure that your IP address is a regional static IP and not a global one. This took me quite a while to figure out.
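For example, reserving and inspecting a regional static address (a sketch; us-central1 is just a placeholder for your cluster's region):

gcloud compute addresses create my-ingress-static-ip --region=us-central1
gcloud compute addresses describe my-ingress-static-ip --region=us-central1 --format="value(address)"

The address printed by the second command is what goes into loadBalancerIP above.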

Ingress vs Direct Nginx Deployment on an On-premise Kubernetes Cluster

I am setting up a Kubernetes cluster on on-premise servers. Now, to handle external traffic, I can either run the Nginx Ingress behind a NodePort, or run an Nginx Deployment (pods) exposed with a NodePort service.
The only difference I know of is that with the Ingress I get sticky sessions, which I do not actually need. So which one should I prefer, and why?
Apart from this, I also have a requirement for Nginx caching of HTML pages (with purging logic). With an Nginx Deployment I can use a PVC and PV, but what if I use the Nginx Ingress? How will that work?
When you expose an Nginx Deployment you essentially create an L4 load balancer; with an Ingress you are creating an L7 load balancer.
If you want to host multiple domains like example1.com, example2.com and so on, having an L7 load balancer makes sense. You can also have a default backend defined if you want unmatched requests to end up somewhere special, like a particular service or endpoint (see the sketch below).
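For reference, a catch-all default backend can be declared directly in the Ingress spec; a sketch using the same extensions/v1beta1 API as the examples below (the service name is hypothetical):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: default-backend-example
spec:
  # requests that match no host/path rule fall through to this service
  backend:
    serviceName: default-http-backend
    servicePort: 80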
Coming to the second part, enabling the cache, you can do it in the ingress controller as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mywebsite
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-buffering: "on" # Important!
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_cache static-cache;
      proxy_cache_valid 404 1m;
      proxy_cache_use_stale error timeout updating http_404 http_500 http_502 http_503 http_504;
      proxy_cache_bypass $http_x_purge;
      add_header X-Cache-Status $upstream_cache_status;
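Note that proxy_cache static-cache refers to a cache zone that must be declared at the http level. One way to do that with ingress-nginx is an http-snippet in the controller ConfigMap; a sketch (the cache path, size and ConfigMap name are assumptions to adapt to your install):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  http-snippet: |
    proxy_cache_path /tmp/static-cache levels=1:2 keys_zone=static-cache:10m max_size=1g inactive=60m;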
Say you want to enable it for one path and not others, for example for the /static/ path but not for the / path. Then you can have:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysite
  annotations:
    ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
spec:
  tls:
  - secretName: mysite-ssl
    hosts:
    - mysite.example.com
  rules:
  - host: mysite.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: mysite
          servicePort: http
---
# Leverage nginx-ingress cache for /static/
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysite-static
  annotations:
    nginx.ingress.kubernetes.io/proxy-buffering: "on"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_cache static-cache;
      proxy_cache_valid 404 10m;
      proxy_cache_use_stale error timeout updating http_404 http_500 http_502 http_503 http_504;
      proxy_cache_bypass $http_x_purge;
      add_header X-Cache-Status $upstream_cache_status;
spec:
  rules:
  - host: mysite.example.com
    http:
      paths:
      - path: /static/
        backend:
          serviceName: mysite
          servicePort: http
Ultimately the design decision is yours; honestly, it's better to use the ingress controller, as it gives you way more flexibility.
I hope this clears things up for you.