Enabling sticky sessions with nginx ingress, not working - kubernetes

I have a v1.8.4 deployment running the nginx ingress controller. I have an Ingress that works fine, but now I am trying to enable sticky sessions on it. I used kubectl edit ing mying to add these annotations:
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-hash: md5
nginx.ingress.kubernetes.io/session-cookie-name: foobar
But sticky sessions are still not working: the generated nginx config has nothing about sticky sessions, and kubectl describe ing mying does not show the annotations. What is going wrong here?
I also tried the example for sticky sessions here; describing the ingress still does not show the annotations.

This happens because the host item (in ingress.yml) cannot be empty or a wildcard (*.example.com).
Make sure you use a concrete host such as test.example.com (if you don't have DNS, configure it in your local hosts file), then test:
curl -I http://test.example.com/test/login.jsp
and you will see:
Set-Cookie: route=ebfcc90982e244d1d7ce029b98f8786b; Expires=Sat, 03-Jan-70 00:00:00 GMT; Max-Age=172800; Domain=test.example.com; Path=/test; HttpOnly
The official example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: stickyingress.example.com
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /
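Once an Ingress like this is applied, a quick way to confirm the annotations were actually persisted (the symptom in the question was that they did not show up) is to read the object back, for example:
kubectl get ingress nginx-test -o yaml
kubectl describe ingress nginx-test
If the affinity annotations are missing from that output, the edit never reached the API server and nginx will not generate any sticky-session configuration.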

Related

Kubernetes ingress connection types and specific connection

On Kubernetes, I want the first connection to go to the pod using the least CPU, and subsequent connections to be sticky sessions. How do I do this?
I tried this and sticky sessions work, but I also want the first connection to go to the backend with the least connections, least bandwidth, or something similar.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "stickounet"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service-svc
            port:
              number: 80
Using a load balancer like nginx or traefik, your request will automatically be routed to the pod or node with the lower resource utilization, and this document describes how to configure sticky connections in a step-by-step procedure; a sketch of the relevant load-balancing setting follows below.
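As far as I know, the annotations above only control the sticky cookie; how the first request is balanced is governed by the controller-wide load-balance option in the ingress-nginx ConfigMap (round_robin by default, ewma to prefer endpoints with lower observed load). A minimal sketch, assuming the controller's ConfigMap is named ingress-nginx-controller in the ingress-nginx namespace (adjust to your install):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # hypothetical name, match your installation
  namespace: ingress-nginx
data:
  # "ewma" biases new requests toward less-loaded endpoints;
  # a plain least-connection algorithm is not exposed by ingress-nginx
  load-balance: "ewma"
Once a client has received the affinity cookie, the cookie takes precedence over the load-balancing algorithm for that client's subsequent requests.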

Kubernetes nginx ingress: Server-side HTTPS enforcement through redirect

I've set up my ingress controller on AWS EKS. I've added cert-manager.io/cluster-issuer: "letsencrypt-staging" to my ingress. The problem is that every time the backend sends a 307 redirect over http, the ingress controller passes the same http URL to the frontend, causing a mixed-content block error in the browser.
Here is my sample ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - kube.example.com
    secretName: kube-tls
  rules:
  - host: kube.example.com
    http:
      paths:
      - pathType: Prefix
        path: /api/v1/
        backend:
          service:
            name: service-nodeport
            port:
              number: 8000
According to the documentation, it should redirect to https by default (link). Is it different for a 307 redirect?
Additional Details:
Installing nginx ingress controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.2/deploy/static/provider/aws/deploy.yaml
Installing cert-manager
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.16.1/cert-manager.yaml
Setting up ClusterIssuers and certificate issuing was done as per the documentation.
Describe Ingress
Name:             example-nginx-ingress
Namespace:        dev
Address:          a629a[MASKED]-a2f82eec00a54190.elb.ap-south-1.amazonaws.com
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  kube-tls terminates kube.example.com
Rules:
  Host              Path      Backends
  ----              ----      --------
  kube.example.com
                    /api/v1/  svc-nodeport:8000 (192.168.7.44:8001)
                    /         webapp-nodeport:80 (192.168.4.90:80)
Annotations:      cert-manager.io/cluster-issuer: letsencrypt-prod
                  kubernetes.io/ingress.class: nginx
Events:           <none>
Deployment and service are set up as basic.
svc-nodeport is backed by a gunicorn (i.e. a Python web server) container and webapp-nodeport is backed by an nginx container.
An alternative way I thought of
I tried to add a location snippet to my ingress to actually strip the trailing slash (/) from the request, but it didn't work. You can help me with this too.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/location-snippet: |
      location = /api/v1/ {
        rewrite api/v1/$1 ^api/v1/(.*)/$ break;
      }
It could be HTTP Strict Transport Security. Try setting hsts to false.
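A minimal sketch of that change, assuming the controller reads a ConfigMap named ingress-nginx-controller in the ingress-nginx namespace (use the name and namespace from your own install):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # adjust to your installation
  namespace: ingress-nginx
data:
  # stop the controller from emitting the Strict-Transport-Security header
  hsts: "false"
Keep in mind that browsers cache the HSTS policy, so you may also need to clear it for the host before the behaviour changes.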

Ingress vs Direct Nginx Deployment on an On-premise Kubernetes Cluster

I am setting up a Kubernetes cluster on on-premise servers. For handling external traffic, I can either run the Nginx Ingress controller behind a NodePort, or run an Nginx Deployment (pods) exposed with a NodePort service.
The only difference I know of is that with Ingress I get sticky sessions, which I do not need anyway. So which one should I prefer, and why?
Apart from this, I also have a requirement for Nginx caching of HTML files (with purging logic). With an Nginx Deployment I can use a PVC and PV, but what if I use the Nginx Ingress controller? How will that work then?
When you expose an Nginx Deployment you essentially create an L4 load balancer; with Ingress you are creating an L7 load balancer.
If you want to host multiple domains like example1.com, example2.com and so on, having an L7 load balancer makes sense. You can also define a default backend if you want unmatched requests to end up somewhere special, like a particular service or endpoint; see the sketch below.
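As an illustration, a minimal default-backend Ingress (hypothetical service name default-svc), written in the same extensions/v1beta1 form as the examples that follow:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: default-backend-example
spec:
  # requests that match no host/path rule fall through to this service
  backend:
    serviceName: default-svc   # hypothetical service
    servicePort: 80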
Coming to the second part, enabling the cache, you can do it in the ingress controller as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mywebsite
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-buffering: "on" # Important!
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_cache static-cache;
      proxy_cache_valid 404 1m;
      proxy_cache_use_stale error timeout updating http_404 http_500 http_502 http_503 http_504;
      proxy_cache_bypass $http_x_purge;
      add_header X-Cache-Status $upstream_cache_status;
Say you want to enable it for one path and not others, for example for the /static/ path but not for the / path; then you can have:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysite
  annotations:
    ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
spec:
  tls:
  - secretName: mysite-ssl
    hosts:
    - mysite.example.com
  rules:
  - host: mysite.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: mysite
          servicePort: http
---
# Leverage nginx-ingress cache for /static/
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysite-static
  annotations:
    nginx.ingress.kubernetes.io/proxy-buffering: "on"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_cache static-cache;
      proxy_cache_valid 404 10m;
      proxy_cache_use_stale error timeout updating http_404 http_500 http_502 http_503 http_504;
      proxy_cache_bypass $http_x_purge;
      add_header X-Cache-Status $upstream_cache_status;
spec:
  rules:
  - host: mysite.example.com
    http:
      paths:
      - path: /static/
        backend:
          serviceName: mysite
          servicePort: http
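One caveat: the static-cache zone referenced by proxy_cache in the snippets above has to be declared in nginx's http context first. With ingress-nginx that is typically done through the controller ConfigMap's http-snippet key; a rough sketch, with illustrative paths and sizes:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration       # match your controller's ConfigMap name/namespace
  namespace: ingress-nginx
data:
  http-snippet: |
    # declares the "static-cache" zone used by the proxy_cache directives above
    proxy_cache_path /tmp/nginx-cache levels=1:2 keys_zone=static-cache:10m max_size=1g inactive=60m use_temp_path=off;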
Ultimately the design decision is yours; honestly, it is better to use the ingress controller, as it gives you far more flexibility.
I hope this clears things up for you.

Nginx ingress controller still redirects to SSL

I am experiencing this issue: my application needs to receive connections over SSL only for WebSocket. HTTP requests should not be forced into a redirect. My ingress configuration is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: in-camonline
  namespace: cl5
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/websocket-services: "svc-ws-api"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    ingress.kubernetes.io/affinity: "ClientIP"
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: cl5-secret
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /socket.io
        backend:
          serviceName: svc-ws-api
          servicePort: 8000
      - path: /
        backend:
          serviceName: svc-http-service
          servicePort: 80
I also disabled ssl-redirect globally by adding an item to the ConfigMap:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  #use-proxy-protocol: "false"
  ssl-redirect: "false"
Now if I make a request using curl, it is not redirected. But if I run my front-end application, every request after the WSS one is forced to redirect to HTTPS:
Request URL: http://foo.bar.com/2/symbols
Request Method: OPTIONS
Status Code: 307 Internal Redirect
Referrer Policy: no-referrer-when-downgrade
Any suggestions on how to achieve that?
Finally, I sorted it out. If someone is reading this: easy, you are not alone!
Jokes aside, the nginx controller was setting the Strict-Transport-Security header after the first HTTPS call (socket.io polling in my case). This header forces the browser to use TLS for subsequent requests. You can read more about this header here: https://developer.mozilla.org/it/docs/Web/HTTP/Headers/Strict-Transport-Security
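A quick way to check whether the controller is actually sending the header (hostname taken from the ingress example above):
curl -sI https://foo.bar.com/socket.io/ | grep -i strict-transport-security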
What I did was to disable the option by adding the entry hsts: "false" to the ingress controller's ConfigMap object.
You can find more here https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#hsts
Hope this can help you :)
There is another solution: if you want to disable HSTS, just set max-age to zero, like this:
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($scheme = https) {
    add_header Strict-Transport-Security "max-age=0;";
  }
Link: https://justin-g.tistory.com/176

Traefik & Keycloak: error SSL_ERROR_RX_RECORD_TOO_LONG

I use HAProxy to redirect all requests from port 80 to port 443, and a NodePort to reach a traefik-ingress-controller (v1.6.6, inside a Kubernetes cluster).
Here is the haproxy.cfg:
frontend http-frontend
  bind *:80
  reqadd X-Forwarded-Proto:\ http
  default_backend http_app

frontend https-frontend
  bind *:443 ssl crt /etc/ssl/certs/my-cert.pem
  reqadd X-Forwarded-Proto:\ https
  default_backend traefik_app

backend http_app
  redirect scheme https if !{ ssl_fc }

backend traefik_app
  server traefik localhost:30010 check
Every application running on my Kubernetes cluster has an Ingress.
Among them I have a Keycloak pod (v4.1.0, used for authentication) with this ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: keycloak
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: login.myapp.it
    http:
      paths:
      - backend:
          serviceName: keycloak
          servicePort: 8080
When I connect to https://login.myapp.it/auth/admin/ I get redirected to
https://login.myapp.it:80/auth/admin/master/console/ (note the port 80) and I receive an SSL_ERROR_RX_RECORD_TOO_LONG error.
Does anyone have hints for this redirect issue with Keycloak behind a proxy?
Thank you in advance.
Sounds like you are missing your TLS certs on your ingress:
$ kubectl -n kube-system create secret tls your-k8s-tls-secret --key=tls.key --cert=tls.crt
Then:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: keycloak
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  tls:
  - secretName: your-k8s-tls-secret
  rules:
  - host: login.myapp.it
    http:
      paths:
      - backend:
          serviceName: keycloak
          servicePort: 8080
Hope it helps!
I solved my issue using the following traefik annotation:
traefik.frontend.passHostHeader: "true"
which forwards the client Host header to the backend.
Here is a complete ingress example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: keycloak
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.passHostHeader: "true"
spec:
  rules:
  - host: login.myapp.it
    http:
      paths:
      - backend:
          serviceName: keycloak
          servicePort: 8080
Alternatively, I could have added the following to haproxy.cfg:
reqadd X-Forwarded-Port:\ 443
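In context, that directive would sit in the https-frontend section of the config shown earlier, for example:
frontend https-frontend
  bind *:443 ssl crt /etc/ssl/certs/my-cert.pem
  reqadd X-Forwarded-Proto:\ https
  reqadd X-Forwarded-Port:\ 443
  default_backend traefik_app
This tells the backend (via traefik) that the original client port was 443, so the redirects it builds should not fall back to port 80.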