Nginx Ingress: combining rewrite-target and cookie sticky affinity annotations - kubernetes

I want to build an app with sticky sessions, in order to pin each user to a dedicated pod in my Kubernetes engine.
Using an Nginx Ingress, is it possible to use both the rewrite-target and affinity: cookie annotations at the same time?
Here is my Ingress metadata: section
metadata:
  name: front
  annotations:
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    # sticky session
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "http-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "18000"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "18000"
When calling my app, I can see in the logs that my requests hit pods randomly...
I've seen there is a resolved issue for this problem here: https://github.com/kubernetes/ingress-nginx/issues/1232
So it should work for me, but it doesn't. Do you know why, or how to debug this issue? Thanks.
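For reference, a minimal sketch of one Ingress carrying both annotation sets at once; the host, regex path, and service name are placeholders (not from the question), and the capture-group path assumes a controller version where rewrite-target: /$1 is matched against a regex path:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: front
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    # sticky session
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "http-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "18000"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "18000"
spec:
  rules:
  - host: front.example.com           # placeholder host
    http:
      paths:
      - path: /front/(.*)             # capture group consumed by rewrite-target: /$1
        pathType: ImplementationSpecific
        backend:
          service:
            name: front               # placeholder service name
            port:
              number: 80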

Related

Nginx Ingress returns 413 Entity Too Large

On my cluster, I'm trying to upload a big file, but when I do I get a 413 error:
error parsing HTTP 413 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>413 Request Entity Too Large</title></head>\r\n<body>\r\n<center><h1>413 Request Entity Too Large</h1></center>\r\n<hr><center>nginx/1.19.3</center>\r\n</body>\r\n</html>\r\n"
I know that this is caused by a default parameter of nginx and I need to override it. In the documentation I've found that this can be done in two ways:
using an annotation on the ingress config
using a configMap
I have tried both ways with no result.
Here are my yaml:
ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "700m"
  name: nginx-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
        path: /
and configmap.yml:
apiVersion: v1
data:
  proxy-body-size: "800m"
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  labels:
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/part-of: nginx-ingress
For future searches, this is what works for me:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
  name: docker-registry
  namespace: docker-registry
spec:
For NGINX, a 413 error is returned to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the client_max_body_size parameter.
To configure this setting globally for all Ingress rules, the proxy-body-size value may be set in the NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:
nginx.ingress.kubernetes.io/proxy-body-size: 8m
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-max-body-size
For Node.js (Express) you might set:
const express = require('express');
const app = express();
app.use(express.json({ limit: '50mb' }));
app.use(express.urlencoded({ limit: '50mb', extended: true }));
I had the same problem using Nginx Ingress on GKE.
These are the annotations that worked for me:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/client-max-body-size: "0"
    nginx.org/proxy-connect-timeout: 600s
    nginx.org/proxy-read-timeout: 600s
Don't forget to put in the correct values for your setup.
For more details you can see these docs:
Using Annotations
Client Max Body Size
PS I've installed my "Nginx Ingress Controller" following this tutorial.
There are some good answers here already to avoid that 413.
For example, editing the Ingress (better: redeploying it) with the following annotations:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
Furthermore, for NGINX a 413 error will be returned to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the client_max_body_size parameter (reference).
There are a few things to keep in mind while setting this up:
It is better to recreate the Ingress object rather than edit it, to make sure that the configuration is loaded correctly. Delete the Ingress and recreate it with the proper annotations.
If that's still not working you can try to use the ConfigMap approach, for example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  proxy-body-size: "8m"
Notice that setting size to 0 disables checking of client request body size.
Remember to set the value greater than the size of data that you are trying to push.
Use the proper apiVersion based on your Kubernetes version. Notice that:
The extensions/v1beta1 and networking.k8s.io/v1beta1 API versions of Ingress will no longer be served in v1.22.
Migrate manifests and API clients to use the networking.k8s.io/v1 API version, available since v1.19.
All existing persisted objects are accessible via the new API.
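As a sketch, the Ingress from the question migrated to the networking.k8s.io/v1 syntax would look roughly like this (note the pathType field and the restructured backend; adjust names to your own resources):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "700m"
  name: nginx-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80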
The configuration below worked for me:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cs-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 16m
reference: https://github.com/kubernetes/ingress-nginx/issues/4825
Maybe my experience will help somebody. All my nginx settings were OK, but nginx was sitting behind Cloudflare with the record in Proxy mode. Try switching it to "DNS only" to make sure there is nothing between you and nginx.

How to remove the server header from Kubernetes deployed applications

I am asking this question in the style of question then answer.
If you create your Ingress objects via Helm charts or regular "kubectl apply" deployments, then after deployment to your cluster you might see the Server header in your responses. This is regarded as a security concern; it should not be present.
You might not have control of your cluster or Ingress Controllers. How can you remove the header in question?
You might not have control of your cluster or Ingress Controllers, but you do have control of your Ingress manifests.
In each of your Ingress manifest files (maybe inside your Helm charts) you can update your Ingress definition(s).
apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name}}-{{ .Values.baseName }}-ingress-spa
  namespace: {{ .Values.global.config.namespace }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_clear_headers "Server";
spec:
  tls:
  - hosts:
The key part is:
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_clear_headers "Server";
This instructs nginx to clear the Server header. After redeploying your application, inspect the response headers again. And voilà, the Server header is gone.
You can do this for the whole nginx ingress controller in the settings ConfigMap:
server-tokens: "false"
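A minimal sketch of that ConfigMap, assuming a typical installation (the name and namespace below are examples; they must match whatever ConfigMap your controller is configured to read):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # example name; match your controller's --configmap setting
  namespace: ingress-nginx         # example namespace
data:
  server-tokens: "false"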

EKS, ELB, Nginx Ingress - Right Combination for Sticky Sessions/Session Affinity and showing real Client IP

Trying to figure out what the right settings are to have the client's real IP show up in our logs and for session affinity to work.
I am not getting the client IPs in the logs now, and if I move from 1 pod to 2, I can no longer log in, etc. The nginx logs don't seem to have anything in them showing a problem.
Values.yml
controller:
  config:
    use-forwarded-headers: "true"
    use-proxy-protocol: "true"
    proxy-real-ip-cidr: "172.21.0.0/16"
  replicaCount: 2
  image:
    repository: quay.io/kubernetes-ingress-controller/nginx-ingress-controller
    tag: "0.28.0"
  ingressClass: ingress-internal
  publishService:
    enabled: true
  service:
    externalTrafficPolicy: Local
    targetPorts:
      http: 80
      https: http
    loadBalancerSourceRanges: ["0.0.0.0/0"]
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:523447765480:certificate/3972f84d-c167-43da-a207-8be0b955df48"
      service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "Name=idaas2-ingress-internal,cluster=idaas2,Environment=prd,Project=idaas2,Customer=idauto"
      service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "True"
      service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
      service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-02ca93f2fe8cbc950"
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
Ingress Annotation
ingress:
  annotations:
    kubernetes.io/ingress.class: ingress-internal
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    monitor.stakater.com/enabled: "false"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
      grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
I'm not even sure where to continue searching; I can provide any additional information required.
Not sure how to fix the real client IP, but I got sticky sessions working with this in the Ingress metadata:
annotations:
  kubernetes.io/ingress.class: alb
  alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=1200
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/target-type: ip
I just increased my replicas from 1 to 2 and ran into the same situation. I very much like knowing where the users of my application are coming from (I don't want to dig through the Nginx controller logs to find their IPs; sometimes I want to receive them by email).
But now everything is OK (after 24 hours of thinking).
I am using proxy protocol v2 (to get the real IP) and session affinity, both combined with Nginx.
Here is a look at my setup:
helm upgrade nginx ingress-nginx/ingress-nginx \
  --set-string controller.config."use-gzip"="true" \
  --set-string controller.config."http-redirect-code"="301" \
  --set-string controller.config."use-proxy-protocol"="true" \
  --set-string controller.service.annotations."service\.beta\.kubernetes\.io/scw-loadbalancer-proxy-protocol-v2"="true" \
  --set-string controller.service.annotations."service\.beta\.kubernetes\.io/scw-loadbalancer-use-hostname"="true" \
  --set-string controller.service.annotations."service\.beta\.kubernetes\.io/scw-loadbalancer-sticky-sessions"="cookie" \
  --set-string controller.service.annotations."service\.beta\.kubernetes\.io/scw-loadbalancer-sticky-sessions-cookie-name"="route"
Then, on the Ingress for your backend, use these annotations:
nginx.ingress.kubernetes.io/websocket-services: "footballdata-scaleway"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-name: "route"
nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
nginx.ingress.kubernetes.io/session-cookie-path: /
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
My cluster is on scaleway.com by the way.
You only hurt yourself by writing Kubernetes manifests on your own. Switch to Helm 3.

Enabling sticky sessions with nginx ingress, not working

I have a v1.8.4 deployment running the nginx ingress controller. I have an ingress which works fine, but now I am trying to enable sticky sessions on it. I used kubectl edit ing mying to add these annotations:
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-hash: md5
nginx.ingress.kubernetes.io/session-cookie-name: foobar
But sticky sessions are still not working. Nginx config does not have anything about sticky sessions. Also, kubectl describe ing mying does not show the annotations. What is going wrong here?
I also tried the example for sticky sessions here.
Describing the ingress does not show the annotations.
This is because the host item (in ingress.yml) cannot be empty or a wildcard (*.example.com).
Make sure you use a concrete host such as test.example.com (if you don't have DNS, configure it in your local hosts file), then test:
curl -I http://test.example.com/test/login.jsp
Then you will see:
Set-Cookie: route=ebfcc90982e244d1d7ce029b98f8786b; Expires=Sat, 03-Jan-70 00:00:00 GMT; Max-Age=172800; Domain=test.example.com; Path=/test; HttpOnly
The official example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: stickyingress.example.com
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /

Ingress and SSL Passthrough

I have recently been using the nginxdemo/nginx-ingress controller.
As I understand it, this controller cannot do SSL passthrough (by that I mean passing the client certificate all the way through to the backend service for authentication), so instead I have been passing the client's subject DN through a header.
Ultimately I would prefer SSL passthrough and have been looking at the kubernetes/ingress-nginx project, which apparently supports it.
Does anyone have experience with this controller and SSL passthrough?
The few Ingress examples showing passthrough that I have found leave the path setting blank.
Is this because passthrough has to take place at the TCP level (layer 4) rather than at HTTP (layer 7)?
Right now, I have a single host rule that serves multiple paths.
Completing lch's answer, I would like to add that I had the same problem recently and sorted it out by modifying the ingress-service deployment (I know, it should be a DaemonSet, but that's a different story).
The change was adding this parameter to spec.containers.args:
--enable-ssl-passthrough
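For illustration, a sketch of where that flag ends up in the controller Deployment; the container name and the first arg are examples, keep whatever your deployment already has:

spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-controller   # example container name
        args:
        - /nginx-ingress-controller      # existing args stay as-is
        - --enable-ssl-passthrough       # the added flag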
Then I've added the following annotations to my ingress:
kubernetes.io/ingress.allow-http: "false"
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
The important ones are secure-backends and ssl-passthrough, but I think the rest are a good idea, provided you're not expecting HTTP traffic there.
SSL passthrough is working fine for me. Here is the official documentation.
And here is an example usage:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service-ingress
  namespace: my-service
  annotations:
    kubernetes.io/ingress.allow-http: "false"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  rules:
  - host: my.example.com
    http:
      paths:
      - backend:
          serviceName: my-service