In our cluster, there's a customized error-pages backend and an auth service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/auth-url: 'http:/***/auth'
https://kubernetes.github.io/ingress-nginx/user-guide/custom-errors/
https://github.com/kubernetes/ingress-nginx/tree/master/images/custom-error-pages
Per the links above and the log shown in the error-pages backend for a 401 from the auth service, it only has eight hardcoded headers, like below.
2020/04/14 03:24:35 request info &{GET /?access_token=pk1.eyJ1Ijoid2ViZXJ0YW8iLCJhIjoiY2pibTdmaWc2MTZqaDJybzFzcm93bGE2eiJ9.cwSE9DYCYP0dIeY4Hhp6Kg HTTP/1.0 1 0 map[Accept:[/] Accept-Encoding:[gzip, deflate, br] Cache-Control:[no-cache] Connection:[close] Postman-Token:[c7d07b51-5e3d-469d-9ec4-73be2cf5cd26] User-Agent:[PostmanRuntime/7.24.0] X-Code:[401] X-Format:[/] X-Ingress-Name:[static-api-ingress] X-Namespace:[default] X-Original-Uri:[/xxxx/-76.9,38.9,15/1000x1000#1x?access_token=pk1.eyJ1Ijoid2ViZXJ0YW8iLCJhIjoiY2pibTdmaWc2MTZqaDJybzFzcm93bGE2eiJ9.cwSE9DYCYP0dIeY4Hhp6Kg] X-Service-Name:[static-api-svc] X-Service-Port:[80]] {} 0 [] true api.staging.versalinks.net map[] map[] map[] 172.20.0.70:11440 /?access_token=pk1.eyJ1Ijoid2ViZXJ0YW8iLCJhIjoiY2pibTdmaWc2MTZqaDJybzFzcm93bGE2eiJ9.cwSE9DYCYP0dIeY4Hhp6Kg 0xc00009e0c0}
Is it possible to add a customized header such as X-AUTH-INFO from the auth service?
You can use custom headers with the nginx ingress controller via a ConfigMap to pass a custom list of headers to the upstream server.
First, define a ConfigMap with your custom headers:
apiVersion: v1
data:
  X-Different-Name: "true"
  X-Request-Start: t=${msec}
  X-Using-Nginx-Controller: "true"
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
This defines a ConfigMap in the ingress-nginx namespace named custom-headers, holding several custom X-prefixed HTTP headers.
apiVersion: v1
data:
  proxy-set-headers: "ingress-nginx/custom-headers"
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
This defines a ConfigMap in the ingress-nginx namespace named nginx-configuration. This controls the global configuration of the ingress controller, and already exists in a standard installation. The key proxy-set-headers is set to reference the previously created ingress-nginx/custom-headers ConfigMap.
The nginx ingress controller will read the ingress-nginx/nginx-configuration ConfigMap, find the proxy-set-headers key, read HTTP headers from the ingress-nginx/custom-headers ConfigMap, and include those HTTP headers in all requests flowing from nginx to the backends.
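Putting the two pieces together, the whole change can be applied as one manifest (a sketch; the header names and values are just the examples from above):

```yaml
# ConfigMap holding the custom headers to inject into upstream requests
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  X-Different-Name: "true"
  X-Request-Start: t=${msec}          # nginx expands $msec at proxy time
  X-Using-Nginx-Controller: "true"
---
# Global controller ConfigMap pointing at the headers ConfigMap above
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-set-headers: "ingress-nginx/custom-headers"   # format: namespace/name
```

The controller watches these ConfigMaps, so no pod restart should be needed; nginx reloads its configuration when they change.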
I am facing a CORS-policy issue with oauth2-proxy on Kubernetes.
My setup:
ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: supervisor-ingress
  namespace: management
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-staging
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
oauth2 proxy ingress:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oauth2-proxy
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
I am running a React application on localhost.
I strongly believe this issue is oauth2-proxy related, as everything works fine when auth-url is disabled.
I have tried multiple headers and CORS policies, but it seems they are not applied to the oauth2 proxy.
response message:
Access to XMLHttpRequest at '' from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Can I redirect the request to some custom URL when kubernetes ingress controller authentication fails against the URL specified in the nginx.ingress.kubernetes.io/auth-url annotation?
example
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-url: https://httpbin.org/basic-auth/user/passwd
  creationTimestamp: 2016-10-03T13:50:35Z
Thanks.
Your requirement to:
redirect the request to some custom url when kubernetes ingress controller authentication failed
can be addressed by using the following annotation:
nginx.ingress.kubernetes.io/auth-signin: https://SOME_URL/
Explanation
Assuming that you have an Ingress manifest (only the part of it):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/auth-url: http://httpbin.default.svc.cluster.local:80/basic-auth/user/password
    nginx.ingress.kubernetes.io/auth-signin: https://SOME_URL/
spec:
<-- REDACTED -->
I've tested a setup like this, and it works as follows:
The request is first checked against the internal Service (httpbin) specified by the auth-url annotation, where you will be prompted for credentials.
If you fail that authentication step, you will be redirected to the auth-signin URL, which for example could be https://github.com/login (this is only an example, not an actual login configuration).
Additional resources:
Kubernetes.github.io: Ingress nginx: User guide: Nginx configuration: Annotations
Httpbin.org
On my cluster, I'm trying to upload a big file, but when I do I get a 413 error:
error parsing HTTP 413 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>413 Request Entity Too Large</title></head>\r\n<body>\r\n<center><h1>413 Request Entity Too Large</h1></center>\r\n<hr><center>nginx/1.19.3</center>\r\n</body>\r\n</html>\r\n"
I know this is caused by a default nginx parameter that I need to override. In the documentation I've found that this can be done in two ways:
using an annotation on the Ingress config
using a ConfigMap
I have tried both ways with no result.
Here are my YAML files:
ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "700m"
  name: nginx-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
        path: /
and configmap.yml:
apiVersion: v1
data:
  proxy-body-size: "800m"
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  labels:
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/part-of: nginx-ingress
For future searches: this works for me:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
  name: docker-registry
  namespace: docker-registry
spec:
For NGINX, a 413 error is returned to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the client_max_body_size parameter.
To configure this setting globally for all Ingress rules, the proxy-body-size value may be set in the NGINX ConfigMap. To use a custom value in an Ingress rule, define this annotation:
nginx.ingress.kubernetes.io/proxy-body-size: 8m
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-max-body-size
For Node.js (Express) you might set:
app.use(express.json({ limit: '50mb' }));
app.use(express.urlencoded({ limit: '50mb' }));
I had the same problem using Nginx Ingress on GKE.
These are the annotations that worked for me:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/client-max-body-size: "0"
    nginx.org/proxy-connect-timeout: 600s
    nginx.org/proxy-read-timeout: 600s
Don't forget to put in the values appropriate for your setup.
For more details you can see these docs:
Using Annotations
Client Max Body Size
PS I've installed my "Nginx Ingress Controller" following this tutorial.
There are some good answers here already for avoiding that 413.
For example, editing the Ingress (better: redeploying it) with the following annotations:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
Furthermore, for NGINX a 413 error is returned to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the client_max_body_size parameter (reference).
There are a few things to keep in mind while setting this up:
It is better to recreate the Ingress object rather than edit it to make sure that the configuration will be loaded correctly. Delete the Ingress and recreate it again with the proper annotations.
If that's still not working you can try to use the ConfigMap approach, for example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  proxy-body-size: "8m"
Notice that setting size to 0 disables checking of client request body size.
Remember to set the value greater than the size of data that you are trying to push.
Use the proper apiVersion based on your Kubernetes version. Notice that:
The extensions/v1beta1 and networking.k8s.io/v1beta1 API versions of Ingress will no longer be served in v1.22. Migrate manifests and API clients to use the networking.k8s.io/v1 API version, available since v1.19. All existing persisted objects are accessible via the new API.
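For illustration, the ingress.yml from the question rewritten for networking.k8s.io/v1 might look like this (a sketch; the class name nginx is an assumption about your controller installation):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "700m"
spec:
  ingressClassName: nginx    # replaces the kubernetes.io/ingress.class annotation
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix     # pathType is required in v1
        backend:
          service:           # serviceName/servicePort become a nested object
            name: nginx-service
            port:
              number: 80
```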
The configuration below worked for me:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cs-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 16m
reference: https://github.com/kubernetes/ingress-nginx/issues/4825
Maybe my experience will help somebody. All my nginx settings were OK, but nginx sat behind Cloudflare in proxy mode. So try setting the DNS record to "DNS only" to make sure there is nothing between you and nginx.
I am setting up my ingress controller, ingress class, and ingress to expose a service outside the cluster. This is a fresh cluster setup.
I have set up the nginx-ingress controller using:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.0/deploy/static/provider/baremetal/deploy.yaml
The next step, based on my understanding, is to create the ingress class: https://v1-18.docs.kubernetes.io/docs/concepts/services-networking/ingress/#ingress-class
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: external-lb
spec:
  controller: example.com/ingress-controller
  parameters:
    apiGroup: k8s.example.com/v1alpha
    kind: IngressParameters
    name: external-lb
How did they get the name of the controller example.com/ingress-controller?
I have run multiple scenarios with IngressClass, Ingress and Nginx Ingress Controller.
Scenario 1
IngressClass with custom name
Nginx Ingress Controller with default --ingress-class value which is nginx
Ingress using ingressClassName same as IngressClass name
Output: Response 404
Scenario 2
IngressClass with custom name
Nginx Ingress Controller with its own --ingress-class value, ingress-test
Ingress using ingressClassName same as IngressClass name
Output: Response 404
Scenario 3
IngressClass with test name
Nginx Ingress Controller --ingress-class with value test
Ingress using test in ingressClassName
Output: Proper response
Scenario 4
IngressClass with nginx name
Nginx Ingress Controller --ingress-class with value nginx
Ingress using nginx in ingressClassName
Output: Proper response
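Scenario 4 can be sketched as the following pair of objects, assuming the controller was started with --ingress-class=nginx (the Service name and host are placeholders):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx     # value expected by the open-source controller
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-ingress                   # placeholder name
spec:
  ingressClassName: nginx              # matches the IngressClass name above
  rules:
  - host: demo.example.com             # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-svc        # hypothetical backend Service
          servicePort: 80
```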
Conclusion
First of all, please keep in mind that there are three types of Nginx ingress controllers: the open-source Kubernetes Ingress-NGINX controller (you are probably using it), the NGINX Inc. controller, and the NGINX Inc. Plus controller.
In one of the scenarios, when I used spec.controller: nginx.org/ingress-controller with a Nginx Ingress Controller started with the argument --ingress-class=nginx, the Nginx Ingress Controller pod logged an entry pointing to k8s.io/ingress-nginx.
To reproduce this behavior, deploy the IngressClass with that specific controller value and then deploy nginx:
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: nginx.org/ingress-controller
After deploying the Nginx Ingress Controller, the controller pod will be in a CrashLoopBackOff state. In the logs you will find the entry:
E1118 15:42:19.008911 8 main.go:134] Invalid IngressClass (Spec.Controller) value "nginx.org/ingress-controller". Should be "k8s.io/ingress-nginx"
It works only when the IngressClass spec.controller is set to k8s.io/ingress-nginx.
I would say that nginx.org/ingress-controller is for NGINX Inc.'s controller and k8s.io/ingress-nginx for the open-source Nginx Ingress controller.
If a custom value is used for the --ingress-class argument in the controller Deployment manifest, the presence or absence of an IngressClass object with the same name makes no difference to how the cluster works, as long as you keep the Ingress spec.ingressClassName value the same as the controller argument. Moreover, if such an IngressClass is present, its spec.controller can have any domain-like value that matches the required pattern without affecting Ingress behavior on my cluster at all.
In addition, Ingress works fine if I put the correct ingress-class value either in the spec.ingressClassName field or in the metadata.annotations kubernetes.io/ingress.class annotation. However, it gives an error like the following if you try to put both values on the same Ingress object:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx
The Ingress "test-ingress" is invalid: annotations.kubernetes.io/ingress.class: Invalid value: "nginx": can not be set when the class field is also set
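To avoid that error, set only one of the two. A minimal valid sketch using just the spec field (the host and backend names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress          # no kubernetes.io/ingress.class annotation here
spec:
  ingressClassName: nginx     # class is expressed only via the spec field
  rules:
  - host: test.example.com    # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: test-svc   # hypothetical backend Service
          servicePort: 80
```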
Please keep in mind this was tested only with the Nginx Ingress Controller. If you would like to use IngressClass with other ingress controllers such as Traefik or Ambassador, check their release notes.
You will create the IngressClass as part of the Installation with Manifests steps (Step 3 here). That will create an IngressClass that looks like:
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: nginx
  # annotations:
  #   ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: nginx.org/ingress-controller
Then, to configure an Ingress resource to be consumed by nginx, just specify ingressClassName: nginx in the Ingress spec, as shown here, and pasted again below:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
. . .
It's very easy to set the max request body size in plain nginx using the client_max_body_size directive. How can I do the same with the Kubernetes Traefik ingress controller? I know there is a maxrequestbodybytes option to do so, but I'm lost as to how to set it up in the YAML file describing my ingress.
This wasn't so easy to figure out. There is a funky multiline way of specifying this config in the YAML file. See the traefik.ingress.kubernetes.io/buffering option below for the pipe (|) operator in action.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
  labels:
    domain: example.com
    deployment: production
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/frontend-entry-points: http
    traefik.ingress.kubernetes.io/buffering: |
      maxrequestbodybytes: 31457280
      memrequestbodybytes: 62914560
spec:
etc....
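As an aside, the annotation above is for Traefik 1.x. On Traefik 2.x the rough equivalent is a buffering Middleware CRD, a sketch assuming the Traefik CRDs are installed:

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: limit-request-body
spec:
  buffering:
    maxRequestBodyBytes: 31457280   # same 30 MiB limit as above
    memRequestBodyBytes: 62914560
```

The middleware is then attached to a route, for example via the traefik.ingress.kubernetes.io/router.middlewares annotation on an Ingress.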