Nginx Ingress returns 413 Entity Too Large - kubernetes

On my cluster, I'm trying to upload a big file, but when I do, I get a 413 error:
error parsing HTTP 413 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>413 Request Entity Too Large</title></head>\r\n<body>\r\n<center><h1>413 Request Entity Too Large</h1></center>\r\n<hr><center>nginx/1.19.3</center>\r\n</body>\r\n</html>\r\n"
I know that this is caused by a default nginx parameter and that I need to override it. In the documentation I found that this can be done in two ways:
using an annotation on the Ingress config
using a ConfigMap
I have tried both ways with no result.
Here are my YAMLs:
ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "700m"
  name: nginx-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
        path: /
and configmap.yml:
apiVersion: v1
data:
  proxy-body-size: "800m"
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  labels:
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/part-of: nginx-ingress

For future searches, this is what worked for me:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
  name: docker-registry
  namespace: docker-registry
spec:

For NGINX, a 413 error is returned to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the client_max_body_size parameter.
To configure this setting globally for all Ingress rules, set the proxy-body-size value in the NGINX ConfigMap. To use a custom value in a single Ingress rule, define this annotation:
nginx.ingress.kubernetes.io/proxy-body-size: 8m
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-max-body-size
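For the global route, a minimal ConfigMap sketch (assuming the controller reads a ConfigMap named ingress-nginx-controller in the ingress-nginx namespace; both names depend on how the controller was installed, so adjust them to your setup):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # must match the name passed via the controller's --configmap flag
  namespace: ingress-nginx
data:
  proxy-body-size: "50m"           # global default; per-Ingress annotations still override it
```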
For Node.js with Express you might also need to raise the body-parser limits:
const express = require('express');
const app = express();
app.use(express.json({ limit: '50mb' }));
app.use(express.urlencoded({ limit: '50mb', extended: true }));

I had the same problem using Nginx Ingress on GKE.
These are the annotations that worked for me:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/client-max-body-size: "0"
    nginx.org/proxy-connect-timeout: 600s
    nginx.org/proxy-read-timeout: 600s
Don't forget to substitute values appropriate for your setup. Note that the nginx.org/* annotations are understood by NGINX Inc's controller, not by the community kubernetes/ingress-nginx controller, which uses the nginx.ingress.kubernetes.io/* prefix.
For more details you can see these docs:
Using Annotations
Client Max Body Size
PS I've installed my "Nginx Ingress Controller" following this tutorial.

There are some good answers here already for avoiding that 413.
For example, editing (better: redeploying) the Ingress with the following annotations:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
Furthermore, for NGINX a 413 error is returned to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the client_max_body_size parameter (reference).

There are a few things to keep in mind while setting this up:
It is better to recreate the Ingress object rather than edit it, to make sure the configuration is loaded correctly. Delete the Ingress and recreate it with the proper annotations.
If that's still not working, you can try the ConfigMap approach, for example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  proxy-body-size: "8m"
Notice that setting the size to 0 disables checking of the client request body size.
Remember to set the value greater than the size of the data you are trying to push.
Use the proper apiVersion based on your Kubernetes version. Notice that:
The extensions/v1beta1 and networking.k8s.io/v1beta1 API versions of Ingress are no longer served in v1.22. Migrate manifests and API clients to use the networking.k8s.io/v1 API version, available since v1.19. All existing persisted objects are accessible via the new API.
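As a sketch of that migration, the asker's Ingress rewritten against networking.k8s.io/v1 would look roughly like this (note ingressClassName replacing the kubernetes.io/ingress.class annotation, and the restructured backend; resource names are taken from the question):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "700m"
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
```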

The configuration below worked for me:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cs-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 16m
reference: https://github.com/kubernetes/ingress-nginx/issues/4825

Maybe my experience will help somebody. All my nginx settings were OK, but nginx sat behind Cloudflare in proxy mode, and Cloudflare enforces its own upload size limit (100 MB on the free plan). So try setting the record to "DNS only" to make sure there is nothing between you and nginx.

Related

Get an error 'unknown field "data"' when try to deploy an Ingress object to a kube cluster

Here is the config of the Ingress object:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  labels:
    app: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 1024m
    nginx.ingress.kubernetes.io/proxy-read-timeout: 5000
    nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
data:
  proxy-hide-headers: "Server"
  server-tokens: "False"
spec:
  rules:
  ...
When I run kubectl apply to create this Ingress, I get the following error:
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Ingress): unknown field "data" in io.k8s.api.networking.v1beta1.Ingress
The cluster version is 1.21.0.
After a long Google search I couldn't find any clue why this error happens, nor any deprecation of this field. Please help.
Looking at the error, the apiVersion (networking.k8s.io/v1beta1) you specified in the YAML file is unable to identify the data object.
It looks like the version has changed from v1beta1 to v1; could you verify the version you have, or change it to v1 and try?
Ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
Thanks
data is not a valid field for the api-resource Ingress.
data:
  proxy-hide-headers: "Server"
  server-tokens: "False"
Remove the above fields and add them via a ConfigMap for the ingress controller instead.
configmap.yaml
apiVersion: v1
data:
  proxy-hide-headers: "Server"
  server-tokens: "False"
kind: ConfigMap
metadata:
  name: test-ingress-cm
Then point the controller at the nginx ConfigMap (for ingress-nginx this is the --configmap flag on the controller). To see the allowed fields of any api-resource, run kubectl explain <api-resource>, e.g. kubectl explain ingress.
More details on custom configuration can be found in kubernetes.github.io site.

How to set max request body size in traefik ingress controller for kubernetes?

It's very easy to set a max request body size with nginx using the client_max_body_size directive. How can I do the same with the Kubernetes Traefik ingress controller? I know there is a maxrequestbodybytes directive, but I'm lost as to how to set it up in the YAML file describing my ingress.
This wasn't so easy to figure out. There is a funky multiline way of specifying this config in the YAML file. Check the traefik.ingress.kubernetes.io/buffering option to see the pipe (|) operator in action.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
  labels:
    domain: example.com
    deployment: production
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/frontend-entry-points: http
    traefik.ingress.kubernetes.io/buffering: |
      maxrequestbodybytes: 31457280
      memrequestbodybytes: 62914560
spec:
  etc....
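Note that those annotations apply to Traefik 1.x. On Traefik 2.x the equivalent is a buffering Middleware CRD that you then attach to the router or ingress; a rough sketch, assuming the Traefik CRDs are installed (the middleware name is illustrative):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: limit-request-body
spec:
  buffering:
    maxRequestBodyBytes: 31457280   # reject request bodies larger than 30 MiB
    memRequestBodyBytes: 2097152    # buffer to disk once the body exceeds 2 MiB in memory
```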

how to customize k8s nginx ingress HTTP headers error

In our cluster, there's a customized error-pages backend and an auth service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/auth-url: 'http:/***/auth'
https://kubernetes.github.io/ingress-nginx/user-guide/custom-errors/
https://github.com/kubernetes/ingress-nginx/tree/master/images/custom-error-pages
From the links above and the log shown in the error-pages backend for a 401 from the auth service, it only has 8 hardcoded headers, like below:
2020/04/14 03:24:35 request info &{GET /?access_token=pk1.eyJ1Ijoid2ViZXJ0YW8iLCJhIjoiY2pibTdmaWc2MTZqaDJybzFzcm93bGE2eiJ9.cwSE9DYCYP0dIeY4Hhp6Kg HTTP/1.0 1 0 map[Accept:[/] Accept-Encoding:[gzip, deflate, br] Cache-Control:[no-cache] Connection:[close] Postman-Token:[c7d07b51-5e3d-469d-9ec4-73be2cf5cd26] User-Agent:[PostmanRuntime/7.24.0] X-Code:[401] X-Format:[/] X-Ingress-Name:[static-api-ingress] X-Namespace:[default] X-Original-Uri:[/xxxx/-76.9,38.9,15/1000x1000#1x?access_token=pk1.eyJ1Ijoid2ViZXJ0YW8iLCJhIjoiY2pibTdmaWc2MTZqaDJybzFzcm93bGE2eiJ9.cwSE9DYCYP0dIeY4Hhp6Kg] X-Service-Name:[static-api-svc] X-Service-Port:[80]] {} 0 [] true api.staging.versalinks.net map[] map[] map[] 172.20.0.70:11440 /?access_token=pk1.eyJ1Ijoid2ViZXJ0YW8iLCJhIjoiY2pibTdmaWc2MTZqaDJybzFzcm93bGE2eiJ9.cwSE9DYCYP0dIeY4Hhp6Kg 0xc00009e0c0}
Is it possible to add some customized header like X-AUTH-INFO from the auth service?
You can use Custom Headers for the nginx ingress controller via a ConfigMap to pass a custom list of headers to the upstream server.
You should define a ConfigMap with your custom headers:
apiVersion: v1
data:
  X-Different-Name: "true"
  X-Request-Start: t=${msec}
  X-Using-Nginx-Controller: "true"
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
This defines a ConfigMap in the ingress-nginx namespace named custom-headers, holding several custom X-prefixed HTTP headers.
apiVersion: v1
data:
  proxy-set-headers: "ingress-nginx/custom-headers"
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
This defines a ConfigMap in the ingress-nginx namespace named nginx-configuration. This controls the global configuration of the ingress controller, and already exists in a standard installation. The key proxy-set-headers is set to reference the previously created ingress-nginx/custom-headers ConfigMap.
The nginx ingress controller will read the ingress-nginx/nginx-configuration ConfigMap, find the proxy-set-headers key, read HTTP headers from the ingress-nginx/custom-headers ConfigMap, and include those HTTP headers in all requests flowing from nginx to the backends.
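If the goal is specifically to forward headers produced by the auth-url service (such as X-AUTH-INFO), ingress-nginx also has the auth-response-headers annotation, which copies the listed headers from the auth service's response onto the request sent upstream. A sketch on the Ingress from the question:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-url: 'http:/***/auth'
    # comma-separated list of headers to copy from the auth service's response
    nginx.ingress.kubernetes.io/auth-response-headers: X-AUTH-INFO
```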

Kong Ingress Controller has no effect on Kong Plugins

I have gone through the kong-ingress-controller deployment and getting-started docs and done everything mentioned:
Update User Permissions
Deploy Kong Ingress Controller
Setup environment variables
Created Ingress with Routes
Everything works fine; I can access my applications based on the routes. But when I add the rate-limit plugin or any other plugin, it does not take any effect.
ingress.yaml :
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: kong
    plugins.konghq.com: http-ratelimit, http-auth
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /users
        backend:
          serviceName: my-service
          servicePort: 80
rate-limit.yaml :
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: http-ratelimit
  labels:
    global: 'true'
config:
  minute: 5
plugin: rate-limiting
But the rate limit plugin has no effect on my ingress.
NB: The kong-ingress-controller is in the kong namespace, but the other resources are in the default namespace. I tried moving everything to the kong namespace; then the plugins work, but the service does not, as it belongs in the default namespace.
Thanks in advance.
Looking at the Kong docs, the rate-limit YAML looks correct. If the resource is configured correctly, then Kong not matching the request against the Ingress resource means the request being sent is not the right one.
The KongPlugin and KongIngress should be in the same namespace as the Service; the YAML provided looks correct.
There must be something wrong in the Ingress YAML's annotations or configuration. Is your service annotated with the Ingress object?
I think you need to add this annotation to your KongPlugin:
annotations:
kubernetes.io/ingress.class: kong
So try with
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: http-ratelimit
  annotations:
    kubernetes.io/ingress.class: kong
[...]
In my scenario, I wanted to apply the KongPlugin on a specific Ingress Resource/Route.
What worked for me was to create the KongPlugin object in the same namespace where the Ingress Resource (and therefore, the target service) lived.
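If the plugin really must apply cluster-wide across namespaces, Kong also offers a cluster-scoped variant, KongClusterPlugin; a sketch mirroring the rate-limit example above:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: http-ratelimit
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: "true"   # apply the plugin to every request Kong proxies
config:
  minute: 5
plugin: rate-limiting
```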

GKE Ingress Basic Authentication (ingress.kubernetes.io/auth-type)

I'm trying to get a GKE ingress to require basic auth like this example from github.
The ingress works fine; it routes to the service. But the authentication isn't working: it allows all traffic right through. Has GKE not rolled this feature out yet? Or is something obviously wrong in my specs?
Here's the ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: super-ingress
  annotations:
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: basic-auth
    ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: zzz.host.com
    http:
      paths:
      - backend:
          serviceName: super-service
          servicePort: 9000
        path: /*
And the basic-auth secret:
$ kubectl get secret/basic-auth -o yaml
apiVersion: v1
data:
  auth: XXXXXXXXXXXXXXXXXXX
kind: Secret
metadata:
  creationTimestamp: 2016-10-03T21:21:52Z
  name: basic-auth
  namespace: default
  resourceVersion: "XXXXX"
  selfLink: /api/v1/namespaces/default/secrets/basic-auth
  uid: XXXXXXXXXXX
type: Opaque
Any insights are greatly appreciated!
The example you linked to is for the nginx ingress controller. GKE uses GLBC, which doesn't support auth.
You can deploy an nginx ingress controller in your GKE cluster. Note that you need to annotate your Ingress to avoid the GLBC claiming it. Then you can expose the nginx controller directly, or create a GLBC ingress to redirect traffic to the nginx ingress (see this snippet written by bprashanh).
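If you go the nginx-ingress route, the basic-auth Secret it expects holds an htpasswd-format file under the key auth. A sketch of generating one without the htpasswd tool, using openssl's apr1 scheme (the user admin and password secret are placeholders):

```shell
# Write an htpasswd-style entry that nginx's auth_basic accepts.
printf 'admin:%s\n' "$(openssl passwd -apr1 secret)" > auth
cat auth
```

The file can then be loaded with kubectl create secret generic basic-auth --from-file=auth and referenced from the nginx.ingress.kubernetes.io/auth-secret annotation (note the nginx.ingress.kubernetes.io prefix, not the GLBC ingress.kubernetes.io one).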