Kubernetes CORS error with oauth2 reverse proxy

I am facing a CORS policy issue with oauth2-proxy on Kubernetes.
My setup:
ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: supervisor-ingress
  namespace: management
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-staging
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
oauth2 proxy ingress:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oauth2-proxy
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
I am running a React application on localhost.
I strongly believe this issue is oauth2-proxy related, as disabling the auth-url annotation makes everything work fine.
I have tried multiple headers and CORS policies, but they do not seem to be applied to the oauth2-proxy responses.
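For example, the annotations I experimented with look roughly like this (a sketch; the localhost origin matches my dev setup, and exact values varied between attempts):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    # enable CORS handling in ingress-nginx; values below are illustrative
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:3000"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-headers: "Authorization, Content-Type"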
response message:
Access to XMLHttpRequest at '' from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.

Related

How to use the Google Managed Certificate for GKE Kubernetes

I already have HTTPS working for the frontend at flytime.io (Cloud Run).
Now I want HTTPS support for the backend (multi-cluster ingress on GKE Autopilot), following this manual, intending to use the api.flytime.io domain:
https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-ingress#https_support
But how should I configure PATH_TO_KEYFILE and PATH_TO_CERTFILE from the manual (or is there another way to do it)? If using a Google-managed certificate is not possible (why?), how do I generate a certificate for the hostname api.flytime.io and obtain PATH_TO_KEYFILE and PATH_TO_CERTFILE?
If you are using GKE managed certificates, you don't use the Secret-based method for setting up SSL in your Ingress. You have to create a ManagedCertificate object and then reference its name in your Ingress via the networking.gke.io/managed-certificates annotation.
Here's an example. First, create the ManagedCertificate object.
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
spec:
  domains:
  - DOMAIN_NAME1  # <== must be a valid domain that you own
Now, reference this in your Ingress as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: managed-cert-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: ADDRESS_NAME
    networking.gke.io/managed-certificates: managed-cert  # <== HERE IS YOUR CERT
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: mc-service
      port:
        number: SERVICE_PORT
You can find more information on this docs page.
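Provisioning can take a while after both objects are applied; the ManagedCertificate reports progress in its status field, roughly like this (a sketch with illustrative values):
status:
  certificateStatus: Active  # stays Provisioning until issuance completes
  domainStatus:
  - domain: DOMAIN_NAME1
    status: Active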

Redirect request when authentication fails in Kubernetes ingress controller

Can I redirect the request to some custom URL when authentication against the URL specified in the nginx.ingress.kubernetes.io/auth-url annotation fails in the Kubernetes ingress controller?
Example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-url: https://httpbin.org/basic-auth/user/passwd
  creationTimestamp: 2016-10-03T13:50:35Z
Thanks.
Your requirement to:
redirect the request to some custom URL when Kubernetes ingress controller authentication fails
can be addressed by using the following annotation:
nginx.ingress.kubernetes.io/auth-signin: https://SOME_URL/
Explanation
Assuming that you have an Ingress manifest (only part of it is shown):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/auth-url: http://httpbin.default.svc.cluster.local:80/basic-auth/user/password
    nginx.ingress.kubernetes.io/auth-signin: https://SOME_URL/
spec:
<-- REDACTED -->
I've tested a setup like this, and it works as follows:
You are first redirected by the auth-url annotation to the internal Service (httpbin), where you are prompted for credentials.
If you fail that authentication step, you are redirected to the auth-signin URL, which could for example be https://github.com/login (this is only an example, not an actual login configuration).
Additional resources:
Kubernetes.github.io: Ingress nginx: User guide: Nginx configuration: Annotations
Httpbin.org

Nginx Ingress returns 413 Entity Too Large

On my cluster, I'm trying to upload a big file, but when I try, I get a 413 error:
error parsing HTTP 413 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>413 Request Entity Too Large</title></head>\r\n<body>\r\n<center><h1>413 Request Entity Too Large</h1></center>\r\n<hr><center>nginx/1.19.3</center>\r\n</body>\r\n</html>\r\n"
I know that this is caused by a default NGINX parameter that I need to override. In the documentation I've found that this can be done in two ways:
using an annotation on the Ingress config
using a ConfigMap
I have tried both ways with no result.
Here are my YAML files:
ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "700m"
  name: nginx-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
        path: /
and configmap.yml:
apiVersion: v1
data:
  proxy-body-size: "800m"
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  labels:
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/part-of: nginx-ingress
For future searches, this is what worked for me:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
  name: docker-registry
  namespace: docker-registry
spec:
For NGINX, a 413 error will be returned to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter client_max_body_size.
To configure this setting globally for all Ingress rules, the proxy-body-size value may be set in the NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:
nginx.ingress.kubernetes.io/proxy-body-size: 8m
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-max-body-size
For Node.js (Express) you might also need to set:
app.use(express.json({ limit: '50mb' }));
app.use(express.urlencoded({ limit: '50mb' }));
I had the same problem using NGINX Ingress on GKE.
These are the annotations that worked for me:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/client-max-body-size: "0"
    nginx.org/proxy-connect-timeout: 600s
    nginx.org/proxy-read-timeout: 600s
Don't forget to put in the correct values for your case.
For more details you can see these docs:
Using Annotations
Client Max Body Size
PS I've installed my "Nginx Ingress Controller" following this tutorial.
There are already some good answers here for avoiding that 413, for example editing the Ingress (better: redeploying it) with the following annotations:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
Furthermore, for NGINX a 413 error will be returned to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the client_max_body_size parameter (reference).
There are a few things to keep in mind while setting this up:
It is better to recreate the Ingress object rather than edit it, to make sure that the configuration is loaded correctly. Delete the Ingress and recreate it with the proper annotations.
If that's still not working, you can try the ConfigMap approach, for example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  proxy-body-size: "8m"
Notice that setting size to 0 disables checking of client request body size.
Remember to set the value to something greater than the size of the data you are trying to push.
Use the proper apiVersion based on your Kubernetes version. Notice that:
The extensions/v1beta1 and networking.k8s.io/v1beta1 API versions of Ingress will no longer be served in v1.22. Migrate manifests and API clients to use the networking.k8s.io/v1 API version, available since v1.19. All existing persisted objects are accessible via the new API.
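For example, the ingress.yml from the question migrated to networking.k8s.io/v1 would look roughly like this (a sketch; note the new required pathType field and the restructured backend):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "700m"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix  # required in networking.k8s.io/v1
        backend:
          service:  # serviceName/servicePort became service.name and service.port
            name: nginx-service
            port:
              number: 80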
The configuration below worked for me:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cs-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 16m
reference: https://github.com/kubernetes/ingress-nginx/issues/4825
Maybe my experience will help somebody. All my settings were OK in NGINX, but NGINX sat behind Cloudflare in proxied mode. So try setting the DNS record to "DNS only" to make sure there is nothing between you and NGINX.

Unable to use cert-manager and nginx ingress controller with SSL termination

I am trying out nginx-ingress on GKE with SSL termination for my use cases. I've been through countless blog posts on this process that use cert-manager with the nginx ingress controller, but none of them worked in my case.
That certainly means I am doing something wrong, but I am not sure what. Here's what I did:
Create sample app exposed on ClusterIP
Deploy nginx-ingress
Create issuer
Create nginx ingress with issuer.
Result:
After describing the nginx ingress, the Events section shows nothing. Everything is completely blank: not a single event for requesting certs, HTTP validation, etc.
nginx ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
    kubernetes.io/tls-acme: 'true'
spec:
  rules:
  - host: wptls.ml
    http: {paths: [{path: /, backend: {serviceName: web, servicePort: 80}}]}
  tls:
  - secretName: tls-staging-cert
    hosts: [wptls.ml]
clusterissuer.yml:
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: 'https://acme-staging-v02.api.letsencrypt.org/directory'
    email: xyz@gmail.com
    privateKeySecretRef:
      name: letsencrypt-sec-staging
    http01: {}
I am not sure if there's anything else which needs to be done.
Try extra Ingress annotations like:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/secure-backends: "true"
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
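In context, those would sit alongside the existing annotations; a sketch based on the my-ingress manifest above:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
    kubernetes.io/tls-acme: 'true'
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"  # pass TLS through to the backend
    nginx.ingress.kubernetes.io/secure-backends: "true"  # talk HTTPS to the backend
Note that ssl-passthrough only takes effect if the controller was started with the --enable-ssl-passthrough flag.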

GKE Ingress Basic Authentication (ingress.kubernetes.io/auth-type)

I'm trying to get a GKE ingress to require basic auth like this example from github.
The ingress works fine and routes to the service, but the authentication isn't working; it allows all traffic right through. Has GKE not rolled this feature out yet? Is something obviously wrong in my specs?
Here's the ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: super-ingress
  annotations:
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: basic-auth
    ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: zzz.host.com
    http:
      paths:
      - backend:
          serviceName: super-service
          servicePort: 9000
        path: /*
And the basic-auth secret:
$ kubectl get secret/basic-auth -o yaml
apiVersion: v1
data:
  auth: XXXXXXXXXXXXXXXXXXX
kind: Secret
metadata:
  creationTimestamp: 2016-10-03T21:21:52Z
  name: basic-auth
  namespace: default
  resourceVersion: "XXXXX"
  selfLink: /api/v1/namespaces/default/secrets/basic-auth
  uid: XXXXXXXXXXX
type: Opaque
Any insights are greatly appreciated!
The example you linked to is for the nginx ingress controller. GKE uses GLBC, which doesn't support auth.
You can deploy an nginx ingress controller in your GKE cluster. Note that you need to annotate your ingress to avoid the GLBC claiming the ingress, as sketched below. Then you can expose the nginx controller directly, or create a GLBC ingress to redirect traffic to the nginx ingress (see this snippet written by bprashanh).
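A minimal sketch of what that looks like, based on the super-ingress manifest above (assuming a controller version that still honors the legacy ingress.kubernetes.io/auth-* annotations; newer nginx-ingress releases use the nginx.ingress.kubernetes.io/ prefix):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: super-ingress
  annotations:
    kubernetes.io/ingress.class: nginx  # keeps GLBC from claiming this Ingress
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: basic-auth
    ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: zzz.host.com
    http:
      paths:
      - backend:
          serviceName: super-service
          servicePort: 9000
        path: /  # nginx-style path (GLBC's /* syntax doesn't apply here)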