CORS rules for nginx-ingress - Kubernetes

I need to allow requests from multiple origins: http://localhost:4200, http://localhost:4242, etc., on nginx-ingress version 1.7.1. But I'm not able to do that for multiple origins, because nginx.ingress.kubernetes.io/cors-allow-credentials: true does not work with nginx.ingress.kubernetes.io/cors-allow-origin: "*", and that combination causes the browser to report a CORS error. Does anyone have a solution for avoiding this error?
This is my config:
annotations:
  kubernetes.io/ingress.class: "nginx"
  nginx.ingress.kubernetes.io/enable-cors: "true"
  nginx.ingress.kubernetes.io/cors-allow-origin: "*"
  nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS, DELETE"
  nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,X-CustomHeader,X-LANG,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,X-Api-Key,X-Device-Id,Access-Control-Allow-Origin"
The browser console shows:
Access to XMLHttpRequest at 'https://stage.site.com/api/session' from origin 'http://localhost:4200' has been blocked by CORS policy: The value of the 'Access-Control-Allow-Origin' header in the response must not be the wildcard '*' when the request's credentials mode is 'include'. The credentials mode of requests initiated by the XMLHttpRequest is controlled by the withCredentials attribute.

Add the annotation to enable CORS:
nginx.ingress.kubernetes.io/enable-cors: "true"
Be aware that the string "*" cannot be used for a resource that supports credentials (https://www.w3.org/TR/cors/#resource-requests); try your domain list (comma separated) instead of *.
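For example, a minimal sketch with an explicit origin and credentials enabled (values taken from the question; with older controller releases that accept only a single origin, repeat per Ingress or see the snippet-based answers below):
  nginx.ingress.kubernetes.io/enable-cors: "true"
  nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:4200"
  nginx.ingress.kubernetes.io/cors-allow-credentials: "true"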

You can create a second Ingress with a different domain and CORS origin, directing to the same destination. It's not the best solution, but it works, as sketched below.
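A rough sketch of that workaround (all names and hosts are illustrative, not from the question): two Ingress resources, each with its own host and cors-allow-origin, both routing to the same Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-local-4200            # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:4200"
spec:
  rules:
  - host: api.example.com         # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service     # hypothetical Service
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-local-4242            # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:4242"
spec:
  rules:
  - host: api-alt.example.com     # second hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service     # same backend Service
            port:
              number: 80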
Or:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "Access-Control-Allow-Origin: $http_origin";
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: PUT, GET, POST, OPTIONS, DELETE, PATCH
nginx.ingress.kubernetes.io/enable-cors: "true"
But be careful: $http_origin allows every origin!

This is a frequently requested feature: https://github.com/kubernetes/ingress-nginx/issues/5496
As a current workaround you can use the following snippet to define more than one domain for CORS: https://github.com/kubernetes/ingress-nginx/issues/5496#issuecomment-662798662
A PR has already been submitted and is awaiting completion, so this should roll out natively in one of the coming releases: https://github.com/kubernetes/ingress-nginx/pull/7134

Updated 2023:
You can now add multiple origins as a comma-separated value in cors-allow-origin.
Example:
nginx.ingress.kubernetes.io/cors-allow-origin: "https://example.com, https://another.com, http://localhost:8000"
Source: Cors Allow Multiple Origin
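Applied to the original question, the annotation set would then look roughly like this (a sketch, assuming a controller release that includes the feature):
  nginx.ingress.kubernetes.io/enable-cors: "true"
  nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:4200, http://localhost:4242"
  nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
  nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS, DELETE"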

You can add the following to your config to match your two origins against the $http_origin header that the ingress received, and add the Access-Control-Allow-Origin header only if the pattern matches:
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($http_origin ~* "^http://localhost:(4200|4242)$") {
    add_header Access-Control-Allow-Origin "$http_origin";
  }
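If the requests are credentialed (as in the question), a possible extension of the same snippet, an untested sketch, also returns the credentials header inside the matched block:
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($http_origin ~* "^http://localhost:(4200|4242)$") {
    # "always" makes nginx emit the headers on error responses too
    add_header Access-Control-Allow-Origin "$http_origin" always;
    add_header Access-Control-Allow-Credentials "true" always;
  }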

As Nicola Ben answered above, but this is what worked for me (add the annotations to enable CORS):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mybackend-ingress
  namespace: my-backend-namespace
  annotations:
    nginx.ingress.kubernetes.io/cors-allow-headers: Content-Type, authorization
    nginx.ingress.kubernetes.io/cors-allow-methods: PUT, GET, POST, OPTIONS
    nginx.ingress.kubernetes.io/cors-allow-origin: https://backend.your.url
    nginx.ingress.kubernetes.io/enable-cors: 'true'

Related

Kubernetes Nest CORS Issue

I am running a React project on Kubernetes and I have both the frontend and the backend as services in Kubernetes. The backend runs NestJS and GraphQL.
When I tried to reach it online using the URL https://my_backend.mydomain, everything went as expected, though I realized that with this architecture the backend would be publicly available.
So I decided to use the internal Kubernetes URL, such as http://my_service:my_port, and I got an error: Blocked loading mixed active content. I realized this was because of http instead of https, reconfigured the URL, and set up the connection as https://my-service:my_port. But now I get: Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource.
My ingress file looks like the following:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization"
nginx.ingress.kubernetes.io/cors-allow-origin: "$http_origin"
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "Content-Security-Policy: https://cdn.jsdelivr.net:* https://*.rsms.me:* https://rsms.me:* always";
  more_set_headers "Access-Control-Allow-Origin: *";
spec:
...
Am I mistaken somewhere? I could not find much about Kubernetes + NestJS + CORS. I keep ending up in documentation that says to add "Access-Control-Allow-Origin: *", but apparently that does not do much with my configuration.

Does a duplicated Ambassador mapping take down Kubernetes services?

I'm trying to deploy a cloned website in a second namespace, but I forgot to change the URL in Ambassador's Mapping resource. So both clones use the same URL, https://mywebsite.dev, when they were supposed to be https://mywebsite.dev and https://testing.mywebsite.dev. The main website was taken down soon after I ran kubectl apply for the second website, and both sites are offline now. Basically that means I applied the mapping.yaml twice in different namespaces.
Is there any chance the duplicated mapping causes the error? How can I fix it?
This is the yaml file:
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: mywebsite
spec:
  cors:
    credentials: true
    headers: x-csrf-token,Content-Type,Authorization
    methods: POST, PATCH, GET, OPTIONS, PUT, DELETE
    origins:
    - https://mywebsite.dev
  host: mywebsite.dev
  load_balancer:
    cookie:
      name: stickyname
    policy: ring_hash
  prefix: /
  resolver: endpoint
  service: http://my-website-service.default
  timeout_ms: 60000
It seems to be a bug in Ambassador 0.86: when it has a duplicated/bad mapping record it gets stuck and does not accept any new mapping records. I fixed it by deleting the Ambassador pods and letting the deployment recreate them. It has worked fine since.
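For reference, a sketch of what the second namespace's Mapping should probably have looked like (only the name, host, origin, and service namespace change; the namespace name here is an assumption, the other fields are as in the original):
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: mywebsite-testing
spec:
  host: testing.mywebsite.dev
  prefix: /
  service: http://my-website-service.testing   # assumed namespace for the clone
  cors:
    credentials: true
    headers: x-csrf-token,Content-Type,Authorization
    methods: POST, PATCH, GET, OPTIONS, PUT, DELETE
    origins:
    - https://testing.mywebsite.dev
  timeout_ms: 60000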

Double HSTS header for Kubernetes nginx

app1: HSTS enabled at the backend
app2: HSTS not enabled
I am trying to enable HSTS for a specific domain at nginx-ingress (https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/basic-configuration/).
However, one application in that cluster already has HSTS enabled while the other does not.
So if I add it to the ConfigMap it will take effect for both services, which will cause a double HSTS header for app1.
I am currently enabling HSTS for everything like this:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config-map
  namespace: default
data:
  http2: "true"
  ssl-redirect: "true"
  ssl-protocols: TLSv1.2 TLSv1.3
  ssl-prefer-server-ciphers: "true"
  ssl-ciphers: #########
  set-real-ip-from: 0.0.0.0/0
  real-ip-header: X-Forwarded-For
  # hsts enabled
  server-snippets: 'add_header Strict-Transport-Security "max-age=31536000; includeSubDomains;"'
Not sure if I am on the right track to resolve the HSTS issue; looking forward to hearing from others. :)
Update:
I came across a way to perform an if/else; just wondering if there is any way I can differentiate by my virtual server?
https://github.com/nginxinc/kubernetes-ingress/blob/3f0740d182a9f46cfc83ef085d9721cb102c97b9/examples/customization/nginx-config.yaml#L43
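One possible way to keep HSTS per application instead of cluster-wide, sketched here under the assumption that the nginx.org/server-snippets annotation of this controller is available: drop server-snippets from the ConfigMap and set the header only on app2's Ingress (names and host are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app2-ingress                 # hypothetical name
  annotations:
    nginx.org/server-snippets: |
      add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
spec:
  rules:
  - host: app2.example.com           # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2-service       # hypothetical Service
            port:
              number: 80
This way app1, which already sends HSTS from its backend, is left untouched and only app2 gets the header added at the edge.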

Can we set priority for the middlewares in traefik v2?

Using v1.7.9 in Kubernetes I'm facing this issue:
If I set a rate limit (traefik.ingress.kubernetes.io/rate-limit) and custom response headers (traefik.ingress.kubernetes.io/custom-response-headers), then when a request gets rate limited, the custom headers won't be set. I guess it's because of some ordering/priority among these plugins. And I totally agree that hitting the rate limit should return the response as soon as possible, but it would be nice if we could modify the priorities when we need to.
The question therefore is: will we be able to set priorities for the middlewares?
I couldn't find any clue of it in the docs nor among the github issues.
Concrete use-case:
I want CORS-policy headers to always be set, even if the rate limiting kicks in. I want this because my SPA won't get the response object otherwise, because the browser won't allow it:
Access to XMLHttpRequest at 'https://api.example.com/api/v1/resource' from origin 'https://cors.example.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
In this case it would be a fine solution if I could just set the priority of the headers middleware higher than that of the rate limit middleware.
For future reference, a working example that demonstrates such an ordering is here:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: ratelimit
spec:
  rateLimit:
    average: 100
    burst: 50
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: response-header
spec:
  headers:
    customResponseHeaders:
      X-Custom-Response-Header: "value"
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroute
spec:
  # more fields...
  routes:
  # more fields...
    middlewares: # the middlewares will be called in this order
      - name: response-header
      - name: ratelimit
I asked the same question on the Containous' community forum: https://community.containo.us/t/can-we-set-priority-for-the-middlewares-in-v2/1326
Regular web pages can use the XMLHttpRequest object to send and receive data from remote servers, but they're limited by the same origin policy. Extensions aren't so limited. An extension can talk to remote servers outside of its origin, as long as it first requests cross-origin permissions.
1. While testing on your local machine, try replacing localhost with your local IP. You may have to enable credentials with the following line of code: request.withCredentials = true;, where request is the instance of XMLHttpRequest. CORS headers have to be added on the backend server to allow cross-origin access.
2. You could just write your own script which is responsible for executing the rate limit middleware after the headers middleware.
In v2, the middlewares can be ordered the way you want; you can even apply the same type of middleware several times with different configurations on the same route.
https://docs.traefik.io/v2.0/middlewares/overview/
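For the concrete CORS use-case above, a sketch of a headers middleware carrying the CORS fields (assuming Traefik v2.2+, where accessControlAllowOriginList is available; the origin is taken from the question, the name is hypothetical):
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: cors-headers                 # hypothetical name
spec:
  headers:
    accessControlAllowOriginList:
      - https://cors.example.com
    accessControlAllowMethods:
      - GET
      - POST
      - OPTIONS
    accessControlAllowCredentials: true
    addVaryHeader: true
It would then be referenced before ratelimit in the route's middlewares list, so rate-limited responses should still pass back through the CORS headers middleware.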

mutual TLS based on specific IP

I'm trying to configure nginx-ingress for mutual TLS, but only for a specific remote address. I tried to use a snippet, but with no success:
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($remote_addr = 104.214.x.x) {
    auth-tls-verify-client: on;
    auth-tls-secret: namespace/nginx-ca-secret;
    auth-tls-verify-depth: 1;
    auth-tls-pass-certificate-to-upstream: false;
  }
The auth-tls annotations work when applied as annotations, but inside the snippet they don't.
Any idea how to configure this or maybe a workaround to make it work?
The job of mTLS is basically restricting access to a service by requiring the client to present a certificate. If you expose a service and then require only clients with specific IP addresses to present a certificate, the entire rest of the world can still access your service without a certificate, which completely defeats the point of mTLS.
If you want more info, here is a good article that explains why TLS and mTLS exist and what is the difference between them.
There are two ways to make a sensible setup out of this:
1. Just use regular TLS instead of mTLS.
2. Make a service in your cluster require mTLS to access it, regardless of IP addresses.
If you go for option 2, you need to configure the service itself to use mTLS, and then configure ingress to pass through the client certificate to the service. Here's a sample configuration for nginx ingress that will work with a service that expects mTLS:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mtls-sample
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "https"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  rules:
  - http:
      paths:
      - path: /hello
        backend:
          serviceName: mtls-svc
          servicePort: 443
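On clusters where the extensions/v1beta1 Ingress API has been removed, an equivalent manifest (a sketch with the same annotations, rewritten for the networking.k8s.io/v1 schema) would look roughly like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mtls-sample
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "https"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: mtls-svc
            port:
              number: 443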