Explain CORS in a Kubernetes context

The following configuration is taken from here:
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: cors-example
spec:
  virtualhost:
    fqdn: www.example.com
    corsPolicy:
      allowCredentials: true
      allowOrigin:
        - "*" # allows any origin
      allowMethods:
        - GET
        - POST
        - OPTIONS
      allowHeaders:
        - authorization
        - cache-control
      exposeHeaders:
        - Content-Length
        - Content-Range
      maxAge: "10m" # preflight requests can be cached for 10 minutes
  routes:
    - conditions:
        - prefix: /
      services:
        - name: cors-example
          port: 80
My understanding is that entrance to the cluster is allowed only through www.example.com; any other external URL won't even hit the HTTPProxy.
Hence, I really do not get the role of corsPolicy. What exactly does it do? What does "allows any origin" mean? The only origin the HTTPProxy allows is www.example.com, correct?
In general, are there any CORS restrictions inside a K8s cluster (pod to pod)? My understanding again is no.
P.S. Please do not explain what CORS is. I know it very well; that is not my question.

I guess your overconfidence in knowing what CORS means is clouding your reasoning. Let's imagine the following scenario:
You are hosting a REST API at www.example.com.
I am a developer of www.somewebsite.com and I want to use your API.
My website tries to fetch data from www.example.com.
The above policy tells the browser to let me fetch your data, because the response from your server carries a header saying the allowed origins are *.
If you don't include this configuration, the browser will not let me consume your API, since my website's domain (www.somewebsite.com) is not allowed to call the API and is not the same as the one where the API is hosted.
You see, I am still fetching data from your domain, www.example.com, and the HTTPProxy will still hit your pods; it is the browser that prevents me from reading the data, unless you have the above configuration.
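If you wanted the browser to let only www.somewebsite.com read the responses, rather than every origin, the corsPolicy above could be narrowed. A minimal sketch (the origin value is illustrative, not from the original post; the rest of the HTTPProxy stays as shown above):
    corsPolicy:
      allowCredentials: true
      allowOrigin:
        - "https://www.somewebsite.com" # only this origin may read responses from browser scripts
      allowMethods:
        - GET
        - POST
        - OPTIONS
      allowHeaders:
        - authorization
        - cache-control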

Related

How to pass extra http headers to Okteto pod?

I've deployed the Duende IdentityServer to Okteto Cloud: https://id6-jeff-tian.cloud.okteto.net/.
Although the endpoint is HTTPS from the outside, the pods inside still think they are serving plain HTTP. You can check the discovery endpoint to see this: https://id6-jeff-tian.cloud.okteto.net/.well-known/openid-configuration
That causes issues during some redirects. So how do I let the inner pods know that they are hosted behind the HTTPS scheme?
Can we pass some headers to the IdP to tell it the original HTTPS scheme?
These headers should be forwarded to the inner pods:
X-Forwarded-For: Holds information about the client that initiated the request and subsequent proxies in a chain of proxies. This parameter may contain IP addresses and, optionally, port numbers.
X-Forwarded-Proto: The value of the original scheme, should be https in this case.
X-Forwarded-Host: The original value of the Host header field.
I searched the ASP.NET documentation and found this: https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?source=recommendations&view=aspnetcore-6.0. However, I don't know how to configure the headers in Okteto, or in any k8s cluster.
Is there anyone who can shed some light here?
My ingress configuration is as follows (https://github.com/Jeff-Tian/IdentityServer/blob/main/k8s/app/ingress.yaml):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: id6
  annotations:
    dev.okteto.com/generate-host: id6
spec:
  rules:
    - http:
        paths:
          - backend:
              service:
                name: id6
                port:
                  number: 80
            path: /
            pathType: ImplementationSpecific
The headers that you mention are being added to the request when it’s forwarded to your pods.
Could you dump the headers on the receiving end?
Not familiar with Duende, but does it have a setting to specify the “public URL”? That’s typically what I’ve done in the past for similar setups.
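If the cluster's ingress happens to be backed by the NGINX ingress controller (an assumption; Okteto may run a different controller), one way to force the forwarded scheme is a configuration-snippet annotation on the Ingress. A rough sketch:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: id6
  annotations:
    dev.okteto.com/generate-host: id6
    # Only meaningful with ingress-nginx; other controllers ignore or reject this annotation.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Forwarded-Proto https;
spec:
  # rules unchanged from the Ingress above
Otherwise, as suggested above, dumping the headers on the receiving end will show which X-Forwarded-* values actually arrive, and the ASP.NET forwarded-headers middleware from the linked documentation can be configured accordingly.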

How to replicate some traffic on Kubernetes and divert it to the Service for investigation

I'm using istio and I know that I can define weights in Virtual Service and divert traffic to different services.
My question is: how do I amplify some of the traffic and direct the amplified traffic to a validation service? This amplified traffic will not go back to the original source but will stay within the cluster; in other words, it does not bother the user.
I'm not even sure if there is an ecosystem, feature, or application that provides this kind of mechanism, and I don't know what it is called, so I'm having trouble finding it.
Thanks.
The OP found a solution themselves in the comments, hence the CW.
In this scenario, the best solution is to use Istio Mirroring - also called shadowing.
When traffic gets mirrored, the requests are sent to the mirrored service with their Host/Authority headers appended with -shadow. For example, cluster-1 becomes cluster-1-shadow.
Also, it is important to note that these requests are mirrored as “fire and forget”, which means that the responses are discarded.
To create a mirroring rule, you have to create a VirtualService with the mirror and mirrorPercentage fields.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
    - httpbin
  http:
    - route:
        - destination:
            host: httpbin
            subset: v1
          weight: 100
      mirror:
        host: httpbin
        subset: v2
      mirrorPercentage:
        value: 100.0
This route rule sends 100% of the traffic to v1. The last stanza specifies that you want to mirror (i.e., also send) 100% of the same traffic to the httpbin:v2 service.
[source]
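For completeness, the v1 and v2 subsets referenced above would normally be defined in a DestinationRule that selects pods by label. A minimal sketch, assuming the httpbin pods carry a version label:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  subsets:
    - name: v1
      labels:
        version: v1 # pods of the primary deployment
    - name: v2
      labels:
        version: v2 # pods of the validation deployment that receives mirrored traffic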

How do I forward headers to different services in Kubernetes (Istio)

I have a sample application (web-app, backend-1, backend-2) deployed on minikube all under a JWT policy, and they all have proper destination rules, Istio sidecar and MTLS enabled in order to secure the east-west traffic.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: oidc
spec:
  targets:
    - name: web-app
    - name: backend-1
    - name: backend-2
  peers:
    - mtls: {}
  origins:
    - jwt:
        issuer: "http://myurl/auth/realms/test"
        jwksUri: "http://myurl/auth/realms/test/protocol/openid-connect/certs"
  principalBinding: USE_ORIGIN
When I run the following command I receive a 401 unauthorized response when requesting the data from the backend, which is due to $TOKEN not being forwarded in the headers of the requests to backend-1 and backend-2.
$> curl http://minikubeip/api -H "Authorization: Bearer $TOKEN"
Is there a way to forward http headers to backend-1 and backend-2 using native kubernetes/istio? Am I forced to make application code changes to accomplish this?
Edit:
This is the error I get after applying my oidc policy. When I curl web-app with the auth token I get
{"errors":[{"code":"APP_ERROR_CODE","message":"401 Unauthorized"}
Note that when I curl backend-1 or backend-2 with the same auth-token I get the appropriate data. Also, there is no other destination rule/policy applied to these services currently, policy enforcement is on, and my istio version is 1.1.15.
This is the policy I am applying:
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: default
spec:
  # peers:
  # - mtls: {}
  origins:
    - jwt:
        issuer: "http://10.148.199.140:8080/auth/realms/test"
        jwksUri: "http://10.148.199.140:8080/auth/realms/test/protocol/openid-connect/certs"
  principalBinding: USE_ORIGIN
Should the token be propagated to backend-1 and backend-2 without any other changes?
Yes, the policy should forward the token to both backend-1 and backend-2.
There is a GitHub issue where users had the same problem as you.
A few pieces of information from there:
The JWT is verified by an Envoy filter, so you'll have to check the Envoy logs. For the code, see https://github.com/istio/proxy/tree/master/src/envoy/http/jwt_auth
Pilot retrieves the JWKS to be used by the filter (it is inlined into the Envoy config); you can find the code for that in pilot/pkg/security
There is another report of this problem on Stack Overflow, where the accepted answer is:
The problem was resolved with two options:
1. Replace the service name and port with the external server IP and external port (for issuer and jwksUri).
2. Disable the use of mTLS and its policy (known issue: https://github.com/istio/istio/issues/10062).
From istio documentation
For each service, Istio applies the narrowest matching policy. The order is: service-specific > namespace-wide > mesh-wide. If more than one service-specific policy matches a service, Istio selects one of them at random. Operators must avoid such conflicts when configuring their policies.
To enforce uniqueness for mesh-wide and namespace-wide policies, Istio accepts only one authentication policy per mesh and one authentication policy per namespace. Istio also requires mesh-wide and namespace-wide policies to have the specific name default.
If a service has no matching policies, both transport authentication and origin authentication are disabled.
Istio now supports header propagation; it probably did not when this thread was created.
You can allow the original header to be forwarded by setting forwardOriginalToken: true in JWTRules, or forward a validated JWT payload using outputPayloadToHeader in JWTRules.
Reference: Istio JWTRule documentation
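As a concrete illustration, here is a minimal sketch using the newer RequestAuthentication API (available in current Istio releases, not in the 1.1.x version from the question). The issuer and jwksUri reuse the values from the question; the selector label is an assumption:
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-forward
  namespace: default
spec:
  selector:
    matchLabels:
      app: web-app # assumption: the web-app pods carry this label
  jwtRules:
    - issuer: "http://myurl/auth/realms/test"
      jwksUri: "http://myurl/auth/realms/test/protocol/openid-connect/certs"
      forwardOriginalToken: true # keep the Authorization header on the request forwarded upstream
      # outputPayloadToHeader: x-jwt-payload # alternative: forward the verified payload in a custom header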

Can we set priority for the middlewares in traefik v2?

Using v1.7.9 in Kubernetes, I'm facing this issue:
if I set a rate limit (traefik.ingress.kubernetes.io/rate-limit) and custom response headers (traefik.ingress.kubernetes.io/custom-response-headers), then when a request gets rate limited, the custom headers won't be set. I guess it's because of some ordering/priority among these plugins. And I totally agree that hitting the rate limit should return a response as soon as possible, but it would be nice if we could modify the priorities when we need to.
The question therefore is: will we be able to set priorities for the middlewares?
I couldn't find any clue about it in the docs or among the GitHub issues.
Concrete use-case:
I want CORS policy headers to always be set, even if rate limiting kicked in. Otherwise my SPA won't get the response object, because the browser won't allow it:
Access to XMLHttpRequest at 'https://api.example.com/api/v1/resource' from origin 'https://cors.exmaple.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
In this case it would be a fine solution if I could just set the priority of the headers middleware higher than that of the rate-limit middleware.
For future reference, a working example that demonstrates such an ordering is here:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: ratelimit
spec:
  rateLimit:
    average: 100
    burst: 50
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: response-header
spec:
  headers:
    customResponseHeaders:
      X-Custom-Response-Header: "value"
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroute
spec:
  # more fields...
  routes:
    - # more fields...
      middlewares: # the middlewares will be called in this order
        - name: response-header
        - name: ratelimit
I asked the same question on the Containous' community forum: https://community.containo.us/t/can-we-set-priority-for-the-middlewares-in-v2/1326
Regular web pages can use the XMLHttpRequest object to send and receive data from remote servers, but they're limited by the same origin policy. Extensions aren't so limited. An extension can talk to remote servers outside of its origin, as long as it first requests cross-origin permissions.
1. While testing on your local machine, try replacing localhost with your local IP. You can enable credentialed requests with request.withCredentials = true, where request is the instance of XMLHttpRequest. CORS headers have to be added on the backend server to allow cross-origin access.
2. You could also write your own script that is responsible for executing the rate-limit middleware after the headers middleware.
In v2, the middlewares can be chained in the order you want; you can even apply the same type of middleware several times, with different configurations, on the same route.
https://docs.traefik.io/v2.0/middlewares/overview/
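For the concrete CORS use case above, the same headers middleware type can also emit the CORS response headers and be listed before the rate limiter, just like response-header in the example. A sketch assuming Traefik v2.2 or later (the CORS field names differ slightly in earlier v2 releases), with an illustrative origin:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: cors-headers
spec:
  headers:
    accessControlAllowOriginList:
      - "https://cors.example.com" # illustrative origin, adjust to the SPA's domain
    accessControlAllowMethods:
      - GET
      - POST
      - OPTIONS
    accessControlAllowHeaders:
      - authorization
    accessControlMaxAge: 600
    addVaryHeader: true
It would then be referenced before ratelimit in the IngressRoute's middlewares list, exactly as response-header is above.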

Path based routing issues Traefik as Ingress Controller

I'm running into what looks like a configuration issue! I am using Traefik as the ingress controller within Kubernetes, and I have an Ingress that routes some frontend URLs to various backends. Let's say I have something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: ReplacePathRegex
spec:
  rules:
    - host: foo.io
      http:
        paths:
          - path: /api/authservice/(.*) /$1
            backend:
              serviceName: auth
              servicePort: 8901
          - path: /api/svcXXX/v1/files/cover/(.*) /v1/files/cover/$1
            backend:
              serviceName: files
              servicePort: 8183
          - path: /api/svcXXX/v1/files/image/(.*) /v1/files/image/$1
            backend:
              serviceName: files
              servicePort: 8183
Using Postman (or any other client), if I POST a request to http://foo.io/api/authservice/auth/oauth/token, it seems to be routed to http://foo.io/api/svcXXX/v1/files/image/(.*) /v1/files/image/$1. I'm seeing this in the access logs:
[03/Jul/2018:12:57:17 +0000] "POST /api/authservice/auth/oauth/token HTTP/1.1" 401 102 "-" "PostmanRuntime/7.1.5" 15 "foo.io/api/svcXXX/v1/files/image/(.*) /v1/files/image/$1" 37ms
Am I doing something wrong ?
Note: since the documentation has changed, I've updated the links, but the content on the documentation pages may differ.
ReplacePathRegex is a modifier rule. According to documentation:
Modifier rules only modify the request. They do not have any impact on routing decisions being made.
Following is the list of existing modifier rules:
AddPrefix: /products: Add path prefix to the existing request path prior to forwarding the request to the backend.
ReplacePath: /serverless-path: Replaces the path and adds the old path to the X-Replaced-Path header. Useful for mapping to AWS Lambda or Google Cloud Functions.
ReplacePathRegex: ^/api/v2/(.*) /api/$1: Replaces the path with a regular expression and adds the old path to the X-Replaced-Path header. Separate the regular expression and the replacement by a space.
To route requests, you should use matchers:
Matcher rules determine if a particular request should be forwarded to a backend.
Separate multiple rule values by , (comma) in order to enable ANY semantics (i.e., forward a request if any rule matches). Does not work for Headers and HeadersRegexp.
Separate multiple rule values by ; (semicolon) in order to enable ALL semantics (i.e., forward a request if all rules match).
Path Matcher Usage Guidelines
This section explains when to use the various path matchers.
Use Path if your backend listens on the exact path only. For instance, Path: /products would match /products but not /products/shoes.
Use a Prefix matcher if your backend listens on a particular base path but also serves requests on sub-paths. For instance, PathPrefix: /products would match /products but also /products/shoes and /products/shirts. Since the path is forwarded as-is, your backend is expected to listen on /products.
Use a Strip matcher if your backend listens on the root path (/) but should be routable on a specific prefix. For instance, PathPrefixStrip: /products would match /products but also /products/shoes and /products/shirts. Since the path is stripped prior to forwarding, your backend is expected to listen on /. If your backend is serving assets (e.g., images or JavaScript files), chances are it must return properly constructed relative URLs. Continuing the example, the backend should return /products/shoes/image.png (and not /images.png, which Traefik would likely not be able to associate with the same backend). The X-Forwarded-Prefix header (available since Traefik 1.3) can be queried to build such URLs dynamically.
Instead of distinguishing your backends by path only, you can add a Host matcher to the mix. That way, namespacing of your backends happens on the basis of hosts in addition to paths.
The full list of matchers and their descriptions can be found here.
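Putting this together, one way to express the same routing with a matcher instead of a modifier is to match on a path prefix and strip it before forwarding, so the backends receive the remainder of the path. A rough sketch, assuming the auth backend listens on / and the files backend listens on /v1/files/...:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip # strip the matched prefix before forwarding
spec:
  rules:
    - host: foo.io
      http:
        paths:
          - path: /api/authservice # auth sees the path with this prefix removed
            backend:
              serviceName: auth
              servicePort: 8901
          - path: /api/svcXXX # files sees /v1/files/... after stripping
            backend:
              serviceName: files
              servicePort: 8183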