Can we set priority for the middlewares in traefik v2? - kubernetes

Using v1.7.9 in Kubernetes I'm facing this issue:
If I set a rate limit (traefik.ingress.kubernetes.io/rate-limit) and custom response headers (traefik.ingress.kubernetes.io/custom-response-headers), then when a request gets rate limited, the custom headers won't be set. I guess it's because of some ordering/priority among these plugins. I totally agree that hitting the rate limit should return the response as soon as possible, but it would be nice if we could modify the priorities when we need to.
The question therefore is: will we be able to set priorities for the middlewares?
I couldn't find any mention of it in the docs or among the GitHub issues.
Concrete use-case:
I want CORS-policy headers to always be set, even if rate limiting has kicked in. I want this because my SPA won't get the response object otherwise, because the browser won't allow it:
Access to XMLHttpRequest at 'https://api.example.com/api/v1/resource' from origin 'https://cors.example.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
In this case it would be a fine solution if I could just set the priority of the headers middleware higher than the rate-limit middleware.

For future reference, a working example that demonstrates such an ordering is here:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: ratelimit
spec:
  rateLimit:
    average: 100
    burst: 50
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: response-header
spec:
  headers:
    customResponseHeaders:
      X-Custom-Response-Header: "value"
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroute
spec:
  # more fields...
  routes:
    # more fields...
    - middlewares: # the middlewares will be called in this order
        - name: response-header
        - name: ratelimit
I asked the same question on the Containous community forum: https://community.containo.us/t/can-we-set-priority-for-the-middlewares-in-v2/1326

Regular web pages can use the XMLHttpRequest object to send and receive data from remote servers, but they're limited by the same origin policy. Extensions aren't so limited. An extension can talk to remote servers outside of its origin, as long as it first requests cross-origin permissions.
1. While testing on your local machine, replace localhost with your local IP. You may also need request.withCredentials = true;, where request is the XMLHttpRequest instance. CORS headers have to be added on the backend server to allow cross-origin access.
2. You could also arrange the chain yourself so that the rate-limit middleware runs after the headers middleware.

In v2, middlewares can be chained in the order you want; you can even use the same type of middleware several times with different configurations on the same route, as sketched below.
https://docs.traefik.io/v2.0/middlewares/overview/
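For example, a minimal sketch (the middleware, route, and service names here are illustrative, not from the question) that chains two headers middlewares with different configurations on the same route:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: cors-headers          # hypothetical
spec:
  headers:
    customResponseHeaders:
      Access-Control-Allow-Origin: "https://cors.example.com"
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: security-headers      # hypothetical
spec:
  headers:
    customResponseHeaders:
      X-Frame-Options: "DENY"
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: example-route         # hypothetical
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`api.example.com`)
      kind: Rule
      services:
        - name: api-service   # hypothetical backend Service
          port: 80
      middlewares:            # applied in exactly this order
        - name: cors-headers
        - name: security-headers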

Related

How to reflect the HTTP method to a Keycloak resource when using an Ambassador filter

I'm trying to integrate Ambassador and Keycloak, so that all my microservices behind Ambassador can be protected by Keycloak.
I can already handle an easy case by setting a Filter + FilterPolicy. Say my resource is GET /products/:productId: if the user wants to visit this page, Ambassador will intercept the request and redirect to the Keycloak login page. The FilterPolicy settings look like:
apiVersion: getambassador.io/v2
kind: FilterPolicy
metadata:
  name: keycloak-filter-policy
  namespace: ambassador
spec:
  rules:
    - host: "*"
      path: /product/:productId
      filters:
        - name: keycloak-filter
          namespace: ambassador
          arguments:
            scopes:
My question is: how could I define a policy like POST /product/:productId? On Keycloak I have resources + policies such as product:view and product:edit; how can I translate these resources into Ambassador's filter policies?
To directly answer your question, currently, you cannot add the HTTP method to the FilterPolicy. There is a workaround if you need to define more granular access control based on what you are trying to do with the resource.
For example, if you are using HTTP/2 or HTTP/3 you can get the method from the request headers: there is a pseudo-header called :method.
Link for HTTP spec: https://httpwg.org/specs/rfc7540.html#HttpRequest
Link for Ambassador's Filters Doc: https://www.getambassador.io/docs/edge-stack/latest/topics/using/filters/

Explain CORS in Kubernetes context

The following configuration has been taken from here:
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: cors-example
spec:
  virtualhost:
    fqdn: www.example.com
    corsPolicy:
      allowCredentials: true
      allowOrigin:
        - "*" # allows any origin
      allowMethods:
        - GET
        - POST
        - OPTIONS
      allowHeaders:
        - authorization
        - cache-control
      exposeHeaders:
        - Content-Length
        - Content-Range
      maxAge: "10m" # preflight requests can be cached for 10 minutes.
  routes:
    - conditions:
        - prefix: /
      services:
        - name: cors-example
          port: 80
My understanding is that entrance to the cluster is allowed only through www.example.com. Any other external URL won't even hit the HTTPProxy.
Hence, I really do not get the role of corsPolicy. What exactly does it do? What does "allows any origin" mean? The only origin the HTTPProxy allows is www.example.com, correct?
In general, are there any CORS restrictions inside a K8s cluster (pod to pod)? My understanding again is no.
P.S. Please do not explain what CORS is. I know it very well; this is not my question.
I guess your overconfidence in knowing what CORS means is clouding your reasoning. Let's imagine the following scenario:
You are hosting a REST API at www.example.com
I am a developer of www.somewebsite.com and I want to use your API.
My website tries to fetch data from www.example.com.
The above policy will tell the browser to allow me to fetch your data, since I will get a response from your server with a header saying that the allowed origins are *.
If you don't include this configuration, the browser will not let me consume your API, since the domain of my website [www.somewebsite.com] is not allowed to call this API and is not the same as the one where you host the API.
You see, I am still fetching data from your domain, www.example.com, and the HTTPProxy will still hit your pods, but the browser is the one that will prevent me from getting the data unless you have the above configuration.
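If you wanted only that one site, rather than any origin, to read the responses, a minimal sketch (assuming Contour accepts exact origins in allowOrigin, and using the developer's domain from the scenario as a placeholder) would be:
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: cors-example
spec:
  virtualhost:
    fqdn: www.example.com
    corsPolicy:
      allowCredentials: true
      allowOrigin:
        - https://www.somewebsite.com   # only this origin may read responses in a browser
      allowMethods:
        - GET
        - POST
        - OPTIONS
  routes:
    - conditions:
        - prefix: /
      services:
        - name: cors-example
          port: 80
As for pod-to-pod traffic inside the cluster: CORS is enforced by the browser, not by the proxy or the pods, so plain service-to-service calls are not affected by this policy.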

How to replicate some traffic on Kubernetes and divert it to the Service for investigation

I'm using istio and I know that I can define weights in Virtual Service and divert traffic to different services.
My question is: how can I replicate some of the traffic and direct the replicated traffic to the validation service? This replicated traffic will not go back to the original source but will stay within the cluster; in other words, it does not bother the user.
I'm not even sure whether there is a feature or tool that provides this kind of mechanism, and I don't know what it would be called, so I'm having trouble finding it.
Thanks.
OP found a solution themselves in the comments, hence the CW.
In this scenario, the best solution is to use Istio Mirroring - also called shadowing.
When traffic gets mirrored, the requests are sent to the mirrored service with their Host/Authority headers appended with -shadow. For example, cluster-1 becomes cluster-1-shadow.
Also, it is important to note that these requests are mirrored as “fire and forget”, which means that the responses are discarded.
To create a mirroring rule, you have to create a VirtualService with the mirror and mirrorPercentage fields.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
    - httpbin
  http:
    - route:
        - destination:
            host: httpbin
            subset: v1
          weight: 100
      mirror:
        host: httpbin
        subset: v2
      mirrorPercentage:
        value: 100.0
This route rule sends 100% of the traffic to v1. The last stanza specifies that you want to mirror (i.e., also send) 100% of the same traffic to the httpbin:v2 service.
[source]
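The v1 and v2 subsets referenced above are assumed to be defined in a DestinationRule; a minimal sketch (the version labels are assumptions about how the httpbin Deployments are labelled) would be:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  subsets:
    - name: v1
      labels:
        version: v1   # assumed pod label on the primary deployment
    - name: v2
      labels:
        version: v2   # assumed pod label on the validation deployment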

Does a duplicated Ambassador Mapping take down Kubernetes services?

I'm trying to deploy a cloned website in a second namespace, but I forgot to change the URL in Ambassador's Mapping resource. So both clones use the same URL, https://mywebsite.dev, when they were supposed to be https://mywebsite.dev and https://testing.mywebsite.dev. The main website was taken down soon after I ran kubectl apply for the second website. Both sites are offline now. Basically, that means I applied the same mapping.yaml twice in different namespaces.
Is there any chance that the duplicated Mapping causes the error? How can I fix that?
This is the yaml file:
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: mywebsite
spec:
  cors:
    credentials: true
    headers: x-csrf-token,Content-Type,Authorization
    methods: POST, PATCH, GET, OPTIONS, PUT, DELETE
    origins:
      - https://mywebsite.dev
  host: mywebsite.dev
  load_balancer:
    cookie:
      name: stickyname
    policy: ring_hash
  prefix: /
  resolver: endpoint
  service: http://my-website-service.default
  timeout_ms: 60000
It seems to be a bug in Ambassador 0.86: when it has a duplicated/bad Mapping record it gets stuck and doesn't accept any new Mapping records. I fixed it by deleting the Ambassador pods and letting the Deployment recreate them; everything worked fine afterwards.
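Once Ambassador is healthy again, the second clone needs its own Mapping with a distinct host; a minimal sketch (the testing hostname comes from the question, while the Mapping name, namespace, and service are assumptions) would be:
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: mywebsite-testing               # hypothetical
  namespace: testing                    # assumed second namespace
spec:
  host: testing.mywebsite.dev           # distinct host, so it no longer clashes with the main site
  prefix: /
  service: http://my-website-service.testing   # assumed Service in the second namespace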

How do I forward headers to different services in Kubernetes (Istio)

I have a sample application (web-app, backend-1, backend-2) deployed on minikube, all under a JWT policy, and they all have proper destination rules, Istio sidecars and mTLS enabled in order to secure the east-west traffic.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: oidc
spec:
  targets:
    - name: web-app
    - name: backend-1
    - name: backend-2
  peers:
    - mtls: {}
  origins:
    - jwt:
        issuer: "http://myurl/auth/realms/test"
        jwksUri: "http://myurl/auth/realms/test/protocol/openid-connect/certs"
  principalBinding: USE_ORIGIN
When I run the following command I receive a 401 Unauthorized response when requesting the data from the backend, which is due to $TOKEN not being forwarded in the headers of the requests to backend-1 and backend-2.
$> curl http://minikubeip/api -H "Authorization: Bearer $TOKEN"
Is there a way to forward http headers to backend-1 and backend-2 using native kubernetes/istio? Am I forced to make application code changes to accomplish this?
Edit:
This is the error I get after applying my oidc policy. When I curl web-app with the auth token I get
{"errors":[{"code":"APP_ERROR_CODE","message":"401 Unauthorized"}
Note that when I curl backend-1 or backend-2 with the same auth-token I get the appropriate data. Also, there is no other destination rule/policy applied to these services currently, policy enforcement is on, and my istio version is 1.1.15.
This is the policy I am applying:
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: default
spec:
  # peers:
  # - mtls: {}
  origins:
    - jwt:
        issuer: "http://10.148.199.140:8080/auth/realms/test"
        jwksUri: "http://10.148.199.140:8080/auth/realms/test/protocol/openid-connect/certs"
  principalBinding: USE_ORIGIN
Should the token be propagated to backend-1 and backend-2 without any other changes?
Yes, the policy should pass the token on to both backend-1 and backend-2.
There is a GitHub issue where users had the same problem as you.
A few pieces of information from there:
The JWT is verified by an Envoy filter, so you'll have to check the Envoy logs. For the code, see https://github.com/istio/proxy/tree/master/src/envoy/http/jwt_auth
Pilot retrieves the JWKS to be used by the filter (it is inlined into the Envoy config); you can find the code for that in pilot/pkg/security.
There is another report of this problem on Stack Overflow,
where the accepted answer is:
The problem was resolved with two options: 1. Replace the service name and port with the external server IP and external port (for issuer and jwksUri). 2. Disable the usage of mTLS and its policy (known issue: https://github.com/istio/istio/issues/10062).
From the Istio documentation:
For each service, Istio applies the narrowest matching policy. The order is: service-specific > namespace-wide > mesh-wide. If more than one service-specific policy matches a service, Istio selects one of them at random. Operators must avoid such conflicts when configuring their policies.
To enforce uniqueness for mesh-wide and namespace-wide policies, Istio accepts only one authentication policy per mesh and one authentication policy per namespace. Istio also requires mesh-wide and namespace-wide policies to have the specific name default.
If a service has no matching policies, both transport authentication and origin authentication are disabled.
Istio now supports header propagation; it probably didn't when this thread was created.
You can allow the original Authorization header to be forwarded by setting forwardOriginalToken: true in JWTRules, or forward a valid JWT payload using outputPayloadToHeader in JWTRules.
Reference: Istio JWTRule documentation
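A minimal sketch with the newer RequestAuthentication API (the issuer and jwksUri are taken from the question; the resource name and selector label are assumptions) could look like:
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-forward                    # hypothetical
  namespace: default
spec:
  selector:
    matchLabels:
      app: web-app                     # assumed workload label
  jwtRules:
    - issuer: "http://myurl/auth/realms/test"
      jwksUri: "http://myurl/auth/realms/test/protocol/openid-connect/certs"
      forwardOriginalToken: true       # keep the Authorization header on the forwarded request
      # outputPayloadToHeader: x-jwt-payload   # alternatively, forward the decoded payload in a header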