How to reflect the HTTP method in a Keycloak resource when using an Ambassador filter - keycloak

I'm trying to integrate Ambassador with Keycloak, so that all the microservices behind Ambassador are protected by Keycloak.
I can already handle a simple case by setting a Filter plus a FilterPolicy. Say my resource is GET /product/:productId: if a user wants to visit this page, Ambassador will intercept the request and redirect to the Keycloak login page. The FilterPolicy settings look like:
apiVersion: getambassador.io/v2
kind: FilterPolicy
metadata:
  name: keycloak-filter-policy
  namespace: ambassador
spec:
  rules:
  - host: "*"
    path: /product/:productId
    filters:
    - name: keycloak-filter
      namespace: ambassador
      arguments:
        scopes:
My question is: how can I define a policy like POST /product/:productId? On Keycloak I have resources and policies such as product:view and product:edit. How can I translate those resources into Ambassador filter policies?

To answer your question directly: currently you cannot add the HTTP method to a FilterPolicy. There is a workaround if you need to define more granular access control based on what you are trying to do with the resource.
For example, if you are using HTTP/2 or HTTP/3 you can get the method from the request headers: there is a pseudo-header called :method.
Link to the HTTP/2 spec: https://httpwg.org/specs/rfc7540.html#HttpRequest
Link to Ambassador's Filters docs: https://www.getambassador.io/docs/edge-stack/latest/topics/using/filters/
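One way to act on that pseudo-header is an External filter: route the request through an authorization service of your own, which reads :method and maps it to the matching Keycloak scope (e.g. POST → product:edit). A minimal sketch, where the filter name, the auth service address, and the mapping are all assumptions, not something from the question:

```yaml
# Sketch: an External filter forwarding requests to a custom auth service;
# that service can inspect the ":method" pseudo-header and decide which
# Keycloak scope (product:view vs product:edit) to enforce.
apiVersion: getambassador.io/v2
kind: Filter
metadata:
  name: method-aware-filter        # hypothetical name
  namespace: ambassador
spec:
  External:
    auth_service: "auth-service.default:3000"  # placeholder address
    proto: http
    allowed_request_headers:
    - ":method"                    # pseudo-header carrying the HTTP verb
```

The auth service itself would then call Keycloak's token endpoint to check the scope, returning 200 to allow the request or 403 to reject it.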

Related

Searching annotations within Kubernetes controllers

I was trying to find annotations to do some basic auth in the Nginx controller.
Most of the resources on the internet specify this annotation:
"nginx.ingress.kuberenetes.io/..."
while I found in the nginx docs:
https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/
that it was switched to "nginx.org".
Searching external docs for answers seems a bit of a detour.
Is there a way to browse which annotations are supported on a controller with local commands, maybe something similar to kubectl explain?
Any ideas?
You can use kubectl explain ingress, but it will just show the documentation for the Kubernetes Ingress resource, something like:
KIND:     Ingress
VERSION:  networking.k8s.io/v1

DESCRIPTION:
     Ingress is a collection of rules that allow inbound connections to reach
     the endpoints defined by a backend. An Ingress can be configured to give
     services externally-reachable urls, load balance traffic, terminate SSL,
     offer name based virtual hosting etc.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind         <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec         <Object>
     Spec is the desired state of the Ingress. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

   status       <Object>
     Status is the current state of the Ingress. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
In short, for annotations it is better to check the Helm chart and the official documentation.
This is how you can add basic auth to the Ingress.
First, create the basic-auth secret:
htpasswd -c auth admin
kubectl create secret generic basic-auth --from-file=auth
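For reference, the resulting Secret looks roughly like this (a sketch; the auth value is the base64-encoded htpasswd file and is truncated here):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: basic-auth
type: Opaque
data:
  auth: YWRtaW46...   # base64 of the "admin:<hash>" htpasswd line, truncated
```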
Then refer to this secret in the annotations:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required to access Linkerd'
  name: linker-basicauth-ingress
  namespace: linkerd-viz
spec:
  tls:
  - hosts:
    - mybasic-auth.example.com
    secretName: ingres-tls-secret
  rules:
  - host: mybasic-auth.example.com
    http:
      paths:
      - backend:
          service:
            name: web
            port:
              number: 8084
        path: /
        pathType: Prefix
See also the ingress-nginx annotations reference: nginx-configuration/annotations.md

How to point a CNAME to an OpenShift route

My application that is hosted on OpenShift has a route with a URL that looks like this:
https://books-student-book-reservation-backend-project.apps.amarige.hostname.us
I want to give end users a URL that looks like this: https://breeze.us. First, it hides the OpenShift URL structure; second, it is easier to remember. Bottom line, it's more user-friendly.
The challenge is that when I redirect breeze.us to the OpenShift route, I get an "Application is not available" error from OpenShift.
Any suggestion on how to resolve this?
https://docs.openshift.com/online/pro/dev_guide/routes.html#custom-route-and-hosts-and-certificates-restrictions
If you are using OpenShift Online
In OpenShift Online Starter, a custom hostname is not permitted. You can either buy OpenShift Online Pro (which allows a custom hostname to be set), or use a reverse proxy (on another server with the custom hostname) to redirect your traffic to OpenShift.
If you are using self-deployed OKD
You can set a custom hostname for your route like this:
# An example unsecured route with a custom hostname
apiVersion: v1
kind: Route
metadata:
  name: route-unsecured
spec:
  host: www.your-custom-hostname.com # here
  to:
    kind: Service
    name: service-name
If you need to serve multiple routes under the same hostname, you can also use a path-based route with a custom hostname:
# An example unsecured path-based route with a custom hostname
apiVersion: v1
kind: Route
metadata:
  name: route-unsecured
spec:
  host: www.your-custom-hostname.com
  path: "/test" # here
  to:
    kind: Service
    name: service-name
So that you can use www.your-custom-hostname.com/test to access your route.
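On the DNS side, the custom hostname (or a subdomain of it) needs a CNAME pointing at the router's canonical hostname, and the route's host field must match exactly the name users type. For completeness, a hedged sketch of the same route with edge TLS termination (the route name is hypothetical; certificate and key are omitted):

```yaml
# Sketch: custom-hostname route with edge TLS termination; the router
# presents the certificate, so it must be valid for the custom hostname
apiVersion: v1
kind: Route
metadata:
  name: route-edge-secured       # hypothetical name
spec:
  host: www.your-custom-hostname.com
  to:
    kind: Service
    name: service-name
  tls:
    termination: edge
    # certificate: ...           # PEM for the custom hostname (omitted)
    # key: ...
```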

Istio AuthorizationPolicy rules questions

I’ve been testing Istio (1.6) authorization policies and would like to confirm the following:
Can I use k8s service names as shown below, where httpbin.bar is the service name for the deployment/workload httpbin?
- to:
  - operation:
      hosts: ["httpbin.bar"]
I have the following rule; it should only ALLOW access to the httpbin.bar service from the service account sleep in the foo namespace.
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: whitelist-httpbin-bar
  namespace: bar
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/foo/sa/sleep"]
  - to:
    - operation:
        hosts: ["httpbin.bar"]
I set up two services, httpbin.bar and privatehttpbin.bar. My assumption was that the policy would block access to privatehttpbin.bar, but this is not the case. On a side note, I deliberately avoided adding selector.matchLabels because, as far as I can tell, the rule should only succeed for httpbin.bar.
The docs state:
A match occurs when at least one source, operation and condition matches the request.
as per here.
I interpreted this as AND logic applying to the source and operation.
I would appreciate finding out why this may not be working, or whether my understanding needs to be corrected.
With your AuthorizationPolicy object, you have two rules in the namespace bar:
Allow any request coming from the foo namespace with the service account sleep, to any service.
Allow any request to the httpbin service, from any namespace and with any service account.
So it is an OR that you are applying.
If you want an AND to be applied, meaning allow any request from the namespace foo with the service account sleep to talk to the service httpbin in the namespace bar, you need to apply the following rule:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: whitelist-httpbin-bar
  namespace: bar
spec:
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/foo/sa/sleep"]
    to: # <- remove the dash (-) from here
    - operation:
        hosts: ["httpbin.bar"]
On the first point: you can specify the hostname using the k8s service name, so httpbin.bar is acceptable for the hosts field.
On the second point, as per here:
Authorization Policy scope (target) is determined by
“metadata/namespace” and an optional “selector”.
“metadata/namespace” tells which namespace the policy applies. If set
to root namespace, the policy applies to all namespaces in a mesh.
So the authorization policy whitelist-httpbin-bar applies to workloads in the namespace foo, but the services httpbin and privatehttpbin that you want to authorize lie in the bar namespace, so your authorization policy does not restrict access to these services.
If there are no ALLOW policies for the workload, allow the request.
The above criterion makes the request a valid one.
Hope this helps.
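Putting the two answers together, here is a hedged sketch of a policy scoped to the httpbin workload via a selector (assuming its pods carry the label app: httpbin, which the question does not state) that ANDs the source with the operation:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: whitelist-httpbin-bar
  namespace: bar
spec:
  selector:
    matchLabels:
      app: httpbin       # assumption: httpbin pods carry this label
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/foo/sa/sleep"]
    to:                  # same rule entry as `from`: source AND operation
    - operation:
        hosts: ["httpbin.bar"]
```

Note that privatehttpbin pods would not match the selector, and with no ALLOW policy of their own they remain open; to close them you would need an additional deny-all policy for the namespace.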

Istio: Can I add randomly generated unique value as a header to every request before it reaches my application

I have a RESTful service within a Spring Boot application. This Spring Boot app is deployed inside a Kubernetes cluster, and we have Istio as a service mesh attached to the sidecar of each container pod in the cluster. Every request to my service first hits the service mesh, i.e. Istio, and then gets routed accordingly.
I need to add validation for a request header: if that header is not present, randomly generate a unique value and set it as a header on the request. I know there is Headers.HeaderOperations, which I can use in the destination rule, but how can I generate a unique value every time the header is missing? I don't want to write the logic inside my application, as this is a general rule to apply to all the applications inside the cluster.
There is important information that needs to be mentioned on this subject, as it looks to me like you are trying to work around tracing for applications that do not forward/propagate headers in your cluster. So I am going to mention a few problems that can be encountered with this solution (just in case).
As mentioned in the answer from Yuri G., you can configure unique x-request-id headers, but they will not be very useful for tracing if the requests pass through applications that do not propagate those x-request-id headers.
This is because tracing an entire request path requires a unique x-request-id throughout the entire trace. If the x-request-id value differs in various parts of the path the request takes, how are we going to put together the entire trace path?
In a scenario where two requests are received in an application pod at the same time, even if they had unique x-request-id headers, only the application is able to tell which inbound request matches which outbound connection. One of the requests could take longer to process, and without a forwarded trace header we can't tell which one is which.
Anyway, for applications that do support forwarding/propagating x-request-id headers, I suggest following this guide from the Istio documentation.
Hope it helps.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: enable-envoy-xrequestid-in-response
  namespace: istio-system
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
          always_set_request_id_in_response: true
From reading the documentation of Istio and Envoy, it seems this is not supported by Istio/Envoy out of the box. As a workaround you have two options.
Option 1: set the x-envoy-force-trace header in a VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews.prod.svc.cluster.local
  http:
  - headers:
      request:
        set:
          x-envoy-force-trace: "true"
It will generate an x-request-id header if it is missing, but it seems like an abuse of the tracing mechanism.
Option 2: use consistentHash load balancing based on a header, e.g.:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: x-custom-request-id
It will generate the header x-custom-request-id for any request that doesn't have it. In this case, requests with the same x-custom-request-id value will always go to the same pod, which can cause uneven balancing.
The answer above works well! I have updated it for the latest Istio (the filter name is now written out in full):
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: enable-envoy-xrequestid-in-response
  namespace: istio-system
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
          always_set_request_id_in_response: true

Can we set priority for the middlewares in traefik v2?

Using v1.7.9 in Kubernetes, I'm facing this issue:
if I set a rate limit (traefik.ingress.kubernetes.io/rate-limit) and custom response headers (traefik.ingress.kubernetes.io/custom-response-headers), then when a request gets rate-limited, the custom headers won't be set. I guess it's because of some ordering/priority among these plugins. I totally agree that reaching the rate limit should return the response as soon as possible, but it would be nice if we could modify the priorities when we need to.
The question therefore is: will we be able to set priorities for the middlewares?
I couldn't find any clue about it in the docs or among the GitHub issues.
Concrete use case:
I want CORS-policy headers to always be set, even if rate limiting kicked in. I want this because my SPA won't get the response object otherwise, as the browser won't allow it:
Access to XMLHttpRequest at 'https://api.example.com/api/v1/resource' from origin 'https://cors.exmaple.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
In this case it would be a fine solution if I could just set the priority of the headers middleware higher than that of the rate-limit middleware.
For future reference, here is a working example that demonstrates such an ordering:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: ratelimit
spec:
  rateLimit:
    average: 100
    burst: 50
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: response-header
spec:
  headers:
    customResponseHeaders:
      X-Custom-Response-Header: "value"
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroute
spec:
  # more fields...
  routes:
  - # more fields...
    middlewares: # the middlewares will be called in this order
    - name: response-header
    - name: ratelimit
I asked the same question on the Containous' community forum: https://community.containo.us/t/can-we-set-priority-for-the-middlewares-in-v2/1326
Regular web pages can use the XMLHttpRequest object to send and receive data from remote servers, but they're limited by the same-origin policy. Extensions aren't so limited: an extension can talk to remote servers outside of its origin, as long as it first requests cross-origin permissions.
1. While testing on your local machine, replace localhost with your local IP. You have to enable CORS with the following line of code: request.withCredentials = true; where request is the instance of XMLHttpRequest. CORS headers have to be added on the backend server to allow cross-origin access.
2. You could write your own script which is responsible for executing the rate-limit middleware after the headers middleware.
In v2, the middlewares can be ordered in the order you want, and you can apply the same type of middleware several times, with different configurations, on the same route.
https://docs.traefik.io/v2.0/middlewares/overview/
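Applied to the CORS use case from the question, a hedged sketch of a headers middleware that would be listed before the rate limiter (the middleware name is hypothetical, the origin is copied from the question's error message, and accessControlAllowOriginList is available in later Traefik v2 releases):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: cors-headers                 # hypothetical name
spec:
  headers:
    accessControlAllowOriginList:
    - "https://cors.exmaple.com"     # origin as quoted in the question
    accessControlAllowMethods:
    - GET
    - POST
    - OPTIONS
```

Listed first under middlewares in the IngressRoute, these headers would be applied even when the rate limiter short-circuits the request.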