Is Keycloak behind an API gateway a good practice?

What are good arguments for and against putting Keycloak behind an API gateway (Kong)?

There is a tradeoff to putting it behind the proxy: you will not be able to easily protect all of your services by applying the OIDC plugin at the global level. Instead, you will need to configure every service individually with its own OIDC plugin, because you need at least one service that is not protected by the OIDC plugin so that user-agents can authenticate through it. Unless you're planning to implement some other form of security on that service, or need some other feature that Kong can provide as requests pass through it, I don't see the point of putting Keycloak behind the proxy. That's not to say there aren't good reasons to do it; I'm just not aware of them.
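For illustration, here is a minimal sketch of attaching an OIDC plugin per service through Kong's Admin API. It assumes the community kong-oidc plugin and uses placeholder service names, credentials, and URLs:

```python
# Sketch: protect each service individually via Kong's Admin API, leaving the
# service that fronts Keycloak unprotected so user-agents can still authenticate.
# Plugin name ("oidc"), service names, and URLs are assumptions/placeholders.
import requests

KONG_ADMIN = "http://localhost:8001"   # Kong Admin API (assumed to be local)
DISCOVERY = "https://keycloak.example.com/realms/demo/.well-known/openid-configuration"

protected_services = ["orders", "billing"]   # hypothetical services to protect
# e.g. a "keycloak" service/route is intentionally left without the plugin

for name in protected_services:
    resp = requests.post(
        f"{KONG_ADMIN}/services/{name}/plugins",
        json={
            "name": "oidc",                  # community kong-oidc plugin
            "config": {
                "client_id": "kong",
                "client_secret": "change-me",
                "discovery": DISCOVERY,
            },
        },
        timeout=10,
    )
    resp.raise_for_status()
    print(f"OIDC plugin attached to service '{name}'")
```

With a single global plugin instead, every route, including the one that proxies Keycloak itself, would demand a token, which is exactly the chicken-and-egg problem described above.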
I've set Keycloak up outside of the proxy and have had good results. Here's what it looks like: [architecture diagram]
I'm writing a blog post about this setup now, which I will release next week. I will try to remember to update my answer here when it is complete.
Edit
Links to blog:
Part 1,
Part 2

It is not good practice; in fact, I would suggest putting it outside, in the DMZ. That way the IdP can be leveraged by all the APIs that you want to publish and authenticate through the API gateway. This is an example of applying such an authentication flow with Keycloak: https://www.slideshare.net/YuichiNakamura10/implementing-security-requirements-for-banking-api-system-using-open-source-software-oss
Your concern might then be: how do I protect such a critical resource as an IdP that authenticates all my services?
That is a reasonable concern, which you can address by:
ensuring autoscaling of the IdP based on authentication load
configuring all the needed threat-mitigation options in Keycloak (https://www.keycloak.org/docs/latest/server_admin/#password-guess-brute-force-attacks); a sketch of enabling these via the Admin API follows this list
adding a WAF in front of the IdP with features such as DDoS prevention and intelligent threat mitigation based on traffic patterns
whitelisting IPs or domains, if you know where all your customers are connecting from
restricting port exposure for the IdP
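For illustration, a minimal sketch of switching on Keycloak's built-in brute-force detection through the Admin REST API; the host, realm, and credentials are placeholders, and the paths assume a newer (Quarkus-based) Keycloak without the /auth prefix:

```python
# Sketch: enable Keycloak's brute-force detection via the Admin REST API.
# Field names follow Keycloak's RealmRepresentation; host, realm, and
# credentials below are placeholders.
import requests

BASE = "https://keycloak.example.com"
REALM = "demo"

# Obtain an admin token (password grant against the master realm's admin-cli client).
token = requests.post(
    f"{BASE}/realms/master/protocol/openid-connect/token",
    data={
        "grant_type": "password",
        "client_id": "admin-cli",
        "username": "admin",
        "password": "admin-password",
    },
    timeout=10,
).json()["access_token"]

headers = {"Authorization": f"Bearer {token}"}

# Read the current realm settings, switch on brute-force protection, write them back.
realm = requests.get(f"{BASE}/admin/realms/{REALM}", headers=headers, timeout=10).json()
realm.update(
    {
        "bruteForceProtected": True,
        "failureFactor": 5,              # lock after 5 consecutive failures
        "waitIncrementSeconds": 60,      # back-off added per failure
        "maxFailureWaitSeconds": 900,    # cap on the lockout window
    }
)
requests.put(
    f"{BASE}/admin/realms/{REALM}", json=realm, headers=headers, timeout=10
).raise_for_status()
```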

Kong is an API gateway that'll be in the "hot path" - in the request and response cycle of every API request. Kong is good at efficiently proxying lots of requests at very low latency.
Keycloak and other IAM offerings can integrate with Kong - but they aren't placed in the hot path. Keycloak is good at managing users and permissions and providing this information to systems like Kong, when requested.
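To illustrate why Keycloak stays out of the hot path: once Keycloak has issued a token, the gateway (or any service behind it) can validate that token locally against Keycloak's published signing keys instead of calling Keycloak on every request. A minimal sketch using the PyJWT library, with a placeholder host, realm, and audience:

```python
# Sketch: verify a Keycloak-issued JWT offline using the realm's JWKS endpoint.
# Host, realm name, and expected audience are placeholders.
import jwt
from jwt import PyJWKClient

JWKS_URL = "https://keycloak.example.com/realms/demo/protocol/openid-connect/certs"
jwks_client = PyJWKClient(JWKS_URL)   # fetches and caches the realm's public keys

def verify(token: str) -> dict:
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    # Signature, expiry, and audience are checked locally; no call to Keycloak here.
    return jwt.decode(token, signing_key.key, algorithms=["RS256"], audience="account")
```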
Perhaps these links will be helpful: https://github.com/ncarlier/kong-integration-samples and https://ncarlier.gitbooks.io/oss-api-management/content/howto-kong_with_keycloak.html

It is not a good practice: a good enterprise API gateway should support (or give you the ability to customize) all of the advanced authentication and authorization standards available in Keycloak.
But in some circumstances, if you already have an API gateway with a lot of APIs configured (with transformation rules and routing rules), and this gateway can't provide advanced authentication and authorization features (e.g. two-factor authentication, OAuth2 authorization code, OpenID Connect, or SAML), and you need more security ASAP, go ahead while you look for a gateway that better meets your needs.

Related

Caddy webserver: JWT OIDC e-mail whitelisting

Currently, I am trying to find out if the Caddy web server is able to implement some kind of e-mail whitelisting based on the JWT token. Before reaching the frontend, which is served by Caddy, an OIDC login is done by the user, provided by a separate OAuth2 service. I've read the caddy-security documentation about authorization (https://authp.github.io/docs/intro) but did not find a satisfying answer. Has anyone done something similar?
I am also open to other approaches for securing a frontend with whitelisting logic of any kind.
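(For reference, the allow-listing logic being asked about boils down to comparing the e-mail claim of an already-verified JWT against a list; a minimal, Caddy-independent sketch with placeholder addresses:)

```python
# Sketch: check the e-mail claim of a JWT against an allow list. The claim name
# and addresses are placeholders; signature verification must happen elsewhere,
# e.g. in the OAuth2/OIDC proxy in front of the frontend.
import base64
import json

ALLOWED_EMAILS = {"alice@example.com", "bob@example.com"}

def email_allowed(jwt_token: str) -> bool:
    # The JWT payload is the middle, base64url-encoded segment.
    payload_b64 = jwt_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # re-add stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("email") in ALLOWED_EMAILS
```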
Kind regards

How to integrate an existing auth service with Kibana and Open Distro for authenticating users

We have our own authentication server developed in Node.js, which acts as the identity provider for users, so we are looking for how we can integrate it with Kibana / Open Distro.
The security responsibility lies with the security plugin, so most of the configuration should be done there.
Open Distro Security supports a number of authentication backends, which you can refer to here: https://opensearch.org/docs/latest/security-plugin/configuration/configuration/. You can configure the security plugin based on the authentication mechanism you use.
Alternatively, there is the concept of an injected user, where authentication is handled entirely by another service fronting the security plugin. I did not find documentation on this, but you can refer to the code here: https://github.com/opensearch-project/security/blob/565f47e804ec03aeeba02ca8def563b91307fcc7/src/test/java/org/opensearch/security/test/plugin/UserInjectorPlugin.java

How to create authentication with Kubernetes when the service already exists?

I'm reading through https://kubernetes.io/docs/reference/access-authn-authz/authentication/, but it does not give any concrete commands and it mostly focuses on the case where everything is created from scratch. It also covers authentication for engineers using Kubernetes itself, rather than for end users of an application.
I have an existing deployment and service (with an exposed external IP) and would like to create the simplest possible authentication (preferably token-based) for an external user accessing the exposed IP. I can't add authentication to the services themselves since I don't have access to their code. If somebody could help me with some commands I would be grateful.
The documentation you referred to is for authentication with Kubernetes itself (for API access), not for application-layer authentication.
However, I can suggest one way to implement application-layer authentication without changing the service at all: route the traffic through nginx (or any other reverse proxy), which can perform the authentication and then forward the authenticated user's requests directly to the service. It can also perform some kind of authorization.
There are various resources that can help you choose among the authentication mechanisms available in nginx, such as a password-file-based mechanism (link) or JWT-based authentication (link).
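For illustration, the same idea as a tiny Python/Flask reverse proxy (in practice you would use nginx as described above); the upstream service address and the shared token are placeholders:

```python
# Sketch: a minimal authenticating reverse proxy in front of an existing service.
# The upstream address and the expected bearer token are placeholders.
import requests
from flask import Flask, Response, abort, request

UPSTREAM = "http://my-service.default.svc.cluster.local:8080"  # existing k8s service
API_TOKEN = "change-me"                                         # expected bearer token

app = Flask(__name__)

@app.route("/", defaults={"path": ""}, methods=["GET", "POST", "PUT", "DELETE"])
@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def proxy(path):
    # Reject requests that do not carry the expected token.
    if request.headers.get("Authorization") != f"Bearer {API_TOKEN}":
        abort(401)
    # Forward the authenticated request to the unmodified service.
    upstream = requests.request(
        request.method,
        f"{UPSTREAM}/{path}",
        params=request.args,
        data=request.get_data(),
        headers={k: v for k, v in request.headers
                 if k.lower() not in ("host", "content-length")},
        timeout=30,
    )
    return Response(upstream.content,
                    status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type"))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```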

OAuth2.0 Auth Server and IAM

I'm building a microservice based REST API and a native SPA Web Frontend for an application.
The API should be protected using OAuth 2.0 to allow for other clients in the future. It should use the Authorization Code Flow, ideally with Proof Key for Code Exchange (PKCE).
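(For reference, the PKCE part is just two values the client derives before redirecting to the authorization endpoint, as sketched below per RFC 7636:)

```python
# Sketch: PKCE code_verifier and code_challenge generation (RFC 7636), purely
# illustrative of what a public client does before the authorize request.
import base64
import hashlib
import secrets

# High-entropy code_verifier (base64url without padding, 43+ characters).
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

# code_challenge = BASE64URL(SHA256(code_verifier)), sent with the authorize request;
# the verifier itself is only sent later, with the token request.
digest = hashlib.sha256(code_verifier.encode()).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

print("code_verifier: ", code_verifier)
print("code_challenge:", code_challenge)
```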
As I understand it, I need to run my own OAuth authorization server that manages the API clients and generates access tokens, etc.
I also need my own authentication/IAM service with its own frontend for user login and for granting client authorization. This service is where the users' login credentials are ultimately checked against a backend. That last part should be flexible, and the backend might be an LDAP server in some private-cloud deployment.
These components (authorization server and IAM service) are outside of the OAuth scope but appear, correct me if I'm wrong, to be required if I'm running my own API for my own users.
However, creating these services myself appears to be more work than I would like to take on, besides the obvious security risks involved.
I read about Auth0 and Okta, but I'm not sure if they are suited for my use case, with the application potentially deployed in a private cloud.
I also thought about running Hydra (OAuth server) and Kratos (IAM) by Ory, but I'm not sure if this adds too many dependencies to my project.
Isn't there an easy way to secure an API with OAuth that takes care of the authorization server and the IAM and is suitable for small projects?

SAML request authentication with Kong

We are using Kong as an API gateway for one of our customers, but we are very new to it and therefore don't know how to tackle this authentication issue.
We have to authenticate our services with a SAML token. Our microservices are behind Kong, which is running on an EC2 instance. The authentication process should be an independent microservice which validates the token from the request and its contents against another system. Instead of a service it could also be some serverless function on a k8s cluster. We don't want to use a Lambda, in order to stay cloud-agnostic.
We were previously using AWS API Gateway and Lambda authorizers to tackle this scenario. The authorizer validated the token and took care of the authentication process.
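(For illustration, the equivalent of that authorizer as a standalone service might look roughly like the sketch below: it extracts the SAML token and validates it against another system, returning 200 or 401 so a gateway or a custom Kong plugin could decide whether to forward the original request. The header name and validation URL are purely hypothetical.)

```python
# Sketch: standalone validator service that checks a SAML token against another
# system. Header name, validation endpoint, and ports are placeholders.
import requests
from flask import Flask, request

VALIDATION_URL = "https://idp.example.com/validate"   # the "other system"
app = Flask(__name__)

@app.route("/auth", methods=["GET", "POST"])
def authenticate():
    token = request.headers.get("X-SAML-Token")
    if not token:
        return {"error": "missing token"}, 401
    # Delegate the actual validation to the external system.
    resp = requests.post(VALIDATION_URL, data={"token": token}, timeout=10)
    if resp.status_code == 200:
        return {"status": "ok"}, 200
    return {"error": "invalid token"}, 401

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=9000)
```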
I searched the Kong forums and Google but couldn't find a SAML plugin. The closest is the JWT plugin, but it won't work for us.
Is there something similar in Kong, or would it require custom development on Kong? If so, are there any existing plugins that are similar, or any related tutorials?
All help is greatly appreciated.
Thanks
Oldfighter