Keycloak authentication at load balancer level

I am working on Keycloak authentication and authorization. I want to authenticate users at the load balancer level. Is it possible to filter out users before the actual application authentication takes place?

Yes, but you need a layer 7 load balancer with OIDC/SAML support, e.g. F5, AWS ALB, ...

Related

Keycloak behind a Load Balancer with SSL gives a "Mixed Content" error

I have set up Keycloak (docker container) on the GCP Compute Engine (VM). After setting the sslRequired=none, I'm able to access Keycloak over a public IP (e.g. http://33.44.55.66:8080) and manage the realm.
I have configured the GCP Classic (HTTPS) Load Balancer and added two front-ends as described below. The Load Balancer forwards the request to the Keycloak instance on the VM.
HTTP: http://55.44.33.22/keycloak
HTTPS: https://my-domain.com/keycloak
In the browser, the HTTP URL works fine and I'm able to login to Keycloak and manage the realm. However, for the HTTPS URL, I get the below error
Mixed Content: The page at 'https://my-domain.com/auth/admin/master/console/' was loaded over HTTPS, but requested an insecure script 'http://my-domain.com/auth/js/keycloak.js?version=gyc8p'. This request has been blocked; the content must be served over HTTPS.
Note: I tried this suggestion, but it didn't work
Can anyone help with this, please?
I would never expose Keycloak over plain HTTP. The Keycloak admin console itself is secured via the OIDC protocol, and OIDC requires the use of HTTPS. So the default sslRequired=EXTERNAL is a safe and smart configuration option from the vendor.
SSL offloading must be configured properly:
Keycloak container with PROXY_ADDRESS_FORWARDING=true
load balancer/reverse proxy (nginx, GCP Classic Load Balancer, AWS ALB, ...) that sets the X-Forwarded-* request headers properly, so the Keycloak container knows the correct protocol and domain used by the users

Restricting communication from a Consul Connect enabled service to a non Consul Connect service through intentions?

Suppose we have two services, for example:
Front-end (which is Consul Connect enabled)
Back-end (which is not Consul Connect enabled)
Is it possible to restrict communication between them through intentions? We use Consul sync to move the Kubernetes services into the Consul catalog, so the Back-end service, which is not Connect enabled, also shows up in intentions. I tried setting a deny intention from Front-end -> Back-end, but it is not working: Front-end can still reach Back-end. Am I missing something, or can authorization only happen between two Consul Connect enabled services?
This question was recently answered in https://stackoverflow.com/a/68432317/12384224.
Consul intentions are authorization policies that allow you to control access between applications within a service mesh. You must use a sidecar proxy, or natively integrate your application with the mesh, in order to use intentions. They are not applicable if you are only using Consul for service discovery, or if your application is not part of the service mesh.

How to create authentication with Kubernetes when the service already exists?

I'm reading through https://kubernetes.io/docs/reference/access-authn-authz/authentication/, but it does not give any concrete commands and mostly focuses on the case where everything is created from scratch. It also covers auth for engineers using Kubernetes.
I have an existing deployment and service (with exposed external IP) and would like to create the simplest possible authentication (preferably token based) for an external user accessing the exposed IP. I can't add authentication to the services since I don't have access to their code. If somebody could help me with some commands I would be grateful.
The documentation you referred to is for authentication with Kubernetes itself (for API access), not for application layer authentication.
However, I can suggest one way to implement application layer authentication without changing the service at all. You can route the traffic through nginx (or any other reverse proxy), which performs the authentication and forwards authenticated users on to the service. It can also perform some kind of authorization.
There are various resources that can help you choose among the authentication mechanisms available in nginx, such as a password-file-based mechanism (link) or JWT-based authentication (link).

How to secure REST APIs in Spring Boot web application?

I have two Spring Boot web applications. Both applications have different databases and different sets of users. Also, both applications use Spring Security for authentication and authorisation which works properly.
At any given point I will have one instance of the first application running and multiple instances of the 2nd web application running.
I want to expose REST APIs from the 1st web application (one instance running) and be able to use those REST APIs from the 2nd web application (multiple instances running).
How do I make sure that the REST APIs can be accessed securely, with proper authentication, and only by instances of the 2nd application?
If you can change your security setup, I would recommend using OAuth2. Basically, it generates a token that your APP2 instances use to make the API calls.
You can see more here.
https://spring.io/guides/tutorials/spring-boot-oauth2/
http://websystique.com/spring-security/secure-spring-rest-api-using-oauth2/
But if you can't change your apps' security, you can continue using your current scheme. In APP1 you can create a user for the API calls; this user only has access to the API services. In APP2 you need to store the credentials to access APP1. Finally, you log in to APP1 and invoke the API using an HTTP client; you can use Spring's RestTemplate or Apache HttpComponents Client.
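For illustration, here is a minimal sketch of that approach on the APP2 side, assuming Spring Boot 2.1+ and a hypothetical APP1 base URL, endpoint and API-user credentials:

import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class App1Client {

    private final RestTemplate restTemplate;

    public App1Client(RestTemplateBuilder builder) {
        // basicAuthentication() adds an Authorization: Basic ... header to every request
        this.restTemplate = builder
                .basicAuthentication("api-client", "change-me")  // credentials of the API user created in APP1
                .rootUri("https://app1.example.com")             // hypothetical APP1 base URL
                .build();
    }

    public String fetchOrders() {
        // APP1's existing Spring Security chain authenticates the API user
        // before this request ever reaches the controller
        return restTemplate.getForObject("/api/orders", String.class);
    }
}

On the APP1 side, the api-client user would be restricted to the API endpoints only, as described above.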
SSL-based authentication could be an option if you are serious about the security aspects.
Assuming the REST API exposed by App 1 is served over HTTPS, you can configure App 1 to ask clients for their SSL/TLS certificate when they try to access this REST API (exposed by App 1).
This lets App 1 verify that the caller is indeed a client from App 2.
Two more cents:
If your App 1 REST API calls need load balancing, NGINX should be your choice. The SSL client certificate based authentication can be offloaded to NGINX, and your Spring Boot app no longer has to worry about the SSL-related configuration.
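If you keep TLS termination in the Spring Boot app itself rather than offloading it to NGINX, a rough sketch of the mutual-TLS idea above could look like this (the CN and role name are made up; server.ssl.client-auth=need must also be set in application.properties so the embedded server actually requests the client certificate):

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.provisioning.InMemoryUserDetailsManager;

@Configuration
@EnableWebSecurity
public class ClientCertSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
                .antMatchers("/api/**").authenticated()
                .and()
            .x509()
                // take the user name from the certificate subject, e.g. CN=app2-client
                .subjectPrincipalRegex("CN=(.*?)(?:,|$)")
                .userDetailsService(apiClients());
    }

    private UserDetailsService apiClients() {
        // hypothetical CN of the certificate issued to the App 2 instances;
        // the password is never checked for certificate-based logins
        return new InMemoryUserDetailsManager(
                User.withUsername("app2-client").password("{noop}unused").roles("API_CLIENT").build());
    }
}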
The solution we went with was to secure both using an OAuth2 client_credentials workflow. That is the OAuth2 flow where clients request a token on behalf of themselves, not on behalf of a calling user.
Check out Spring Cloud Security
1) Secure your services using @EnableResourceServer
@SpringBootApplication
@EnableResourceServer
public class Application ...
2) Make calls from one service to another using an OAuth2RestTemplate
Check out Resource Server Token Relay in http://cloud.spring.io/spring-cloud-security/spring-cloud-security.html, which specifies how to configure an OAuth2RestTemplate to forward security context details (the token) from one service to another.
3) Service A and Service B should be able to communicate using these techniques if they are configured with the same OAuth2 client and secret. This is configured in each application's application.properties file, ideally injected by the environment. OAuth2 scopes can be used as role identifiers. You could therefore say that only a client with the scopes (api-read, api-write) should have access to Endpoint A in Service A. This is configurable using Spring Security's authorization configuration as well as @EnableGlobalMethodSecurity.
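For reference, here is a hedged sketch of the calling side of this setup, using the older spring-security-oauth2 classes that Spring Cloud Security builds on (they are deprecated in newer Spring releases); the token endpoint, client id/secret and scopes are placeholders:

import java.util.Arrays;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.client.OAuth2RestTemplate;
import org.springframework.security.oauth2.client.token.grant.client.ClientCredentialsResourceDetails;

@Configuration
public class OAuth2ClientConfig {

    @Bean
    public OAuth2RestTemplate oauth2RestTemplate() {
        ClientCredentialsResourceDetails details = new ClientCredentialsResourceDetails();
        details.setAccessTokenUri("https://auth.example.com/oauth/token"); // hypothetical token endpoint
        details.setClientId("service-b");                                  // same client id/secret registered for both services
        details.setClientSecret("change-me");
        details.setScope(Arrays.asList("api-read", "api-write"));          // scopes acting as role identifiers
        // the template obtains and refreshes the client_credentials token itself
        // and attaches it as an Authorization: Bearer ... header on every call
        return new OAuth2RestTemplate(details);
    }
}

Injecting this OAuth2RestTemplate wherever Service B calls Service A is what step 2 above refers to; Service A then validates the token and its scopes on each request.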

Restrict access to Kubernetes UI via VPN or other on GKE

GKE currently exposes the Kubernetes UI publicly, and by default it is only protected by basic auth.
Is there a better method for securing access to the UI? It appears to me this should be accessed behind a secure VPN to prevent various types of attacks. If someone could access the Kubernetes UI, they could cause a lot of damage to the cluster.
GKE currently exposes the Kubernetes UI publicly, and by default it is only protected by basic auth.
The UI is running as a Pod in the Kubernetes cluster with a service attached so that it is accessible from inside of the cluster. If you want to access it remotely, you can use the service proxy running in the apiserver, which means that you would authenticate with the apiserver to access the UI.
The apiserver accepts three forms of client authentication: basic auth, bearer token, and client certificate. The basic auth password should have high entropy, and is only transmitted over SSL. It is provided to make access via a browser simpler since OAuth integration does not yet exist (although you should only pass your credentials over the SSL connection if you have verified the server certificate in your web browser so that your credentials aren't stolen by a man in the middle attack).
Is there a better method for securing access to the UI?
There isn't a way to tell GKE to disable the service proxy in the master, but if an attacker had credentials, then they could access your cluster using the API and do as much harm as if they could get to the UI. So I'm not sure why you are particularly concerned with securing the UI via the service proxy vs. securing the apiserver's API endpoint.
It appears to me this should be accessed behind a secure VPN to prevent various types of attacks.
Which types of attacks are you concerned about specifically?