Keycloak - ingress does not allow connecting to the service over HTTPS

I have installed Keycloak using Helm.
A Traefik ingress is created to allow public access.
After the admin password is created via localhost:8080, I am able to log in to the admin console only when port-forwarding and accessing it locally.
When I use the public URL and click on the admin console, it redirects to https://website/auth/admin/master/console/ and shows a blank page.
I found the problem, but when I change servicePort: https inside the ingress, I get an internal server error (status code 500).
When I use the http port, I get these errors:
Mixed Content: The page at 'https://url/auth/admin/master/console/' was loaded over HTTPS, but requested an insecure script 'http://url/auth/js/keycloak.js?version=mxda6'. This request has been blocked; the content must be served over HTTPS.
I looked through the Traefik logs:
level=debug msg="'500 Internal Server Error' caused by: x509: cannot validate certificate for x.x.x.x because it doesn't contain any IP SANs"
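That x509 message explains the 500 when the ingress points at the https service port: Traefik dials the backend pod by IP, and the backend's TLS certificate contains no IP SANs, so verification fails. If backend HTTPS is really wanted, Traefik can be told to skip that verification. A sketch using the Traefik v2 CRD (resource name is illustrative; in Traefik v1 the equivalent is the global insecureSkipVerify = true option):

```yaml
# Hypothetical ServersTransport telling Traefik v2 not to verify
# the backend certificate (use with care: this disables TLS validation).
apiVersion: traefik.containo.us/v1alpha1
kind: ServersTransport
metadata:
  name: skip-backend-verify
spec:
  insecureSkipVerify: true
```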

I found a fix, but it still doesn't answer my question: why doesn't it work when the ingress points to https? Is there an answer?
The fix is to add the following under env in the Keycloak StatefulSet; in the ingress, the service port stays http:
- name: PROXY_ADDRESS_FORWARDING
value: "true"
I found it at https://github.com/eclipse/che/issues/9429
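In manifest form, the fix lands in the Keycloak container's env; a fragment of the StatefulSet pod spec (container name is illustrative):

```yaml
# Fragment of the Keycloak StatefulSet pod template
containers:
  - name: keycloak
    env:
      - name: PROXY_ADDRESS_FORWARDING
        value: "true"   # trust X-Forwarded-* headers set by the ingress
```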

I had the same issue. The white screen isn't helpful, but the browser console is. It is blocking mixed content, namely the script http://url/auth/js/keycloak.js?version=mxda6.
The documentation on Docker Hub says:
Specify frontend base URL
To set a fixed base URL for frontend requests use the following environment value (this is highly recommended in production):
KEYCLOAK_FRONTEND_URL: Specify base URL for Keycloak (optional, default is retrieved from request)
I provided the external URL with the https scheme in my manifest, and the script in question now appears in index.html as an https URL.
- name: KEYCLOAK_FRONTEND_URL
value: "https://url/auth"
Since it is "highly recommended", I suppose there are more subtle problems without this variable set, such as other links (e.g. in emails) being generated incorrectly, though I haven't checked that yet.

Related

Keycloak behind a Load Balancer with SSL gives a "Mixed Content" error

I have set up Keycloak (Docker container) on a GCP Compute Engine VM. After setting sslRequired=none, I'm able to access Keycloak over a public IP (e.g. http://33.44.55.66:8080) and manage the realm.
I have configured the GCP Classic (HTTPS) Load Balancer and added two front-ends as described below. The Load Balancer forwards requests to the Keycloak instance on the VM.
HTTP: http://55.44.33.22/keycloak
HTTPS: https://my-domain.com/keycloak
In the browser, the HTTP URL works fine and I'm able to log in to Keycloak and manage the realm. However, for the HTTPS URL, I get the error below:
Mixed Content: The page at 'https://my-domain.com/auth/admin/master/console/' was loaded over HTTPS, but requested an insecure script 'http://my-domain.com/auth/js/keycloak.js?version=gyc8p'. This request has been blocked; the content must be served over HTTPS.
Note: I tried this suggestion, but it didn't work. Can anyone help with this, please?
I would never expose Keycloak over plain HTTP. The Keycloak admin console itself is secured via the OIDC protocol, and OIDC requires HTTPS. So the default sslRequired=EXTERNAL is a safe and sensible configuration option from the vendor.
SSL offloading must be configured properly:
Keycloak container with PROXY_ADDRESS_FORWARDING=true
load balancer/reverse proxy (nginx, GCP Classic Load Balancer, AWS ALB, ...) with the X-Forwarded-* request headers configured properly, so the Keycloak container knows the correct protocol and domain used by the users
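As a sketch, the container side of that offloading setup (compose file and image name are illustrative; the proxy in front must set the X-Forwarded-* headers):

```yaml
# Hypothetical docker-compose fragment: Keycloak behind a TLS-terminating proxy
services:
  keycloak:
    image: jboss/keycloak
    environment:
      PROXY_ADDRESS_FORWARDING: "true"   # honor X-Forwarded-Proto/Host from the proxy
    ports:
      - "8080:8080"   # proxy terminates TLS and forwards plain HTTP here
```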

HTTPS requests for GKE Ingress ERR_TIMEDOUT

I have a microservice architecture (implemented in Spring Boot) deployed on Google Kubernetes Engine. For this architecture I have set up the following:
domain: comanddev.tk (free domain from Freenom)
a certificate for this domain
the following Ingress config:
The problem is that when I invoke a URL that I know should work, https://comanddev.tk/customer-service/actuator/health, the response I get is ERR_TIMEDOUT. I checked the Ingress Controller and I don't see any request reach the ingress, although URL forwarding is set.
Update: I tried to set a "glue record" as in the following picture; the response I get is that the certificate is not valid (I have a certificate for comanddev.tk, not dev.comanddev.tk), and I get a 401 after agreeing to access the insecure URL.
I've dug into this a bit.
As I mentioned, when you run $ curl -IL http://comanddev.tk/customer-service/actuator/health you receive the nginx ingress response.
As the domain intercepts the request and redirects it to the destination server, I am not sure there is a point in using TLS.
I would suggest you use a nameserver instead of URL forwarding: just point an A record at the IP of your Ingress. That way requests are sent directly to your Ingress. When you use URL forwarding, you go through Freenom's redirection, and I am not sure how that is handled on their side.
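With an A record pointing at the ingress IP, TLS terminates at the ingress itself. A minimal Ingress sketch, assuming an nginx ingress class and a TLS secret holding the comanddev.tk certificate (service name and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: command-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - comanddev.tk
      secretName: comanddev-tls     # certificate issued for comanddev.tk
  rules:
    - host: comanddev.tk
      http:
        paths:
          - path: /customer-service
            pathType: Prefix
            backend:
              service:
                name: customer-service
                port:
                  number: 8080
```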

keycloak/louketo gatekeeper -- doesn't automatically redirect to keycloak login

I am setting up Gatekeeper/Louketo as a reverse proxy for a browser app. I have the proxy deployed as a sidecar in a Kubernetes pod, with Keycloak elsewhere in the same cluster (but accessed via a public URL). Gatekeeper is behind an nginx ingress, which does TLS termination.
[I have tried both the most current Louketo version and the fork oneconcern/keycloak-gatekeeper. There are some differences, but the issue is the same, so I think it's a problem in my configuration.]
No matter how I set up the config, Gatekeeper reads the discovery URL of my realm but then doesn't redirect to the login there. Rather, it redirects to my upstream app using the /oauth/authorize path. I can manually force my app to redirect again to Keycloak, but on return from Keycloak, Gatekeeper doesn't recognize the cookie and catches me in a redirect loop.
It would seem I am making some simple config error, but I've been working on this for two days and am at my wit's end. (I even hacked extra debugging into the Go code, but haven't studied it enough to really know what it is doing.)
My config (best guess of many different variants tried):
- --config=/var/secrets/auth-proxy-keycloak-config.yaml
- --discovery-url=https://auth.my-domain.com/auth/realms/my-realm
- --listen=:4000
- --upstream-url=http://127.0.0.1:3000
- --redirection-url=https://dev.my-domain.com/
- --enable-refresh-tokens=true
- --enable-default-deny=true
- --resources=uri=/*|roles=developer
- --resources=uri=/about|white-listed=true
- --resources=uri=/oauth/*|white-listed=true
The ingress serves https://dev.my-domain.com and routes to port 4000, which is the auth-proxy sidecar. It is set up with a Let's Encrypt certificate and terminates TLS. I don't use TLS in the proxy (should I?). The upstream app is at port 3000. Keycloak is at auth.my-domain.com. In auth-proxy-keycloak-config.yaml I have the encryption key and client_id. The Keycloak client is set up for public access and standard flow (hence no client_secret needed, I presume). I have fiddled with the various uri settings, and also put in web origins "*" for CORS for testing.
When I try a protected URL in the browser, I see:
no session found in request, redirecting for authorization {"error": "authentication session not found"}
in the proxy logs, and it redirects me to /oauth/authorize, not to https://auth.my-domain.com/auth/realms/my-realm/protocol/openid-connect/auth where I think it should redirect me.
UPDATE -- as #jan-garaj noted in a comment on the answer, /oauth/* shouldn't have been white-listed. (I got that from a possibly mistaken interpretation of someone else's answer.) I then had to make the cookie not HTTP-only, and finally hit on this issue - Keycloak-gatekeeper: 'aud' claim and 'client_id' do not match ... after that it works!
From the Louketo-proxy doc:
/oauth/authorize is authentication endpoint which will generate the OpenID redirect to the provider
So that redirect is correct: it is a louketo-proxy endpoint, not a request for your app, and it will be processed by louketo-proxy, which will generate another redirect to your IdP, where the user needs to log in.
Off topic:
you really need a confidential client and a client secret for the authorization code flow
web origins "*" for CORS is correct only for the http protocol; explicit origin specification is needed for https
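Putting this answer together with the asker's UPDATE, a corrected flag set might look like the sketch below (client-id and client-secret values are placeholders; the key changes are that the client is confidential and /oauth/* is no longer white-listed):

```yaml
- --config=/var/secrets/auth-proxy-keycloak-config.yaml
- --discovery-url=https://auth.my-domain.com/auth/realms/my-realm
- --listen=:4000
- --upstream-url=http://127.0.0.1:3000
- --redirection-url=https://dev.my-domain.com/
- --client-id=my-client        # confidential client in Keycloak (placeholder)
- --client-secret=my-secret    # required for the authorization code flow (placeholder)
- --enable-refresh-tokens=true
- --enable-default-deny=true
- --resources=uri=/*|roles=developer
- --resources=uri=/about|white-listed=true
# note: no /oauth/* whitelist -- gatekeeper handles those endpoints itself
```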

Static webpage redirect http to https using Google loadbalancer

I'm trying to implement URL redirects from HTTP to HTTPS as described in [https://cloud.google.com/load-balancing/docs/https/setting-up-traffic-management][1], but I'm getting ERR_TOO_MANY_REDIRECTS.
I have a storage bucket with a very simple HTML page.
I have an external HTTP load balancer in front of it. Static IP address. SSL cert. I managed to connect everything so that both http and https requests for the site load the contents of the bucket.
I tried to add the HTTP redirect as per the document:
Changed 'Host and path rules' from 'Simple' to 'Advanced...'.
The default route still points to the bucket.
I added a new route. The host is 'www.example.com'. The default path rule points to the bucket. The second path rule matches /* and does a prefix/HTTPS redirect as described in the link above.
Once the config is saved, both http and https requests to www.example.com result in ERR_TOO_MANY_REDIRECTS.
What am I doing wrong? Really appreciate any help you can provide.
[Backend configuration][2]
[Frontend configuration][3]
[Host and path rules][4]
[Redirect path rule][5]
[1]: https://cloud.google.com/load-balancing/docs/https/setting-up-traffic-management
[2]: https://i.stack.imgur.com/lkhUF.png
[3]: https://i.stack.imgur.com/FYst0.png
[4]: https://i.stack.imgur.com/zsTOX.png
[5]: https://i.stack.imgur.com/2tEDE.png
FYI - someone on Google Groups pointed out that I needed two load balancers: one to terminate the HTTPS traffic and a second to redirect the HTTP traffic. Works like a charm.
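In URL-map terms, the second (HTTP-only) load balancer just issues the redirect; a sketch of its URL map (name illustrative), following the traffic-management doc linked above:

```yaml
# URL map for the HTTP frontend: redirect everything to HTTPS
kind: compute#urlMap
name: http-to-https-redirect
defaultUrlRedirect:
  httpsRedirect: true
  redirectResponseCode: MOVED_PERMANENTLY_DEFAULT
  stripQuery: false
```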

how to let KONG follow the 301 redirect?

I found an issue in a new Kong installation (v0.11.2).
When an upstream API returns HTTP 301, Kong passes it through to the consumer side instead of following the redirect internally.
Please advise how to make Kong follow the 301 redirect (the expected reverse-proxy behavior)?
I found a workaround instead of having to deploy a separate web server: if you register DNS through GoDaddy, you can enable their "Domain Forwarding & Masking" feature. What I was trying to achieve is a redirect from the root domain to www.domain while enforcing https.
https://www.godaddy.com/help/manually-forward-or-mask-your-domain-or-subdomain-422
Since DNS was pointed at GoDaddy's nameservers, they update the A record to point to their own configurable proxy, and I was able to enter the redirect URL. Other domain registrars may offer this feature as well, and it can avoid a "hacky" configuration of Kong.
Attempts with Kong
I enabled the request-transformer plugin for an API and tried overriding the Host header and forwarded values, to no avail.
I tried variations on the downstream URL as well.
If you want to redirect only for specific upstream APIs, you can use the pre-function plugin.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: my-redirect
  namespace: my-namespace
  labels:
    global: "false"
plugin: pre-function
config:
  access:
    - |
      if ngx.var.uri == "/product/1" then
        local forwarded_host = "new-domain.com"
        ngx.header["Location"] = "https://" .. forwarded_host .. ngx.var.request_uri
        return kong.response.exit(301)
      end
This will redirect traffic from, say, old-domain.com/product/1 to new-domain.com/product/1.
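To take effect, the plugin still has to be attached to a route or ingress; with the Kong Ingress Controller that is done via an annotation (ingress, host, and service names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: product-ingress
  annotations:
    kubernetes.io/ingress.class: kong
    konghq.com/plugins: my-redirect   # references the KongPlugin above
spec:
  rules:
    - host: old-domain.com
      http:
        paths:
          - path: /product
            pathType: Prefix
            backend:
              service:
                name: product-service
                port:
                  number: 80
```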