Keycloak behind reverse proxy mixes internal and external addresses - docker-compose

I'm trying to set up a Keycloak instance behind an nginx reverse proxy, and I've almost got it working.
My (partial) docker-compose:
version: '3.4'
services:
  [...]
  keycloak:
    image: jboss/keycloak
    environment:
      - DB_VENDOR=[vendor]
      - DB_USER=[user]
      - DB_PASSWORD=[password]
      - DB_ADDR=[dbaddr]
      - DB_DATABASE=[dbname]
      - KEYCLOAK_USER=[adminuser]
      - KEYCLOAK_PASSWORD=[adminpassword]
      - KEYCLOAK_IMPORT=/tmp/my-realm.json
      - KEYCLOAK_FRONTEND_URL=https://auth.mydomain.blah/auth
      - PROXY_ADDRESS_FORWARDING=true
    [...]
My nginx conf is just:
server {
    listen 443 ssl;
    server_name auth.mydomain.blah;

    ssl_certificate /etc/letsencrypt/live/auth.mydomain.blah/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/auth.mydomain.blah/privkey.pem;

    location / {
        proxy_pass http://keycloak:8080;
    }
}
And it works: I can access Keycloak at https://auth.mydomain.blah/auth. BUT when I look at https://auth.mydomain.blah/auth/realms/campi/.well-known/openid-configuration I get this:
{
    "issuer": "https://auth.mydomain.blah/auth/realms/campi",
    "authorization_endpoint": "https://auth.mydomain.blah/auth/realms/campi/protocol/openid-connect/auth",
    "token_endpoint": "http://keycloak:8080/auth/realms/campi/protocol/openid-connect/token",
    "introspection_endpoint": "http://keycloak:8080/auth/realms/campi/protocol/openid-connect/token/introspect",
    "userinfo_endpoint": "http://keycloak:8080/auth/realms/campi/protocol/openid-connect/userinfo",
    "end_session_endpoint": "https://auth.mydomain.blah/auth/realms/campi/protocol/openid-connect/logout",
    "jwks_uri": "http://keycloak:8080/auth/realms/campi/protocol/openid-connect/certs",
    "check_session_iframe": "https://auth.mydomain.blah/auth/realms/campi/protocol/openid-connect/login-status-iframe.html",
    [...]
}
Why does Keycloak mix internal and external URIs? What am I missing?

https://www.keycloak.org/docs/latest/server_installation/index.html#_setting-up-a-load-balancer-or-proxy
Your reverse proxy/nginx is not forwarding the host headers properly, so Keycloak has no idea which host/protocol was used for the request and falls back to the backend/internal host name. You need to set a few proxy_set_header lines:
server {
    listen 443 ssl;
    server_name auth.mydomain.blah;

    ssl_certificate /etc/letsencrypt/live/auth.mydomain.blah/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/auth.mydomain.blah/privkey.pem;

    location / {
        proxy_pass http://keycloak:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
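With those headers in place (and PROXY_ADDRESS_FORWARDING=true already set on the container, as in your compose file), the discovery document should advertise only the external origin. A quick way to re-check it from outside, assuming jq is installed:

curl -s https://auth.mydomain.blah/auth/realms/campi/.well-known/openid-configuration \
  | jq '.token_endpoint, .jwks_uri'

Both values should now start with https://auth.mydomain.blah.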

We have the same "issue" at my company.
Internally we access it via keycloak-admin.internaldomain.com, but externally our normal users hit keycloak.externaldomain.com.
If I load the .well-known/openid-configuration URL internally it shows the internal address, but loading it via the external URL it shows that one.
It hasn't caused any issues at all for us, other than occasionally explaining the difference to an engineer who notices it.
It appears Keycloak just uses whatever domain it is being accessed with.
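If you'd rather have the discovery document always advertise one canonical base URL, no matter which hostname a request came in on, the KEYCLOAK_FRONTEND_URL variable from the question's compose file is meant for exactly that. A minimal sketch against the jboss/keycloak image, using this answer's external domain as an illustration:

environment:
  # Advertise this base URL in all endpoints, regardless of the incoming Host header
  - KEYCLOAK_FRONTEND_URL=https://keycloak.externaldomain.com/auth
  - PROXY_ADDRESS_FORWARDING=true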

Related

Error 'net::ERR_CERT_COMMON_NAME_INVALID' with Nginx and Webpack-Dev-Server

I have the following problem: I'm trying to create a .NET Core Web API and a Vue SPA frontend.
For this I'm using nginx as the server on my Windows machine.
.NET generated an SSL certificate for me; it was created in certmgr under the personal certificates folder.
To generate the certificate for nginx I exported that file and followed this guide to convert it:
https://blog.knoldus.com/easiest-way-to-setup-ssl-on-nginx-using-pfx-files/
So I got my cert.crt and cert.rsa, both of which I'm using for nginx. Everything works fine for the Web API, but on the frontend I always get the following error:
sockjs.js?9be2:1605 GET https://192.168.178.145:8081/sockjs-node/info?t=1552754433422 net::ERR_CERT_COMMON_NAME_INVALID
So webpack-dev-server has trouble connecting, but I can't figure out why. What am I doing wrong in this case?
This is my nginx config:
server {
    listen 58000 ssl;
    server_name localhost;

    ssl_certificate domain.crt;
    ssl_certificate_key domain.rsa;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        #proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass https://localhost:8081/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /api {
        proxy_pass https://127.0.0.1:5001;
    }
}
And in webpack-dev-server I'm using the following:
const fs = require('fs'); // needed for readFileSync below

module.exports = {
  devServer: {
    https: {
      key: fs.readFileSync('path\\domain.rsa'),
      cert: fs.readFileSync('path\\domain.crt'),
    },
  },
};
So to me it looks like there is some issue with the certificate. But why does it only fail on the websockets and not on the rest?
Sorry, I'm new to SSL and nginx.
sockjs.js?9be2:1605 GET https://192.168.178.145:8081/sockjs-node/info?t=1552754433422 net::ERR_CERT_COMMON_NAME_INVALID
This shows that you are using 192.168.178.145 as the hostname.
server_name localhost;
This shows that nginx expects localhost as the hostname.
ssl_certificate domain.crt;
It is unclear what name you actually configured in the certificate, but it is very likely not 192.168.178.145. Thus the error.
listen 58000 ssl;
And this shows that your nginx is listening on port 58000, while the URL from the error message is for port 8081. This might indicate that the nginx configuration you show has nothing to do with the server that actually gets accessed; I have no idea what server is on port 8081.
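If the certificate really doesn't cover that IP, one way out is to reissue the self-signed dev certificate with a subjectAltName that matches, and point both nginx and webpack-dev-server at the new files. A sketch, assuming you control the certificate; the IP comes from the error message, and -addext needs OpenSSL 1.1.1 or newer:

# Self-signed dev certificate whose SAN covers the IP the browser actually uses
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout domain.rsa -out domain.crt \
  -subj "/CN=192.168.178.145" \
  -addext "subjectAltName=IP:192.168.178.145"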

Running Swift Perfect and NGINX

So I have a Swift server-side app running on my Ubuntu box; it's using the Perfect framework and runs on port 8080.
I want nginx to forward requests on port 80 to port 8080 (behind the scenes).
My config is:
server {
    listen 80;
    server_name burf.co;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 90;
    }
}
I have done similar things with VueJS, Node, etc., but what am I missing here?
Is it a Perfect issue?
When I go to 127.0.0.1:8080 the page renders fine.

Mixed HTTP / HTTPS content with Gitlab behind a reverse proxy

I'm running sameersbn/docker-gitlab behind an nginx reverse proxy, but I get a mixed content warning on the avatar image.
No matter what I try, I get either the mixed content warning or a "too many redirects" error page.
I redirect my HTTP traffic automatically in nginx with this config:
server {
    listen 80;
    server_name myserver.com;
    return 301 https://myserver.com$request_uri;
}

server {
    listen 443 ssl;
    server_name myserver.com;

    ssl on;
    ssl_certificate myserver.cert.combined;
    ssl_certificate_key myserver.key;

    include nginx_php.conf;

    location / {
        proxy_set_header X-Forwarded-Proto: https;
        proxy_set_header X-Forwarded-Ssl: on;
        proxy_pass http://127.0.0.1:10080;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    include /etc/nginx/webapps.ssl.conf;
}
As soon as I run my GitLab container with the following options, I get the "too many redirects" error:
- GITLAB_HTTPS=true
- SSL_SELF_SIGNED=false
- GITLAB_HOST=myserver.com
- GITLAB_PORT=443
- GITLAB_SSH_PORT=22
What am I doing wrong here?
And why does GitLab try to load the avatar over HTTP?

How do I redirect HTTPS to HTTP on an nginx load balancer

I am using an nginx load balancer and I want all of my requests to redirect from HTTPS to HTTP.
Here is what the configuration for the load balancer looks like:
upstream web_app_backend {
    ip_hash;
    server app1.example.com;
    server app2.example.com;
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    return 302 http://example.com$request_uri;
}

server {
    listen 80;
    server_name example.com;

    location / {
        # Proxy all requests to the backend pool
        proxy_pass http://web_app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
As it turns out, port 443 was blocked by the firewall; there was no problem with the nginx configuration itself.
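For anyone hitting the same symptom, a quick way to rule the firewall in or out, sketched for a Linux host with ufw (substitute your own firewall tooling):

# Is nginx actually listening on 443?
sudo ss -tlnp | grep ':443'
# Is the port allowed through the firewall?
sudo ufw status
# If not, open it
sudo ufw allow 443/tcp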

How to deploy puma with nginx

How do I deploy Puma with nginx or Apache? Is it even necessary to use a web server like nginx or Apache in front? What is the best way to deploy an app with Puma?
The key is in the nginx conf for the site.
server {
    listen 80;
    server_name mysite.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://localhost:4000;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
As you can see, the proxy_pass http://localhost:4000; line tells nginx to forward requests to localhost on port 4000; you can change that to suit your needs.
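For that to work, Puma has to be bound to the same address. A minimal config/puma.rb sketch; the port matches the proxy_pass above, while the worker and thread counts are only illustrative:

# config/puma.rb
bind "tcp://127.0.0.1:4000"   # must match nginx's proxy_pass target
workers 2                     # fork two worker processes
threads 1, 5                  # 1 to 5 threads per worker
environment "production"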
Here is a small variation that works with SSL via Let's Encrypt; of course, you should configure SSL with Let's Encrypt first.
server {
    listen 80;
    server_name example.com;

    location / {
        return 301 https://example.com$request_uri;
    }
}

server {
    listen 443 ssl http2;
    server_name example.com;
    #listen [::]:443 ssl http2 ipv6only=on;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    include /etc/nginx/snippets/ssl.conf;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://localhost:4000;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
As Puma is not designed to be accessed by users directly, we use nginx as a reverse proxy that buffers requests and responses between users and your Rails application. Puma uses threads, in addition to worker processes, to make more use of the available CPU. Communication between nginx and Puma goes through a socket:
[Architecture diagram omitted; image source: http://codeonhill.com]
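A sketch of that socket-based variant; the socket path is illustrative, not from the original post:

# config/puma.rb: bind Puma to a Unix socket instead of a TCP port
bind "unix:///var/run/puma.sock"

# nginx: point an upstream at the same socket
upstream puma {
    server unix:/var/run/puma.sock fail_timeout=0;
}
server {
    listen 80;
    server_name mysite.com;
    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://puma;
    }
}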
If you need an explanation of how to deploy applications with Puma and nginx, check this.