Keycloak redirecting to Hostname but with port number too - redirect

I have configured Nginx and given Keycloak the hostname http://keycloak.formsflow.ai for localhost:8080, but the redirect URL still shows port 8080. How can I remove it?
Keycloak shows the port number in the redirect URL along with the hostname.
Below is my Docker Compose config for Keycloak:
keycloak:
  image: quay.io/keycloak/keycloak:14.0.0
  container_name: keycloak
  volumes:
    - ./configuration/imports:/opt/jboss/keycloak/imports
  command:
    - "-b 0.0.0.0 -bmanagement=0.0.0.0 -Dkeycloak.import=/opt/jboss/keycloak/imports/formsflow-ai-realm.json -Dkeycloak.migration.strategy=OVERWRITE_EXISTING"
  environment:
    - DB_VENDOR=POSTGRES
    - DB_ADDR=keycloak-db
    - KEYCLOAK_HOSTNAME=keycloak.formsflow.ai
    - DB_DATABASE=${KEYCLOAK_JDBC_DB:-keycloak}
    - DB_USER=${KEYCLOAK_JDBC_USER:-admin}
    - DB_PASSWORD=${KEYCLOAK_JDBC_PASSWORD:-changeme}
    - KEYCLOAK_USER=${KEYCLOAK_ADMIN_USERNAME:-admin}
    - KEYCLOAK_PASSWORD=${KEYCLOAK_ADMIN_PASSWORD:-changeme}
  ports:
    - 8080:8080
What config do I need to set for Keycloak to remove the port number from the redirect URL?

When Keycloak runs behind a reverse proxy, set the following environment variables on the Keycloak container:
PROXY_ADDRESS_FORWARDING=true
KEYCLOAK_FRONTEND_URL=http://keycloak.formsflow.ai/auth
You may also need to configure Nginx to forward the X-Forwarded-Proto and X-Forwarded-Host headers.
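As a minimal sketch, those two variables slot into the environment list of the keycloak service shown above (keep the existing DB_* and KEYCLOAK_* entries; on the Nginx side this pairs with proxy_set_header directives for X-Forwarded-Proto and X-Forwarded-Host):
keycloak:
  environment:
    # trust the X-Forwarded-* headers set by the reverse proxy
    - PROXY_ADDRESS_FORWARDING=true
    # build redirect URLs from this base URL instead of the request's host:port
    - KEYCLOAK_FRONTEND_URL=http://keycloak.formsflow.ai/auth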

Related

Attempt to rewrite minimal Traefik example to use TLS does not work

The minimal example from https://doc.traefik.io/traefik/user-guides/docker-compose/basic-example/ works on my local machine. However, when I try to adapt this to use TLS I run into an issue. I'm a Traefik newbie, so I might be making a stupid mistake.
This is my attempt:
version: "3.3"
services:
traefik:
image: "traefik:v2.8"
container_name: "traefik"
command:
- "--log.level=DEBUG"
- "--accesslog=true"
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
ports:
- "443:443"
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
whoami:
image: "traefik/whoami"
container_name: "simple-service"
labels:
- "traefik.enable=true"
- "traefik.http.routers.whoami.rule=Host(`127.0.0.1`)"
- "traefik.http.routers.whoami.entrypoints=websecure"
So the major modification is to use "traefik.http.routers.whoami.entrypoints=websecure" instead of "traefik.http.routers.whoami.entrypoints=web"
Running
$ curl -k https://127.0.0.1
I get
404 page not found
The Traefik log shows no routing-related issues, and the internal Traefik routing setup shown by curl https://127.0.0.1:8080/api/rawdata | jq . looks the same as in the working example, apart from the changed port.
So I opted for a new answer instead of just editing the old one (reason being that even incorrect answers teach something).
My reference is this great post by Marc Mogdanz (link: https://marcmogdanz.de/posts/infrastructure-with-traefik-and-cloudflare/).
The direct answer to your query is:
Expose port 8080 but do not publish it
Add a host name rule. This will allow Traefik to route a URL request to its own port 8080.
The affected part of the compose file would be as follows (assuming that the URL https://dashboard.example.com is the desired URL to reach the dashboard):
expose:
  - 8080
...
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.traefik.rule=Host(`dashboard.example.com`)"
  - "traefik.http.routers.traefik.tls=true"
  - "traefik.http.services.traefik.loadbalancer.server.port=8080"
Finally, I noticed you are testing on localhost. If you are testing on a local machine, use localhost for the dashboard and keep 127.0.0.1 for whoami.
Or, alternatively, add a static entry for a subdomain (see https://stackoverflow.com/a/19016600).
Either way, Traefik is looking at the SNI requested - not necessarily the IP address - when matching the Host rule.
Request ----> Docker:443 ---> {Traefik} --"SNI?"--+--"127.0.0.1"-----------> {whoami}
                                                  |
                                                  +--"dashboard.localhost"--> :8080 (dashboard)
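A handy way to test the host matching without touching /etc/hosts (not part of the original setup, just a local testing aid) is curl's --resolve option, which connects to 127.0.0.1 while still sending the expected name for SNI and the Host header:
$ curl -k --resolve dashboard.example.com:443:127.0.0.1 https://dashboard.example.com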
Add the following entry to your Traefik:
"--entrypoints.websecure.address=:8080"
Normally the alternative ports would be 8080 for HTTP and 8443 for HTTPS, but since your example specifically states https://~:8080, I have adapted it accordingly.
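As for the original whoami 404: the dashboard labels above include tls=true, and as far as I can tell the same is needed on the whoami router, since a router attached to the websecure entrypoint only matches HTTPS requests once it is marked as a TLS router. A sketch against the question's labels, with only the last line added:
whoami:
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.whoami.rule=Host(`127.0.0.1`)"
    - "traefik.http.routers.whoami.entrypoints=websecure"
    # mark the router as a TLS router so it terminates and matches HTTPS requests
    - "traefik.http.routers.whoami.tls=true"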

load ssl certificate in haproxy with docker-compose

Hi, I have HAProxy in docker-compose as below:
haproxy:
  image: haproxy:2.3
  depends_on:
    - my-service
  volumes:
    - ./config/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    - ./ssl:/usr/local/etc/ssl:ro
  ports:
    - 80:80
haproxy.cfg:
frontend https
  bind *:443 ssl crt /usr/local/etc/ssl/cert1.pem
but when I run docker-compose up -d I always get
unable to stat SSL certificate from file '/etc/ssl/cert1.pem' : No such file or directory.
I do not understand how to pass the certificate to HAProxy, or what I am missing here. Can someone help with this?
I have an ssl directory on my local machine from which I mount the certificate into /usr/local/etc/ssl in the container.
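One thing that stands out: the error message mentions /etc/ssl/cert1.pem while the config shown binds /usr/local/etc/ssl/cert1.pem, which may mean the container is loading a different haproxy.cfg than the one you think is mounted. For reference, a minimal sketch of how the pieces usually fit together, assuming cert1.pem contains both the certificate chain and the private key in one file (which is what HAProxy expects), with 443 published so the HTTPS frontend is reachable from the host:
haproxy:
  image: haproxy:2.3
  volumes:
    # the crt path in haproxy.cfg must match where the mounted file ends up inside the container,
    # i.e. ./ssl/cert1.pem -> /usr/local/etc/ssl/cert1.pem
    - ./config/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    - ./ssl:/usr/local/etc/ssl:ro
  ports:
    - 80:80
    # publish 443 as well, otherwise the "bind *:443" frontend is unreachable from outside
    - 443:443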

Using https with grafana/caddy on docker compose

I'm trying to understand how to implement HTTPS with Grafana/Caddy in Docker Compose without a domain name.
Currently, I access Grafana via http://xx.xxx.xx.xx:3000/
I would like this to be HTTPS, but I am struggling to understand how to generate the cert and have it work as expected. I think Let's Encrypt requires a domain, which I don't have.
version: "3"
networks:
monitor-net:
driver: bridge
volumes:
grafana_data: {}
services:
grafana:
image: grafana/grafana:8.4.4
container_name: grafana
volumes:
- grafana_data:/var/lib/grafana
- ./grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards
- ./grafana/provisioning/datasources:/etc/grafana/provisioning/datasources
environment:
- GF_SECURITY_ADMIN_USER=${GF_ADMIN_USER}
- GF_SECURITY_ADMIN_PASSWORD=${GF_ADMIN_PASS}
- GF_USERS_ALLOW_SIGN_UP=false
restart: unless-stopped
expose:
- 3000
networks:
- monitor-net
labels:
org.label-schema.group: "monitoring"
caddy:
image: caddy:2.3.0
container_name: caddy
ports:
- "3000:3000"
- "9090:9090"
- "9093:9093"
- "9091:9091"
volumes:
- ./caddy:/etc/caddy
environment:
- ADMIN_USER=${GF_ADMIN_USER}
- ADMIN_PASSWORD=${GF_ADMIN_PASS}
- ADMIN_PASSWORD_HASH=${ADMIN_PASS_HASH}
restart: unless-stopped
networks:
- monitor-net
labels:
org.label-schema.group: "monitoring"
I'm assuming I would create a volume at /etc/caddy/certs where I'd store the certificates, but I don't know how to generate a certificate for an IP only, or how it would get recognized by Caddy.
Caddy for IP with SSL
By default, Caddy serves all sites over HTTPS.
Caddy serves IP addresses and local/internal hostnames over HTTPS using self-signed certificates that are automatically trusted locally (if permitted).
Examples: localhost, 127.0.0.1
Official Docs Here
In your Caddyfile you have to add something like this:
http://192.168.1.25:3000 {
  reverse_proxy grafana_ip:3000
}
It looks like Caddy does not support generating HTTPS certificates for IP addresses. Additionally, Let's Encrypt does not currently support issuing certificates for bare IP addresses.
However, it does appear that ZeroSSL supports generating certificates for IPs. You could try using these instructions to change one or all of your sites to use ZeroSSL, but I wasn't able to get this to work on my test server.
The best option is probably to get a domain that you can point at your server, and then serve it from there.
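If you do end up terminating TLS in Caddy (whether with a ZeroSSL certificate, a certificate you supply yourself, or Caddy's locally-trusted CA), two compose-level details usually change as well. A sketch, where the 443 mapping and the /data volume are my additions to the caddy service from the question (Caddy keeps issued certificates and its local CA under /data, and caddy_data would also need to be declared under the top-level volumes: key):
caddy:
  image: caddy:2.3.0
  ports:
    - "443:443"
  volumes:
    - ./caddy:/etc/caddy
    # persist certificates and Caddy's local CA across container restarts
    - caddy_data:/data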

Bad Gateway with Traefik and Docker Compose

I'm trying to deploy a React + FastAPI + Postgres application with Docker Compose and Traefik as the reverse proxy, and I'm running into Bad Gateway errors. Running my FastAPI app locally serves it on port 8888 and exposes the path /docs for the API documentation. I'd eventually like the application running on example.local with the docs available at example.local/api/docs. My docker-compose.yaml is as follows (loosely based on this one):
version: '3.8'
services:
  proxy:
    image: traefik:v2.4
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - '80:80'
      - '8080:8080'
      - '443:443'
    command:
      - --providers.docker
      - --api.insecure=true
      - --providers.docker.exposedbydefault=false
      - --providers.docker.network=web
      - --entrypoints.web.address=:80
    labels:
      - traefik.enable=true
      - traefik.http.routers.example-proxy-http.rule=Host(`example.local`)
      - traefik.http.routers.example-proxy-http.entrypoints=web
      - traefik.http.services.example-proxy.loadbalancer.server.port=80
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    command: python app/main.py
    volumes:
      - ./backend/app:/app
    env_file:
      - .env
    networks:
      - web
      - backend
    labels:
      - traefik.enable=true
      - traefik.http.routers.example-backend-http.rule=PathPrefix(`api/docs`)
      - traefik.http.routers.example-backend-http.entrypoints=web
      - traefik.http.services.example-backend.loadbalancer.server.port=8888
networks:
  web:
    external: true
  backend:
    external: false
I've added 127.0.0.1 example.local to my /etc/hosts file.
From reading around, it seems like Bad Gateway errors tend to occur when Traefik and the related services are not on the same network, or when Traefik routes traffic to the wrong port on the service container. However, if I set ports: - '8888:8888' on my backend service I can access the docs at localhost:8888/docs, so I'm pretty sure 8888 is the correct port for the backend load balancer. From what I can see, Traefik and the backend service are on the same network too, and I've set it as the default Traefik network with --providers.docker.network=web. Interestingly, if I visit localhost/api/docs in my browser I am served a page from FastAPI. So it could be an issue with my Traefik HTTP router labels? I'm quite new to Traefik and proxies, so I would appreciate any help or guidance, thanks!
UPDATE
If I specify the host for the backend by adding
- traefik.http.routers.example-backend-http.rule=Host(`example.local`) && PathPrefix(`/docs`)
to the backend service labels, then visiting example.local/docs does serve up the page from FastAPI. So I guess my question would be: what is the best way of setting up a host for this application? Is there a way I can specify a default host for all services, so that any PathPrefix rules would be relative to that host?
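Based on that update, a sketch of the backend labels with the host pinned (router and service names reuse the ones from the compose file above; note the leading slash in PathPrefix):
backend:
  labels:
    - traefik.enable=true
    # match on the site host plus the /docs prefix, as in the update
    - traefik.http.routers.example-backend-http.rule=Host(`example.local`) && PathPrefix(`/docs`)
    - traefik.http.routers.example-backend-http.entrypoints=web
    - traefik.http.services.example-backend.loadbalancer.server.port=8888
For the "default host for all services" part of the question, the Docker provider also has a defaultRule option (a Go template applied to containers that don't define their own rule label), which may be worth a look, although any router that does set a rule still overrides it.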

Debugging Traefik when the Site Cannot Be Reached from outside Company's Intranet

Using docker-compose I have deployed a web application that uses Traefik as the reverse proxy, listening on port 80. This works without problem when I'm inside my company's intranet. Outside of the intranet, however, I get a 'site cannot be reached' response. Pinging the address from outside shows that the address is reachable and port 80 is open.
I've also tried using segments in my Traefik configuration to route both the internal and external hostnames I have been provided, but this has no effect:
version: "3.5"
services:
test:
image: emilevauge/whoami
deploy:
labels:
traefik.enable: "true"
traefik.foo.frontend.rule: "Host:${HOSTNAME};PathPrefixStrip:/test"
traefik.bar.frontend.rule: "Host:${EXTERNAL_HOSTNAME};PathPrefixStrip:/test"
traefik.port: 80
networks:
- frontend
...
I have configured the access logs to see if my requests are reaching Traefik. Can anyone advise me on what I should be looking for, and how to filter the huge amount of text produced to find it? This is my Traefik setup configuration:
version: '3.5'
services:
  traefik:
    image: traefik:alpine
    command: |-
      --entryPoints="Name:http Address::80"
      --entryPoints="Name:https Address::443 TLS"
      --defaultentrypoints="http,https"
      --acme
      --acme.acmelogging="true"
      --acme.domains="${HOSTNAME}"
      --acme.domains="${EXTERNAL_HOSTNAME}"
      --acme.email="${ACME_EMAIL}"
      --acme.entrypoint="https"
      --acme.httpchallenge
      --acme.httpchallenge.entrypoint="http"
      --acme.storage="/opt/traefik/acme.json"
      --acme.onhostrule="true"
      --docker
      --docker.swarmmode
      --docker.domain="${HOSTNAME}"
      --docker.network="frontend"
      --docker.watch
      --api
      --api.statistics
      --logLevel="DEBUG"
networks:
- frontend