Mixed HTTP/HTTPS content with GitLab behind a reverse proxy - redirect

I'm running GitLab from the sameersbn/docker-gitlab image behind an nginx reverse proxy, but I get a mixed-content warning on the avatar image.
No matter what I try, I get either the mixed-content warning or a "too many redirects" error page.
I redirect my HTTP traffic automatically in nginx with this config:
server {
    listen 80;
    server_name myserver.com;
    return 301 https://myserver.com$request_uri;
}

server {
    listen 443 ssl;
    server_name myserver.com;
    ssl on;
    ssl_certificate myserver.cert.combined;
    ssl_certificate_key myserver.key;
    include nginx_php.conf;

    location / {
        proxy_set_header X-Forwarded-Proto: https;
        proxy_set_header X-Forwarded-Ssl: on;
        proxy_pass http://127.0.0.1:10080;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    include /etc/nginx/webapps.ssl.conf;
}
As soon as I run my GitLab container with the following options, I get the "too many redirects" error:
- GITLAB_HTTPS=true
- SSL_SELF_SIGNED=false
- GITLAB_HOST=myserver.com
- GITLAB_PORT=443
- GITLAB_SSH_PORT=22
What am I doing wrong here?
And why does GitLab try to load the avatar over HTTP?
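One detail worth checking (my observation, not something confirmed in the thread): `proxy_set_header` takes the header name and value as two space-separated arguments, with no colon after the name. Written with a colon, nginx sends a header literally named `X-Forwarded-Proto:`, which the backend will not recognize, so GitLab keeps thinking the request was plain HTTP and redirecting. A corrected fragment would look like:

```nginx
location / {
    # No colon after the header name: proxy_set_header takes
    # <field> <value> as two separate arguments.
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-Ssl on;
    proxy_pass http://127.0.0.1:10080;
}
```

If that header arrives correctly, GitLab should generate HTTPS URLs for assets such as avatars instead of falling back to HTTP.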

Related

Keycloak behind reverse proxy mixes internal and external addresses

I'm trying to set up a Keycloak instance behind an nginx reverse proxy, and I've almost got it working.
My (partial) docker-compose.yml:
version: '3.4'
services:
  [...]
  keycloak:
    image: jboss/keycloak
    environment:
      - DB_VENDOR=[vendor]
      - DB_USER=[user]
      - DB_PASSWORD=[password]
      - DB_ADDR=[dbaddr]
      - DB_DATABASE=[dbname]
      - KEYCLOAK_USER=[adminuser]
      - KEYCLOAK_PASSWORD=[adminpassword]
      - KEYCLOAK_IMPORT=/tmp/my-realm.json
      - KEYCLOAK_FRONTEND_URL=https://auth.mydomain.blah/auth
      - PROXY_ADDRESS_FORWARDING=true
      - REDIRECT_SOCKET=proxy-https
  [...]
My nginx conf is just:
server {
    listen 443 ssl;
    server_name auth.mydomain.blah;
    ssl_certificate /etc/letsencrypt/live/auth.mydomain.blah/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/auth.mydomain.blah/privkey.pem;

    location / {
        proxy_pass http://keycloak:8080;
    }
}
and it works; I can access Keycloak at https://auth.mydomain.blah/auth. BUT when I look at https://auth.mydomain.blah/auth/realms/campi/.well-known/openid-configuration I get this:
{
  "issuer": "https://auth.mydomain.blah/auth/realms/campi",
  "authorization_endpoint": "https://auth.mydomain.blah/auth/realms/campi/protocol/openid-connect/auth",
  "token_endpoint": "http://keycloak:8080/auth/realms/campi/protocol/openid-connect/token",
  "introspection_endpoint": "http://keycloak:8080/auth/realms/campi/protocol/openid-connect/token/introspect",
  "userinfo_endpoint": "http://keycloak:8080/auth/realms/campi/protocol/openid-connect/userinfo",
  "end_session_endpoint": "https://auth.mydomain.blah/auth/realms/campi/protocol/openid-connect/logout",
  "jwks_uri": "http://keycloak:8080/auth/realms/campi/protocol/openid-connect/certs",
  "check_session_iframe": "https://auth.mydomain.blah/auth/realms/campi/protocol/openid-connect/login-status-iframe.html",
  [...]
Why does Keycloak mix internal and external URIs? What am I missing?
https://www.keycloak.org/docs/latest/server_installation/index.html#_setting-up-a-load-balancer-or-proxy
Your reverse proxy/nginx is not forwarding the host headers properly, so Keycloak has no idea which host/protocol was used for the request and falls back to the backend/internal host name. You need to set a few proxy_set_header lines:
server {
    listen 443 ssl;
    server_name auth.mydomain.blah;
    ssl_certificate /etc/letsencrypt/live/auth.mydomain.blah/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/auth.mydomain.blah/privkey.pem;

    location / {
        proxy_pass http://keycloak:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
We have the same "issue" at my company.
Internally we access it via keycloak-admin.internaldomain.com, but our normal users hit keycloak.externaldomain.com externally.
If I load the .well-known/openid-configuration URL internally it shows the internal address, but loading it via the external URL it shows that one.
It hasn't caused any issues at all for us, other than occasionally explaining the difference to an engineer who notices it.
It appears Keycloak just uses whatever domain it is being accessed with.
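If you instead want the discovery document to always advertise a single public URL regardless of which host a client used, the KEYCLOAK_FRONTEND_URL variable already present in the question's compose file pins the frontend base URL on the legacy jboss/keycloak image. A minimal sketch (the domain is a placeholder):

```yaml
services:
  keycloak:
    image: jboss/keycloak
    environment:
      # Pin all advertised frontend endpoints to this base URL,
      # independent of the Host header the request arrived with.
      - KEYCLOAK_FRONTEND_URL=https://auth.mydomain.blah/auth
      # Trust the X-Forwarded-* headers set by the reverse proxy.
      - PROXY_ADDRESS_FORWARDING=true
```

With this set, both internal and external lookups of .well-known/openid-configuration should report the same host.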

Running Swift Perfect and NGINX

So I have a server-side Swift app running on my Ubuntu box; it uses the Perfect framework and runs on port 8080.
I want NGINX to forward requests on port 80 to port 8080 (behind the scenes).
My config is:
server {
    listen 80;
    server_name burf.co;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 90;
    }
}
I have done similar things with VueJS, Node, etc., but what am I missing here?
Is it a Perfect issue?
When I go to 127.0.0.1:8080 the page renders fine.
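Since the backend answers fine on 127.0.0.1:8080, one possibility worth ruling out (an assumption, not confirmed in the thread) is that port-80 requests are being caught by a different server block, such as the distribution's stock default site. Making this block the default server would test that:

```nginx
server {
    # Answer this block for ANY Host header on port 80, not only
    # requests whose Host matches server_name exactly.
    listen 80 default_server;
    server_name burf.co;

    location / {
        proxy_pass http://localhost:8080;
    }
}
```

After changing the config, `nginx -t` validates it and `nginx -s reload` (or a service restart) applies it; a config that was never reloaded is another common cause of this symptom.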

How do I redirect HTTPS to HTTP on an nginx load balancer

I am using an nginx load balancer and I want all of my requests to redirect from HTTPS to HTTP.
Here is what the configuration for the load balancer looks like:
upstream web_app_backend {
    ip_hash;
    server app1.example.com;
    server app2.example.com;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    return 302 http://example.com$request_uri;
}

server {
    listen 80;
    server_name example.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_pass http://web_app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
As it turns out, port 443 was blocked by the firewall; there was no problem with the nginx configuration.

Nginx - redirect www to https : ERR_SSL_PROTOCOL_ERROR

I tried to copy the configuration file from my other website, but it doesn't work here, maybe because I have a proxy on it? I really don't know what the problem is. https://mywebsite.lol is OK (SSL by Cloudflare).
server {
    server_name www.irc.mywebsite.lol;
    rewrite ^(.*) https://irc.mywebsite.lol$1 permanent;
}

server {
    # Port
    listen 80;
    # Hostname
    server_name irc.mywebsite.lol;
    # Logs (access and errors)
    access_log /var/log/nginx/irc.mywebsite.lol.access.log;
    error_log /var/log/nginx/irc.mywebsite.lol.error.log;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:7778/;
        proxy_redirect default;
        # WebSocket support (from version 1.4)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        pagespeed off;
    }
}
Use the HTTPS port, not HTTP, and enable TLS on the listener. It should be
listen 443 ssl;
instead of
listen 80;
Without the ssl parameter (and a certificate), nginx speaks plain HTTP on port 443, which is exactly what produces ERR_SSL_PROTOCOL_ERROR in the browser.
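A minimal sketch of the full HTTPS server block (the certificate paths are placeholders; with Cloudflare in front, a Cloudflare origin certificate or a Let's Encrypt certificate would go here):

```nginx
server {
    listen 443 ssl;
    server_name irc.mywebsite.lol;

    # Placeholder paths: point these at a real certificate/key pair.
    ssl_certificate     /etc/ssl/irc.mywebsite.lol/fullchain.pem;
    ssl_certificate_key /etc/ssl/irc.mywebsite.lol/privkey.pem;

    location / {
        proxy_pass http://localhost:7778/;
        proxy_set_header Host $host;
        # WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```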

How to deploy puma with nginx

How do I deploy Puma with nginx or Apache? Is this even possible, or is it unnecessary to use a web server like nginx or Apache? What is the best way to deploy an app with Puma?
The key is in the nginx conf for the site:
server {
    listen 80;
    server_name mysite.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://localhost:4000;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
As you can see, the proxy_pass http://localhost:4000; line tells nginx to forward requests to localhost on port 4000; you can change that to fit your needs.
Below is a small change to make it work with SSL via Let's Encrypt; of course, you should first configure SSL with Let's Encrypt.
server {
    listen 80;
    server_name example.com;

    location / {
        return 301 https://example.com$request_uri;
    }
}

server {
    listen 443 ssl http2;
    server_name example.com;
    #listen [::]:443 ssl http2 ipv6only=on;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    include /etc/nginx/snippets/ssl.conf;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://localhost:4000;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
As Puma is not designed to be accessed by users directly, we use nginx as a reverse proxy that buffers requests and responses between users and your Rails application. Puma uses threads, in addition to worker processes, to make better use of the available CPU. Communication between nginx and Puma is typically done through a Unix socket:
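The socket-based setup mentioned above can be sketched as follows (the socket path is a placeholder assumption; match it to whatever you pass to Puma's -b flag):

```nginx
# Puma started with e.g.: bundle exec puma -b unix:///var/run/puma/app.sock
upstream puma_app {
    # Talk to Puma over a Unix domain socket instead of TCP.
    server unix:/var/run/puma/app.sock fail_timeout=0;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://puma_app;
    }
}
```

A socket avoids the TCP loopback overhead and keeps Puma unreachable from outside the machine, which is usually what you want behind a reverse proxy.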
(Image source: http://codeonhill.com)
If you need an explanation of how to deploy applications with Puma and nginx, check this.