I used Docker Compose for the Metabase and Postgres images.
The build succeeded, and I went through the Metabase setup without a hitch.
However, I can't get past the sign-in page. Each time I enter the correct username and password I created at the setup stage, it returns me to the sign-in page.
Checking the Docker logs, I keep seeing this error on sign-in attempts:
2022-09-24 23:24:31,502 DEBUG middleware.log :: GET /api/user/current 401 320.3 µs (0 DB calls)
"Unauthenticated"
Any idea what could be wrong?
It turns out the answer had nothing to do with Metabase or Docker, but with my Nginx configuration.
The problem was in my location block. My initial settings were:
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto https;
proxy_pass http://localhost:<some_port>; # <- put correct port
}
I changed them to:
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto $scheme; # <- changed to this
proxy_pass http://localhost:<some_port>; # <- put correct port
proxy_http_version 1.1; # <- add this
}
...and the error was gone: I can log in now.
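My best guess at why this fixed it (I can't say for sure which of the two changes did it): hard-coding X-Forwarded-Proto to https can make Metabase treat every request as secure and mark its session cookie accordingly, so a browser that actually reaches nginx over plain HTTP never sends the cookie back, and every call to /api/user/current arrives unauthenticated. Passing $scheme reports whatever protocol the client really used. For completeness, here is a minimal sketch of a full server block with the corrected settings; the server name, certificate paths, and port are placeholders (Metabase's default port is 3000; adjust to whatever your container maps):
server {
    listen 443 ssl;
    server_name metabase.example.com;                   # placeholder
    ssl_certificate     /etc/ssl/certs/metabase.crt;    # placeholder paths
    ssl_certificate_key /etc/ssl/private/metabase.key;
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;     # scheme the client actually used
        proxy_http_version 1.1;
        proxy_pass http://localhost:3000;               # Metabase's default port
    }
}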
Related
I'm trying to redirect requests between the lower and Prod environments, but I get a 503 Service Unavailable error.
It seems the requests are still going to the lower environment: I have another location block for /api/ that drives requests to the lower APIGW, where the actual underlying service doesn't exist.
I have NGINX as a proxy server that routes requests to microservices hosted on the Rancher platform through an API gateway.
Below is the location block I've added to redirect the request URI that comes into NGINX to the Prod APIGW:
location = /testdetail {
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Content-Length "";
proxy_set_header Host $host;
proxy_set_header X-Original-URI-NGINX $request_uri;
proxy_set_header X-Forwarded-For-NGINX $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto-NGINX $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Authorization $http_authorization;
proxy_pass_header Authorization;
proxy_pass https://apigateway.xxx.xxx.com/test/***/***/***/***/testdetail;
proxy_pass_request_body off; # no need to send the POST body
}
In addition, there is another location block for /api/:
location /api/ {
proxy_http_version 1.1;
proxy_set_header Connection "";
auth_request /validate;
auth_request_set $auth_status $upstream_status;
proxy_set_header AuthStatus $upstream_status;
proxy_pass https://apigateway-preview.xxx.xxx.com/;
proxy_pass_request_headers on;
#proxy_connect_timeout 800;
proxy_read_timeout 90;
#proxy_send_timeout 800;
#proxy_redirect off;
}
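If it matters, my understanding of how nginx chooses between these blocks: an exact-match location (location = /testdetail) is used only when the request URI is exactly /testdetail; for any other URI, such as /api/testdetail, the longest matching prefix block wins, so those requests stay in the /api/ block. A minimal sketch of that precedence (the hostnames are made up for illustration):
location = /testdetail {
    # matched only when the URI is exactly /testdetail
    proxy_pass https://prod-gateway.example.com/testdetail;
}
location /api/ {
    # matched for anything under /api/, e.g. /api/testdetail
    proxy_pass https://preview-gateway.example.com/;
}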
I've been working with SignalR on ASP.NET Core 3 over Nginx to network a Unity3D app. On my local build through Kestrel, WebSockets work great. However, once I proxy my web app through nginx, my WebSocket will work for a single response or not at all, seemingly at random. Thoughts?
My current nginx config:
server {
listen 80;
server_name ip domain.com www.domain.com;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
server_name ip domain.com www.domain.com;
# SSL Configuration
ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
#make sure to check for more up to date ciphers
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
root /var/www/html;
location / {
proxy_pass http://localhost:5000;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep_alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location /notifications {
proxy_pass http://localhost:5000/notifications/;
include /etc/nginx/proxy_params;
proxy_http_version 1.1;
set $http_upgrade "websocket";
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_buffering off;
proxy_read_timeout 3600;
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 128;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host:$server_port;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffers 8 32k;
proxy_buffer_size 64k;
}
location ~^/identity {
rewrite ^/identity/login? /login/ break;
proxy_pass http://localhost:5050;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
Problem:
After enabling the debug logs for SignalR, I discovered that my WebSocket connection was being downgraded to HTTP/1.0, which explained why I received responses from the server only once or not at all when using WebSockets.
My application (a Unity game) supports both protocols: if you review my config, you will find a "location /" block and a "location /notifications" block, configured for HTTP and WebSockets respectively. The application uses HTTP to authenticate and WebSockets to actually play the game. Nginx kept the original HTTP proxy headers from the authentication requests and reused them when calling the "/notifications" endpoint, despite that block being set up for WebSockets (the behavior I expected was for a new WebSocket connection to be created, based on my configuration).
Fix:
I changed my "location /" block (my regular HTTP endpoint) to use HTTP/1.1, like so:
location / {
proxy_pass http://localhost:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
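Note that $connection_upgrade is not a built-in nginx variable; it comes from a map block in the http context (the ASP.NET Core hosting docs include the same snippet), which sends Connection: upgrade when the client asks for an upgrade and Connection: close otherwise:
map $http_upgrade $connection_upgrade {
    default upgrade;   # client sent an Upgrade header: pass the upgrade through
    ''      close;     # no Upgrade header: close the upstream connection
}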
I also cleaned up the WebSocket block based on the ASP.NET Core documentation:
location /notifications {
proxy_pass http://localhost:5000/notifications/;
# include /etc/nginx/proxy_params;
proxy_http_version 1.1;
# Configure WebSockets
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
#proxy_cache_bypass $http_upgrade;
proxy_cache off;
# Configure ServerSentEvents
proxy_buffering off;
# Configure LongPolling
proxy_read_timeout 100s;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
After this edit, both my HTTP and WebSocket endpoints worked as expected. I'm still not sure why the headers were carried over into the WebSocket "location /notifications" block, but I'm sure there is some nuanced Nginx documentation on it somewhere.
Hope this helps someone in the future.
So I have a Swift server-side app running on my Ubuntu box; it uses the Perfect framework and runs on port 8080.
I want NGINX to forward requests on port 80 to port 8080 (behind the scenes).
My config is:
server {
listen 80;
server_name burf.co;
location / {
proxy_pass http://localhost:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 90;
}
}
I have done similar things with Vue.js, Node, etc., so what am I missing here?
Is it a Perfect issue?
When I go to 127.0.0.1:8080 the page renders fine.
I am having some trouble with my application. During redirects my Flask application loses the https and redirects to http instead.
I've been trying to find a solution, but nothing works.
My nginx configuration for the application (location /) is as follows:
proxy_pass http://localhost:5400;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-port 443;
proxy_set_header X-Scheme $scheme;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
(Some examples on the internet say to use "X-Forwarded-Proto"; I've tried that without success, and also tried "ssl" or "https" as the value of those parameters.)
A simple print in the Flask application (in before_request) shows that the requests are still seen as http, even though I use https between the client and nginx:
print(request.environ["wsgi.url_scheme"])
What am I doing wrong?
If your application ignores the X-Forwarded-* headers when setting the scheme in its HTTP 3xx responses, you could try setting one or more proxy_redirect rules:
proxy_redirect http:// $scheme://;
See the nginx documentation for proxy_redirect for details.
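For concreteness, a minimal sketch of where such a rule could sit (the port matches the question's config; the rest is illustrative):
location / {
    proxy_pass http://localhost:5400;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    # rewrite http:// in Location headers returned by the upstream
    # to whatever scheme the client used
    proxy_redirect http:// $scheme://;
}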
Warning: making unwanted HTTP redirects is a security flaw, as the connection is not encrypted for those requests!
The only real solution here is to configure NGINX and Gunicorn correctly so that Flask sees the right headers.
The NGINX config should contain at least the following directives:
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $http_host;
proxy_pass http://appserver:5000;
And, this is the real solution here: Gunicorn must be started with the --forwarded-allow-ips parameter.
This is how I start it in production, which also fixes the real IP address in the logs (be careful to stay compliant with the GDPR :P):
PYTHONUNBUFFERED=FALSE gunicorn \
--access-logfile '-' \
--access-logformat '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s" "%({X-Real-IP}i)s"' \
-b :5000 \
--forwarded-allow-ips="*" \
app:app
You should NEVER send a request over plain HTTP; the first and only redirect should be the http-to-https redirect at /.
I have a very annoying problem. I use nginx to proxy an Apache server (http://internalip.com:18080), and the config is like this:
location /svn {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://internalip.com:18080;
}
It is OK most of the time, but sometimes nginx redirects the user to the internal address, and the user gets an error.
I don't know what's wrong; it just keeps happening.
The nginx version is 1.4.4-4~precise0.
Does anybody know what's going on?
Thanks in advance!
I have found the problem. The key point is Apache's DirectorySlash: if I visit https://outipaddress.com/theurl, Apache redirects to http://internalip.com:18080/theurl/ even though the X-Forwarded-* headers are set. I think it is a bug in Apache httpd.
The workaround is to perform the redirect on the nginx side:
location /svn/ {
if ($request_uri ~ "/[a-zA-Z0-9-_]+$") {
rewrite ^ https://$server_name$request_uri/;
}
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://internalip.com:18080;
}
Now nginx redirects all URLs that do not end with a slash and look like a directory (i.e. the last path segment contains only letters, digits, hyphens, and underscores).