401 error on API over HTTPS, but not over HTTP - mongodb

I'm trying to connect to my MongoDB database using axios and Vue; my server runs NGINX.
In my App.vue file, where the axios requests are, I have this:
export default {
  name: 'App',
  data() {
    return {
      baseApiURL: '[HTTP or HTTPS]://example.com:4000/api',
This works over HTTP. When I change it to HTTPS, it doesn't. I have tried using the IP address and using the domain name. Chrome's Network tab shows:
Request URL: https://www.example.com/api/
Request Method: GET
Status Code: 401 Unauthorized
I don't understand what this means exactly.
My NGINX config:
server {
    listen 80 default_server;
    ssl on;
    listen 443;
    server_name example.com;
    ssl_certificate /usr/src/app/ssl/domain.cert.pem;
    ssl_certificate_key /usr/src/app/ssl/private.key.pem;

    # vue app & front-end files
    location / {
        root /usr/src/app/dist;
        try_files $uri /index.html;
    }

    # node api reverse proxy
    location /api/ {
        proxy_pass http://localhost:4000/;
    }
}
I'm not sure if there's anything else I should include; please let me know. I feel like it must be a small issue, since it works over HTTP.

Not really an "answer", since it doesn't solve the exact initial problem, but I fixed my site not working over HTTPS by switching from NGINX to Caddy Server, which handles HTTPS automatically.
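For what it's worth, the config in the question mixes the plain-HTTP and TLS listeners in a single server block and uses the deprecated `ssl on;` directive. The conventional layout is two server blocks with `ssl` set on the `listen 443` directive — a sketch reusing the paths from the question (whether this alone resolves the 401 depends on what is actually answering the HTTPS request):

```nginx
# Plain HTTP: redirect everything to HTTPS
server {
    listen 80 default_server;
    server_name example.com;
    return 301 https://$host$request_uri;
}

# HTTPS
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /usr/src/app/ssl/domain.cert.pem;
    ssl_certificate_key /usr/src/app/ssl/private.key.pem;

    # vue app & front-end files
    location / {
        root /usr/src/app/dist;
        try_files $uri /index.html;
    }

    # node api reverse proxy
    location /api/ {
        proxy_pass http://localhost:4000/;
    }
}
```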

Serving files with PocketBase

What I want is to restrict access to files for unauthorized users.
The PocketBase documentation says I can retrieve a file's URL and access the file through it. An example file URL looks like this:
http://127.0.0.1:8090/api/files/example/kfzjt5oy8r34hvn/test_52iWbGinWd.png
I can prevent unauthorized users from obtaining this URL, but an authorized user can still share the URL with someone else.
Any ideas?
I found a good way to secure files with nginx: add an extra location to the PocketBase server block and use a small extra backend with a single endpoint.
My nginx config looks like this:
server {
    listen 80;
    server_name example.com;

    location /api/files {
        proxy_intercept_errors on;
        error_page 404 = @fallback;
        proxy_pass http://127.0.0.1:5000;
    }

    location / {
        proxy_pass http://127.0.0.1:8090;
    }

    location @fallback {
        proxy_pass http://127.0.0.1:8090;
    }
}
My Express backend on port 5000 checks the JWT and responds with 404 when the token is valid; nginx then falls back to :8090 (PocketBase) whenever :5000 returns 404.

Don't serve static files if backend is offline

I have the following nginx config that handles serving my static website and redirecting requests to my REST backend:
server {
    listen 80 default_server;
    server_name _;

    # Host static content directly
    location / {
        root /var/www/html;
        index index.html;
        try_files $uri $uri/ =404;
    }

    # Forward api requests to REST server
    location /api {
        proxy_pass http://127.0.0.1:8080;
    }
}
If my REST backend goes offline, the proxy module returns "502 Bad Gateway", and I can redirect such requests to a status page by adding the following:
# Rewrite "502 Bad Gateway" to "503 Service Unavailable"
error_page 502 =503 @status_offline;
# Show the offline status page whenever a 503 is returned
error_page 503 @status_offline;

location @status_offline {
    root /var/www/html;
    rewrite ^(.*)$ /status_offline.html break;
}
However, this will only work for requests that access the REST backend directly. How can I redirect requests to my static website in the same way whenever the backend is offline?
Nginx does have some health check and status monitoring capabilities that seem like they could be related, but I couldn't find a proper way to use them.
While its intended use case is actually for authorization, I found nginx's auth_request module to work for me:
# Host static content directly
location / {
    # Check whether the REST server is online before serving the site
    auth_request /api/status;   # continues only when a 2xx status is returned
    # If not, show the offline status page
    error_page 500 =503 @status_offline;

    root /var/www/html;
    index index.html;
    try_files $uri $uri/ =404;
}
It will call /api/status as a subrequest before serving the static content and will only continue when the subrequest returns an HTTP status in the 200 range. It seems to return status 500 when the server is offline.
This method might have some performance implications since you're now always doing an extra request, but that seems to be an inherent requirement of checking whether your service is online.
I think this is the correct answer - auth request is ideal for any situation where you want to "ping" a backend before returning the requested content.
I have used a similar scheme in the past for an nginx server where I wanted to check if an auth header was correct before proxying to an S3 bucket.
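For reference, the question's config and the auth_request answer combine into a single server block like this (a sketch; /api/status is assumed to be a cheap health endpoint on the backend that returns 200 when it is up):

```nginx
server {
    listen 80 default_server;
    server_name _;

    # Host static content directly -- but only if the backend is up
    location / {
        auth_request /api/status;            # subrequest must return 2xx
        error_page 500 =503 @status_offline; # auth_request yields 500 when offline

        root /var/www/html;
        index index.html;
        try_files $uri $uri/ =404;
    }

    # Forward api requests to the REST server
    location /api {
        proxy_pass http://127.0.0.1:8080;
        error_page 502 =503 @status_offline;
    }

    location @status_offline {
        root /var/www/html;
        rewrite ^(.*)$ /status_offline.html break;
    }
}
```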

Why do I get reports of my Nginx redirect failing?

I've got a website sitting behind an Nginx proxy. I've set up Nginx to redirect all traffic from HTTP to HTTPS, like so:
server {
    listen 80 default_server;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl default_server;
    add_header Strict-Transport-Security "max-age=31536000";

    location /api {
        include uwsgi_params;
        uwsgi_pass api-server:80;
    }

    location / {
        root /web;
    }
}
As far as I can tell, this should work, and when I hit my server with curl from multiple locations I see the permanent redirect I expect. But some users report that they aren't redirected; instead they see the generic "Welcome to nginx!" page.
Is there a better configuration I should be using? How can I debug this?
Create separate log files for the HTTP and HTTPS server blocks and check whether the HTTP server's log contains status codes other than 301.
https://www.nginx.com/resources/admin-guide/logging-and-monitoring/
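Concretely, per-server access_log directives would look something like this (log paths are arbitrary):

```nginx
server {
    listen 80 default_server;
    access_log /var/log/nginx/http.access.log;   # should contain only 301s
    # $host is often safer than $server_name here: $server_name is empty
    # when no server_name directive is set, as in the question's config
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl default_server;
    access_log /var/log/nginx/https.access.log;
    # ...
}
```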

NGINX redirect old https domain to new non-https

Yesterday I changed my domain name: it was foobar.tk, running over HTTPS. My new domain, foobar.eu, does not have SSL for now.
I succeeded in redirecting with CNAME records as long as HTTPS is not involved, but somehow I cannot redirect https://www.example.tk to http://www.example.eu. Chrome says the connection was reset; Firefox says the content cannot be validated.
For redirection I am using these lines:
server {
    listen 443;                     # I have tried *:443, *:443 ssl, and 443 ssl
    server_name www.example.tk;     # I have tried with and without www.
    return 301 http://www.example.eu$request_uri;   # I have also tried redirecting to $host and letting the CNAME handle it
}
What works:
http://www.example.tk -> http://www.example.eu using CNAME (and all other subdomains)
What is not working:
https://www.example.tk -> http://www.example.eu
I still have the certificates backed up, so if that helps in some way, please tell me.
Thank you
When setting up SSL on Nginx you should use ssl_certificate and ssl_certificate_key directives.
server {
    listen 443 ssl;
    server_name www.example.tk;
    ssl_certificate /path/to/certificate;       # .crt, .cert, .cer, or .pem file
    ssl_certificate_key /path/to/private/key;
    return 301 http://www.example.eu$request_uri;
}
You can get these two files from your Certificate Authority.
Also note the ssl parameter added to the listen directive.

Nginx is redirecting to www even though I didn't tell it to

I have a Node app running on port 8989, and nginx proxies port 80 to it.
server {
    listen 80;
    server_name example.com www.example.com;
    access_log /var/log/nginx/example.access.log;

    location / {
        proxy_pass http://127.0.0.1:8989/;
    }
}
That works beautifully. But for some reason, the address automatically switches to www when I type http://example.com into the browser bar. I didn't tell it to do that! haha
I checked the domain settings at my registrar to make sure I hadn't stupidly set up a www redirect over there. Nothing.
Finally, I looked at the console logs of requests to http://example.com, and the response is a "302 Moved Temporarily". I'm not sure how or why that happened.
Where else can I look?
Try rewriting the www server name to the bare domain with a permanent redirect:
server {
    server_name www.domain.com;
    rewrite ^(.*) http://domain.com$1 permanent;
}

server {
    server_name domain.com;
    # The rest of your configuration goes here
}
I would suggest that your 8989 service is issuing the 302 redirect, which is then being relayed by nginx. You should be looking at your 8989 service configuration to determine why it thinks it lives at www.example.com.