Serving files with PocketBase - rest

What I want is to restrict access to files for unauthorized users.
PocketBase documentation says I can retrieve the file URL and access files through it. The example URL for a file would be like this:
http://127.0.0.1:8090/api/files/example/kfzjt5oy8r34hvn/test_52iWbGinWd.png
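For context, an authorized client typically obtains that URL through the JS SDK, roughly like this (the collection and record IDs come from the example URL above; the field name is made up, and the helper is named getURL in newer SDK versions):

    import PocketBase from 'pocketbase';

    const pb = new PocketBase('http://127.0.0.1:8090');

    // "example" collection and record ID taken from the URL above; the "document" field is hypothetical
    const record = await pb.collection('example').getOne('kfzjt5oy8r34hvn');
    const url = pb.files.getUrl(record, record.document); // -> .../api/files/example/kfzjt5oy8r34hvn/...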
I can prevent unauthorized users from getting this URL, but authorized users can still share the URL with others.
Any ideas?

I found a good way to secure files with nginx: add an extra location to my PocketBase server block and use an extra backend with a single endpoint.
So my nginx config looks like this:
server {
    listen 80;
    server_name example.com;

    location /api/files {
        proxy_intercept_errors on;
        error_page 404 = @fallback;
        proxy_pass http://127.0.0.1:5000;
    }

    location / {
        proxy_pass http://127.0.0.1:8090;
    }

    location @fallback {
        proxy_pass http://127.0.0.1:8090;
    }
}
My Express.js backend on port :5000 checks the JWT and responds with 404 if it is valid; nginx then falls back to :8090 (PocketBase) whenever :5000 returns 404.
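For illustration, a minimal sketch of that check service in Express (the route shape, header convention, and secret handling are assumptions, not from the original answer; adapt to however your tokens are issued and verified):

    const express = require('express');
    const jwt = require('jsonwebtoken'); // assumes you can verify the token server-side

    const app = express();

    // Every /api/files/... request that nginx proxies to :5000 lands here first
    app.use('/api/files', (req, res) => {
        // Assumes the client sends "Authorization: Bearer <token>"
        const token = (req.headers.authorization || '').replace(/^Bearer\s+/i, '');
        try {
            jwt.verify(token, process.env.JWT_SECRET); // throws if missing, invalid, or expired
            res.sendStatus(404); // valid token: the 404 makes nginx fall back to PocketBase
        } catch {
            res.sendStatus(403); // invalid token: block the download
        }
    });

    app.listen(5000);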

Related

401 error on API over HTTPS, but not over HTTP

I'm trying to connect to my MongoDB database using axios and Vue, and my server is running NGINX.
In my App.vue file, where the axios requests are, I have this:
export default {
    name: 'App',
    data() {
        return {
            baseApiURL: '[HTTP or HTTPS]://example.com:4000/api',
            // ...
        };
    },
};
This works over HTTP. When I change it to HTTPS, it doesn't. I have tried using the IP address and the domain name. The Network tab in Chrome says:
Request URL: https://www.example.com/api/
Request Method: GET
Status Code: 401 Unauthorized
I don't understand what this means exactly.
My NGINX config:
server {
    listen 80 default_server;
    ssl on;
    listen 443;
    server_name example.com;

    ssl_certificate /usr/src/app/ssl/domain.cert.pem;
    ssl_certificate_key /usr/src/app/ssl/private.key.pem;

    # vue app & front-end files
    location / {
        root /usr/src/app/dist;
        try_files $uri /index.html;
    }

    # node api reverse proxy
    location /api/ {
        proxy_pass http://localhost:4000/;
    }
}
I'm not sure if there's anything else I should include here; please let me know. I feel like it should be a small issue, since it works over HTTP.
Not really an "answer", as it doesn't solve the exact initial problem, but I fixed my site not working over HTTPS by switching from NGINX to Caddy Server, which handles HTTPS automatically.
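For reference, a rough Caddyfile equivalent of the setup above, using the paths and ports from the question (a sketch, not a verified config; Caddy obtains and renews the TLS certificate for the domain automatically):

    example.com {
        # API requests go to the Node backend (prefix stripped, like proxy_pass with a trailing slash)
        handle_path /api/* {
            reverse_proxy localhost:4000
        }

        # Everything else is the built front-end
        handle {
            root * /usr/src/app/dist
            try_files {path} /index.html
            file_server
        }
    }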

Don't serve static files if backend is offline

I have the following nginx config that handles serving my static website and redirecting requests to my REST backend:
server {
    listen 80 default_server;
    server_name _;

    # Host static content directly
    location / {
        root /var/www/html;
        index index.html;
        try_files $uri $uri/ =404;
    }

    # Forward api requests to REST server
    location /api {
        proxy_pass http://127.0.0.1:8080;
    }
}
If my REST backend goes offline, the proxy module returns an HTTP status of "502 Bad Gateway", and I can redirect requests to a status page by adding the following:
# Rewrite "502 Bad Gateway" to "503 Service unavailable"
error_page 502 =503 #status_offline;
# Show offline status page whenever 503 status is returned
error_page 503 #status_offline;
location #status_offline {
root /var/www/html;
rewrite ^(.*)$ /status_offline.html break;
}
However, this will only work for requests that access the REST backend directly. How can I redirect requests to my static website in the same way whenever the backend is offline?
Nginx does have some health check and status monitoring capabilities that seem like they could be related, but I couldn't find a proper way to use them.
While its intended use case is actually for authorization, I found nginx's auth_request module to work for me:
# Host static content directly
location / {
    # Check if the REST server is online before serving the site
    auth_request /api/status; # continues only when a 2xx HTTP status is returned

    # If not, redirect to the offline status page
    error_page 500 =503 @status_offline;

    root /var/www/html;
    index index.html;
    try_files $uri $uri/ =404;
}
It will call /api/status as a subrequest before serving the static content and will only continue when the subrequest returns an HTTP status in the 200 range. When the backend is offline, the subrequest appears to produce status 500, hence the error_page 500 line above.
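The question doesn't show the REST server itself; the /api/status endpoint only has to answer the subrequest with a 2xx code, so (assuming a Node/Express backend on :8080, matching the proxied port) something like this would do:

    const express = require('express');
    const app = express();

    // auth_request only inspects the status code, so an empty 200 is enough
    app.get('/api/status', (req, res) => res.sendStatus(200));

    app.listen(8080, '127.0.0.1');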
This method might have some performance implications since you're now always doing an extra request, but that seems to be an inherent requirement of checking whether your service is online.
I think this is the correct answer - auth_request is ideal for any situation where you want to "ping" a backend before returning the requested content.
I have used a similar scheme in the past for an nginx server where I wanted to check if an auth header was correct before proxying to an S3 bucket.
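That config isn't shown here, but the pattern looks roughly like this (hostnames, paths, and the auth service are placeholders, not from the comment):

    # Gate an S3 proxy behind an auth subrequest
    location /private/ {
        auth_request /auth;                      # must return 2xx to continue
        proxy_pass https://my-bucket.s3.amazonaws.com/;
    }

    location = /auth {
        internal;
        proxy_pass http://127.0.0.1:9000/check;  # service that validates the auth header
        proxy_pass_request_body off;             # the subrequest needs no body
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }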

Redirect only one subfolder of an external domain to my localhost with nginx, and the rest of the traffic back to the original domain

Let's say there is an existing domain dummy with a subdomain their:
http://their.dummy.com
There is a URL (and its associated sub-URLs) on that subdomain that I would like to reroute to my localhost:
http://their.dummy.com/store
http://their.dummy.com/store/<whatever>...
But all the rest of the URLs should go back to the original subdomain.
I created an entry in /etc/hosts:
127.0.0.1 their.dummy.com
And, naively, created a server in nginx:
server {
    listen 80;
    server_name their.dummy.com;
    server_name_in_redirect off;

    access_log /var/log/nginx/access-their.dummy.com.log;
    error_log /var/log/nginx/error-their.dummy.com.log;

    location /store {
        alias /opt/my/store;
        autoindex on;
    }

    location / {
        proxy_set_header Host their.dummy.com;
        proxy_redirect http://their.dummy.com/;
    }
}
This works fine when I want to access: http://their.dummy.com/store
But the other URLs are not redirected to the original domain.
How could I achieve this?

Nginx config for single page app with HTML5 App Cache

I'm trying to build a single page app that utilizes HTML5 App Cache, which will cache a whole new version of the app for every distinct URL, so I must redirect everyone to / and have my app route them afterward (this is the solution used on devdocs.io).
Here's my nginx config. I want all requests to serve a file if it exists, proxy to my API at /auth and /api, and redirect all other requests to index.html. Why does the following configuration cause my browser to report a redirect loop? If the user hits location block #2 and their route doesn't match a static file, they're sent to location block #3, which redirects them to "/", which should hit location block #1 and serve index.html, correct? What is causing the redirect loop here? Is there a better way to accomplish this?
root /files/whatever/public;
index index.html;

# If the location is exactly "/", send index.html.
location = / {
    try_files $uri /index.html;
}

location / {
    try_files $uri @redirectToIndex;
}

# Set the cookie of the initialPath and redirect to "/".
location @redirectToIndex {
    add_header Set-Cookie "initialPath=$request_uri; path=/";
    return 302 $scheme://$host/;
}

# Proxy requests to "/auth" and "/api" to the server.
location ~* (^\/auth)|(^\/api) {
    proxy_pass http://application_upstream;
    proxy_redirect off;
}
That loop message suggests that /files/whatever/public/index.html doesn't exist: the try_files in location / doesn't find $uri when it's equal to /index.html, so it always internally redirects those requests to the @redirectToIndex location, which performs the external redirect.
Unless you have a more complicated setup than you've outlined, I don't think you need to do so much. You shouldn't need external redirects (or even internal redirects) or server-side cookie sending for a one-file JS app. The regex match for auth and api wasn't quite right, either.
root /files/whatever/public;
index index.html;

location / {
    try_files $uri /index.html =404;
}

# Proxy requests to "/auth" and "/api" to the server.
location ~ ^/(auth|api) {
    proxy_pass http://application_upstream;
    proxy_redirect off;
}

Redirect nginx config server_name to custom 404 error page

I'm new to nginx configs and have spent a lot of time googling so far. I'm trying to create a very basic nginx config file to be used in a "redirect" server.
Users will be required to point naked domains (example.com) by A-record to my redirect server IP address, and the 'www' record by CNAME to another server.
The purpose of the redirect server is then to 301-redirect any/all naked domains back to the 'www' version of the domain so it can be properly handled by my other server.
But I also want to catch any misconfigured 'www' domains that are pointing to my server IP by A-record, and simply direct them to a custom error page on the redirect server with further instructions on how to set up their account correctly for my service.
Here's what I have. It works, but since I am new to writing configs I was wondering if there is a better way to handle the redirect to the custom error page in the first server block. TIA!
# redirect to error page if begins with 'www.'
server {
    listen 80;
    server_name ~^www.; # only matches if starts with 'www.'. Is this good enough?
    rewrite ^(.*)$ /404.html; # is this the correct way to direct to a custom error page?
    error_page 404 /404.html;
    location = /404.html {
        root /usr/share/nginx/html;
    }
}

# no match, so redirect to www.example.com
server {
    listen 80 default_server;
    rewrite ^(.*)$ $scheme://www.$host$1 permanent;
}
Prefix/suffix server name matching is faster and easier than regexp.
Also, there is no reason to use rewrite. You want to return 404, so just do that and nginx will do the rest. By the way, with rewrite you would return 200 OK with the content of /404.html instead of 404 Not Found.
So here it is:
server {
    listen 80;
    server_name www.*;
    root /usr/share/nginx/html;
    error_page 404 /404.html;

    location / {
        return 404;
    }

    location = /404.html {
        internal;
    }
}