PostgreSQL requests proxied by HTTP server

I am using a mobile application that connects directly to the database instance (PostgreSQL), so I have to keep the port open to traffic coming from the internet (4G, mobile app).
This mobile app (QField, the mobile version of QGIS) has a direct connection to the database, which is why the database is reachable from the internet on a public IP. This is a critical issue for the security of the data and of the requests that can be sent to the database.
I would like to proxy the requests so that the database is only reachable from local machines and not open to direct connections.
The mobile app would send the request to an HTTP URL, which would forward the request to the local IP and port; this way I would avoid having the database exposed on the internet.
Ideally, I would like to go from this app (which uses a Postgres connection string to connect to the server) to an HTTP server that routes the request locally, like this:
APP connects to https://myproxy/postgres
Request is proxied to a local server
Can I do this with Apache2? Any ideas?
At the moment I cannot write a middleware that proxies requests from the APP to the local postgres.

If your application is expecting to connect directly to a PostgreSQL database and you don't want to change that, then you need to connect to something that "speaks" PostgreSQL's client protocol.
You can place a proxy such as PgBouncer or Pgpool in front of it, but they aren't a guarantee of greater security just by themselves. This is the same problem as with any proxy: it just forwards requests and responses to your actual server, so any vulnerability is still exposed.
What you can do is:
restrict the number of connections at the proxy point
restrict which users can connect non-locally to your PostgreSQL cluster
restrict where they can connect from to just your proxy
restrict those users' permissions within the database(s)
That last point is particularly important: assume that any user account your application uses can be abused. Restrict the account to prevent mass updating or deleting of data, and take special care to restrict access to other users' data.
If I were forced to allow access like this, I would want at least one PostgreSQL user account per actual user. In practice I wouldn't get to this point with a production application.
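As a rough illustration of that last point, here is a minimal sketch, assuming psycopg2 and hypothetical names (an application role app_user, a table survey_points, and a placeholder connection string), of how such an account could be locked down:

    import psycopg2

    # Connect as an administrative user (placeholder connection string).
    conn = psycopg2.connect("dbname=mydb user=postgres host=localhost")
    conn.autocommit = True

    with conn.cursor() as cur:
        # Start from zero privileges for the application role.
        cur.execute("REVOKE ALL ON ALL TABLES IN SCHEMA public FROM app_user")
        # Grant back only what the app actually needs; no DELETE or TRUNCATE.
        cur.execute("GRANT SELECT, INSERT, UPDATE ON survey_points TO app_user")
        # Cap how many connections this role can hold open at once.
        cur.execute("ALTER ROLE app_user CONNECTION LIMIT 10")

    conn.close()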

Related

Google Cloud SQL - PostgreSQL database connection from QGIS for third parties

I have a Google Cloud SQL PostgreSQL database to which I can connect by using SSL and by entering my IP address in the allowed-connections settings. However, I do not want to list all the IP addresses that are going to connect to this database (because I do not know all of them). I have around 15 people whom I want to log in to my database using QGIS, and they should be able to change the data, as this is a research project. Security is not a big issue, as this database will be online for a very short period of time. What connection method can you suggest? The users are not very proficient, so I need to set up everything.
I hope you're doing fine.
I would like to suggest setting up the connections with the Cloud SQL Proxy, as it will provide the security needed without using SSL or the need to authorize any network. Basically, the setup is to:
Enable the API
Install the proxy client on your local machine
Determine how you will authenticate the proxy
If required by your authentication method, create a service account
You can also find the steps in "Connecting to Cloud SQL from external applications".
I hope this works for you. I have never used it with QGIS, but since you are going through a proxy, from there it should not be hard to use it with QGIS as if you were connecting to a local server.
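To give an idea of what the connection then looks like, here is a minimal sketch assuming the Cloud SQL proxy is already running on the local machine and listening on the default PostgreSQL port (database name, user, and password are placeholders):

    import psycopg2

    # The Cloud SQL proxy forwards 127.0.0.1:5432 to the Cloud SQL instance,
    # so clients connect exactly as they would to a local server.
    conn = psycopg2.connect(
        host="127.0.0.1",
        port=5432,
        dbname="research_db",  # placeholder
        user="qgis_user",      # placeholder
        password="secret",     # placeholder
    )

    with conn.cursor() as cur:
        cur.execute("SELECT version()")
        print(cur.fetchone()[0])

    conn.close()

QGIS itself would use the same host and port in its PostgreSQL connection dialog.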

How do you configure a domain name for an Openfire server? Do I just buy a domain and set it as my XMPP domain?

So I am setting up a server for a messaging application which is being developed. I am using an Openfire server for this, which I have installed and running on a PC. Right now, the XMPP domain is set to my computer name and the server is working on my network, but obviously, as it is a local name, it cannot be accessed from the outside. I am able to access the server from multiple computers on the same network using the Spark messaging client to test it. So, to be able to access my XMPP server from devices outside my network, do I just buy a domain name and set it as my XMPP domain in the Openfire settings?
To answer your question, yes, with the following caveats:
You will either have to host the DNS server yourself or have a DNS provider serve the records for you.
The domain must have a static IP address to point to. A home or a typical small business Internet account does not include a static IP (some providers actively prevent home accounts from serving web pages/services).
You must also configure your firewall to allow a mapping to the internal server.
I would recommend using an external provider to handle the network and hosting services for your program.
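Once the DNS records and the firewall mapping are in place, a quick sanity check could look like this minimal sketch (chat.example.com is a placeholder domain; 5222 is the standard XMPP client-to-server port):

    import socket

    DOMAIN = "chat.example.com"  # placeholder XMPP domain
    PORT = 5222                  # standard XMPP client-to-server port

    # Check that the domain resolves to your public static IP.
    ip = socket.gethostbyname(DOMAIN)
    print(f"{DOMAIN} resolves to {ip}")

    # Check that the firewall forwards the port to the Openfire machine.
    with socket.create_connection((ip, PORT), timeout=5):
        print(f"Connected to {ip}:{PORT} - the port mapping works")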

HTTPS for local IP address

I have a gadget[*] that connects to the user's WiFi network and responds to commands over a simple REST interface. The user uses a web app to control this gadget. The web app is currently served over HTTP, and the app's JavaScript makes AJAX calls to the gadget's local IP address to control it. This scheme works well, and I have no issues with it.
[*] By "gadget" I mean an actual, physical IoT device that the user buys and installs within their home, and configures to connect to their home WiFi network
Now, I want to serve this web app over HTTPS. I have no issue setting up HTTPS on the hosting side. The problem is that the browser now blocks access to the gadget (since the gadget's REST API is over HTTP, not HTTPS).
The obvious solution is to have the gadget serve its REST API over HTTPS. But how? It has a local IP address, and no one will issue a certificate for it. (Even if they did, I'd have to buy a boatload of certificates, one for each possible local IP address.) I could round-trip via the cloud (by adding logic on my server side to accept commands from the web app and forward them to the gadget over another connection), but this would increase latency.
Is there a way around this problem? One possibility that I have in mind is to:
Get a wildcard certificate (say, *.mydomain.com)
Run my own DNS that maps sub-domains to local IP addresses following a pattern (for example, 192-168-1-123.mydomain.com would map to 192.168.1.123; a sketch of this mapping follows the list)
Use the wild-card certificate in all the gadgets
My web app could then make AJAX calls to https://192-168-1-123.mydomain.com instead of http://192.168.1.123, and latency would remain unaffected aside from the initial DNS lookup
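To make the naming scheme concrete, a minimal sketch of the mapping (mydomain.com is a placeholder):

    def ip_to_hostname(ip: str, domain: str = "mydomain.com") -> str:
        """Turn a local IP like 192.168.1.123 into 192-168-1-123.mydomain.com."""
        return ip.replace(".", "-") + "." + domain

    def hostname_to_ip(hostname: str) -> str:
        """Invert the mapping; the custom DNS server resolves the label back."""
        return hostname.split(".", 1)[0].replace("-", ".")

    assert ip_to_hostname("192.168.1.123") == "192-168-1-123.mydomain.com"
    assert hostname_to_ip("192-168-1-123.mydomain.com") == "192.168.1.123"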
Would this work? It's an expensive experiment to try out (wildcard certificates cost ~$200) and running a DNS server seems like a lot of work. Plus I find myself under-qualified to think through the security implications.
Perhaps there's already a service out there that solves this problem?
While this is a pretty old question, there are still no out-of-the-box solutions for it today.
Just as @Jaffa-the-cake posted in a comment, you can lean on how Plex did it, which Filippo Valsorda explained in his blog:
https://blog.filippo.io/how-plex-is-doing-https-for-all-its-users/
This is very similar to what you proposed yourself. You don't even need a wildcard certificate, but you can generate certificates on-the-fly using Let's Encrypt. (You can still use wildcard certificates, if you want, which Let's Encrypt supports now, too.)
Just yesterday I did a manual proof-of-concept of that workflow; it can be automated with the following steps:
Write a web service that can dynamically create DNS entries for individual devices and generate matching certificates via Let's Encrypt; this is pretty easy using certbot and e.g. Google Cloud DNS. I guess Azure, AWS and others have similar offerings, too. When you use certbot's DNS plugins, you don't even need an actual web server running on port 80/443.
On your local device, contact that web service to generate a unique DNS entry (e.g. ..yourdns.com) and a certificate for that domain
Use that certificate in your local HTTPS server
Browse to that domain instead of your local IP
Now you will have an HTTPS connection to your local server, using a local IP but a publicly resolved DNS entry.
The downside is that this does not work offline from arbitrary clients, and you need to think of a good security concept to create trust between the client that requests a DNS entry and certificate and the web service that generates them.
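For the "use that certificate in your local HTTPS server" step, the device side could look roughly like this minimal sketch using Python's standard library (the certificate and key paths are placeholders for whatever Let's Encrypt produced):

    import http.server
    import ssl

    # Certificate and key obtained for this device's DNS name (placeholders).
    CERT_FILE = "/etc/device/fullchain.pem"
    KEY_FILE = "/etc/device/privkey.pem"

    # Port 443 needs elevated privileges; the handler is just an example.
    httpd = http.server.HTTPServer(
        ("0.0.0.0", 443), http.server.SimpleHTTPRequestHandler
    )

    # Wrap the plain socket in TLS so browsers accept the connection.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)
    httpd.socket = context.wrap_socket(httpd.socket, server_side=True)

    httpd.serve_forever()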
BTW, do you mind sharing what kind of gadget it is that you are building?
If all you want is to access the device APIs through the web browser, a simple solution would be to proxy all requests to the device through your web server. This way, even self-signed certs on the devices won't be a problem. The only catch is that the server would have to be on the same network as your devices.
If you are not on the same network, you can write a simple browser plugin (Chrome) to send the API requests to the IoT device, but then the dependency on the app/plugin will be clumsy.
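A minimal sketch of such a same-network proxy, using only the Python standard library (the device address is a placeholder; a production setup would more likely use an existing reverse proxy):

    import http.server
    import urllib.request

    DEVICE_URL = "http://192.168.1.123"  # placeholder local device address

    class ProxyHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            # Forward the incoming path to the device and relay its response.
            # Error handling omitted for brevity.
            with urllib.request.urlopen(DEVICE_URL + self.path) as resp:
                body = resp.read()
                self.send_response(resp.status)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

    # The public web server terminates HTTPS; this proxy runs behind it.
    http.server.HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()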

Is using Redis a violation of REST principles?

I am creating a webapp for data analysis. I want to use Redis to store the data that the user has uploaded so that I can send it to other pages/views. This data is only valid during the session and should expire when the session expires.
Is this a violation of REST principles? Or is this only a problem if I use some value that I have stored server side as session key/identifier?
Given your updates, what you can do is upload the data, generate a key for it, and place it in Redis, keeping it in a hash (with metadata) or a list (if there could be more than one upload). The list/hash key could be identified by the user id.
Then, moving forward, let the client refer to this object using the generated id.
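A minimal sketch of that idea with the redis-py client (the key scheme and the one-hour session expiry are assumptions):

    import uuid
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def store_upload(user_id: str, data: bytes) -> str:
        """Store uploaded data in a per-user hash and return its generated id."""
        upload_id = uuid.uuid4().hex
        key = f"uploads:{user_id}"
        r.hset(key, mapping={upload_id: data})
        # Expire with the session so stale uploads clean themselves up.
        r.expire(key, 3600)  # assumed session length: one hour
        return upload_id

    def fetch_upload(user_id: str, upload_id: str) -> bytes:
        return r.hget(f"uploads:{user_id}", upload_id)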
Actually, one of the best practices for using Redis over the internet is to expose a REST API and handle all communication through your web server. Redis is always kept in a secure network, since Redis itself doesn't provide any security.
From the Redis website:
Network security
Access to the Redis port should be denied to everybody but trusted clients in the network, so the servers running Redis should be directly accessible only by the computers implementing the application using Redis.
In the common case of a single computer directly exposed to the internet, such as a virtualized Linux instance (Linode, EC2, ...), the Redis port should be firewalled to prevent access from the outside. Clients will still be able to access Redis using the loopback interface.
This is also a basic practice when using traditional databases.

Akka remote actors filter connections by IP

I'm trying to add security to my remote actors. I've set untrusted-mode:
http://doc.akka.io/docs/akka/snapshot/scala/remoting.html
Is it possible to add IP filtering, to allow connections only from a specific server? For example, I have one master and 10 slaves, and I want to allow only my master (a specific IP) to connect to my slaves.
Since the project is open source, anyone could just create a new instance of my master and connect to my real slaves. How can I make it secure?
Using IP filtering is not very secure, as it's easy to fake an IP. Luckily, Akka comes with secure transport support via SSL and with secure-cookie support.
A cookie is like an API key and will be required to establish the connection. SSL guarantees that the secure cookie cannot be stolen by eavesdropping. See this doc for an example.
I made a simple project that uses Akka remoting and SSL with a secure cookie. Try it out here. Read how to set up SSL certificate storage and such here.