I have a PostgreSQL database installed on my Raspberry Pi that works fine locally within my home network. I would like to be able to access it from outside my home network. I've done some research, and from what I've seen, port forwarding on my router or using a service like localtunnel or ngrok seem like viable solutions.
However, my question is whether these open up any security risks on my home network. If not, then great, I can move forward with setting this up (I was leaning towards port forwarding on my router). But if there are concerns, what exactly are they, and what steps can I take to have a secure setup?
If you expose your database to the world with a weak password for a database superuser, that will definitely lower your security in a substantial way. Hackers routinely patrol for such weak settings and exploit them, mostly for cryptocurrency mining but also to add you to botnets. In those cases they don't care about your database itself, it is just a way in to get at your CPU/network connection. They might also probe for valuable information appearing in your database, in which case they don't even need to be a superuser.
If you always run the latest bugfix release, use a strong password (like the output of pwgen 20 -sy -1), and use SSL (or correctly apply some other method of authentication and encryption), then exposing the port lowers your security only minimally.
If you personally control every password, ensure they are strong, and test that they are actually required for logging on (e.g. intentionally enter one wrong once to make sure you get rejected), I wouldn't worry too much about the port forwarding giving bad guys access to the machine. If you care about people being able to eavesdrop on the data being sent back and forth, then you also need SSL.
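To make that concrete, here is a small sketch. The pg_hba.conf line is illustrative (tighten the address range to specific client networks if you can), and the final psql check assumes your server is reachable at a placeholder address:

```shell
# Generate a strong random password (pwgen as suggested above,
# or openssl as a fallback if pwgen is not installed)
pwgen 20 -sy -1 2>/dev/null || openssl rand -base64 24

# In pg_hba.conf, require SSL plus password auth for remote clients,
# e.g. (illustrative line):
#   hostssl  all  all  0.0.0.0/0  scram-sha-256

# Sanity check: a wrong password must be rejected
# PGPASSWORD=wrong psql -h your.public.ip -U postgres -c 'select 1'
```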
Encrypted tunnels of course are another solution which I am not addressing.
Related
None of what I found answered my question, so I'm stuck here...
Basically, I made a program (connected to my PostgreSQL database) which changes table contents in the database depending on user input. It's a sort of register/login system. (click here if you want to see the script). When I run it on my PC (Windows 10 x64) it works like a charm. But when a friend of mine (Windows 10 x64) tries to run it on a different network, he gets this error:
Could not connect to server: Connection refused
Is the server running on host “192.168.1.113” and accepting TCP/IP connections on port 5432?
(In case it helps: I tried MySQL too, but got the same result. My friend cannot access my database!)
So I was asking myself: is it even possible to allow other devices to access my database from other networks? If yes, how can I do it?
192.168.*.* is for local network addresses. You would not expect it to be reachable from another network. You would have to figure out what your real address is. For example, by going to https://whatismyipaddress.com/ or just Googling "what is my IP address".
Then you have the question of how often that address changes (which is up to your ISP) and how to get the connection past your home router, which will probably either block it, or at least not route it to your database server, without special configuration to do port forwarding. This is a basic networking task and, at this stage, not specific to PostgreSQL.
Your ISP may also block the connection, as hosting servers on a standard home ISP plan is likely against the terms of service, although most ISPs tolerate it as long as the traffic never comes to their attention through high usage or abuse complaints.
I'd advise against that. Databases are for metadata, text-based data, data you need to run queries on. For images, what services like Instagram and Pinterest do is store the files on services like S3, which are inexpensive (database storage is comparatively expensive). Plus, the number of images generated per second, should you receive that amount of traffic, would be astronomical.
They store the images on S3 (it's like your hard disk, but hosted on the internet), keep only the path in the database, and when someone asks for an image, serve it from S3.
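As a hypothetical sketch of that pattern (the bucket name, table, and paths are made up; this assumes the AWS CLI and psql are already configured):

```shell
# 1. Upload the image file to S3, not to the database
aws s3 cp photo.jpg s3://my-bucket/images/photo.jpg

# 2. Store only the S3 key in the database
psql -c "INSERT INTO images (user_id, s3_key) VALUES (42, 'images/photo.jpg');"

# 3. To serve it later, look up the key and hand out a temporary S3 URL
aws s3 presign s3://my-bucket/images/photo.jpg --expires-in 3600
```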
MongoDB by default only listens to traffic coming in from 127.0.0.1 (and port 27017). I'm the only one who can send traffic to that IP address, so this prevents random internet people from messing with my database and stealing data. Cool. But then I enable authorization by creating an admin user:
mongo
use admin
db.createUser(
  {
    user: "ADMINUSERNAME",
    pwd: "ADMINPASSWORD",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)
exit
and setting security.authorization to "enabled" in /etc/mongod.conf.
Now, if I set net.bindIp in /etc/mongod.conf to 127.0.0.1,<serveripaddress>, and open up port 27017 with ufw allow 27017, what method could attackers use (other than brute force user/password stuff) to break in to my database?
Is the recommendation to have an IP white-list just an extra layer of security, or is there something that I'm missing here? Advantages that I can think of:
If an exploit is discovered in MongoDB, you still have an extra layer of defence.
If you have bugs in your code, or mess up something (e.g. accidentally add a user with a weak password), you've got that extra layer.
Safe from brute force user/password attacks - but assuming my password is 50 random ASCII characters long, this wouldn't be a problem, right?
Bad actors can't DDoS/flood the MongoDB server directly - but this is easy to solve in other ways I think (fail2ban or something like that).
So points #1 and #2 seem to be the only real problems - and I can definitely see the dangers there, but am I missing anything else?
Note: I don't think this question is suited to the security stackexchange site because it's a fairly simple program-specific question.
Edit: I was originally saying "authentication" when I meant "authorization".
I thought long about whether to answer the question here or mark it as off-topic, but since “DevOps” seems to be ubiquitous these days, an easily accessible answer might prevent serious damage.
Disclaimer: there are books written about the general topic and a whole industry of engineers concerned with it. Here, only a brief overview and some hints can be given. Furthermore, some topics are heavily simplified. Do not rely solely on the information given here.
Assumption as per best practice: An attacker knows at least as much about your system (network, software, OS etc) as you do.
So, let us recap (some of) the general risks.
Unless you monitor failed login attempts and set up automatic action after a few failed attempts from a specific client (fail2ban being the simplest example), someone could brute-force your accounts and passwords.
Furthermore, unless you use TLS, your connections are prone to eavesdropping. A successful eavesdropper learns any credentials sent to the server.
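For illustration, a minimal fail2ban jail for a database service might look like the following sketch. The section and filter names are hypothetical; you would also need a matching filter that recognizes your server's failed-login log lines:

```shell
# /etc/fail2ban/jail.local (sketch)
# [mydb-auth]
# enabled  = true
# port     = 5432
# filter   = mydb-auth
# maxretry = 5
# findtime = 600
# bantime  = 3600
```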
Using exploits, for example in the SSH server, an attacker might hijack your machine. This risk increases with every exposed service. This attack is by far the most dangerous, as your data will be compromised and the attacker might use your system to do Very Bad Things™ in your name, and in some jurisdictions even on your legal account.
Whitelisting IPs is surely helpful, but a determined attacker might get around this via IP spoofing. And you have to assume that the attacker knows valid source and destination IPs. Furthermore, when one of the whitelisted IPs is (partially) hijacked, an attack might well originate from there.
Now there are three general ways of dealing with those risks:
Risk mitigation, such as proper patch management and proper firewall setup.
Intrusion prevention, such as fail2ban
Intrusion detection
The latter deserves a bit of explanation. It is considered best practice to assume that sooner or later a successful attack will be mounted against the system in question. It is imperative that a system's administrator can detect the intrusion in order to take countermeasures. However, as soon as an attacker gets root privileges, all countermeasures can be undone. Therefore, it is imperative that an attacker can only acquire non-privileged access during the initial phases of an attack, so that you can detect his attempts at privilege escalation. Conversely, this means you have to make absolutely sure that the attacker cannot acquire root privileges during the initial phases of an attack (which is one of the many reasons why no exposed service should ever run as root).
Assuming that you do not control the network, you have to resort to what is called host based intrusion detection or HIDS for short. HIDS in turn belong to two categories: behaviour based HIDS and state based HIDS. OSSEC belongs to the latter category and is (relatively) easy to set up and maintain while offering a broad range of features.
On a different level, exposing a database service to the public internet is almost always a sure sign that something is wrong with the system design. Unless you specifically want to provide database services, there is no reason I can think of for doing so.
If you only want to expose the data: write a REST-/SOAP-/whatever-API for it. If your database servers are only accessible via the public internet for some strange reason: Use VPN or stunnel.
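For the stunnel case, a minimal client-side configuration might look like this sketch (hostnames and ports are placeholders; the server side needs a matching stunnel instance with a certificate):

```shell
# /etc/stunnel/stunnel.conf on the client (sketch)
# [postgres]
# client  = yes
# accept  = 127.0.0.1:5432
# connect = db.example.com:5433
```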
Hint: depending on the application you create or the jurisdiction you live in, exposing the database service to the public internet might violate rules, regulations, or even the law.
tl;dr: if you have to ask, you probably should not do it.
I was wondering: What possibilities are there to connect to a postgres database?
I know off the top of my head that there are at least two possibilities.
The first possibility is a brute one: Open a port and let users anonymously make changes.
The second way is to create a website that communicates with Postgres using SQL commands.
I couldn't find any more options on the internet, so I'm wondering whether others exist, because maybe one of them is the best way to communicate with Postgres via the internet.
This is more of a networking/security type question, I think.
You can have your database fully exposed to the internet, which is generally a bad idea unless you are just screwing around for fun and don't mind it being completely hosed at some point. I assume this is what you mean by option 1 of your question.
You can have a firewall in front that only exposes it to certain incoming IPs. This is a little better, but still feels a little exposed for a database, especially if there is sensitive data in it.
If you have a limited number of folks who need to interact with the DB, you can have it completely firewalled but allow SSH connections to the internal network (possibly to the same server) and then port forward through the SSH tunnel. This is generally the best way to give full DB access to folks external to the DB's network, since SSH can be made much more secure than a direct DB connection by using a public/private keypair for each incoming connection. You can also allow SSH only from specific IPs through your firewall as an added level of security.
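A typical tunnel of this kind looks like the following (host, user, and database names are placeholders; this assumes key-based SSH authentication is already set up):

```shell
# Forward local port 5433 to port 5432 on the database host, over SSH
ssh -N -L 5433:localhost:5432 dbuser@db.example.com &

# The client then connects to the local end of the tunnel
psql -h 127.0.0.1 -p 5433 -U app appdb
```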
Similar to SSH, you could stand up a VPN and allow access to the LAN upon which the DB sits and control access through the VPN.
If you have a wider audience, you can allow no external access to the database (except for you or a DBA/administrator type person through SSH tunneling or VPN). Then build access through a website where communication with the DB is done in server-side scripting (php, node.js, rails, .net, what-have-you). This is the usual website setup that every site with a database behind it uses. I assume that's what you mean by option 2 of your question.
I'm working on a personal project: recreating the server software for the game "Chu Chu Rocket" for the Sega Dreamcast. Its servers went down in 2004, I believe. My approach is to use dnsmasq to redirect the hostname the game originally connected to, to my own system. With a DC-PC setup, I have done just that: instead of looking up a non-existent DNS record, the game now connects to my computer, which will eventually run the server software. I've used tshark (CLI Wireshark) to capture what's going on between the client (the Dreamcast) and the server (my computer). The problem is that I'm getting data but don't know how to interpret it. I'm sure it can be done, though, because private PSO servers were created, and those are far more complex.
Very simply: where would I go to learn how to interpret these data packets, and possibly to craft packets that respond to the client's queries?
Thanks,
Dragos240
If you can get the source code for the server software on your PC, then that is the best place to look.
Otherwise, all you can do is look at the protocol, compare runs, and make notes of similarities and differences. With any luck, the protocol won't be encrypted.
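Since you already have tshark captures, a practical starting point is to dump the raw payloads and diff them between runs. Here, capture.pcap stands for whatever file you saved:

```shell
# Hex + ASCII dump of every packet, to spot recognizable strings
tshark -r capture.pcap -x

# Reassemble one TCP conversation as a byte stream (stream 0 here)
tshark -r capture.pcap -q -z follow,tcp,hex,0
```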
I need to turn on networking for MySQLd, but every time I do, the server gets brute-forced into oblivion. Some mean password guessing script starts hammering on the server, opening a connection on port 3306 and trying random passwords forever.
How can I stop this from happening?
For SSH, I use denyhosts, which works well. Is there a way to make denyhosts work with MySQLd?
I've also considered changing the port MySQL is running on, but this is less than ideal and only a stop-gap solution (what if they discover the new port?)
Does anyone have any other ideas?
If it makes a difference, I'm running MySQL 5.x on FreeBSD 6.x.
Firewall the MySQL port. But this belongs to the Server Fault realm, I believe.
I've also considered changing the port MySQL is running on, but this is less than ideal and only a stop-gap solution (what if they discover the new port?)
The stupid bots are the ones that constantly bash themselves against your port, and they don't look for new ports. Move to a different port and you now only have to worry about people who are actually trying to hack you, rather than the internet background noise of compromised machines scanning random hosts. That is a great improvement.
If you need to let only a few specific machines through to your database you could consider an SSH tunnel between local ports on the database and client machines. It's fairly rare you really want to open a database port to the public internet.
Limit the number of unsuccessful requests a single host can make.
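On FreeBSD you can do that at the firewall with pf. A sketch (the interface macro, table name, and thresholds are examples to adapt):

```shell
# /etc/pf.conf (sketch): ban hosts opening >5 connections/min to MySQL
# table <mysql_abuse> persist
# block in quick from <mysql_abuse>
# pass in on $ext_if proto tcp to port 3306 keep state \
#     (max-src-conn-rate 5/60, overload <mysql_abuse> flush)
```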
I believe changing the port number from the default (3306) to some other doesn't improve security as such, but it helps in most cases (at least a bit). Have you tried that in practice, or only considered it?