What's the purpose of running VCS over SSH? - version-control

I'm not very familiar with SSH and *nix systems in general, so please forgive me if this is a stupid question.
What is the benefit, and what is the exact purpose, of having one's VCS tunneled (I hope that's an appropriate term here) over an SSH connection? Is it speed? Or security? Or something else?

Security, and the fact that SSH is a standard transport protocol. Key authentication is also common with SSH, providing password-less interaction with the VCS. Speed is not a benefit: SSH encrypts transmissions, so time is spent encrypting and decrypting.
Why pick a standard transport protocol? Getting firewall clearance is more straightforward, the VCS doesn't have to reinvent the wheel, etc.
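As a concrete illustration of password-less key authentication, here is a minimal sketch using Git as the VCS; the hostname and repository path are placeholders:

    # Generate a key pair once; the private key never leaves your machine
    ssh-keygen -t ed25519 -C "you@example.com"
    # Install the public key in the server's authorized_keys
    ssh-copy-id user@vcs.example.com
    # VCS operations now run over SSH with no password prompt
    git clone ssh://user@vcs.example.com/srv/git/project.git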

This is a subjective answer, but here are three reasons that I would tunnel any application protocol over SSH, in order of importance:
Authentication and Authorization
I don't have to maintain my own database of users, don't have to think about password encryption, don't have to give the sysadmins yet another thing to manage.
Connection management
I can focus on my application-level communications, without worrying that I've created an exploitable security hole.
Admins are more likely to open well-known ports
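To make that last point concrete, here is a rough sketch of tunneling an arbitrary application protocol through SSH's well-known port 22; the host and ports are placeholders:

    # Forward local port 8080 through SSH to port 8080 on the remote host;
    # only port 22 needs firewall clearance
    ssh -N -L 8080:localhost:8080 user@app.example.com
    # The application client then connects to localhost:8080, and its traffic
    # travels encrypted and authenticated inside the SSH connection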

Related

Is accepting all client certificates considered insecure for a public OPC UA server?

I am aware of certificate chains when validating a client certificate. Still, this either puts a lot of burden on the server administrator or restricts clients, which can be unfavorable when implementing a public OPC UA server.
An implementation of the client certificate validator that accepts all certificates for message encryption/signing is certainly possible. But would such an implementation be considered insecure in that regard?
If yes, how?
Yes, it is considered insecure.
Aside from the (hopefully) obvious use case, where certificates ensure you know exactly what client applications are allowed to connect to the server, certificates are also the first line of defense against malicious clients and are part of a "defense in depth" strategy.
A malicious actor that can't establish a secure channel with the server doesn't have much to work with. A malicious actor that can establish a secure channel can, e.g., open many connections, create many sessions (without activating them, potentially causing a DoS as server resources are consumed), attempt to guess credentials, re-use default credentials that an application may ship with, etc...
Further... in the face of the recent CISA alert re: ICS/SCADA devices + OPC UA servers, you'd be a bit of a fool to willingly ship a less secure product for the sake of convenience.

Secure way to access DB on Raspberry Pi outside home network

I have a Postgres database installed on my Raspberry Pi that works fine locally within my home network. I would like to be able to access it from outside my home network. I've done some research, and from what I've seen, port forwarding on my router or using a service like localtunnel or ngrok seem like viable solutions.
However, my question is whether these open up any security risks on my home network. If not, then great, I can move forward with setting this up (I was leaning towards port forwarding on my router). But if there are concerns, what exactly are they, and what steps can I take to have a secure setup?
If you expose your database to the world with a weak password for a database superuser, that will definitely lower your security in a substantial way. Hackers routinely patrol for such weak settings and exploit them, mostly for cryptocurrency mining but also to add you to botnets. In those cases they don't care about your database itself, it is just a way in to get at your CPU/network connection. They might also probe for valuable information appearing in your database, in which case they don't even need to be a superuser.
If you always run the latest bugfix version, use a strong password (like the output of pwgen 20 -sy -1), and use SSL - or if you correctly use some other method of authentication and encryption - then it will lower security by only a minimal amount.
If you personally control every password, ensure they are strong, and test that they are actually required for logon (e.g., intentionally enter one wrong once to make sure you get rejected), I wouldn't worry too much about the port forwarding giving bad guys access to the machine. If you care about people being able to eavesdrop on the data being sent back and forth, then you also need SSL.
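For illustration, server-side hardening along those lines might look like the following; this assumes PostgreSQL 10 or later (for scram-sha-256), and the config file locations vary by distribution:

    # Generate a strong password, as suggested above
    pwgen 20 -sy -1

    # postgresql.conf: enable SSL
    ssl = on

    # pg_hba.conf: accept remote logins only over SSL, with password auth
    hostssl  all  all  0.0.0.0/0  scram-sha-256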
Encrypted tunnels of course are another solution which I am not addressing.

If I have authorization enabled, why is it dangerous to open up MongoDB to all remote IPs?

MongoDB by default only listens to traffic coming in from 127.0.0.1 (and port 27017). I'm the only one who can send traffic to that IP address, so this prevents random internet people from messing with my database and stealing data. Cool. But then I enable authorization by creating an admin user:
mongo
use admin
db.createUser(
  {
    user: "ADMINUSERNAME",
    pwd: "ADMINPASSWORD",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)
exit
and setting security.authorization to "enabled" in /etc/mongod.conf.
Now, if I set net.bindIp in /etc/mongod.conf to 127.0.0.1,<serveripaddress> and open up port 27017 with ufw allow 27017, what method could attackers use (other than brute-force user/password stuff) to break into my database?
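For reference, the resulting /etc/mongod.conf sections would look roughly like this (the bind address placeholder is the one from above):

    # /etc/mongod.conf (YAML)
    net:
      port: 27017
      bindIp: 127.0.0.1,<serveripaddress>
    security:
      authorization: enabled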
Is the recommendation to have an IP white-list just an extra layer of security, or is there something that I'm missing here? Advantages that I can think of:
If an exploit is discovered in MongoDB, you still have an extra layer of defence.
If you have bugs in your code, or mess up something (e.g. accidentally add a user with a weak password), you've got that extra layer.
Safe from brute force user/password attacks - but assuming my password is 50 random ASCII characters long, this wouldn't be a problem, right?
Bad actors can't DDoS/flood the MongoDB server directly - but this is easy to solve in other ways, I think (fail2ban or something like that).
So points #1 and #2 seem to be the only real problems - and I can definitely see the dangers there, but am I missing anything else?
Note: I don't think this question is suited to the security stackexchange site because it's a fairly simple program-specific question.
Edit: I was originally saying "authentication" when I meant "authorization".
I thought long about whether to answer the question here or mark it as off-topic, but since "DevOps" seems to be ubiquitous these days, an easily accessible answer might prevent serious damage.
Disclaimer: there are books written about the general topic and a whole industry of engineers concerned with it. Here, only a brief overview and some hints can be given. Furthermore, some topics are heavily simplified. Do not rely solely on the information given here.
Assumption as per best practice: An attacker knows at least as much about your system (network, software, OS etc) as you do.
So, let us recap (some of) the general risks.
Unless you monitor failed login attempts and set up automatic action after a few failures from a specific client (fail2ban being the simplest example), one could brute-force your accounts and passwords.
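As an illustration, a minimal fail2ban jail (here for SSH, the stock example) looks something like this; the thresholds are arbitrary, and covering MongoDB itself would require a custom filter:

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = true
    maxretry = 5
    findtime = 600
    bantime  = 3600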
Furthermore, unless you use TLS, your connections are prone to eavesdropping. When successful, an attacker knows any credentials sent to the server.
Using exploits, for example in the SSH server, an attacker might hijack your machine. This risk increases with every exposed service. This attack is by far the most dangerous, as your data will be compromised and the attacker might use your system to do Very Bad Things™ - in your name, and in some jurisdictions even on your legal account.
Whitelisting IPs is surely helpful, but a determined attacker might get around this via IP spoofing. And you have to assume that the attacker knows valid source and destination IPs. Furthermore, when one of the whitelisted IPs is (partially) hijacked, an attack might well originate from there.
Now there are three general ways of dealing with those risks:
Risk mitigation, such as proper patch management and proper firewall setup.
Intrusion prevention, such as fail2ban
Intrusion detection
The latter deserves a bit of explanation. It is considered best practice to assume that sooner or later a successful attack will be mounted against the system in question. It is imperative that a system's administrator can detect the intrusion in order to take countermeasures. However, as soon as an attacker gets root privileges, all countermeasures can be undone. Therefore, it is imperative that an attacker can only acquire non-privileged access during the initial phases of an attack, so that you can detect their attempts at privilege escalation. Conversely, this means you have to make absolutely sure that the attacker cannot acquire root privileges during the initial phases of an attack (which is one of the many reasons why no exposed service should ever run as root).
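For example, on a systemd-based system a service can be pinned to an unprivileged account with a drop-in like the following; the unit, user, and group names are placeholders:

    # /etc/systemd/system/mydb.service.d/override.conf
    [Service]
    User=mydb
    Group=mydb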
Assuming that you do not control the network, you have to resort to what is called host-based intrusion detection, or HIDS for short. HIDS in turn fall into two categories: behaviour-based HIDS and state-based HIDS. OSSEC belongs to the latter category and is (relatively) easy to set up and maintain while offering a broad range of features.
On a different level, exposing a database service to the public internet is almost always a sure sign that something is wrong with the system design. Unless you specifically want to provide database services, there is no reason I can think of for doing so.
If you only want to expose the data: write a REST-/SOAP-/whatever-API for it. If your database servers are only accessible via the public internet for some strange reason: use a VPN or stunnel.
Hint: depending on the application you create or the jurisdiction you live in, exposing the database service to the public internet might violate rules, regulations, or even the law.
tl;dr: if you have to ask, you probably should not do it.

Best practice for securing an existing socket connection, without SSL

In Best practice for secure socket connection, the OP wants to secure the connection between two sockets, without SSL.
Thomas Pornin suggests SSH is the answer.
Is this answer based on SSH port forwarding of existing sockets, or just switching to SSH in general?
If not, and the question was how to make existing sockets more secure without SSL, what is the best way to do that?
If a client on port 10 connects to a server on port 20, how can the server restrict access so that only the client on port 10 can connect? And that it really is the client on port 10 (not an imposter)? (Availability only for an authenticated client.)
The answer there is any form of the SSH protocol, which is based on channels. You can use those channels to transmit fairly arbitrary information, including port-forwarded data or terminal sessions, or anything you can turn into a byte stream. That said, TLS is generally much easier to implement in code because the libraries are ubiquitous and designed to be used this way. SSH is easier to implement in scripts on Unix-like systems because it has a powerful command-line API.
Unless you have a very specialized problem, TLS is almost always the better choice. So the question here is: what problem do you have that TLS doesn't work for? If it's "I hate TLS," then sure, use SSH. But TLS is better in most cases.
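As a sketch of how little application change TLS can require, stunnel can wrap an existing plain-TCP socket without touching the program itself; the ports, hostname, and certificate path below are placeholders:

    # Server side (/etc/stunnel/server.conf): terminate TLS, forward to the app
    cert = /etc/stunnel/server.pem
    [myapp]
    accept  = 8443
    connect = 127.0.0.1:20

    # Client side (/etc/stunnel/client.conf): local plaintext port, TLS upstream
    client = yes
    [myapp]
    accept  = 127.0.0.1:10
    connect = server.example.com:8443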
TLS authenticates using client certificates. SSH authenticates using your private key. In either case, the cert/key is stored in a file that the client reads and uses to authenticate to the server.
It's not clear from your question what you mean by "client" or "imposter" here. Anything that has access to the cert/key will be authorized (possibly requiring a user-provided password), so those must be protected. If, when you say "client," you mean "my application," that is not a solvable problem. You can authenticate people. You can to some extent authenticate machines (particularly if you have an HSM or similar piece of security hardware available). You can weakly verify that the client is connecting from port 10, but this is generally useless and extremely fragile, so I wouldn't pursue it. You cannot authenticate software over the network in any meaningful way.
Short answer, though, is to use TLS unless you have a very specialized problem and a good security expert to help you design another solution (and your security expert will almost certainly say "use TLS").

What possibilities are there to connect to a postgres database

I was wondering: What possibilities are there to connect to a postgres database?
I know off the top of my head that there are at least two possibilities.
The first possibility is a crude one: open a port and let users anonymously make changes.
The second way is to create a website that communicates with Postgres using SQL commands.
I couldn't find any more options on the internet, so I'm wondering whether others exist - maybe one of them is the best solution for communicating with Postgres via the internet.
This is more of a networking/security type question, I think.
You can have your database fully exposed to the internet, which is generally a bad idea unless you are just screwing around for fun and don't mind it being completely hosed at some point. I assume this is what you mean by option 1 of your question.
You can have a firewall in front that only exposes it to certain incoming IPs. This is a little better, but still feels a little exposed for a database, especially if there is sensitive data on it.
If you have a limited number of folks that need to interact with the DB, you can have it completely firewalled, but allow SSH connections to the internal network (possibly the same server) and then port forward through the SSH tunnel. This is generally the best way if you need to give full DB access to folks external to the DB's network, since SSH can be made much more secure than a direct DB connection by using a public/private keypair for each incoming connection. You can also allow SSH only from specific IPs through your firewall as an added level of security.
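A rough sketch of that setup, with the hostname and user names as placeholders:

    # On the external client: forward a local port through SSH to the DB host
    ssh -N -L 5432:localhost:5432 user@db.example.com
    # Then connect as if Postgres were running locally
    psql -h 127.0.0.1 -p 5432 -U postgres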
Similar to SSH, you could stand up a VPN and allow access to the LAN upon which the DB sits and control access through the VPN.
If you have a wider audience, you can allow no external access to the database (except for you or a DBA/administrator type through SSH tunneling or VPN). Then build access through a website where communication with the DB is done via server-side scripting (PHP, Node.js, Rails, .NET, what-have-you). This is the usual setup that every site with a database behind it uses. I assume that's what you mean by option 2 of your question.