Is it insecure to use a root user for MongoDB?

I have an app that connects to different databases on a MongoDB instance. The different databases are for different clients. I want to know if my clients' data will be compromised if I used a single user to log in to the different databases. Also, is it a must for this user to be root, or will the readWrite role do the trick? I'll be connecting to the databases through a Java backend.

There is no straightforward answer to this. It's about risk and cost-benefit.
If you use the same database user to connect to every database, then client data separation depends much more on business logic in your application. If any part of your code can simply decide which client database to connect to, then a request from one client may (and, in my experience, eventually will) end up in a different client's database. Some factors make this more likely to happen: for example, if many people develop your app over a long time, somebody will make a mistake.
A more secure option would be to have a central component that is rarely changed, with changes strictly monitored, which for each client session (or even request) would take the credentials according to the client and use those to connect to the database. This way, any future mistake by a developer would be limited in scope; they would not be able to use the wrong database, for example. And then we haven't mentioned non-deliberate application flaws, which would allow an attacker to do the same, and which are much more likely. If you have strong enforcement and separation in place, a malicious user from one client may not be able to access other clients' data even in the case of some application vulnerabilities, because the connection would be limited to the right database. (Note that even in this case, your application needs access to all client database credentials, so a full breach of your application or server would still mean all client data is lost to the attacker. But not every successful attack ends in total compromise.)
Whether you do this or not should depend on risks. One question for you to answer is how much it would cost you if a cross-client data breach happened. If it's not a big deal, probably separation in business logic is ok. If it means going out of business, it is definitely not enough.
As for whether the user used for the connection should be root: no, definitely not. Following the principle of least privilege, you should use a user that only has the rights it needs, i.e. connecting to that one database and nothing else.
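As an illustrative sketch only (the credential store and all names here are made up, not a drop-in implementation), the central component described above could look roughly like this in a Java backend, with each MongoDB user granted only the readWrite role on its own client database:

import com.mongodb.MongoClientSettings;
import com.mongodb.MongoCredential;
import com.mongodb.ServerAddress;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import java.util.List;
import java.util.Map;

public class ClientDatabaseFactory {

    // Hypothetical per-client credentials; in practice these would come from
    // configuration or a secret store, not a hard-coded map.
    record ClientDbCredentials(String username, String password, String databaseName) {}

    private final Map<String, ClientDbCredentials> credentialStore = Map.of(
            "clientA", new ClientDbCredentials("clientA_user", "s3cret", "clientA_db"));

    // The single, rarely changed, strictly reviewed place that decides which
    // database a given client's session may touch.
    public MongoDatabase databaseFor(String clientId) {
        ClientDbCredentials c = credentialStore.get(clientId);
        MongoCredential cred = MongoCredential.createCredential(
                c.username(), c.databaseName(), c.password().toCharArray());
        MongoClient client = MongoClients.create(MongoClientSettings.builder()
                .applyToClusterSettings(b ->
                        b.hosts(List.of(new ServerAddress("localhost", 27017))))
                .credential(cred)
                .build());
        // The returned handle can only read and write this client's database,
        // because the MongoDB user itself has readWrite on that database only.
        return client.getDatabase(c.databaseName());
    }
}

(In a real application you would cache one MongoClient per client instead of creating one per call; the point is only that the database choice happens in exactly one audited place.)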

RESTfulness violation with regard to the database

From my point of view, a session violates RESTfulness whether it is stored in memory or in a database.
If the session is stored in memory, it is hard to scale the server out using techniques like load balancing, since the servers don't share session data.
Likewise, if the session is stored in a database, the database will be overloaded when many servers make queries simultaneously.
My issue relates to the second case.
For a long time, I had thought that the server and the database are separate things.
My past assumption: when a client makes a request to the server with specific data, the server stores that data in a database like MySQL or Mongo.
So the server doesn't have to care about the client's state, since the database has all control over it.
The server can stand alone against the client's request, since it can query the database whenever it wants to know who the client is.
So my two questions are:
When it is said that a "RESTful server stands alone against the client's request", does that 'server' include the database?
If yes, and if there is a User model and a Post model associated in a one-to-many relation, doesn't that also violate RESTfulness?
I am sure that the second question makes no sense, since if its answer were yes, REST APIs would never have become so useful.
But I cannot understand the difference: why does a session in the database violate RESTfulness while the User-Post relation does not?
I think both of them follow the same procedure: client, server, database.
How can I understand this issue easily?
What's generally meant by statelessness is, summed up: all the information needed to execute the HTTP request is self-contained within the request.
Some implications are that:
I can disconnect the TCP socket and re-open it, or I can keep a TCP connection open and it makes no difference. All the information to execute the request is contained within the request.
In the case of idempotent methods, I can re-do the exact same request and end up with the same state as if I only did it once.
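To make this concrete, here is a minimal sketch (the endpoint and token are hypothetical) of a self-contained request: it can be sent on a fresh connection, or repeated, without assuming any server-side conversation state:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StatelessRequestDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Everything the server needs is inside the request: which resource
        // is addressed, and credentials identifying the caller.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/orders/42"))
                .header("Authorization", "Bearer <token>")
                .GET()
                .build();
        // GET is idempotent: sending it twice leaves the same state as once.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}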
There are more complete descriptions of statelessness and HTTP, but the important thing is that statelessness here does NOT mean that the server cannot have any state at all. Most REST services would be useless if there were no state.
Now to the question of whether having a session violates REST principles. I think it's hard to state this objectively either way. The important part in relation to your question is that to be RESTful you need a concept of resources, a concept of being able to address them, and a concept of transferring state between client and server (there are more things that make up a REST service, but these are the relevant bits here).
I don't think having a means of authentication prevents this; whether authentication is done via an Authorization header or a Cookie header is not really that relevant.
If the session cookies and associated session data start interfering with this process in other ways, it might be possible for a session-related feature to violate REST principles, but I don't think this is generally true.
If there are 'many articles' saying that sessions violate REST, I don't think there is any real basis for this. There are a lot of garbage descriptions of REST going around. I do think it might be bad for other reasons to use cookies for authentication, as it creates potential for security issues.
A RESTful server is separate from the database.
A RESTful server, if there is such a thing, is just a web server.
REST is just an architecture, a methodology if you like, that delivers information from server to client and vice versa over HTTP.

If I have authorization enabled, why is it dangerous to open up MongoDB to all remote IPs?

MongoDB by default only listens to traffic coming in from 127.0.0.1 (and port 27017). I'm the only one who can send traffic to that IP address, so this prevents random internet people from messing with my database and stealing data. Cool. But then I enable authorization by creating an admin user:
mongo
use admin
db.createUser(
  {
    user: "ADMINUSERNAME",
    pwd: "ADMINPASSWORD",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)
exit
and setting security.authorization to "enabled" in /etc/mongod.conf.
Now, if I set net.bindIp in /etc/mongod.conf to 127.0.0.1,<serveripaddress>, and open up port 27017 with ufw allow 27017, what method could attackers use (other than brute force user/password stuff) to break in to my database?
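For reference, the relevant part of my /etc/mongod.conf would then look something like this:

net:
  port: 27017
  bindIp: 127.0.0.1,<serveripaddress>
security:
  authorization: enabled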
Is the recommendation to have an IP white-list just an extra layer of security, or is there something that I'm missing here? Advantages that I can think of:
If an exploit is discovered in MongoDB, you still have an extra layer of defence.
If you have bugs in your code, or mess up something (e.g. accidentally add a user with a weak password), you've got that extra layer.
Safe from brute-force user/password attacks - but assuming my password is 50 random ASCII characters long, this wouldn't be a problem, right? (See the rough estimate after this list.)
Bad actors can't DDoS/flood the MongoDB server directly - but this is easy to solve in other ways I think (fail2ban or something like that).
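(A rough estimate for point #3: 50 characters drawn from the ~95 printable ASCII characters give about 50 × log2(95) ≈ 328 bits of entropy, so online brute force really does look hopeless.)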
So points #1 and #2 seem to be the only real problems - and I can definitely see the dangers there, but am I missing anything else?
Note: I don't think this question is suited to the security stackexchange site because it's a fairly simple program-specific question.
Edit: I was originally saying "authentication" when I meant "authorization".
I thought long about whether to answer the question here or mark it as off-topic, but since "DevOps" seems to be ubiquitous these days, an easily accessible answer might prevent serious damage.
Disclaimer: there are books written about the general topic and a whole industry of engineers concerned with it. Here, only a brief overview and some hints can be given. Furthermore, some topics are heavily simplified. Do not rely solely on the information given here.
Assumption as per best practice: An attacker knows at least as much about your system (network, software, OS etc) as you do.
So, let us recap (some of) the general risks.
Unless you monitor failed login attempts and set up automatic action after a few failed attempts from a specific client (fail2ban being the simplest example), one could brute-force your accounts and passwords.
Furthermore, unless you use TLS, your connections are prone to eavesdropping. When successful, an attacker knows any credentials sent to the server.
Using exploits, for example in the SSH server, an attacker might hijack your machine. This risk increases with every exposed service. This attack is by far the most dangerous, as your data will be compromised and the attacker might use your system to do Very Bad Things™ in your name, and in some jurisdictions even under your legal responsibility.
Whitelisting IPs is surely helpful, but a determined attacker might get around this via IP spoofing. And you have to assume that the attacker knows valid source and destination IPs. Furthermore, when one of the whitelisted IPs is (partially) hijacked, an attack might well originate from there.
Now there are three general ways of dealing with those risks:
Risk mitigation, such as proper patch management and proper firewall setup.
Intrusion prevention, such as fail2ban
Intrusion detection
The latter deserves a bit of explanation. It is considered best practice to assume that sooner or later a successful attack will be mounted against the system in question. It is imperative that a system's administrator can detect the intrusion in order to be able to take countermeasures. However, as soon as an attacker gets root privileges, all countermeasures can be undone. Therefore, it is imperative that an attacker can only acquire non-privileged access during the initial phases of an attack, so you can detect their attempts at privilege escalation. Conversely, this means you have to make absolutely sure that the attacker cannot acquire root privileges during the initial phases of an attack (which is one of the many reasons why no exposed service should ever run as root).
Assuming that you do not control the network, you have to resort to what is called host based intrusion detection or HIDS for short. HIDS in turn belong to two categories: behaviour based HIDS and state based HIDS. OSSEC belongs to the latter category and is (relatively) easy to set up and maintain while offering a broad range of features.
On a different level, exposing a database service to the public internet is almost always a sure sign that something is wrong with the system design. Unless you specifically want to provide database services, there is no reason I can think of for doing so.
If you only want to expose the data: write a REST/SOAP/whatever API for it. If your database servers are only reachable via the public internet for some strange reason: use VPN or stunnel.
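For the stunnel case, a minimal server-side configuration could look something like the sketch below (paths and the service name are illustrative); mongod keeps listening on localhost only, and clients connect to the TLS endpoint instead:

; /etc/stunnel/stunnel.conf on the database host
cert = /etc/stunnel/stunnel.pem

[mongodb-tls]
accept = 0.0.0.0:27018      ; TLS endpoint exposed to clients
connect = 127.0.0.1:27017   ; forwards to the local mongod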
Hint: depending on the application you create or the jurisdiction you live in, exposing the database service to the public internet might violate rules, regulations or even the law
tl;dr: if you have to ask, you probably should not do it.

Revoke JWT session tokens with a blacklist. Should I create another system for the blacklist for performance?

I'm creating a web application (in C++, for performance) where I'm expecting to process a tremendous number of events per second; thousands of them.
I've been reading about invalidating JWT tokens in my web sessions, and the most reasonable solution for that is to have a storage place for blacklisted tokens. The list has to be checked for every request, and what I'm wondering is performance related: Should I create a separate system for storing my blacklisted tokens (like redis)? Or should I just use the same PostgreSQL database I'm using for everything else? What are the advantages of using another system?
The reason I'm asking is that I saw many discussions about invalidating JWT tokens online, and many suggest using Redis (without explaining whether that's just a solution relevant to their design or a replacement for their SQL database server for some reason). Well, why not use the same database you're using for the rest of your web application? Is there a reason that makes Redis better for this?
Redis is a lot faster, since it keeps the data in memory on the server, instead of going through a disk-backed database's connection and query machinery. So if speed is important, Redis is what you would want.
The only negative is that if the server restarts, the blacklisted tokens are gone, unless you save them to disk somewhere (Redis itself can do this via its RDB snapshot or AOF persistence options).
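As a rough sketch of the Redis variant (using the Jedis client; the key prefix is arbitrary), you can give each blacklist entry a TTL equal to the token's remaining lifetime, so the list cleans itself up:

import redis.clients.jedis.Jedis;

public class TokenBlacklist {
    private final Jedis jedis = new Jedis("localhost", 6379);

    // Blacklist a token (by its jti claim) until the moment it would have
    // expired anyway; Redis evicts the key automatically after that.
    public void revoke(String jti, int secondsUntilExpiry) {
        jedis.setex("blacklist:" + jti, secondsUntilExpiry, "1");
    }

    // Called on every request before accepting the JWT.
    public boolean isRevoked(String jti) {
        return jedis.exists("blacklist:" + jti);
    }
}

Getting the same self-expiring behaviour out of a relational table means extra indexes and periodic cleanup jobs, which is one reason Redis keeps being suggested for this.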

How to prevent connection to the server without my client application?

A client asked me to build a back-end server for his iPhone application, and he wants only users who bought the application to be able to call the server.
The problem is that he doesn't want to add a login system to the application, so it seems to me there is no completely safe way to prevent someone without his application from calling the server.
In any case, even if it cannot be completely prevented, it would be sufficient to make it difficult to access the server without the application.
What is the best way to achieve this? Again, I do not need to fully protect the connection; there is no sensitive information in transit. I just want to make things a little more complicated for people who want to take advantage of the server without paying for the application.
The idea that seems simplest is to encrypt the data with a key stored within the client and known to the server, so that the message can be decrypted only by decompiling the code and finding the key (of course, instead of a single key you could use a list of keys that changes every 6/12/24 hours).
Could this be a reasonable solution?
This will never be possible. Welcome to the nature of the client-server architecture. You can never trust the client. Just make sure the functionality you are exposing is safe.
Well, if it's a paid app, you could release the app for free with all the functionality locked down until the user makes an in-app purchase; then you could verify the receipt on your server, thereby proving that the request comes from an iOS device.
Sharing a key between the client and the server seems a good way to go. But instead of depending on the stored keys alone, try combining them with a unique identifier, such as the device UUID, and send both the combined key and the UUID itself to the server.
At that point the user's UUID will be his identifier (username) and the combined key will be his token (password), giving you a login-like mechanism.
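A minimal sketch of that combination (assuming HMAC-SHA256; the secret here is a placeholder, and remember it can be recovered by decompiling the app, so this only raises the bar):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DeviceToken {
    // Embedded shared secret; assume a determined attacker can extract it.
    private static final byte[] SECRET = "app-secret-v1".getBytes(StandardCharsets.UTF_8);

    // Combine the device UUID with the secret; the client sends the UUID plus
    // this token, and the server recomputes the token to check it.
    public static String tokenFor(String deviceUuid) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
        byte[] tag = mac.doFinal(deviceUuid.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(tag);
    }
}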
Isn't an SSL connection enough to prevent other people from getting the URL of the requests? Or, even better, an SSL connection with basic auth?

Multitenancy using LDAP Integration

I need your suggestions on the following multitenancy question:
I need to achieve multitenancy for my app. I already have it working with the traditional approach of a separate DB/schema for each tenant.
Now I need to integrate user validation from LDAP with that multitenancy as well.
What I am thinking is to store the user info plus the DB/schema info (DB connectivity details) in the LDAP server, to make the app more dynamic. With this I would be able to connect to any DB/schema, irrespective of its physical location.
What's your opinion of this approach? Would it really be feasible?
If you can think of any cons, please share.
Thanks & Regards.
It sounds like you are trying to host multiple clients' systems on your system and each client may have multiple connections from multiple locations. From your question it sounds like you are putting one customer per database though databases may not be on the same cluster.
The first thing to do is to lock down PostgreSQL appropriately in pg_hba.conf and expose only those database/user combinations you want to expose. If you are going this route, LDAP sounds sane to me for publishing the connection info, since it means you can control who can access which accounts. Another option, possibly closely tied to it, would be to issue SSL certificates and use certificate authentication for clients, so each client can be issued a cert and use it to connect. You could even authenticate PostgreSQL against LDAP.
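For illustration (tenant names, the address range and the LDAP server are placeholders), the pg_hba.conf entries for those two options might look like:

# TYPE    DATABASE   USER       ADDRESS          METHOD
# LDAP search+bind, exposing exactly one tenant's database:
hostssl   tenant_a   tenant_a   203.0.113.0/24   ldap ldapserver=ldap.example.com ldapbasedn="ou=users,dc=example,dc=com" ldapsearchattribute=uid
# Or client-certificate authentication instead:
hostssl   tenant_b   tenant_b   0.0.0.0/0        cert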
So there are a lot of options here for using these two together once you have things otherwise properly secured. Best of luck.