Should I secure my MongoDB Database?

I am setting up two computers to run a web application. web-host hosts a MongoDB database and a NodeJS web server, while worker runs some more demanding processes and populates the database. An SSH tunnel from worker makes web-host:27017 reachable as localhost:9999 on worker. web-host:80 has been set up to be accessible at http://our.corporate.site/my_site/.
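For concreteness, the tunnel is set up with something like the following (a sketch; the user and host names are placeholders):

# run on worker: forward worker's localhost:9999 to web-host's MongoDB port
ssh -N -f -L 9999:localhost:27017 user@web-host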
At the moment MongoDB has no authentication on it - anything that can contact web-host:27017 can read or write anything to the database.
With this setup, how paranoid should I be about authenticating requests to MongoDB? The answers to this question seemed to suggest not very: considering access is only possible from localhost, it seems about as secure as the local file system. In MySQL I usually have a special 'web' user with limited privileges to limit the damage of an injection attack in case I make a mistake sanitizing input; however, MongoDB seems less vulnerable to injection (or at least easier to sanitize) than MySQL.

Here's the issue: if you do set up Mongo authentication, you will need to store the keys on the machine that accesses it.
So assuming that web-host:80 is compromised, the keys are also vulnerable.
There are some mitigation processes you can use to secure your environment, but there is no silver bullet if an attacker gains root access to your environment.
First, I would consider putting MongoDB on a separate machine on a private internal network that can only be accessed by machines in a DMZ (the part of the network where machines can communicate with both your internal network and the outside world).
Next, assuming you are running a Linux-based system, you should be able to use AppArmor or SELinux to limit which processes are allowed to make outbound network requests. In this case only your webapp process should be able to initiate network requests such as connecting to your Mongo database.
If an attacker was able to get non-root access on your machine, the SELinux/AppArmor system policy would prevent them from initiating a connection to your database from their own script.
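As a concrete illustration, on a stock SELinux targeted policy (assuming a Red Hat-style system where the web server runs in the confined httpd_t domain; a Node process would need an equivalent policy), database access is gated by a boolean:

# allow the confined web server domain to connect out to database ports;
# other confined processes stay blocked by default
sudo setsebool -P httpd_can_network_connect_db on
# inspect the current value
getsebool httpd_can_network_connect_db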
Using this architecture, you should be more secure than if you simply augmented your current architecture with authentication. Choosing between SELinux and AppArmor, I would use SELinux, since it was much more mature and offered much more granular control the last time I checked.

Related

Syncing and mirroring data between 2 servers automatically (cPanel)

I have two servers and both are working fine.
How do I sync all my data from one server to another server/backup-storage/remote-storage?
I also want to know: if one of my servers goes down under heavy load, how do I switch to the second server instantly, and what is the role of DNS in this? If we switch to another server we also have to change the DNS for that website, so how do we get around that?
You can check out the Cloudflare load balancer.
Architecturally you have two problems to solve:
Load balancing (how clients are routed to one of the servers). This sometimes involves DNS settings, but because Cloudflare hosts your DNS as well, you are covered.
Synchronization: keeping files and the database in sync between hosting accounts. Here there is no standard way to go, especially because you are hosted using cPanel.
DATABASE:
You can't use master-master or master-slave database replication mechanisms like Galera Cluster provides.
Your best bet is a cron job that exports the database from one server to the other (using mysqldump: basically exporting and then importing).
on live:
mysqldump -u userName -p yourLiveDatabaseName > live_database_export.sql
on the hot backup (your other account):
mysql -u username -p yourOtherServerDatabaseName < live_database_export.sql
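If both accounts allow SSH, the two steps can be combined into a single cron job on the live server that streams the dump straight into the backup database (a sketch; the user, host, and database names are placeholders):

# nightly at 02:00: dump on live, pipe over SSH, import on the backup
0 2 * * * mysqldump -u userName -pYourPassword yourLiveDatabaseName | ssh backupUser@backup-host "mysql -u username -pYourPassword yourOtherServerDatabaseName"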
FILES:
If you have SSH access, use rsync.
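For example (a sketch; the paths and host are placeholders, and --delete makes the backup mirror the live side exactly):

# mirror the web root to the backup account over SSH
rsync -avz --delete /home/liveuser/public_html/ backupUser@backup-host:/home/backupuser/public_html/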
Otherwise you may need to invent something.
For instance, you can check the cPanel API with regard to account transfers; that will solve the database part as well: https://api.docs.cpanel.net/openapi/whm/operation/create_remote_user_transfer_session/
As a remark: you are not in the best position to do HA with two cPanel shared accounts. What I usually do is use virtual machines that are synced at the hypervisor level.

What does "Authentication partially enabled" mean on MongoDB?

I ran a scan on Shodan for my server IP and noticed it listed my MongoDB with "Authentication partially enabled".
Now, I can't find out what it actually means. I am sure I set up authentication the right way, but the word "partially" concerns me.
It means you have a MongoDB database with authentication enabled.
I guess Shodan uses this fancy wording to highlight that the database is still listening on the externally facing interface, i.e. you can connect to the database with the command
mongo <your IP>
from anywhere.
There are commands that don't require authentication, e.g.
db.isMaster()
db.runCommand({buildInfo: 1})
db.auth()
....
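For example, anyone on the internet can read your build details without credentials (a sketch, reusing the placeholder from above):

mongo <your IP> --eval "db.runCommand({buildInfo: 1})"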
It leaves room for exploitation of vulnerabilities, brute-force attacks, etc.
The server responds to the connection request, which exposes the fact that you are using Mongo. The version of your server, SSL libraries, compilation options, and other information advertised by the server can be used to search for known or 0-day vulnerabilities.
You can see what info is exposed on Shodan: https://www.shodan.io/search?query=mongodb+server+information. Compare it with the amount of information available for hosts without "Authentication partially enabled".
The most popular way to harden a MongoDB setup is to make it accessible from the local network/VPC/VPN only. If the nature of your business requires bare Mongo to be accessible from the internet, hide it behind a firewall that allows connections only from known IPs. In both cases you will be completely invisible to Shodan and similar services.
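A rough sketch of both options, assuming a standard mongod install with a YAML config and ufw as the firewall (the addresses are placeholders):

# /etc/mongod.conf: listen only on loopback and the private interface
#   net:
#     bindIp: 127.0.0.1,10.0.0.5
# and/or allow the known client IP before denying everyone else
sudo ufw allow from 203.0.113.7 to any port 27017
sudo ufw deny 27017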

Does CouchDB need to be hosted along with my website, or am I going to work with it as a local server on my network or computer?

I'm learning the concepts of NoSQL databases, especially CouchDB. I have a doubt that may sound stupid, but I have not found answers on the internet: where does CouchDB run? On a regular web hosting service or on my local network (e.g. my computer and my localhost)?
CouchDB can be installed as a single node on any computer, including your local machine. CouchDB may also be used in clustered mode.
HTTP is used to write data (documents) to and request information from the database. Therefore, the database may be hosted along with your website, but it doesn't have to be; it all depends on your use case. The only important thing is that your web application knows the host name, port number, and credentials that allow it to access CouchDB over HTTP.
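Because the interface is plain HTTP, you can exercise it with curl from wherever the application runs (a minimal sketch against a local single node; the credentials and database name are placeholders):

# create a database
curl -X PUT http://admin:secret@localhost:5984/mydb
# store a document
curl -X POST http://admin:secret@localhost:5984/mydb -H 'Content-Type: application/json' -d '{"name": "test"}'
# read everything back
curl http://admin:secret@localhost:5984/mydb/_all_docs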

Can I create a remote server with MongoDB? How?

My question, to be clearer, is about creating a server with MongoDB on cloud hosting (for example) and accessing it from another server.
Example:
I have a mobile app.
I host my MongoDB on cloud hosting (Ubuntu).
I want to connect my app to the db on that cloud server.
Is it possible? How?
I'm just getting into this, and my question was exactly about setting up MongoDB as a server I could access remotely.
Outside of "localhost"? Different from all the tutorials I've seen.
From what you are describing, I think you want to implement a 2-Tier-Architecture. For practically all use cases, don't do it!
It's definitely possible, yes. You can open up the MongoDB port in your firewall. Let's say your computer has a fixed IP or a fixed name like mymongo.example.com. You can then connect to mongodb://mymongo.example.com:27017 (if you use the default port). But beware:
1) Security: You need to make sure that clients can only perform the operations you want to allow, e.g. by using MongoDB's integrated authentication; otherwise some random script kiddie will steal your database, delete it, or fill it with random data. Many servers, even if they don't host a well-known service, get attacked thousands of times per day. Also, you probably want to encrypt the connection so people can't spy on it. And to make it all worse, you will have to store the database credentials in your client app, which is practically impossible to do in a truly secure way.
2) Software architecture: There are a ton of arguments against this architecture, but 1) alone should be enough. You never want to couple your client to the database, be it because of data migrations, software updates, security considerations, etc.
3-Tier
So what should you do instead? Use a 3-tier architecture: host a server of some kind on mymongo.example.com that then connects to the database. That server could be implemented with nginx/node.js, IIS/ASP.NET, Apache/PHP, or whatever. It could even be a plain old C application (like many game servers).
The MongoDB instance can still reside on yet another machine, but when you use a server in between, the database credentials are only known to that server, not to all the clients.
Yes, it is possible. You would connect to MongoDB using the IP address of your host, or preferably its fully qualified hostname, rather than "localhost". If you do that, you should secure your MongoDB installation; otherwise anyone would be able to connect to your MongoDB instance. At an absolute minimum, enable MongoDB authentication. You should read up on MongoDB Security.
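A minimal sketch of that bare minimum, assuming a fresh deployment still reachable from localhost (the user name and password are placeholders):

# create an administrative user while unauthenticated access is still possible
mongo admin --eval 'db.createUser({user: "admin", pwd: "change-me", roles: [{role: "userAdminAnyDatabase", db: "admin"}]})'
# then turn authorization on in /etc/mongod.conf:
#   security:
#     authorization: enabled
sudo systemctl restart mongod
# after the restart, clients must authenticate, e.g.:
#   mongo mymongo.example.com:27017 -u admin -p --authenticationDatabase admin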
For a mobile application, you would probably have some sort of application server in front of MongoDB, i.e. your mobile application would not be connecting to MongoDB directly. In that case only your application server would be connecting to MongoDB, and you would secure MongoDB accordingly.

EC2: can I host an HTTP server there?

Does anyone have experience deploying GWT apps to EC2?
If I were to install Tomcat or Apache on an EC2 instance, could I have users connect directly to a URL pointing there?
Would that be cost effective, or would java hosting services be best?
Is there any downside to hosting the edge HTTP server on a regular hosting service and having it direct requests to EC2? Is performance ever an issue here?
Other answers are correct, but I just wanted to share the fact that we are developing a product that is 100% EC2/S3 based, with a pure GWT front end.
We use Maven 2 for builds and the excellent gwt-maven plugin, which makes it easy to produce a WAR package of our web application as output. We use Jetty, but Tomcat would work just as well.
We have Pound (an HTTP accelerator/load balancer) running on the VM listening for HTTP & HTTPS, which then forwards requests to lighttpd (static content) or Jetty (the app). This also simplifies SSL certificates, because Pound handles SSL. I've found Java servers have always been a pain to configure with SSL certs.
Yes, you can host pretty much whatever you want, as you effectively have a dedicated Linux machine at your command.
As I recall, the basic rate for an EC2 instance on their "low end box" worked out to around $75/month, so you can use that as a benchmark against other vendors. That also assumed the machine is up 24x7 (since you pay for it by the hour).
The major downside of an EC2 instance is simply that it can "go away" at any time, and when it does, any data written to your instance will "go away" as well.
That means you need to set it up so that you can readily restart the server, but also so that you offline any data that you generate and wish to keep (either to one of Amazon's other services, like S3, or to some other external service). That will incur some extra cost depending on volume.
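For instance, offlining generated data could be as simple as a cron entry like this (a sketch, assuming the AWS CLI is installed and configured; the bucket and path are placeholders):

# nightly at 03:00: mirror generated data to S3 so it survives instance loss
0 3 * * * aws s3 sync /var/data s3://my-backup-bucket/data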
Finally, you will also be billed for any traffic to the service.
The thing to compare it against is another "virtual server" from some other vendor. There are a lot of interesting things that can be done with EC2, but it may well be easier to go with a dedicated virtual hosting service if you're just using a single machine.
Others have given good answers. I would have to add that you need to spend programmer time getting to know EC2's quirks and addressing them (e.g. with EBS). It's not completely trivial, and though it is useful knowledge to have and may be worth it for that reason alone, if you want to get up and running quickly with just a few servers, you should probably look at other hosted options.
On the other hand, if you plan to scale up massively enough (eventually hosting many servers on EC2) then I would highly recommend it. You have to architect a few things, but you need to do that anyways. The flexibility of on-demand computing, and the generally low price, makes this a killer platform once you reach a certain scale of operation.
You definitely can host an HTTP server in EC2, but you need to take the following into consideration:
As mentioned before, the cost can be much higher than with alternative hosting solutions.
Your instance (the machine you've started in EC2) can go down unexpectedly. There is no guarantee from Amazon of 24x7 availability. This means that the data you've stored in local storage will be lost, and when you start a new instance it will get a new IP.
To successfully host a server in EC2, you therefore need to employ some other services from Amazon. You need an Elastic IP so that you can circumvent the new-IP-address problem. You can also use Elastic Block Storage (EBS): a service that lets you mount a disk in your machine that will not go away when your instance is lost.
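As a sketch with today's AWS CLI (all IDs are placeholders):

# point a stable Elastic IP at the (possibly new) instance
aws ec2 associate-address --allocation-id eipalloc-0123456789abcdef0 --instance-id i-0123456789abcdef0
# attach a persistent EBS volume so data survives instance loss
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf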