What does "Authentication partially enabled" mean on MongoDB? - mongodb

I ran a scan on Shodan for my server IP and I noticed it listed my MongoDB with "Authentication partially enabled".
Now, I can't find what it actually means. I am sure I set up authentication the right way, but the word "partially" concerns me.

It means you have a MongoDB database with authentication enabled.
I guess Shodan uses this fancy wording to highlight that the database is still listening on the externally facing interface, i.e. you can connect to the database with the command
mongo <your IP>
from anywhere.
There are commands that don't require authentication, e.g.
db.isMaster()
db.runCommand({buildInfo: 1})
db.auth()
....
It leaves room for exploitation of vulnerabilities, brute-force attacks, etc.
The server responds to the connection request, which exposes the fact that you are using Mongo. The version of your server, SSL libraries, compilation options, and other information advertised by the server can be used to search for known or 0-day vulnerabilities.
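For example, a quick way to see what an unauthenticated client can learn is to run a couple of those commands yourself (a rough sketch; <your IP> is a placeholder and the exact output depends on the server version):
mongo --host <your IP> --eval 'printjson(db.isMaster())'
mongo --host <your IP> --eval 'printjson(db.runCommand({buildInfo: 1}))'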
You can see what info is exposed on Shodan: https://www.shodan.io/search?query=mongodb+server+information. Compare it with the amount of information available for hosts without "Authentication partially enabled".
The most popular way to harden a MongoDB setup is to make it accessible from the local network/VPC/VPN only. If the nature of your business requires bare Mongo to be accessible from the internet, hide it behind a firewall and allow connections only from known IPs. In both cases you will be completely invisible to Shodan and similar services.
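For example, a minimal sketch assuming a Linux host with ufw (the addresses are placeholders):
# Bind mongod to loopback/private interfaces only (in mongod.conf or on the command line)
mongod --auth --bind_ip 127.0.0.1,10.0.0.5
# Or, if it must stay on a public interface, allow only known client IPs
ufw default deny incoming
ufw allow 22/tcp                                  # keep your own SSH access open
ufw allow from 203.0.113.10 to any port 27017     # known application server
ufw enable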

Related

How to Confirm PostgreSQL on Ubuntu VM is communicating with External Server for Updates

I have an Ubuntu VM installed on a client's VMware system. Recently, the client's IT informed us that his firewall has been detecting consistent potential port scans to our VM's internal IP address (coming from 87.238.57.227). He asked if this was part of a known package update process on our VM.
He sent us a firewall output where we can see several instances of the port scan, but there are also instances of our Ubuntu VM trying to communicate back to the external server on port 37258 (this is dropped by the firewall).
Based on a Google lookup, the hostname of the external IP address is "feris.postgresql.org", with the ASN pointing to a European company called Redpill-Linpro. As far as I can tell, they offer IT consulting services, specializing in open source software (like PostgreSQL, which is installed on our VM). I have never heard of them before though and have no idea why our VM would be communicating with them or vice-versa. I'm also not sure if I'm interpreting the IP lookup information correctly: https://ipinfo.io/87.238.57.227
I'm looking for a way to confirm or disprove that this is just our VM pinging for a standard postgres update. If that's the case I'd like to restrict this behaviour. We would prefer to do these types of updates manually and limit the communication outside of the VM to what is strictly necessary for the functionality of our application.
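One low-impact way to check from the VM itself is something like the following (a sketch; it assumes a standard Ubuntu apt setup):
# Is apt.postgresql.org configured as a package source?
grep -r "apt.postgresql.org" /etc/apt/sources.list /etc/apt/sources.list.d/
# Does its address match the IP the firewall is reporting?
host apt.postgresql.org
# Is periodic/unattended apt activity enabled?
systemctl list-timers apt-daily.timer apt-daily-upgrade.timer
cat /etc/apt/apt.conf.d/20auto-upgrades    # if present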
Update
I sent an email to Redpill's abuse account. They responded quickly saying that the server should not be port scanning anyone and if it appears that way, something is wrong.
The server is part of a cluster of machines that serves apt.postgresql.org among other Postgres download sites. I don't think we have anything like Ansible or Puppet installed that would automatically check for updates, but I will look into that to make sure. I'm wondering if Ubuntu reaching out to update the MOTD with the number of available packages would explain why our VM is trying to reach out to the external Postgres server?
The abuse rep said in any case there should only be outgoing connections from the VM, not incoming. He asked for some additional info, so I will keep communicating with him and try to update this post accordingly.
My communication with the client's IT dropped off so I did not get a definitive answer on this, but I'll provide some new details:
I reached out to the abuse email for Redpill-Linpro. He got back to me and confirmed the server corresponding to the detected IP address is part of a cluster that hosts postgres download sites, including apt.postgresql.org. He was surprised to learn we had detected a port scan from their server and seems eager to figure out why that is happening.
He asked if the client IT could pass along some necessary info for them to set up tracking on that server. But the client IT never got back to me. I think he was satisfied that it wasn't malicious and stopped pursuing it.
Here's one of the messages the abuse rep sent me that may be relevant:
That does look a lot like the tcp to the apt download server yes. It's
strange that your firewall reports that many incoming connections, but
they could be fallout from some connection tracking that's not
operating as intended. The timing appears to be matching up more or
less perfectly. And there should definitely not be any ping-back
connections from it.
Since you appear to be using the http version of the server (and not https) bringing the data in cleartext, they should be able to just
dump the TCP connection contents and verify exactly what it does. But
I bet they are going to see a number of http requests initiated by the
apt client that is checking for updates.
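If the client's IT (or we) want to verify that directly, a capture along these lines should show whether it is plain apt-over-HTTP traffic (a sketch; the interface name is a placeholder):
tcpdump -i eth0 -A host 87.238.57.227 and port 80
# Expect HTTP GET requests with a User-Agent along the lines of "Debian APT-HTTP/..."
# if this is the apt client checking for updates.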

Should I secure my MongoDB Database?

I am setting up two computers to run a web application. web-host hosts a MongoDB database and a NodeJS web server, while worker runs some more demanding processes and populates the database. Using an SSH tunnel from worker, web-host:27017 is accessible as localhost:9999 on worker. web-host:80 has been set up to be accessible on http://our.corporate.site/my_site/.
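For reference, the tunnel described here would look roughly like this (assuming default ports; user@web-host is a placeholder):
ssh -N -L 9999:localhost:27017 user@web-host   # run on worker: forward local 9999 to web-host's 27017
mongo --port 9999                              # then, still on worker, connects through the tunnel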
At the moment MongoDB has no authentication on it - anything that can contact web-host:27017 can read or write anything to the database.
With this setup, how paranoid should I be about authenticating requests to MongoDB? The answers to this question seemed to suggest not very. Considering access is only possible from localhost it seems about as secure as the local file system. In MySQL I usually have a special 'web' user with limited privileges to limit the damage of an injection attack in case I make a mistake sanitizing input, however MongoDB seems less vulnerable to injection (or at least easier to sanitize) compared with MySQL.
Here's the issue: if you do set up Mongo authentication, you are going to need to store the keys on the machine that accesses it.
So assuming that web-host:80 is compromised, the keys are also vulnerable.
There are some mitigation processes you can use to secure your environment, but there is no silver bullet if an attacker gains root access to your environment.
First I would consider putting mongodb on a separate machine on a private internal network that can only be accessed by machines in a DMZ (the part of the network where machines can communicate with your internal network and the outside world).
Next, assuming you are running a Linux-based system, you should be able to use AppArmor or SELinux to limit which processes are allowed to make outbound network requests. In this case only your webapp process should be able to initiate network requests such as connecting to your Mongo database.
If an attacker was able to get non-root access on your machine, the SELinux/AppArmor system policy would prevent them from initiating a connection to your database from their own script.
Using this architecture, you should be more secure than simply augmenting your current architecture with authentication. In a choice between SELinux and AppArmor, I would use SELinux, since it was much more mature and had much more granular control the last time I checked.
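If you do end up layering authentication on top of that, the MongoDB analogue of the limited MySQL 'web' user mentioned in the question is a least-privilege account; a rough sketch in the mongo shell, assuming the application's database is called appdb (a placeholder name):
use appdb
db.createUser({
  user: "webapp",                    // hypothetical account name
  pwd: "use-a-long-random-secret",   // keep this in the app's config, not in code
  roles: [ { role: "readWrite", db: "appdb" } ]
})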

How can one connect from Heroku to a firewalled host to get data from MongoDb?

I am currently developing a service application that pulls data from Mongo and returns it to consumers. There is a layer of authentication involved and I am using Heroku to host the service. Mongo was being hosted on MongoLabs, but there were some significant performance concerns, so we have moved to hosting Mongo on one of our cloud servers. We want to be able to secure access to Mongo using a firewall, whitelisting the IP address of the service app on Heroku.
There are a couple of issues with this.
Issues
Well, at least these are the main ones...
Heroku, while providing some nice features like easily managing cluster settings, s/w upgrades, etc., draws IP addresses from a pool. While the DNS value of an application's URL may not change, the underlying IP address can and will change.
To be better secured, mongo-server01 is placed behind a firewall that requires rules to be added using static IP addresses to allow access.
Since Heroku can't provide static IP addresses, we need to consider options for how Heroku can access mongo-server01 while still protecting the data it hosts.
Static IP addresses for outbound requests
There seem to be a couple of options, specifically for Heroku. Fixie and QuotaGuard Static both seem to serve that function, but these seem to be geared toward HTTP and HTTPS communication only (perhaps not even HTTPS).
Mongo doesn't use HTTP; it uses its own network protocol over port 27017 by default.
https://groups.google.com/forum/embed/#!topic/mongodb-user/eX_RIv2cZVw
Does this mean these proxies won't work for calls to Mongo? In theory, there doesn't seem to be any reason that a proxy is only for HTTP or HTTPS requests. That being said, there doesn't seem to be any way to get into these Heroku plugins and configure the proxy to use a different port or to handle Mongo's particular protocol.
If we could get into the proxy, perhaps we could put an additional set of SSH keys in place so the SSH tunnel chain could continue on to mongo-server01. But there doesn't seem to be any way to SSH to these proxies or access configuration through the plugin dashboards.
The question (finally!)
How can one connect from Heroku to a firewalled host to get data from MongoDb? Are there proxies that can be used to achieve this?
The simple approach: won't work because Heroku applications don't use static IP addresses.
Using a proxy: the Heroku proxy plugins don't know how to proxy the MongoDB protocol, and SSH keys can't be installed on the proxy for SSH tunneling.
What can be done to get a connection without opening up the Mongo server to the world?
I spoke with the folks at QuotaGuard and they do have something that does the trick.
we offer a SOCKS proxy which should do the trick as it proxies at the TCP layer
https://devcenter.heroku.com/articles/quotaguardstatic#socks-proxy-setup
I did need to make a simple change to bin/qgsocksify
#SOCKS_DIR="$(dirname $(dirname $(readlink -f ${BASH_SOURCE[0]})))/vendor/dante"
SOCKS_DIR="${HOME}/vendor/dante"
After that, the proxy worked like a charm.
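For anyone following the same route: the wrapper is then used to launch the dyno process so that its outbound TCP traffic, including the MongoDB connection, goes through the static proxy. Roughly, in the Procfile (node server.js is a placeholder for your own start command):
web: bin/qgsocksify node server.js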

Can I create a remote server with MongoDB? How?

To be more clear, my question is about creating a server with MongoDB on a cloud host (for example) and accessing it from another server.
Example:
I have a mobile app.
I hosted my MongoDB on a cloud host (Ubuntu).
I want to connect my app to the DB on the cloud server.
Is it possible? How?
I'm just getting started learning this, and my question is exactly about setting up MongoDB as a server in a way that I can access remotely.
Outside of "localhost"? Different from all the tutorials I've seen.
From what you are describing, I think you want to implement a 2-Tier-Architecture. For practically all use cases, don't do it!
It's definitely possible, yes. You can open up the MongoDB port in your firewall. Let's say your computer has a fixed IP or a fixed name like mymongo.example.com. You can then connect to mongodb://mymongo.example.com:27017 (if you use the default port). But beware:
1) Security: You need to make sure that clients can only perform those operations that you want to allow, e.g. using MongoDB's integrated authentication; otherwise some random script kiddie will steal your database, delete it, or fill it with random data. Many servers, even if they don't host a well-known service, get attacked thousands of times per day. Also, you probably want to encrypt the connection so people can't spy on it. And to make it all worse, you will have to store the database credentials in your client app, which is practically impossible to do in a truly secure way.
2) Software architecture: There is a ton of arguments against this architecture, but 1) alone should be enough. You never want to couple your client to the database, be it because of data migrations, software updates, security considerations, etc.
3-Tier
So what to do instead? Use a 3-Tier-Architecture: Host a server of some kind on mymongo.example.com that then connects to the database. That server could be implemented in nginx/node.js, iis/asp.net, apache/php, or whatever. It could even be a plain old C application (like many game servers).
The MongoDB instance can still reside on yet another machine, but when you use a server, the database credentials are only known to the server, not to all the clients.
Yes, it is possible. You would connect to MongoDB using the IP address of your host, or preferably using its fully qualified hostname rather than "localhost". If you do that, you should secure your MongoDB installation; otherwise anyone would be able to connect to your MongoDB instance. At an absolute minimum, enable MongoDB authentication. You should read up on MongoDB Security.
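For example, once authentication is enabled, a connection from another machine looks roughly like this (a sketch; the hostname, user, and database are placeholders):
mongo --host mymongo.example.com --port 27017 -u appUser -p --authenticationDatabase appdb
# -p with no value makes the shell prompt for the password instead of putting it on the command line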
For a mobile application, you would probably have some sort of application server in front of MongoDB, e.g. your mobile application would not be connecting to MongoDB directly. In that case only your application server would be connecting to MongoDB, and you would secure MongoDB accordingly.

Using any/fake domains with ejabberd

I've recently purchased a cloud server which has a public IP and I am using it to host an XMPP server.
My first task was to ensure my users connected using my subdomain - as an example, m.chat.com.
In my configuration I have the following:
%% Hostname
{hosts, ["m.chat.com"]}.
I then created an admin user with that domain.
In parallel, I have created the following DNS record with my hosting provider, HostGator, for my subdomain m.chat.com:
Name TTL Class Type Record
m.chat.com 14400 IN A [IP of the server]
One thing that puzzled me was my ability to access the ejabberd web admin console. This was achieved via [IP of the server]:5280/admin; however, I could not access it via m.chat.com:5280/admin.
That aside, inside the web console, under "Virtual Hosts" I could see the host "m.chat.com". I created a user "user@m.chat.com" and tried to connect via Adium.
Inside Adium, simply typing in user@m.chat.com with the password did not work. Instead I had to also specify the "Connect server", which in this case was the [IP of the server].
It has connected fine and I have registered other users to check everything is working and it is.
Then I thought I'd go back to the ejabberd configuration and start messing around. I changed the hostname to the following:
%% Hostname
{hosts, ["m.chat.com", "facebook.com"]}.
I registered a user with that domain and restarted ejabberd. Upon checking the web console, to my surprise, I could see the Virtual host "facebook.com". I tested this user in Adium with the [IP of the server] defined in the "Connect server" section and it connected fine. I asked other people with their own internet connections to use this account on their PCs and they were able to connect too.
Story over - my question to everyone is: how is this possible? Am I missing something? Is there no domain authentication? After searching online, it seems you can even use fake domains.
If I am to operate my own service in the future (iOS chat app) I do not want anyone using my domain names with their own public servers.
Can someone shed some light?
Thanks!
Edit: A second question - preferably I do not want to have to define the "Connect Server" when using a client. I would like the client to recognise the @m.chat.com domain and establish a connection to the server's IP automatically. Have I configured my DNS record correctly? For anyone else using HostGator, is there an additional task I must do?
Edit: I can now access the web console via m.chat.com:5280/admin and I no longer have to specify the Connect server when using a client. I didn't do anything; I think it was a case of HostGator updating the DNS or something - they say it usually takes 4 hours. However, I am still slightly puzzled as to why I can create accounts with the facebook.com domain. I understand that because I cannot access the DNS admin for this domain I cannot create any records, but that does not prevent me from using the domain and just specifying a Connect server.
Your initial problems (unable to access the server by using m.chat.com) were almost certainly DNS issues, and it seems you have isolated that down to the time taken to update the record.
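For completeness on the "Connect server" point: XMPP clients normally locate the server for a domain via DNS SRV records, so if the XMPP service ever runs on a machine whose name differs from the JID domain, records along these lines are the standard way to avoid setting a connect host manually (a sketch in zone-file form, reusing the names from the question):
_xmpp-client._tcp.m.chat.com. 14400 IN SRV 0 5 5222 m.chat.com.
_xmpp-server._tcp.m.chat.com. 14400 IN SRV 0 5 5269 m.chat.com.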
Your second question - about the fact that you can name virtual hosts without restriction, is simple but interesting. What makes you think there should be any kind of restriction? It would be like you dictating that I can't save "m.chat.com" in a file on my disk, or that I can't send "m.chat.com" in a message across the internet.
This is why DNS exists and is structured the way it is. Although I can tell my server that it hosts facebook.com, nobody will connect to it because the DNS record for facebook.com does not point at my server (users generally don't set the "connect host" manually). Which begs the question... why would I want to tell my server it hosts facebook.com, and if I did, why should Facebook care?
An additional, but relevant, identity layer on top of DNS is certificates - which clients should validate against the virtual host name regardless of any "connect host" that is set. Since it's not possible for you to obtain a certificate for facebook.com, clients should generally pop up warnings or fail to connect at all. If they don't, they're probably not validating the certificate correctly.