We have a recently sharded MongoDB cluster. Before sharding, all users connected to one of the secondaries for read-only access. We need similar read-only access now that users connect to 'mongos' (after sharding). One option is to enable authentication and add user roles, but that would mean changing Java code in some app modules that connect to the mongos using the Java connector.
Is there a way to obtain read-only access without enabling authentication?
You can only create read-only roles by enabling authentication. If you do not want to enable authentication, your human users will have to specify their read preference explicitly when connecting. As they may forget to do this, I would advise you to enable authentication. It will allow you more granular access control in the future and will allow users to continue reading the data when the secondary servers are down (e.g. during maintenance).
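For example, once authentication is enabled, a read-only account could be created from the mongo shell along these lines (a minimal sketch; the user name and password are placeholders):

    use admin
    db.createUser({
      user: "readonlyUser",               // placeholder account name
      pwd: "use-a-strong-password-here",  // placeholder password
      roles: [ { role: "readAnyDatabase", db: "admin" } ]
    })

Without authentication, the alternative is for users to connect with an explicit read preference, e.g. a connection string like mongodb://mongos-host:27017/?readPreference=secondaryPreferred, but note that a read preference only routes reads to secondaries; it does not actually prevent writes.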
Here is my scenario: I have a webapp and MongoDB running on the same host, and I have not enabled authorization in mongod.conf, so my webapp connects to MongoDB without any authentication. Now I want to provide access to MongoDB for a certain group of people who will connect from outside. Since those connections will be made from outside, I need to enable authentication. But if I enable authentication, the webapp will not be able to connect to MongoDB (it assumes MongoDB is running on localhost and does not require authentication). I do not want to change the webapp to connect to MongoDB with authentication. So I want to disable authentication only for connections from localhost. Is that possible?
No, it's not possible as of MongoDB 3.0.
The only case where the localhost authentication bypass applies is when there are no configured users; it is controlled by the enableLocalhostAuthBypass parameter (enabled by default).
Your scenario can only be solved by creating multiple users / roles with different privileges, as sketched below.
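For illustration, a minimal sketch of such users in the mongo shell, assuming a hypothetical database named webappdb (all names and passwords are placeholders):

    use webappdb
    // read-write account for the webapp
    db.createUser({
      user: "webapp",
      pwd: "webapp-password",
      roles: [ { role: "readWrite", db: "webappdb" } ]
    })
    // read-only account for the outside users
    db.createUser({
      user: "external",
      pwd: "external-password",
      roles: [ { role: "read", db: "webappdb" } ]
    })

The webapp's connection string would then have to include its credentials, which is precisely the change the question hopes to avoid; once authentication is enabled there is no way around that.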
At the beginning of the year, lots of MongoDB databases were hacked, including mine. Yesterday I noticed that my brand new database with authorization enabled was hacked as well. The username and password are very secure (a 16+ character password with random characters and symbols).
I've now decided to fully secure it, but I honestly don't know where to proceed. I already have:
security:
authorization: enabled
and that should be enough (after sudo service mongod restart). I only have one database and no admin user, but anonymous access from a remote connection is still allowed. I keep reading in many places that I should run mongod with --auth, but also that it's the same as enabling authorization as I've done above.
At this point I'm struggling to disable anonymous access to the server. What did I miss? Why can I authenticate without an account?
To enable security you'll want to follow the Security Checklist on the MongoDB Website.
There you are provided with role-based authorization and authentication instructions. It's also advised that you disable listening on all network interfaces and bind MongoDB's ports only to the interfaces you'd like exposed.
For a guide to network hardening, you will want to review these instructions, but the most important aspect is to avoid unwanted network exposure. Consider using a firewall or security groups (if in the cloud).
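As a rough sketch of those two points combined, a mongod.conf might look like this (the second bindIp address is a placeholder for an internal interface):

    security:
      authorization: enabled
    net:
      port: 27017
      bindIp: 127.0.0.1,10.0.0.5   # placeholder; never bind 0.0.0.0 on a public host

After restarting, connect from localhost and create the first administrative user with db.createUser. Note that with authorization enabled but no users created yet, the localhost exception still allows unauthenticated access from the local machine, so creating that first admin user is what actually closes the gap.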
I've got a Spring Boot app which is connected to MongoDB Atlas.
Everything is working locally.
I now want to publish this to Pivotal Cloud Foundry (PCF).
Secure connection between PCF and Atlas
In MongoDB Atlas I need to open up the firewall and allow certain IP addresses.
How should I configure MongoDB Atlas to connect to PCF in the most secure way?
Autoconfigure getting in the way
Cloud Foundry is overriding my connection URLs to point to localhost:27017 instead of my Atlas cluster.
What is the recommended way to connect to MongoDB Atlas?
In MongoDB Atlas I need to open up the firewall and allow certain IP addresses. How should I configure MongoDB Atlas to connect to PCF in the most secure way?
Whitelisting IP addresses for applications that run on CF is not particularly effective. The reason it's not effective is that you don't know the IP address you'll be connecting from, because that depends on where Diego decides to run your application. In other words, it depends on the cell where your application is told to run. To compound matters, it will change when you restart / restage your application.
Because the IP can vary, what you end up needing to do is whitelist all of your cells. The problem with this, and why it's not effective, is that you end up whitelisting every app running on the platform.
What you can do to improve security a bit is to make use of application security groups (ASGs). ASGs can be used to limit outgoing traffic, and you can control them at the space level. That means you can configure your default running security group to not allow access to your MongoDB server, but allow access for individual spaces by binding an ASG only to those spaces with apps that need to talk to your MongoDB servers.
The downside of this approach is that it requires you to be a platform administrator, which means it will only work if you own your CF installation (it's not going to work for public providers).
More on ASGs here: https://docs.cloudfoundry.org/adminguide/app-sec-groups.html
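As an illustration of the ASG approach (the destination address, group, org, and space names are placeholders, and the exact cf CLI syntax varies by version):

    # mongo-asg.json: allow outbound traffic only to the MongoDB server
    [
      { "protocol": "tcp", "destination": "203.0.113.10/32", "ports": "27017" }
    ]

    # create the ASG and bind it to just the space that needs it
    cf create-security-group mongo-access mongo-asg.json
    cf bind-security-group mongo-access my-org my-space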
For public providers, you can use a proxy. To make this work, you need to configure your application to talk through the proxy when it attempts to access your MongoDB servers. You control the proxies, which have fixed IPs, so you can whitelist the proxies to allow access to just your app. If you don't want to run your own proxy servers, there are public proxy providers that you can use.
Cloud Foundry is overriding my connection URLs to point to localhost:27017 instead of my Atlas cluster. What is the recommended way to connect to MongoDB Atlas?
It's possible to disable auto configuration. One way is described in the docs here. If you include the Spring Cloud Connectors dependencies and use them manually, then the auto configuration will not run.
https://docs.cloudfoundry.org/buildpacks/java/spring-service-bindings.html#manual
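For instance, a minimal sketch with the Connectors used manually might look like this, assuming your Atlas cluster is exposed to the app as a bound (e.g. user-provided) service and the Spring Cloud Connectors dependencies are on the classpath (the class name is made up):

    import org.springframework.cloud.config.java.AbstractCloudConfig;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.mongodb.MongoDbFactory;

    // Once Connectors are used explicitly, the buildpack's auto
    // reconfiguration detects them and backs off.
    @Configuration
    public class CloudMongoConfig extends AbstractCloudConfig {

        @Bean
        public MongoDbFactory mongoDbFactory() {
            // Resolves the bound MongoDB service and builds the factory from it
            return connectionFactory().mongoDbFactory();
        }
    }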
The other option is to tell the Java buildpack not to install the auto-reconfiguration. You can do that by setting the following environment variable for your application, either with cf set-env or via a manifest.yml file.
Ex: JBP_CONFIG_SPRING_AUTO_RECONFIGURATION='[enabled: false]'
Be careful if you do this as it will disable everything provided by the auto reconfiguration, which includes setting the "cloud" profile for your app. If you use this option to disable auto reconfiguration, you'll probably also want to set SPRING_PROFILES_ACTIVE='cloud' to manually enable the cloud profile.
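In a manifest.yml, that combination might look like this (the app name is a placeholder):

    applications:
    - name: my-spring-app
      env:
        JBP_CONFIG_SPRING_AUTO_RECONFIGURATION: '[enabled: false]'
        SPRING_PROFILES_ACTIVE: cloud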
I suppose your other option is to simply embrace the auto-configuration. It's a little confusing / magical at first, but I've found that this article explains it very well:
https://spring.io/blog/2015/04/27/binding-to-data-services-with-spring-boot-in-cloud-foundry
Hope that helps!
I'm using MongoDB 3.2.6 and I want to use Keyfile Access Control for MongoDB replication.
This is what I read at the following link:
https://docs.mongodb.com/manual/tutorial/enforce-keyfile-access-control-in-existing-replica-set/
Enforcing access control on a replica set requires configuring security between members of the replica set using Internal Authentication
Unfortunately, I cannot find in the link below how to enable Internal Authentication:
https://docs.mongodb.com/manual/core/security-internal-authentication/
Should I configure auth = true in the mongod configuration file (and configure users)?
How do I enable Internal Authentication?
The opposite question:
If I enable auth = true in the mongod configuration, then I have to use Keyfile Access Control for the MongoDB replication (otherwise the MongoDB replication will not work).
Correct?
There is not a separate option to enable internal authentication. Basically, internal authentication and client authentication need to be either both enabled or both disabled.
Note that specifying the keyFile will implicitly enable authentication (i.e. setting auth=true is redundant/implied and not required), but setting both keyFile and auth is probably a good idea to avoid confusion.
When authentication is enabled and you are running a replica set or sharded cluster, you must use one of the internal authentication mechanisms to allow the members to authenticate and communicate with each other. This means you will need to use either a keyFile or x.509 authentication in order for replication/sharding to work.
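As a sketch of the keyfile route (the path is a placeholder, and the same file must be copied to every member of the replica set):

    # generate a keyfile and lock down its permissions
    openssl rand -base64 756 > /etc/mongodb/keyfile
    chmod 400 /etc/mongodb/keyfile

    # mongod.conf on each member: keyFile implies client auth,
    # but setting authorization explicitly avoids confusion
    security:
      keyFile: /etc/mongodb/keyfile
      authorization: enabled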
I am looking for a single sign-on approach for an ODBC connection to a Postgres database.
The plan is to log in to a web application and then use a single sign-on scheme such as OAuth or CAS to automatically log in to a client application.
The client application does not verify the credentials itself, but uses them via ODBC to connect to the Postgres database server. Unlike web applications, we cannot use a single database user here; we need individual database accounts for security reasons.
In theory Postgres supports PAM, and PAM supports both CAS and OAuth, but I was not able to find any documentation on that. In particular, the part about how to specify the token in ODBC is unclear to me.
With PAM auth, keep in mind that this is a broad field and books could be written about it. I do something similar to what you describe, though, and can answer the part about ODBC. The following provides a walkthrough for a related service that you may find helpful:
http://www.wikidsystems.com/support/wikid-support-center/how-to/how-to-secure-postgresql-using-two-factor-authentication-from-wikid
The big thing to remember is that with PAM the password provided is passed on to the PAM module, so you have to pass in the username and password. This gets sent to PAM as if the user were logging on to the system. Beyond that, it's up to you to configure PAM appropriately for your service.
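To make the ODBC part concrete, here is a sketch under those assumptions: a pg_hba.conf line that delegates password checking to a PAM service, and a DSN-less ODBC connection string where the SSO token travels in the ordinary password field (host, database, user, and service names are placeholders):

    # pg_hba.conf: hand the supplied password to the PAM service "postgresql"
    host    all    all    0.0.0.0/0    pam    pamservice=postgresql

    # ODBC connection string: PAM sees Pwd exactly as if the user logged in locally
    Driver={PostgreSQL Unicode};Server=db.example.com;Port=5432;Database=mydb;Uid=alice;Pwd=<sso-token>;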