How can you set a MongoDB write concern at database level that cannot be overridden? - mongodb

I'd like to prevent clients of a database from using any write concern other than majority on a replica set. Is it possible to do this at the database level?

It is not possible to prevent the use of write concerns other than the default. Clients can specify a write concern on each CRUD operation, and MongoDB's permission model is not granular enough to prevent this. If you really want to stop clients from changing the write concern, you could create a REST API that wraps the most common MongoDB operations but does not let callers set a write concern.
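To make the limitation concrete, here is a minimal sketch with the MongoDB Java driver; the connection string, database and collection names are assumptions. Even if the connection string asks for a majority write concern by default, nothing stops the client from downgrading it for a particular collection or operation.

```java
import com.mongodb.WriteConcern;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class WriteConcernOverride {
    public static void main(String[] args) {
        // Hypothetical replica set URI with a majority write concern as the default
        try (MongoClient client = MongoClients.create(
                "mongodb://host1,host2,host3/?replicaSet=rs0&w=majority")) {

            MongoCollection<Document> orders =
                    client.getDatabase("shop").getCollection("orders");

            // Honours the majority default from the connection string
            orders.insertOne(new Document("item", "widget"));

            // Same collection, but the client unilaterally downgrades to w:1;
            // the server accepts this, which is exactly what cannot be forbidden
            orders.withWriteConcern(WriteConcern.W1)
                  .insertOne(new Document("item", "gadget"));
        }
    }
}
```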

Related

What is the best way to maintain a redacted replica of my MongoDB for analytical and investigation purposes?

I have a production dataset in my MongoDB that I use to run my application. I would like to give my devs access to the data in this database, but it contains sensitive data which I don't want exposed to devs poking around in it. I would also prefer that the devs not have direct access to the prod database, but rather to a replica of it stored somewhere else.
Ideally, I would use some tool to maintain a perfect replica of my MongoDB database in another MongoDB database, with the replica redacted so that no sensitive data is present.
As a plus, it would be nice if the data could also be transformed and aggregated in different ways before it lands in the second database.
What would be the best way to go about doing this?
Set up a change stream. In the change stream listener, redact the new/updated documents and write them to the analytics instance.
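A minimal sketch of such a listener with the MongoDB Java driver (4.x); the connection strings, database/collection names and the sensitive field names are assumptions, and a real deployment would also have to handle deletes, resume tokens and error recovery.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.ReplaceOptions;
import com.mongodb.client.model.changestream.FullDocument;
import org.bson.Document;

import static com.mongodb.client.model.Filters.eq;

public class RedactingReplicator {
    public static void main(String[] args) {
        // Hypothetical URIs: the prod replica set and the separate analytics instance
        try (MongoClient prod = MongoClients.create("mongodb://prod-host/?replicaSet=rs0");
             MongoClient analytics = MongoClients.create("mongodb://analytics-host")) {

            MongoCollection<Document> source =
                    prod.getDatabase("app").getCollection("customers");
            MongoCollection<Document> target =
                    analytics.getDatabase("app_redacted").getCollection("customers");

            // UPDATE_LOOKUP makes update events carry the whole current document,
            // not just the changed fields, so we can redact and upsert it as-is
            source.watch()
                  .fullDocument(FullDocument.UPDATE_LOOKUP)
                  .forEach(change -> {
                      Document doc = change.getFullDocument();
                      if (doc == null) {
                          return; // e.g. delete events; handle separately if needed
                      }
                      // Redact sensitive fields before the document leaves prod
                      doc.remove("ssn");
                      doc.remove("creditCard");
                      target.replaceOne(eq("_id", doc.get("_id")), doc,
                              new ReplaceOptions().upsert(true));
                  });
        }
    }
}
```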

How does data.stackexchange.com allow queries securely?

https://data.stackexchange.com/ lets me query some (all?) of Stack Exchange's data/tables using arbitrary SQL queries, including parametrization.
What program do they use to do this and is it published?
I want to create something like this myself (different data), but am constantly worried that I'll miss an injection attack or set permissions incorrectly.
Obviously, data.stackexchange.com has figured out how to do this securely. How do I replicate what they've done?
This follows up my earlier question: Existing solution to share database data usefully but safely?

How to automatically dispatch read-only transactions to slave

I would like all queries from my Spring-Hibernate application executed in a read-only transaction to be dispatched to a PostgreSQL slave and all read-write transaction queries to a master.
While using annotation-driven transactions in Spring, if the transaction is defined as read-only, the PostgreSQL driver allows only select queries to be executed, which is obvious; however, there is no mention of how the driver would behave in a master-slave configuration. For example, the MySQL driver has a replication connection class which automatically dispatches read-only transaction queries to the slave.
One solution would be to use multiple Hibernate session factories and use the one pointing to the slave for selects and the other for updates, but that would be too much manual handling. How should I be designing this?
This is a surprisingly complex question and the answer is not simple. Keep in mind that whichever layer does the dispatching has to know whether a transaction is likely to be read-only or not.
The cleanest solution is probably to implement the dispatching in your middleware. This has the advantage of being a functional dispatch: we know what we are trying to do, so we dispatch accordingly. Of course, functions can create a bit of a knowledge gap about what is read-only and what writes.
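In a Spring application specifically, a common way to express that middleware dispatch is an AbstractRoutingDataSource keyed on the current transaction's read-only flag. A minimal sketch, assuming you already have two PostgreSQL DataSources for the master and the slave:

```java
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.transaction.support.TransactionSynchronizationManager;

import javax.sql.DataSource;
import java.util.HashMap;
import java.util.Map;

// Routes JDBC connections to the slave when the current Spring transaction
// is marked read-only, and to the master otherwise.
public class ReadOnlyRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return TransactionSynchronizationManager.isCurrentTransactionReadOnly()
                ? "slave" : "master";
    }

    // Hypothetical factory; masterDs and slaveDs are your real DataSources
    public static DataSource build(DataSource masterDs, DataSource slaveDs) {
        ReadOnlyRoutingDataSource routing = new ReadOnlyRoutingDataSource();
        Map<Object, Object> targets = new HashMap<>();
        targets.put("master", masterDs);
        targets.put("slave", slaveDs);
        routing.setTargetDataSources(targets);
        routing.setDefaultTargetDataSource(masterDs);
        routing.afterPropertiesSet();
        return routing;
    }
}
```

One caveat: with a plain DataSourceTransactionManager the physical connection is usually obtained before the read-only flag is published, so the routing DataSource is typically wrapped in a LazyConnectionDataSourceProxy so that the lookup is deferred until the first actual statement.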
The second option is to dispatch with something like pgpool-II or similar. In that case you would probably want to avoid server-side prepared statements: the more visibility the intermediate layer has into each query, the fewer problems you will have.

How to get a connection and hold it using DAAB?

I have a task ahead of me that requires the use of local temporary tables. For performance reasons I can't use transactions.
Temporary tables, much like transactions, require that all queries come from a single connection which must not be closed or reset. How can I accomplish this using the Enterprise Library Data Access Application Block?
Enterprise Library will use a single database connection if a transaction is active. However, there is no way to force a single connection for all Database methods in the absence of a transaction.
You can definitely use the Database.CreateConnection method to get a database connection. You could then use that connection along with the DbCommand objects to perform the appropriate logic.
Other approaches would be to modify Enterprise Library source code to do exactly what you want or create a new Database implementation that does not perform connection management.
I can't see a way of doing that with the DAAB. I think you are going to have to drop back to ADO.NET connections and manage them yourself, but even then, manipulating temporary tables on the server from a client-side app doesn't strike me as an optimal solution to the problem.

Are MongoDB and LDAP the same concept?

As MySQL, SQL Server, PostgreSQL, etc. are basically different implementations of the same concept (an RDBMS), I am wondering whether the same relationship exists between LDAP and MongoDB/CouchDB etc., or whether there is something more to LDAP?
LDAP
Hierarchical Database model (based on parent/child relationships, like in XML)
LDAP is appropriate for any kind of directory-like information, where fast lookups and less-frequent updates are the norm
Scalable
Standard protocol
Not suited for applications that require data integrity (banking, ecommerce, accounting). Traditionally it is used to store users, groups, SSL certificates and service addresses, but it is a generic database and can be used for any information.
MongoDB
Document oriented Database, based on BSON (JSON-like) documents
Key value database, but values can be BSON documents
High performance in both read and write operations
Scalable (Master-Slave replication)
Custom protocol
Not suited for applications that require data integrity (banking, ecommerce, accounting)
CouchDB
Document oriented Database, based on JSON documents
Key value database, but values can be JSON documents
High performance in both read and write operations
Scalable (Master-Master replication with conflict resolution)
REST protocol
Not suited for applications that require data integrity (banking, ecommerce, accounting)
The most important thing that differentiates LDAP databases from other NoSQL stores, like MongoDB or CouchDB, is the very flexible ACL system.
For example, you can grant access to an object in the tree using groups and users stored in the same tree. In fact, you can use the objects themselves to authenticate against the LDAP server.
IMHO, it is completely safe to allow clients to access the LDAP tree directly from the Internet without writing a single line of code.
On the other hand, LDAP has a somewhat archaic design and uses convoluted approaches for trivial operations. Mainly because of that, I keep dreaming that someone will implement LDAP-like ACLs in one of the modern NoSQL databases. Indeed, why build a JSON-based database if you cannot authenticate against it directly from the browser?
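To illustrate the point about authenticating with the objects themselves, here is a small sketch using plain JNDI; the server URL and the DN are made up. Verifying a user's password is simply attempting an LDAP bind as the entry that represents that user:

```java
import javax.naming.AuthenticationException;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.InitialDirContext;
import java.util.Hashtable;

public class LdapBindCheck {

    // Returns true if the entry at userDn can bind with the given password,
    // i.e. the directory object itself acts as the credential store.
    static boolean authenticate(String userDn, String password) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389"); // assumed server
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, userDn);
        env.put(Context.SECURITY_CREDENTIALS, password);
        try {
            new InitialDirContext(env).close(); // a successful bind means valid credentials
            return true;
        } catch (AuthenticationException invalidCredentials) {
            return false;
        }
    }

    public static void main(String[] args) throws NamingException {
        System.out.println(authenticate("uid=alice,ou=people,dc=example,dc=com", "secret"));
    }
}
```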
SCHEMA is one of the biggest differences.
LDAP data stores have a single system-wide extendable schema (which, in the real world, is the Achilles heel of LDAP server replication...).
NoSQL has 'no schema' (or any schema per object, look at it however you want).