I'm trying to store configuration in MongoDB. I want the document schema to be dynamic so that I can store different types of configuration in the same collection. The configuration may consist of more than just simple string key-value pairs.

While using spring-data-mongodb, I see that I need to define a class which is then mapped to a MongoDB collection. So, whenever I need to add more configuration to the collection, I need to change the class. I don't really want to do this, as I want to be able to modify configuration without code changes (and ideally without restarting long-running applications). Also, what I'm eventually storing is configuration that should be consumed by different services, so I can't really have a well-defined schema. Instead, I would want the services to pull configuration from the store (i.e. provide a key, get a value back).

This makes me doubt whether spring-data-mongodb is the right choice for such a use case. Is there any obvious solution or alternative for my use case?
Thanks in advance.
The obvious solution is to use just the Java driver for MongoDB. The Java driver implements the BSON spec, so you can work with BSON/JSON documents directly instead of mapped classes.
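For example, here is a minimal sketch with the plain Java driver; the "settings" database, "config" collection, and the key/value layout are just illustrative assumptions:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.ReplaceOptions;
import org.bson.Document;

public class ConfigStore {

    private final MongoCollection<Document> config;

    public ConfigStore(MongoClient client) {
        // Database and collection names are illustrative
        this.config = client.getDatabase("settings").getCollection("config");
    }

    // Store an arbitrarily shaped configuration document under a key
    public void put(String key, Document value) {
        Document doc = new Document("_id", key).append("value", value);
        config.replaceOne(Filters.eq("_id", key), doc, new ReplaceOptions().upsert(true));
    }

    // Look up a configuration value by key; returns null when absent
    public Document get(String key) {
        Document doc = config.find(Filters.eq("_id", key)).first();
        return doc == null ? null : doc.get("value", Document.class);
    }

    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            ConfigStore store = new ConfigStore(client);
            store.put("feature-x", new Document("enabled", true).append("limits", new Document("rps", 100)));
            System.out.println(store.get("feature-x"));
        }
    }
}
```

If you prefer to stay inside Spring, MongoTemplate can also read and write raw org.bson.Document objects, so you get the same schemaless behaviour without defining mapped classes.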
Both of them are MongoDB features that seem to have a common nature. In my case, every time a document is created or updated, it should trigger a function that updates a field on the document with a Date.now() timestamp.
This can be achieved using a trigger, but there are two ways to do it, and I am not sure which one to choose. What is the difference between a MongoDB Realm Trigger and a MongoDB Atlas Trigger? Does either have advantages over the other?
Thank You
They are inherently similar. The best way to think of it is as two different GUIs that use the same(ish) backend code.
Apart from authentication triggers, which only exist in Realm, the other two trigger types work in similar ways.
They are both "triggered" by the same kind of event, whether it be a cron expression or a database event, and they both execute a Realm-based function (either pre-saved in Realm or saved on the trigger in Atlas). So the only actual difference comes from the configuration options, for example:
An Atlas trigger can connect to multiple clusters, while a Realm trigger must choose a single one.
Realm has a project option available.
Realm accepts a function name (as the function is already saved in Realm), while Atlas requires the actual code to be saved on the trigger. (If for some reason you want the same code executing for several triggers, Realm is easier to maintain, since updating four different triggers after a code change is not fun.)
You can compare the configuration options yourself here for Realm and here for a basic trigger.
I personally haven't noticed a difference between the two (nor have I looked that deeply into it). Apart from inside knowledge from a MongoDB engineer who could spill the beans on whether there's an actual performance difference, or whether both triggers use the same code base, I feel there is not much more to say on the subject.
My Spring Boot API is supposed to read data from a collection in one database and, before returning the response, insert a document into a collection in another database.
I am looking for a quick and efficient way to do this. I searched and found that I can make two entries in my application.properties and create two different MongoTemplate connections using those, but I am looking for a cleaner and more compact way to do this (if there is one).
Refer to https://github.com/Mohit-Hurkat/spring-boot-multi-mongo
It still uses two templates, but in a clean and simple way.
https://github.com/Mohit-Hurkat/multi-tenant-spring-mongodb
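For completeness, here is roughly what the two-template setup boils down to; the property names (mongodb.read.uri, mongodb.write.uri) and bean names below are just illustrative assumptions:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.SimpleMongoClientDatabaseFactory;

@Configuration
public class MultiMongoConfig {

    // e.g. mongodb.read.uri=mongodb://localhost:27017/readdb (property name is illustrative)
    @Bean
    @Primary
    public MongoTemplate readMongoTemplate(@Value("${mongodb.read.uri}") String uri) {
        return new MongoTemplate(new SimpleMongoClientDatabaseFactory(uri));
    }

    // e.g. mongodb.write.uri=mongodb://localhost:27017/auditdb
    @Bean
    public MongoTemplate writeMongoTemplate(@Value("${mongodb.write.uri}") String uri) {
        return new MongoTemplate(new SimpleMongoClientDatabaseFactory(uri));
    }
}
```

You can then inject whichever template you need by bean name, e.g. with @Qualifier("writeMongoTemplate").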
You can use the change stream concept in MongoDB. Whenever there is a change in one database, your listener can apply that change to the other database.
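A rough sketch of such a listener with the Java driver (database/collection names are made up; note that change streams require a replica set or sharded cluster):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.ReplaceOptions;
import com.mongodb.client.model.changestream.ChangeStreamDocument;
import com.mongodb.client.model.changestream.FullDocument;
import org.bson.Document;

public class CopyChangesListener {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> source = client.getDatabase("dbA").getCollection("orders");
            MongoCollection<Document> target = client.getDatabase("dbB").getCollection("orders");

            // UPDATE_LOOKUP makes update events carry the full post-update document.
            source.watch()
                  .fullDocument(FullDocument.UPDATE_LOOKUP)
                  .forEach((ChangeStreamDocument<Document> change) -> {
                      Document doc = change.getFullDocument();
                      if (doc != null) {
                          // Upsert the changed document into the other database.
                          // Delete events (fullDocument == null) are not handled in this sketch.
                          target.replaceOne(Filters.eq("_id", doc.get("_id")),
                                            doc,
                                            new ReplaceOptions().upsert(true));
                      }
                  });
        }
    }
}
```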
I am aware of how to create POJO classes (in Java), map them to the schema of the data in MongoDB, and create a connection with Spring Data. But if I don't have a specific schema and I want to use MongoDB as the backing store for my cache in Hazelcast, how do I do that? In my use case specifically, I have a cache which needs to keep MongoDB updated with whatever updates it comes across.
Check this out:
https://github.com/hazelcast/hazelcast-code-samples/tree/master/hazelcast-integration/mongodb
Do note that this is sample code meant for reference purposes only; do not copy-paste it into your production system.
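The linked sample is built around Hazelcast's MapStore/MapLoader interface, which Hazelcast calls on every cache write and cache miss. A stripped-down sketch of the idea, assuming Hazelcast 5.x package names and made-up database/collection names:

```java
import com.hazelcast.map.MapStore;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.ReplaceOptions;
import org.bson.Document;

import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hazelcast calls these methods whenever the IMap is written to or misses a key,
// keeping MongoDB in sync with the cache (write-through).
public class MongoMapStore implements MapStore<String, Document> {

    private final MongoCollection<Document> collection =
            MongoClients.create("mongodb://localhost:27017")
                        .getDatabase("cache")
                        .getCollection("entries");

    @Override
    public void store(String key, Document value) {
        collection.replaceOne(Filters.eq("_id", key),
                new Document(value).append("_id", key),
                new ReplaceOptions().upsert(true));
    }

    @Override
    public void storeAll(Map<String, Document> map) {
        map.forEach(this::store);
    }

    @Override
    public void delete(String key) {
        collection.deleteOne(Filters.eq("_id", key));
    }

    @Override
    public void deleteAll(Collection<String> keys) {
        keys.forEach(this::delete);
    }

    @Override
    public Document load(String key) {
        return collection.find(Filters.eq("_id", key)).first();
    }

    @Override
    public Map<String, Document> loadAll(Collection<String> keys) {
        Map<String, Document> result = new HashMap<>();
        for (Document doc : collection.find(Filters.in("_id", keys))) {
            result.put(doc.getString("_id"), doc);
        }
        return result;
    }

    @Override
    public Iterable<String> loadAllKeys() {
        Set<String> keys = new HashSet<>();
        for (Document doc : collection.find().projection(new Document("_id", 1))) {
            keys.add(doc.getString("_id"));
        }
        return keys;
    }
}
```

You then register the store on the map via MapStoreConfig (write-through, or write-behind with a write delay), and Hazelcast keeps MongoDB updated as the cache changes.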
We have a large application with hundreds of classes/enums, and we want to use MongoDB to store some of these.
The situation is that there is a current system whereby we binary-serialize the .NET object into a field in a SQL database, then deserialize it on demand. What we want is to put the object into Mongo in a way that will allow us to query the object's properties directly (i.e. without having to load the object into memory, deserialize it, etc.). This is so we can start to get some analytics from the historic data without having to drastically change the code base.
My question is: is this something that is easily possible? Are there built-in serializers in the C# driver to do this?
I'm also open to answers that propose a better way to do this if what I'm trying to do is inherently wrong.
Update: to be clear, what I'm trying to do is take an object that has been loaded using NHibernate and insert it into Mongo as a queryable object. Ultimately, I'll want to load it back into memory at some point too.
MongoDB is basically a store of JSON documents, so if you can serialize your objects to JSON, you should be fine storing them in MongoDB. There are lots of JSON serializers for .NET, so it should be easy to find one.
Once everything is stored as JSON in MongoDB, you will be able to query it without any tools beyond the ones used to query the database directly.
Regards,
You can use Simple.Data.MongoDB, a lightweight, dynamic data access .NET component for MongoDB.
How do you manage a major schema change when you are using a NoSQL store like SimpleDB?
I know that I am still thinking in SQL terms, but after working with SimpleDB for a few weeks I need to make a change to a running database. I would like to change one of the object classes to be keyed by a unique id rather than a business name, and as it is referenced by other objects, I will also need to update the reference values in those objects.
With a SQL database you would run a set of SQL statements as part of the client software deployment process. Obviously this will not work with something like SimpleDB, as:
There is no equivalent of a SQL update statement.
Due to the distributed nature of SimpleDB, there is no way of knowing when the changes you have made to the database have 'filtered' out to all the nodes running your client software.
Some solutions I have thought of are:
Each domain has a version number. The client software knows which version of the domain it should use. Write some code that copies the data from one domain version to another, making any required changes as you go. You can then install new client software that accesses the new domain version. This approach will not work unless you can 'freeze' all write access during the update process.
Each item has a version attribute that indicates the format used when it was stored. The client uses this attribute when loading the object into memory. Objects can then be converted to the latest format when they are written back to SimpleDB. The problem with this is that the new software needs to be deployed to all servers before any writes in the new format occur, or clients running the old software will not know how to read the new format.
This all seems rather complex, and I am wondering if I am missing something?
Thanks
Richard
I use something similar to your second option, but without the version attribute.
First, try to keep your changes to things that are easy to make backward compatible - changing the primary key is the worst case scenario for this.
Removing a field is easy - just stop writing to that field once all servers are running a version that doesn't require it.
Adding a field requires that you never write that object using code that won't save that field. If you can't deploy the new version everywhere at once, use an intermediate version that supports saving the field before you deploy a version that requires it.
Changing a field is just a combination of these two operations.
With this approach changes are applied as needed - write using the new version, but allow reading of the old version with default or derived values for the new field.
You can use the same code to update all records at once, though this may not be appropriate on a large dataset.
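As a hedged illustration of this read-old/write-new approach (the Customer shape, attribute names, and id generation below are all invented for the example):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative only: a customer record stored as a flat attribute map, as in SimpleDB.
// Old items were keyed by business name and had no "id"; new items carry a generated id.
public class CustomerCodec {

    // Reading: accept both old and new formats, deriving missing fields.
    public static Customer read(Map<String, String> attributes) {
        String id = attributes.get("id");
        if (id == null) {
            // Old format: derive an id (a random one here; a real migration would
            // need something deterministic or a lookup table).
            id = UUID.randomUUID().toString();
        }
        String name = attributes.get("name");
        return new Customer(id, name);
    }

    // Writing: always emit the latest format.
    public static Map<String, String> write(Customer customer) {
        Map<String, String> attributes = new HashMap<>();
        attributes.put("id", customer.id());
        attributes.put("name", customer.name());
        return attributes;
    }

    public record Customer(String id, String name) {}
}
```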
Changing the primary key can be handled in the same way, but it could get really complex depending on which NoSQL system you are using. You are probably stuck writing custom migration code in this case.
RavenDB, another NoSQL database, uses migrations to achieve this:
http://ayende.com/blog/66563/ravendb-migrations-rolling-updates
http://ayende.com/blog/66562/ravendb-migrations-when-to-execute
Normally these types of changes are handled by your application, which upgrades the schema to the newer one by loading a version X document, converting it to version Y, and persisting it.