Both of these are MongoDB features that seem to share a common nature. In my case, every time a document is created or updated, it should trigger a function that updates a field on that document with a Date.now() timestamp.
This can be achieved using a trigger, but there are two ways to do it, and I am not sure which one is the right choice. What is the difference between a MongoDB Realm Trigger and a MongoDB Atlas Trigger? Does either have advantages over the other?
Thank you
They are inherently similar. The best way to think of them is as two different GUIs that use the same(ish) backend code.
Apart from authentication triggers, which exist only in Realm, the other two trigger types work in similar ways.
Both are "triggered" by the same kind of event, whether a cron expression or a database event, and both execute a Realm-based function (either pre-saved in Realm, or saved on the trigger itself in Atlas). So the only actual differences come from the configuration options, for example:
An Atlas trigger can be linked to multiple clusters, while a Realm trigger must target a single one.
Realm has a project option available.
Realm accepts a function name (as the function is already saved), while Atlas requires the actual code to be saved on the trigger. (If for some reason you want the same code executing for several triggers, Realm is easier to maintain, since updating four different triggers after every code change is not fun.)
You can compare the configuration options yourself here for Realm and here for a basic Atlas trigger.
I personally haven't noticed a difference between the two (nor have I looked that deeply into it). Short of inside knowledge from an engineer at MongoDB who could spill the beans on whether there is an actual performance difference, or whether both trigger types share the same code base, there is not much more to say on the subject.
Is there a way to see the current state of the aggregates stored in Axon?
Our application uses an Oracle-backed Axon event store.
I tried querying the domain_event_entry and snapshot_event_entry tables, but they are empty.
Is there a way to see the current state of the aggregates stored in Axon?
In short, yes, although it is not recommended, granted that you are planning to employ CQRS. CQRS, or Command Query Responsibility Segregation, dictates that the Command Model and the Query Model are separate.
The aggregate support Axon delivers supplies an easy means to construct a Command Model. As the name suggests, it's intended for commands. On the flip side, you have Query Models, which are designed for queries. AxonIQ has this to say on CQRS; maybe that clarifies some things.
I tried querying the domain_event_entry and snapshot_event_entry tables, but they are empty.
That's interesting on its own account! When you publish events in Axon, either through the AggregateLifecycle#apply(Object...) or the EventGateway#publish(Object...) method, the published event should end up in your domain_event_entry table. If that's not the case, then either your JPA/JDBC configuration has a mistake in it or some other exception is occurring in your application.
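To illustrate the first path: a minimal sketch of an event-sourced aggregate, assuming Axon 4 with the Spring Boot starter and a JPA-backed event store (the Order, CreateOrderCommand, and OrderCreatedEvent names are made up for this example):

```java
import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.TargetAggregateIdentifier;
import org.axonframework.spring.stereotype.Aggregate;

import static org.axonframework.modelling.command.AggregateLifecycle.apply;

@Aggregate
public class Order {

    public record CreateOrderCommand(@TargetAggregateIdentifier String orderId) {}
    public record OrderCreatedEvent(String orderId) {}

    @AggregateIdentifier
    private String orderId;

    protected Order() {
        // Required by Axon to reconstruct the aggregate from its events.
    }

    @CommandHandler
    public Order(CreateOrderCommand command) {
        // Publishing through AggregateLifecycle#apply is what ends up as a row
        // in domain_event_entry when a JPA-based event store is configured.
        apply(new OrderCreatedEvent(command.orderId()));
    }

    @EventSourcingHandler
    public void on(OrderCreatedEvent event) {
        this.orderId = event.orderId();
    }
}
```

If dispatching a CreateOrderCommand through the command gateway does not produce a row in domain_event_entry, the event store configuration is the first place to look.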
Would you be able to update your issue with samples of your configuration and/or stack traces that you are seeing?
Replaying production issues locally
What I've done in the past to replay behavior occurring in a production environment is to load the aggregate's event stream from that environment into a local dev/test event store. To query it, you only need the aggregate identifier. As the aggregate identifier is indexed, retrieving all events for a specific aggregate (otherwise known as the aggregate stream) is straightforward.
By doing so, I could run the application locally to flow through the aggregate step-by-step. This gave the benefit of knowing exactly which event caused what state change, leading to the problematic scenario.
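As a rough sketch of what that looks like with Axon 4's EventStore API (the wrapper class name is made up):

```java
import org.axonframework.eventsourcing.eventstore.DomainEventStream;
import org.axonframework.eventsourcing.eventstore.EventStore;

public class AggregateStreamDumper {

    private final EventStore eventStore;

    public AggregateStreamDumper(EventStore eventStore) {
        this.eventStore = eventStore;
    }

    // Prints every event in the aggregate's stream, in order, so you can
    // follow each state change step by step.
    public void dump(String aggregateIdentifier) {
        DomainEventStream stream = eventStore.readEvents(aggregateIdentifier);
        stream.asStream().forEach(event ->
                System.out.println(event.getSequenceNumber() + ": " + event.getPayload()));
    }
}
```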
However, why your events are not present in your domain_event_entry table is unclear to me. If you're still facing issues with that, I recommend updating the question with more specifics about your project.
My Spring Boot API is supposed to read data from a collection in one database and, before returning the response, insert a document into a collection in another database.
I am looking for a quick and efficient way to do this. I searched and found that I can put two sets of connection properties in my application.properties and create two different MongoTemplate instances from them. But I am looking for a cleaner, more compact way to do this (if any).
Refer to
https://github.com/Mohit-Hurkat/spring-boot-multi-mongo
It uses two templates, but in a clean and simple way (see the sketch below). There is also a multi-tenant variant:
https://github.com/Mohit-Hurkat/multi-tenant-spring-mongodb
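For reference, a minimal sketch of the two-template configuration (the property names such as mongodb.primary.uri are assumptions; adjust them to your application.properties):

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.SimpleMongoClientDatabaseFactory;

@Configuration
public class MultiMongoConfig {

    // application.properties (hypothetical keys):
    // mongodb.primary.uri=mongodb://localhost:27017/read_db
    // mongodb.secondary.uri=mongodb://localhost:27017/write_db

    @Primary
    @Bean(name = "primaryMongoTemplate")
    public MongoTemplate primaryMongoTemplate(@Value("${mongodb.primary.uri}") String uri) {
        return new MongoTemplate(new SimpleMongoClientDatabaseFactory(uri));
    }

    @Bean(name = "secondaryMongoTemplate")
    public MongoTemplate secondaryMongoTemplate(@Value("${mongodb.secondary.uri}") String uri) {
        return new MongoTemplate(new SimpleMongoClientDatabaseFactory(uri));
    }
}
```

You can then inject the second template with @Qualifier("secondaryMongoTemplate") wherever the write to the other database happens.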
You can use the change stream concept in MongoDB. Whenever anything changes in one database, your application is notified of the change and can apply it to the other database.
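Note that a change stream only notifies your application of changes; the copying into the other database is done by your listener code. A minimal sketch with the sync Java driver (database and collection names are made up; change streams require a replica set or sharded cluster):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.changestream.ChangeStreamDocument;
import com.mongodb.client.model.changestream.FullDocument;
import org.bson.Document;

public class ChangeStreamCopier {

    public static void main(String[] args) {
        MongoClient client = MongoClients.create("mongodb://localhost:27017");

        MongoCollection<Document> source = client.getDatabase("sourceDb").getCollection("orders");
        MongoCollection<Document> target = client.getDatabase("auditDb").getCollection("orders");

        // Blocks forever, reacting to every change on the source collection.
        for (ChangeStreamDocument<Document> change :
                source.watch().fullDocument(FullDocument.UPDATE_LOOKUP)) {
            Document doc = change.getFullDocument();
            if (doc != null) {
                target.insertOne(doc); // the copy is made here, by the listener
            }
        }
    }
}
```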
I'm trying to set up an app that will act as a front end to an externally updated mongo database. The data will be pushed into the database by another process.
So far I have the app connecting to the external Mongo instance and pulling data out with no issues, but it's not reactive (it's not seeing any of the new data going into the Mongo database).
I've done some digging, and so far I can only find that I might need to set up a replica set and use the oplog. Is there a way to do this without going to a replica set (or is that the best way anyway)?
The code so far is really simple, a single collection, a single publication (pulling out the last 10 records from the database) and a single template just displaying that data.
No deps that I've written (not sure if that's what I'm missing).
Thanks.
Any reason not to use the oplog? From what I've read it is the recommended approach even if your DB isn't updated by an external process, and a must if it is. To enable oplog tailing, run MongoDB as a replica set and point Meteor's MONGO_OPLOG_URL environment variable at its local database.
Nevertheless, without the oplog your app should see the changes the external process makes to the DB anyway. It will just take longer (Meteor falls back to polling, roughly every 10 seconds), but it should update.
I have a collection, and I want to perform an action on every insert into that collection. The problem is that the code that will perform these actions is written in Java. In Oracle it was possible to wrap Java or even C code in a PL/SQL procedure and then use that procedure in a trigger. In CouchDB we could write a view. What would be the closest analog in MongoDB?
The best possibility I can think of is to wrap my code in a REST server and then interact with it using stored JavaScript.
I've already seen this question, but due to the dependency on Java libs I can't use just JavaScript in my workflow, nor do I want to run a new heavy service alongside mongodb if there is some other way to do this.
There are a number of things to say about your request:
I have a collection, and I want to perform an action on every insert into that collection.
1) What you're asking for here is not really a "stored procedure" but a "database trigger". MongoDB does not provide any sort of database-trigger functionality.
This is consistent with the general design goals of MongoDB, which is to provide a very fast, scalable data store without the heavy weight of traditional DBMS systems. See this presentation for more details about the design goals of MongoDB: http://www.10gen.com/presentations/mongosf2011/whymongodb
2) If there is some data processing that you'd like to perform on every insert, you'll need to do it on the client side of the MongoDB connection. This will necessarily involve writing some code in your application; see the sketch after this list.
3) I'd suggest that you avoid running JavaScript within the mongod server if at all possible. The JavaScript is interpreted on the server side, so the speed of your queries will be affected. In addition, all JavaScript run in the mongod server is single-threaded, so there is no concurrency of any JavaScript execution.
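As a sketch of the client-side approach from point 2 (using the current sync Java driver, which post-dates this answer; TriggeringCollection is a made-up name), you can funnel all inserts through a thin wrapper:

```java
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.function.Consumer;

// A thin wrapper that runs an application-side callback after every insert,
// emulating an "on insert" trigger on the client side of the connection.
public class TriggeringCollection {

    private final MongoCollection<Document> collection;
    private final Consumer<Document> onInsert; // your Java "trigger" code

    public TriggeringCollection(MongoCollection<Document> collection,
                                Consumer<Document> onInsert) {
        this.collection = collection;
        this.onInsert = onInsert;
    }

    public void insert(Document document) {
        collection.insertOne(document);
        onInsert.accept(document); // runs only after a successful insert
    }
}
```

The obvious caveat is that this only works if every write goes through your application; inserts made by other clients bypass the callback.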
I wish I had a better answer for you.
I'm kind of new to MongoDB and NoSQL data design in general.
I'm building a MongoDB database that will have some denormalized data. For example, my "User" documents contain a reference (just the id) to zero or more "Article" documents, and my Article documents contain references to zero or more Users.
Since I'm using the repository pattern, no part of my Data Access Layer knows about both Articles AND Users. Where in my code should I check to make sure that all my documents are consistent with each other? Should I simply let the code using the DAL do the checks?
Would it be a good idea to have a data integrity script run once in a while to check if everything is consistent?
Here is Microsoft's write-up on the Repository Pattern. From that document:
Use a repository to separate the logic that retrieves the data and maps it to the entity model from the business logic that acts on the model.
You have a couple of questions:
Where in my code should I check to make sure that all my documents are consistent with each other?
Based on the statement above, I think it's clear that this logic belongs in the Repository. The relation between these objects only exists at the layer of "business logic", the database cannot enforce these types of rules.
Should I simply let the code using the DAL do the checks?
How could they? As the writer of the repository, you are the DAL user. For MongoDB, the DAL is basically the driver.
You could possibly write a wrapper around the driver that wraps the multiple writes in some form of transaction, as sketched below. But you would have to write this yourself; MongoDB has no notion of transactions.
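A minimal sketch of such a wrapper, using a compensating write rather than a real transaction (the collection layout and field names are illustrative only):

```java
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import static com.mongodb.client.model.Filters.eq;

public class PairedWriter {

    private final MongoCollection<Document> users;
    private final MongoCollection<Document> articles;

    public PairedWriter(MongoCollection<Document> users,
                        MongoCollection<Document> articles) {
        this.users = users;
        this.articles = articles;
    }

    // Links a user and an article on both sides; if the second write fails,
    // the first one is undone by hand.
    public void link(Object userId, Object articleId) {
        users.updateOne(eq("_id", userId),
                new Document("$addToSet", new Document("articleIds", articleId)));
        try {
            articles.updateOne(eq("_id", articleId),
                    new Document("$addToSet", new Document("userIds", userId)));
        } catch (RuntimeException e) {
            // Compensate: roll back the first write.
            users.updateOne(eq("_id", userId),
                    new Document("$pull", new Document("articleIds", articleId)));
            throw e;
        }
    }
}
```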
Would it be a good idea to have a data integrity script run once in a while to check if everything is consistent?
At the end of the day, whoever writes the repository is going to be responsible for the integrity of the data. Such a script might be useful, but it would definitely suck a lot of CPU cycles.
My suggestion for N:M mappings is to start building some basic blocks for handling the multiple writes that are required to keep the two sides in sync. One idea is to queue the changes and let a background job make the updates, as sketched below. That way you don't have to worry about multiple writes and roll-backs causing bad data.
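A minimal sketch of that idea, with an in-memory queue drained by a single worker thread (the LinkUpdate shape and the applyBothSides body are placeholders; a production version would want a durable queue and retries):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// The request path only enqueues; a background worker performs the two
// writes, so a failed second write never leaves the request handler hanging.
public class LinkUpdateWorker implements Runnable {

    public record LinkUpdate(String userId, String articleId) {}

    private final BlockingQueue<LinkUpdate> queue = new LinkedBlockingQueue<>();

    public void enqueue(LinkUpdate update) {
        queue.add(update);
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                LinkUpdate update = queue.take();
                applyBothSides(update);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    private void applyBothSides(LinkUpdate update) {
        // Placeholder: update the user's articleIds and the article's userIds,
        // retrying until both writes succeed so the two sides converge.
    }
}
```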