Avoid multiple transactions within a single execution when updating entities in MongoDB

I am working with two MongoDB collections, Recipe and Menu. A single Menu is a combination of Recipes. Refer to the code below for more information:
@Document
public class Recipe {
    private String id;
    private String name;
    private String description;
    // getters and setters
}

@Document
public class Menu {
    private String id;
    private String name;
    private List<RecipeItem> recipeItem;
    // getters and setters
}

public class RecipeItem {
    private String id;
    private String name;
    private String description;
    // getters and setters
}
RecipeItem is just a copy of the Recipe object that is embedded within the Menu collection.
When a Menu is saved, recipes can be added to it, so a list of Recipe objects is also saved inside the Menu collection in the form of RecipeItems. When any Recipe is updated, the corresponding RecipeItem embedded in the Menu must be updated as well; otherwise the recipe inside the Menu becomes outdated compared to the current Recipe. So I have to iterate over the Menu collection, find the menus that contain the updated Recipe by id, and update the recipe information inside those Menu documents.
The Menu update will therefore initiate multiple write operations within a single execution, so we also need a rollback mechanism. I am not very fond of this approach.
I am new to MongoDB and I want to verify whether the current database design of Menu and Recipe is correct. If not, what would be the optimal way of doing it? I know that a DBRef between collections could be used, but it has a performance impact.
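For concreteness, the fan-out update described above might look roughly like the sketch below, assuming Spring Data's MongoTemplate and the getters/setters implied by the classes (none of these names come from the original post, and the stored field name for the embedded id may differ depending on your mapping):

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

// Sketch only: propagate an updated Recipe into every Menu that embeds it.
public void propagateRecipeUpdate(MongoTemplate mongoTemplate, Recipe updatedRecipe) {
    // "recipeItem.id" assumes the embedded id is stored under "id";
    // Spring Data may map it to "_id" depending on configuration.
    Query query = Query.query(Criteria.where("recipeItem.id").is(updatedRecipe.getId()));
    for (Menu menu : mongoTemplate.find(query, Menu.class)) {
        for (RecipeItem item : menu.getRecipeItem()) {
            if (item.getId().equals(updatedRecipe.getId())) {
                item.setName(updatedRecipe.getName());
                item.setDescription(updatedRecipe.getDescription());
            }
        }
        mongoTemplate.save(menu); // one write per affected menu, hence the rollback concern
    }
}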

The Menu document should store a list of Recipe IDs rather than the recipes themselves. Then you can dispense with RecipeItem and use Recipe directly.
It would seem more sensible for a Recipe to consist of RecipeItems (an apple tart consists of flour, sugar, eggs, apples, etc.).
In any case a reference would remove the need to keep two lists in sync.
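A minimal sketch of that referencing approach, assuming Spring Data MongoDB (the recipeIds field name and the lookup shown in the comment are illustrative, not from the original post):

@Document
public class Menu {
    private String id;
    private String name;
    private List<String> recipeIds; // references instead of embedded copies
    // getters and setters
}

// Loading a menu's recipes then becomes a second query, for example:
// List<Recipe> recipes = mongoTemplate.find(
//         Query.query(Criteria.where("_id").in(menu.getRecipeIds())), Recipe.class);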

Related

How to create a one-to-many relation that will be updated?

I have the following entities with a one-to-many relation: one folder can have many files.
folder
{
    string id;
    string name;
    string owner;
}

file
{
    string id;
    string name;
}
How can I create this relation while taking the following rules into consideration?
The folder is a separate entity that can be updated or displayed on its own, so it can't be part of the file document; it must remain in a separate collection.
When I query all the files, I want them to be sorted by folder.name. This query must support pagination.
I want to display only files that are in folders owned by the caller.
As I don't want to use joins in a document database, and I want to get all the data in one query, I was thinking about copying the folder document into the file:
file
{
    string id;
    string title;
    Folder folder;
}
This way, I can query while satisfying all the listed rules. I think this would be a perfect solution if the folder name were static, but folder.name is something the user can update. And if folder.name is updated, the copy embedded in the file will be left with invalid/old data.
So, what is the correct approach in this situation? Should I implement a mechanism for updating the embedded copy once folder.name is updated?
Update:
I am expecting a lot of queries listing the files and fewer updates of folder.name. The most important thing for me is a design that allows me to query, sort, and paginate the files collection. So, at this point I am researching the options along with their pros and cons.
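For illustration, the sync mechanism contemplated above might look roughly like this, assuming Spring Data's MongoTemplate and that the file document embeds the folder under a folder field (the class, method, and field names are hypothetical):

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;

public class FolderRenameService {

    private final MongoTemplate mongoTemplate;

    public FolderRenameService(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    public void renameFolder(String folderId, String newName) {
        // 1. Update the folder document itself.
        mongoTemplate.updateFirst(
                Query.query(Criteria.where("_id").is(folderId)),
                new Update().set("name", newName),
                Folder.class);

        // 2. Propagate the new name to every file that embeds this folder.
        //    "folder._id" assumes the embedded folder keeps its id under "_id".
        mongoTemplate.updateMulti(
                Query.query(Criteria.where("folder._id").is(folderId)),
                new Update().set("folder.name", newName),
                File.class);
    }
}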

Spring Data MongoDB @DBRef(lazy=true) - how to lazy load

I have a model like the one below (treat it as pseudocode):
class Student {
    @Id
    private String id;
    private String firstname;
    // ...
    @DBRef(lazy = true)
    private College college;
    // getters and setters
}

class College {
    @Id
    private String id;
    private String name;
    // other attributes
    // getters and setters
}
I am using @DBRef(lazy=true) so that I do not load the college associated with the student. For example, if I have a repository method for Student called findByFirstname(String firstname), I can load the student without the college.
However, at times I would also want to load the student with the college. Is it possible to write a repository method with a custom query using the @Query annotation (org.springframework.data.mongodb.core.query.Query) where I can load the student (all fields) and also the associated college instance?
@Query( /* what should go here? */ )
Student findStudentWithCollege(String firstname);
If not, what would be a suggested way to load lazy documents on demand?
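One way to load the college on demand, given that a lazy @DBRef is resolved on the first access of the property (see the documentation excerpt below), is simply to touch that property when it is actually needed. A rough sketch, with a hypothetical repository and service method:

// Sketch: force resolution of the lazy @DBRef only when the college is needed.
public College loadCollegeFor(String firstname) {
    Student student = studentRepository.findByFirstname(firstname); // college not fetched yet
    College college = student.getCollege();                         // still a lazy proxy
    college.getName();                                              // first real access resolves the DBRef
    return college;
}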
As per the documentation:
"DBRefs can also be resolved lazily. In this case the actual Object or Collection of references is resolved on first access of the property. Use the lazy attribute of @DBRef to specify this. Required properties that are also defined as lazy loading DBRef and used as constructor arguments are also decorated with the lazy loading proxy making sure to put as little pressure on the database and network as possible."
I guess this may not be suitable for cases where one would want to load a student whose last name is "Smith" along with the college instance for each of the students retrieved.

Using immutable, deduplicated EntityTypes in EF

Suppose I have a CRUD application that lets a user manage their album collections:
class Collection
{
    int Id;
    string Name;
    List<Album> Albums; // EF navigation property
}

class Album
{
    int Id;
    List<CdCase> Cases; // EF navigation property
    string Name;
    string Artist;
}
In this application, I let users add, edit, and delete albums to their collections. The collections and albums are stored in a SQL Server database using Entity Framework.
Naively implemented, there are going to be a lot of duplicate albums, so I'd like to do the following at collection-save time:
Deduplication: If an album already exists (as determined by Name/Artist equality), the collection uses that album instead of creating a new one
Immutability: If an album is edited, the edits are not persisted to the server. Instead, the album is removed from the case and a new one is created/linked.
Garbage Collection: If an edit/delete operation results in an album no longer being in any collection, it is deleted from the database entirely.
Is there a way to implement this logic at the DbContext level (i.e. changing the Set<Collection> behavior), rather than manually cleaning up the albums before submission?
EDIT: I am using code-first EF 6.1.

Create index in correct collection

I annotate a document with @Indexed(unique = true) like so:
public class ADocumentWithUniqueIndex {

    private static final long serialVersionUID = 1L;

    @Indexed(unique = true)
    private String iAmUnique;

    public String getiAmUnique() {
        return iAmUnique;
    }

    public void setiAmUnique(String iAmUnique) {
        this.iAmUnique = iAmUnique;
    }
}
When saving the object, I specify a custom collection:
MongoOperations mongoDb = ...
mongoDb.save(document, "MyCollection");
As a result I get:
A new document in "MyCollection"
An index in the collection "ADocumentWithUniqueIndex"
How can I create the index in "MyCollection" instead without having to explicitly specify it in the annotation?
BACKGROUND:
The default collection name is too ambiguous in our use case. We cannot guarantee that there won't be two documents with the same name but in different packages, so we added the package name to the collection name.
Mapping a document to a collection is dealt with in an infrastructure component.
The implementation details like collection name etc. shouldn't leak into the individual documents.
I understand this is a bit of an "abstraction on top of an abstraction" smell, but it was required since we had to support MongoDB and Windows Azure blob storage. Not anymore though...
This seemed like a fairly standard approach to hiding the persistence details in an infrastructure component. Any comments on the approach are appreciated as well.
It's kind of unusual to define the collection for an object at save time and then expect the index annotations to work. There are a few options here:
Use @Document on ADocumentWithUniqueIndex and configure the collection name there. This will of course cause all objects of that class to be persisted into that collection.
Manually create the indexes via MongoOperations.indexOps() in the collections you'd like to use. This would be more consistent with your approach of manually determining the collection name during persistence operations.
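A minimal sketch of the second option, assuming a reasonably recent Spring Data MongoDB version and the collection/field names from the question:

import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.MongoOperations;
import org.springframework.data.mongodb.core.index.Index;

public class UniqueIndexSetup {

    // Creates the unique index in the manually chosen collection,
    // mirroring the save(document, "MyCollection") call above.
    public void ensureUniqueIndex(MongoOperations mongoDb) {
        mongoDb.indexOps("MyCollection")
               .ensureIndex(new Index().on("iAmUnique", Sort.Direction.ASC).unique());
    }
}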

Hibernate Search - Reindex when associated object is updated

I am using Hibernate Search. I have two classes, Article and Publisher.
public class Publisher {
    private String name;
}

public class Article {
    private Publisher publisher;
    private String title;
    private String description;
}
I want to create an index with a merged field that contains all the fields of the Article class and the name field of the Publisher class.
A requirement is that when the publisher name is changed and persisted to the database, all the articles from that publisher need to be re-indexed as well. How do I accomplish this? Many thanks!
You would use @IndexedEmbedded and @ContainedIn: the former on publisher in Article and the latter on articles in Publisher. At the moment you don't have an articles field in Publisher, but to make this work you need a bidirectional link.
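A rough sketch of that mapping, assuming Hibernate Search 5.x annotations and standard JPA relations (the id fields and the @OneToMany/@ManyToOne mapping are assumptions, not from the original post):

// Article.java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;

import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;
import org.hibernate.search.annotations.IndexedEmbedded;

@Entity
@Indexed
public class Article {

    @Id
    @GeneratedValue
    private Long id;

    @Field
    private String title;

    @Field
    private String description;

    // Pulls publisher.name into the Article index.
    @IndexedEmbedded
    @ManyToOne
    private Publisher publisher;
}

// Publisher.java
import java.util.ArrayList;
import java.util.List;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToMany;

import org.hibernate.search.annotations.ContainedIn;
import org.hibernate.search.annotations.Field;

@Entity
public class Publisher {

    @Id
    @GeneratedValue
    private Long id;

    @Field
    private String name;

    // Back-reference so Hibernate Search re-indexes the affected articles
    // when the publisher's name changes.
    @ContainedIn
    @OneToMany(mappedBy = "publisher")
    private List<Article> articles = new ArrayList<>();
}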