The code below gives an error saying that the School class must implement the DBObject interface. The problem is that this interface has tons of methods. I have nearly 100 classes and I don't want to write millions of methods. Is there an easy way to save an object?
DBCollection table = db.getCollection("school");
School document = new School(); // School is a plain POJO, not a DBObject
table.insert(document);         // error: insert(...) expects DBObject instances
Instead of implementing DBObject or extending one of the existing implementations like BasicDBObject, you could give every object which can be saved in the database a method public DBObject toDBObject() which creates and returns a DBObject representation of the object. BasicDBObject is a Map<String, Object> which handles the object data as key/value pairs, so it is a good candidate for this.
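For illustration, a minimal sketch of that approach might look like this (the School fields are made up):

import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;

public class School {
    private String name;
    private int studentCount;

    // Build a DBObject representation the driver can insert
    public DBObject toDBObject() {
        BasicDBObject doc = new BasicDBObject();
        doc.put("name", name);
        doc.put("studentCount", studentCount);
        return doc;
    }
}

// Usage: table.insert(school.toDBObject());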
For a more generic solution, you could use reflection to create a method which can convert any Java object into a DBObject. To have more control over this, you could make up some annotations, add them to your classes and have your conversion method check them.
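As an illustration, a rough reflection-based converter (no annotation handling, no recursion into nested objects) could look like this:

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;

public class SimpleMapper {

    // Copies all non-static declared fields of the object into a DBObject via reflection.
    // Nested POJOs are not converted recursively; this is just a sketch.
    public static DBObject toDBObject(Object obj) throws IllegalAccessException {
        BasicDBObject doc = new BasicDBObject();
        for (Field field : obj.getClass().getDeclaredFields()) {
            if (Modifier.isStatic(field.getModifiers())) {
                continue;
            }
            field.setAccessible(true);
            doc.put(field.getName(), field.get(obj));
        }
        return doc;
    }
}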
Now you have created your own object mapping framework for MongoDB. But why reinvent the wheel when others have already done it? So before you do this, check whether an existing mapping framework like Morphia fulfills your use case - it likely does and will save you hours of programming and weeks of debugging.
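For comparison, saving with classic Morphia boils down to roughly this (package names differ between Morphia versions, so treat it as a sketch):

import com.mongodb.MongoClient;
import org.bson.types.ObjectId;
import org.mongodb.morphia.Datastore;
import org.mongodb.morphia.Morphia;
import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.Id;

@Entity("school")
class School {
    @Id
    private ObjectId id;
    private String name;
}

class MorphiaExample {
    public static void main(String[] args) {
        Morphia morphia = new Morphia();
        morphia.map(School.class);                                    // register the mapped class
        Datastore datastore = morphia.createDatastore(new MongoClient(), "mydb");
        datastore.save(new School());                                 // no DBObject boilerplate needed
    }
}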
[opinion]
I usually despise object-relational mappers in the context of relational databases because of the impedance mismatch problem, but for databases that allow heterogeneous documents, like MongoDB, they make a lot more sense, because you can store objects which share a base class but have different class-specific fields in the same collection without any ugly workarounds.
[/opinion]
Using Entity Framework and Cosmos DB, I want to store an object that has a property of type HashSet<Guid>. I got an exception message that Guid is not a primitive type and collections of it cannot be mapped. OK, I wrote a pair of ValueConverter/ValueComparer for HashSet<Guid>, but that feels... not very elegant. Is there a better way to do it? I also have a collection of small custom objects that can be stored as a string with a certain syntax. I tried setting a converter/comparer in the ConfigureConventions but it didn't work either. Is it possible to store an ICollection<MySmallType> as a collection of strings? Thank you!
I have this situation:
Spring Data JPA: Work with Pageable but with a specific set of fields of the entity
It is about working with Spring Data and a specific set of fields of an @Entity.
The two suggestions are totally valid for me:
DTO projections
Projection interfaces
Moreover, in spring-data-examples both appear together (for sample purposes, I know):
CustomerRepository.java
Thus:
When is it mandatory to use one over the other, and why?
Is there a performance cost of one over the other?
Note that the Class-based Projections (DTOs) section says the following:
Another way of defining projections is by using value type DTOs (Data
Transfer Objects) that hold properties for the fields that are
supposed to be retrieved. These DTO types can be used in exactly the
same way projection interfaces are used, except that no proxying
happens and no nested projections can be applied.
So the differences seem to be: no proxying happens, and no nested projections can be applied.
DTO Approach
Pro
Simple and straightforward.
Con
It will result in more code, as you have to create a DTO class with a constructor and getters/setters (unless you use Project Lombok to avoid the boilerplate code for DTOs); a sketch of such a DTO projection follows below.
No nested projections can be applied.
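A class-based (DTO) projection might look roughly like this (the Customer entity and its fields are made up):

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.springframework.data.repository.Repository;

@Entity
class Customer {
    @Id Long id;
    String firstname;
    String lastname;
}

// DTO holding only the fields that should be fetched
class CustomerNameOnly {
    private final String firstname;
    private final String lastname;

    CustomerNameOnly(String firstname, String lastname) {
        this.firstname = firstname;
        this.lastname = lastname;
    }

    String getFirstname() { return firstname; }
    String getLastname() { return lastname; }
}

interface CustomerRepository extends Repository<Customer, Long> {
    // Spring Data derives a query that selects only firstname and lastname
    // and invokes the DTO constructor for each row
    List<CustomerNameOnly> findByLastname(String lastname);
}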
Projections
Pro
Less code as it uses only interfaces.
Nested projections can be applied.
Dynamic projections allow you to write one generic repository method that returns different subsets of the entity's attributes based on the client's needs (see the sketch after the Con list).
Con
Spring generates a proxy at runtime.
The query could return the entire entity object from the database to the Spring layer, even though only a trimmed version (via the projection) is returned from the Spring layer to the client. I wasn't sure about this specific disadvantage; I'm hoping someone will edit this answer if necessary.
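For comparison, an interface-based projection, including a nested and a dynamic variant (again with made-up entity names; here Customer also has an embedded Address):

import java.util.List;
import javax.persistence.Embeddable;
import javax.persistence.Embedded;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.springframework.data.repository.Repository;

@Embeddable
class Address {
    String city;
}

@Entity
class Customer {
    @Id Long id;
    String firstname;
    String lastname;
    @Embedded Address address;
}

// Closed interface projection: Spring creates a proxy backed by the query result
interface CustomerView {
    String getFirstname();
    String getLastname();
    AddressView getAddress();          // nested projection

    interface AddressView {
        String getCity();
    }
}

interface CustomerViewRepository extends Repository<Customer, Long> {
    List<CustomerView> findByLastname(String lastname);

    // Dynamic projection: the caller chooses the returned type per call
    <T> List<T> findByFirstname(String firstname, Class<T> type);
}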
If you need nested or dynamic projections, you probably want the Projection approach rather than the DTO approach.
Refer to the official Spring docs for details.
I think DTOs were the first available solution for working with a small set of data from the entities. Today many operations can also be done with projections, but you need to be careful with performance. If you read Janssen's post Entities or DTOs – When should you use which projection?, you will note that DTOs perform better than projections for read operations.
If you don't have a performance problem, projections are the more elegant choice.
I need to store a Scala class in Morphia. With annotations it works well, unless I try to store a collection of _ <: Enumeration.
Morphia complains that it does not have serializers for that type, and I am wondering how to provide one. For now I have changed the type of the collection to Seq[String] and fill it by invoking toString on every item in the collection.
That works well; however, I'm not sure it is the right way.
This problem is common to several of the available abstraction layers on top of MongoDB. It all comes back to one underlying reason: there is no enum equivalent in JSON/BSON. Salat, for example, has the same problem.
In fact, the MongoDB Java driver does not support enums, as you can read in the discussion here: https://jira.mongodb.org/browse/JAVA-268, where you can see the issue is still open. Most of the frameworks I have seen for using MongoDB from Java do not implement low-level functionality such as this. I think this choice makes a lot of sense, because they leave the decision of how to deal with data structures not handled by the low-level driver to you, instead of imposing one on you.
In general, I feel that the absence of support comes not from a technical limitation but rather from a design choice. For enums there are multiple ways to map them, each with its pros and cons, while for other data types it is probably simpler. I don't know the MongoDB Java driver in detail, but I guess supporting multiple "modes" would have required some refactoring (maybe that's why they are talking about a new version of the serialization?).
These are two strategies I am thinking about:
If you want to index on the enum and minimize space, map the enum to an integer (not the ordinal, please; see "Can set enum start value in java").
If your concern is queryability in the mongo shell, because your data will be accessed by data scientists, you would rather store the enum as its string value. Both strategies are sketched in plain Java below.
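As a plain-Java illustration of both strategies (enum and field names are made up):

public enum Color {
    RED(1), GREEN(2), BLUE(3);

    private final int code;                    // explicit code, independent of ordinal()

    Color(int code) {
        this.code = code;
    }

    public int getCode() {
        return code;
    }

    public static Color fromCode(int code) {
        for (Color c : values()) {
            if (c.code == code) {
                return c;
            }
        }
        throw new IllegalArgumentException("Unknown code: " + code);
    }
}

// Strategy 1 (compact, index-friendly):   doc.put("color", color.getCode());
// Strategy 2 (readable in the shell):     doc.put("color", color.name());
// Reading back: Color.fromCode(...) or Color.valueOf(...) respectively.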
To conclude, there is nothing wrong with adding an intermediate data structure between your native objects and MongoDB. Salat supports this through CustomTransformers; in Morphia you may need to do the conversion explicitly. Go for it.
I will start a new project in a couple of days which is based on ASP.NET MVC 3, and I don't have much experience in web development.
I just want to know about Entity Framework. What is Entity Framework? Why do we use it?
I would also like to know about object-relational mapping. How is it connected with Entity Framework?
I've googled but did not get a clear idea about it.
I'm very eager to know the basic concept behind all those things.
Entity Framework is an object-relational mapper. This means that it can return the data in your database as an object (e.g.: a Person object with the properties Id, Name, etc.) or a collection of objects.
Why is this useful? Well, it makes things really easy. Most of the time you don't have to write any SQL yourself, and iterating over results is very easy using your language's built-in constructs. When you make changes to an object, the ORM usually detects this and marks the object as 'modified'. When you save all the changes in your ORM to the database, it automatically generates insert/update/delete statements based on what you did with the objects.
In code, you might want to work with objects in an object oriented fashion.
MyClass obj = new MyClass(); // etc.
However, it can be cumbersome to save such objects to a database, since you may end up mapping your object to an SQL query string by hand:
// Perhaps with parameter bindings instead, but the idea is the same
"INSERT INTO MYTBL (name, phone) VALUES ('" + obj.Name + "', '" + obj.Phone + "')";
An ORM framework does this object-to-SQL mapping for you by generating the SQL statements, and the entity manager executes them when you save or load objects from the database. This comes at the cost of another abstraction layer, but it makes the code easier to write.
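The question is about Entity Framework (.NET), but the idea is the same in any ORM; here is a rough JPA (Java) sketch with made-up class names:

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Persistence;

@Entity
class Person {
    @Id @GeneratedValue
    Long id;
    String name;
    String phone;
}

class OrmExample {
    public static void main(String[] args) {
        // "example-unit" is a made-up persistence unit name
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("example-unit");
        EntityManager em = emf.createEntityManager();

        em.getTransaction().begin();
        Person p = new Person();
        p.name = "Alice";
        p.phone = "555-0100";
        em.persist(p);                  // the ORM generates the INSERT statement for us
        em.getTransaction().commit();
    }
}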
I've yet to use Morphia, but I'm considering it for a current project.
Suppose I have a POJO with a number of #Reference annotations and I ask Morphia to fetch the object graph from the database. If I then make another DAO or DataStore call and ask Morphia to fetch some object that was already instantiated in the first graph, would Morphia return a reference to the already instantiated object or would it create a new instance?
If Morphia returns a new instance of the object each time, does anyone have a recommendation of how to best approach creating a Morphia-backed repository that won't duplicate already-instantiated objects?
As far as I can see, Morphia will re-read every reference.
This is one of the reasons why I created Morphium. I integrated a caching layer there, so if you read a reference, it won't be read again (at least if you search by ID...).
We use Morphia in production, and there are two ways to make sure you don't re-load the references, which is something we came across too.
One is to use the lazy loading option when you define the @Reference element in your main class. This of course means that this behavior is 'global' to that object.
The better way is to not define a @Reference with Morphia at all and instead manage the references yourself. Let me know if you need a code sample.
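A rough sketch of both options, using classic Morphia annotations and made-up entity names (not the poster's actual code):

import org.bson.types.ObjectId;
import org.mongodb.morphia.Datastore;
import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.Id;
import org.mongodb.morphia.annotations.Reference;

@Entity
class Customer {
    @Id ObjectId id;
}

@Entity
class Supplier {
    @Id ObjectId id;
}

@Entity
class Order {
    @Id
    private ObjectId id;

    // Option 1: let Morphia manage the reference, but only fetch it on first access
    @Reference(lazy = true)
    private Customer customer;

    // Option 2: store just the id and resolve the reference yourself on demand
    private ObjectId supplierId;

    Supplier getSupplier(Datastore ds) {
        return ds.get(Supplier.class, supplierId);
    }
}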
I've stopped using @Reference too and instead declare something like:
ObjectId itemId
rather than having a field item. This has two benefits: (1) it lets me define a getter through a helper getObject(...) method, which I have written with object caching, and (2) it stores a simple ObjectId in the Mongo document rather than a full DBRef, which includes the collection name and is thus about twice the size.
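The getObject(...) helper itself isn't shown in the answer; a minimal cached version might look something like this (a sketch under that assumption, not the author's actual code):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.bson.types.ObjectId;
import org.mongodb.morphia.Datastore;

public class CachedLoader {
    private final Datastore datastore;
    // Keyed by ObjectId only; fine here because ObjectIds are effectively unique
    private final Map<ObjectId, Object> cache = new ConcurrentHashMap<>();

    public CachedLoader(Datastore datastore) {
        this.datastore = datastore;
    }

    // Returns the cached instance if the id was loaded before, otherwise hits the database
    @SuppressWarnings("unchecked")
    public <T> T getObject(Class<T> type, ObjectId id) {
        return (T) cache.computeIfAbsent(id, key -> datastore.get(type, key));
    }
}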