I have an abstract class that is extended by a large number of other POJOs, and I need each of these POJOs to be stored in its own dedicated collection.
My repository looks like this:
interface TimesliceRepository extends MongoRepository<AbstractTimeslice, String>
How can I make it so that objects are directed to the appropriate collection, e.g. AATimeslice, BBTimeslice, etc.?
Or do I have to have a repository for every POJO?
Also, would read queries work? How would I be able to query for BBTimeslice only?
After some research, I came to the conclusion that for my use cases it's better to use MongoTemplate rather than MongoRepository.
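A rough sketch of what that MongoTemplate approach could look like (AbstractTimeslice and BBTimeslice are the types from the question; the DAO class and the choice of using the simple class name as the collection name are just illustrative assumptions):

import java.util.List;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

public class TimesliceDao {

    private final MongoTemplate mongoTemplate;

    public TimesliceDao(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    // write each subclass into its own collection, e.g. "AATimeslice", "BBTimeslice"
    public void save(AbstractTimeslice timeslice) {
        mongoTemplate.save(timeslice, timeslice.getClass().getSimpleName());
    }

    // read queries work per subclass, because you target its dedicated collection
    public List<BBTimeslice> findBbTimeslicesBy(String field, Object value) {
        Query query = new Query(Criteria.where(field).is(value));
        return mongoTemplate.find(query, BBTimeslice.class, BBTimeslice.class.getSimpleName());
    }
}

With one collection per subclass, querying for BBTimeslice only is simply a matter of pointing the query at its collection.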
Related
I want a generic object which can take different fields, since we provide the product to different companies and companies sometimes want their own custom fields stored. We could use a separate map to store those company-specific fields, but I want to know the limitations of extending Document and implementing a Book interface, for example. I know we can use the Document class for creating repositories in Spring Data MongoDB.
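A minimal sketch of the separate-map idea mentioned in this question, using Spring Data MongoDB's @Document mapping (the Book class and its field names are purely illustrative):

import java.util.HashMap;
import java.util.Map;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

@Document(collection = "books")
public class Book {

    @Id
    private String id;
    private String title;

    // company-specific fields live in a map instead of dedicated properties
    private Map<String, Object> customFields = new HashMap<>();

    public void putCustomField(String name, Object value) {
        customFields.put(name, value);
    }
}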
I use Spring Data by extending SimpleJpaRepository. Sometimes we need only a few specific fields of an entity, and sometimes other fields. If we create a projection class or interface for every need, there will be many classes that are used in only one place. Is there any way to pass fields/columns as a map/list to createQuery?
I use Spring data by extending SimpleJpaRepository
That is at least weird, if not wrong; you'd normally extend one or more of the Spring Data interfaces. Anyway, on to the actual question:
Is there a way to achieve this?
Yes, there is.
Version 2.6 RC1 of Spring Data JPA introduced fluent APIs for Query By Example, Specifications, and Querydsl.
You can use this, among other things, to configure projections. Note that only interface projections are supported.
You can use projections like this:
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.jpa.repository.JpaSpecificationExecutor;
import org.springframework.data.repository.CrudRepository;

// Long is used here as an example id type for the User entity
interface SomeRepository extends CrudRepository<User, Long>, JpaSpecificationExecutor<User> {}

class MyService {

    @Autowired
    SomeRepository repository;

    void doSomething() {
        // someSpecification: any Specification<User> you already have
        List<User> users = repository.findBy(
                someSpecification,
                // only the projected attributes are populated on the returned entities
                q -> q.project("firstname", "roles").all()
        );
        // ...
    }
}
It will return entities, but only the fields given in the project clause will be populated.
Reading about using Java generics in the DAO layer, I have a doubt about applying this to Spring Data repositories. I mean, with Spring Data repositories, you have something like this:
public interface OrderRepository extends CrudRepository<Order, OrderPK> {
}
But if I have 10 other entities, I have to create 10 interfaces like the one above just to execute CRUD operations and so on, and I don't think this is very scalable. The Java generics and DAO approach is about creating one interface and one implementation and reusing them for all entities, but with Spring Data repositories I have to create one interface for each entity, so ...
You didn't really state a question, so I'll just add
Is this really true? And if so, why?
and answer it:
Yes, this is (almost) correct. Almost, because you should not create one repository per entity, but one repository per Aggregate Root. See http://static.olivergierke.de/lectures/ddd-and-spring/
Spring Data repositories offer various features for which Spring Data needs to know what entity it is dealing with. For example, query methods need to know the properties of the entity in order to convert the method name into a JPA-based query. So you have to pass that information to Spring Data at some point, and you also have to tell it which entities should be considered Aggregate Roots. The way you do that is by specifying the interface.
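For example, a derived query method looks like this (a sketch; the customerName property on Order is assumed purely for illustration). It only works because the entity type is declared on the interface:

import java.util.List;
import org.springframework.data.repository.CrudRepository;

public interface OrderRepository extends CrudRepository<Order, OrderPK> {

    // parsed by Spring Data into roughly: select o from Order o where o.customerName = ?1
    List<Order> findByCustomerName(String customerName);
}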
Do you really need that? Well, if all you want is generic CRUD functionality, you can get that straight out of the box with JPA. But if you want query methods, pagination, simple native queries, and much more, Spring Data is a nice way to avoid lots of boilerplate code.
(Please keep in mind that I'm biased)
The code below gives an error saying that the School class must implement the DBObject interface. The problem is that this interface has tons of methods. I have nearly 100 classes and I don't want to write millions of methods. Is there an easy way to save an object?
DBCollection table = db.getCollection("school");
School document = new School();
table.insert(document);
Instead of implementing DBObject or extending one of the existing implementations like BasicDBObject, you could give every object that can be saved in the database a method public DBObject toDBObject() which creates and returns a DBObject representation of the object. BasicDBObject is a Map<String, Object> which handles the object data as key/value pairs, so it is a good candidate for this.
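A minimal sketch of that approach, assuming School has a couple of simple fields (the field names are made up for illustration):

import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;

public class School {

    private String name;
    private String city;

    public School(String name, String city) {
        this.name = name;
        this.city = city;
    }

    // build a key/value representation instead of implementing DBObject itself
    public DBObject toDBObject() {
        return new BasicDBObject("name", name).append("city", city);
    }
}

The insert from the question then becomes table.insert(document.toDBObject());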
For a more generic solution, you could use reflection to create a method which can convert any Java object into a DBObject. To have more control over this, you could make up some annotations, add them to your classes and have your conversion method check them.
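A rough reflection-based sketch (no annotations, no nested objects, no inherited fields), just to show the idea; a production-quality mapper needs far more care:

import java.lang.reflect.Field;
import com.mongodb.BasicDBObject;
import com.mongodb.DBObject;

public final class DBObjectConverter {

    public static DBObject toDBObject(Object pojo) {
        BasicDBObject result = new BasicDBObject();
        for (Field field : pojo.getClass().getDeclaredFields()) {
            field.setAccessible(true);
            try {
                // store each declared field under its own name
                result.append(field.getName(), field.get(pojo));
            } catch (IllegalAccessException e) {
                throw new RuntimeException("Could not read field " + field.getName(), e);
            }
        }
        return result;
    }
}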
Now you have created your own object mapping framework for MongoDB. But why reinvent the wheel when others have already done it? So before you do this, check whether existing mapping frameworks like Morphia fulfil your use case - they likely do, and they will save you hours of programming and weeks of debugging.
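For reference, a minimal sketch of what the classic Morphia API looks like (the database name and entity fields are illustrative; package names vary by Morphia version, org.mongodb.morphia is used here):

import com.mongodb.MongoClient;
import org.bson.types.ObjectId;
import org.mongodb.morphia.Datastore;
import org.mongodb.morphia.Morphia;
import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.Id;

@Entity("school")
class School {
    @Id
    private ObjectId id;
    private String name;
}

public class MorphiaExample {
    public static void main(String[] args) {
        Morphia morphia = new Morphia();
        morphia.map(School.class);                              // register the mapped class
        Datastore datastore = morphia.createDatastore(new MongoClient(), "schooldb");
        datastore.save(new School());                           // no DBObject plumbing needed
    }
}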
[opinion]
I usually despise object-relational mappers in the context of relational databases because of the impedance mismatch problem, but for schemaless databases like MongoDB they make a lot more sense, because you can store objects which have the same base class but also some different class-specific fields in the same collection without any ugly workarounds.
[/opinion]
I've yet to use Morphia, but I'm considering it for a current project.
Suppose I have a POJO with a number of @Reference annotations and I ask Morphia to fetch the object graph from the database. If I then make another DAO or Datastore call and ask Morphia to fetch some object that was already instantiated in the first graph, will Morphia return a reference to the already instantiated object, or will it create a new instance?
If Morphia returns a new instance of the object each time, does anyone have a recommendation of how to best approach creating a Morphia-backed repository that won't duplicate already-instantiated objects?
As I see it, Morphia will re-read every reference.
This is one of the reasons why I created Morphium. I integrated a caching layer there, so if you read a reference, it won't be read again (at least if you search by ID...)
We use Morphia in production, and there are two ways to make sure you don't load the references; this is something we came across too.
One is to use the lazy loading option when you define the @Reference element in your main class (see the sketch below). This of course means that this behavior is 'global' to that object.
The better way to do this is not to define an @Reference using Morphia at all and instead manage the references yourself. Let me know if you need a code sample.
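A minimal sketch of the lazy option from the first approach (the Item and MainEntity classes are placeholders; package names vary by Morphia version):

import org.bson.types.ObjectId;
import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.Id;
import org.mongodb.morphia.annotations.Reference;

@Entity
class Item {
    @Id
    private ObjectId id;
}

@Entity
public class MainEntity {

    @Id
    private ObjectId id;

    // the referenced Item is only fetched from Mongo when the field is first accessed
    @Reference(lazy = true)
    private Item item;
}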
I've stopped using @Reference too, and instead declare something like:
ObjectId itemId
rather than having a field item. This has two benefits: (1) it lets me define a getter through a helper getObject(...) method which I have written with object caching, and (2) it stores a simple ObjectId in the Mongo document rather than a full DBRef, which includes the collection name and is thus about twice the data size.
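A rough sketch of that pattern; Item, OwningEntity, and the ItemCache helper are hypothetical stand-ins for your own classes and whatever cached lookup you have written:

import org.bson.types.ObjectId;

public class OwningEntity {

    // only the plain id is stored in the Mongo document, not a full DBRef
    private ObjectId itemId;

    // resolve the referenced object on demand through a caching helper
    public Item getItem() {
        return ItemCache.getObject(Item.class, itemId);
    }
}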