We have lots of repositories defined via the interface extends JpaRepository pattern. When running integration tests or certain entry points to our application, we only need a very small subset of those repositories.
Can we lazily load the actual repository implementations?
Something equivalent to @Lazy on a @Bean? Note: I did at least attempt the naive solution of annotating the repository interface with @Lazy, to no avail.
Even though it's a very old question, I think some may still want to know about the use of @Lazy on Spring Data repositories: it is actually supported since v1.5.0.
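For anyone who wants the concrete shape, a minimal sketch of what that can look like (the OrderRepository/Order names are made up here for illustration, assuming a Spring Data version at or above the one mentioned):

import org.springframework.context.annotation.Lazy;
import org.springframework.data.jpa.repository.JpaRepository;

@Lazy // the repository proxy should only be created on first use, not eagerly at context startup
public interface OrderRepository extends JpaRepository<Order, Long> {
}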
Lazy loading avoids pulling in every other dependency hanging off your main table or request. In your case you should set a limit/offset for that kind of operation.
Try as I might, I cannot seem to find a simple example of a SpringBoot application that uses Spring Data JDBC with a Postgres database, or how to generate Entity classes from a database, or vice versa if that's required, or even how to get a reference to a Data Source.
There are lots of examples using JPA.
There are a few examples spinning up an H2/HSQL on the fly.
There are a couple using Postgres with Spring but not Spring Boot, which means these examples have a number of extra steps.
I believe I know what dependencies are needed -- basically Postgres and a Spring Data JDBC starter, both available on start.spring.io -- and as far as data source properties, the example in this link seems like it might work ...
spring.datasource.url=jdbc:postgresql://localhost:5432/shopme
spring.datasource.username=postgres
spring.datasource.password=password
But I cannot find how to declare a Repository class, or how to instantiate or get a reference to said Repository. If the docs are meant to explain this, I am afraid their virtues are lost on me. From the examples they link to, it looks like perhaps I can create a repository like this ...
interface CategoryRepository extends CrudRepository<Category, Long>, WithInsert<Category> {}
... and then get a reference to an implementation of my repository like this ...
@Autowired CategoryRepository repository;
... and I guess that will use my Postgres info from application.properties somehow.
None of that addresses Table schema => POJO generation (or vice versa). But even if I'm right about the above, this is my persistence layer. I'm not comfortable copy/pasting from some sample code, getting a good result (for now), and declaring it done. I'd rather be working from real information.
If I'm starting with valid Postgres connection info and I know what I want my Entities to look like ...
How do I capture my Postgres connection info in properties? (I suspect my properties example above is correct but that's just copy/paste from some link)
How do I write tables and then generate Entity classes, or the reverse? I prefer the former but I'll settle for either at this point.
How do I get a reference to a DataSource to my Postgres database? (I might never need one but I'd like to know how in case I do)
How do I define a repository class? (I believe I extend CrudRepository<AggRoot, IdType> if I'm happy with CrudRepo, but I'm hazy on this)
How do I instantiate my repo class with my postgres info / DataSource?
How do I get a reference to this repo?
I'm sure a lot of this would be easier if I was stronger with basic Spring, but I am learning that as I go.
Thanks so much!
I have pieced together some working code from various sources and copy/pastes. It does not feel like an answer so much as it feels like code that happens to work, for now, and I'm open to any suggestions, corrections, or improvements.
How do I capture my Postgres connection info in properties?
This appears to be covered in the Spring Boot docs, not Spring Data etc. There's quite a gotcha around property names that's easy to overlook: Spring Boot changed its default connection pool (Tomcat to Hikari), which requires a subtle property name change when binding your own DataSource: x.y.url= becomes x.y.jdbc-url=. So my properties look like this:
app.datasource.jdbc-url=jdbc:postgresql://localhost:5432/mydb
app.datasource.username=admin
app.datasource.password=admin
How do I write tables and then generate Entity classes, or the reverse? I prefer the former but I'll settle for either at this point.
From what I can tell, in Spring Data JDBC you cannot do either. All I am going off of is something I read in a blog post from 2019 ... I'd love something definitive one way or the other.
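In practice that seems to mean writing both by hand. A hedged sketch for the Category example above (table and column names are my assumptions; nothing here is generated):

// The table is created by hand, e.g. in a schema.sql or via a migration tool:
//   CREATE TABLE category (id BIGSERIAL PRIMARY KEY, name VARCHAR(255) NOT NULL);
// The matching POJO is also written by hand, using Spring Data JDBC's annotations:
import org.springframework.data.annotation.Id;
import org.springframework.data.relational.core.mapping.Table;

@Table("category")
public class Category {

    @Id // org.springframework.data.annotation.Id, not the JPA @Id
    private Long id;

    private String name;

    // getters/setters (or a constructor) as usual
}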
How do I get a reference to a DataSource to my Postgres database? (I might never need one but I'd like to know how in case I do)
Using the subtly documented DataSourceBuilder seems to be the way to go. Even more subtly documented is the need to annotate your DataSourceBuilder method with the prefix you're using for your connection properties. The easiest approach is to declare the DataSourceBuilder method in your Application class. The good news is the declaration itself is very simple.
@Bean
@ConfigurationProperties(prefix = "app.datasource")
public DataSource dataSource() {
    return DataSourceBuilder.create().build();
}
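Should you ever actually need a reference to that DataSource elsewhere, injecting it appears to work like any other bean, for example:

@Autowired
DataSource dataSource;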
How do I define a repository class? (I believe I extend CrudRepository<AggRoot, IdType> if I'm happy with CrudRepo, but I'm hazy on this)
Indeed, CrudRepository is the way to go. A lot of the examples I found were misleadingly complex: they add annotations because they are doing non-default stuff, but if you just want CRUD, this is all you need:
@Repository // I'm unsure if this is needed - some examples had it, some didn't
public interface MyAggRootRepository extends CrudRepository<MyAggRoot, Long> {
}
How do I instantiate my repo class with my postgres info / DataSource?
How do I get a reference to this repo?
With a properly coded DataSourceBuilder as above, all you need is to declare an #Autowired field for your repo and you're done.
@Autowired
MyAggRootRepository _repo;
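From there it's just ordinary method calls on the injected proxy, for example (assuming the Long id from the repository declaration above):

MyAggRoot root = new MyAggRoot();
_repo.save(root);                            // insert (id still null) or update (id set)
Iterable<MyAggRoot> all = _repo.findAll();
Optional<MyAggRoot> one = _repo.findById(1L);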
That appears to be everything. Once you know the steps there's not much to it:
a few lines in application.properties
a pretty trivial interface extending CrudRepository<T, PK>
a boilerplate DataSource-returning method using DataSourceBuilder (possibly with care taken to get the prefix right on the properties)
a simple #Autowired repository field
The lack of table or Entity class generation means a bit more work, but it's one less thing to know, and one less source of surprises I have to deal with so I don't mind the tradeoff.
If anyone can correct the above, or point to a definitive reference rather than hazy memory of blog posts, feel free.
I just uploaded a basic example of using Spring Data JPA here on my GitHub. (Sorry, there are a lot of lines in the application.properties; just ignore whatever is unnecessary.)
When you use the spring-boot-starter-data-jpa dependency, it sets up everything related to the database for you. You don't need to write any boilerplate code.
You can use the @Repository annotation or not; it depends on your code structure and requirements. For me, I always use the annotation.
If you're using Eclipse, you can use the 'Generate Entities from Tables' wizard. Specify a connector or driver, fill out the database credentials, and you are ready to go.
When I try to use the same POJO for Spring Data JPA integration with Spring Data GemFire, the repository always accesses the database with the POJO. But I want the repository to access data from GemFire, even though I added the @EnableGemfireRepositories and @EnableEntityDefinedRegions annotations.
I think it is because I added @Entity and @Region together on the same POJO class.
Please help me fix this, and let me know if I can. Do I need to separate it into 2 POJO classes, one for the database and one for GemFire?
Thanks
No, you do not need 2 separate POJOs. However, you do need 2 separate Repository interface definitions, 1 for JPA and a 2nd for GemFire. I have an example of such an implementation here, in the repository-example.
In the contacts-core module, I have an example.app.model.Contact class that is annotated with both JPA's #Entity annotation as well as SDG's #Region annotation in addition to other annotations (e.g. Jackson).
I then create 2 Repository interfaces in the repository-example module, 1 for JPA, which extends o.s.d.jpa.repository.JpaRepository, and another for GemFire, which extends o.s.d.gemfire.repository.GemfireRepository. Notice too that these Repositories are separated by package (i.e. example.app.repo.jpa vs. example.app.repo.gemfire) in my example.
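For illustration, a hedged sketch of those two definitions (the id type and exact names in the linked repository-example may differ):

// in package example.app.repo.jpa
public interface ContactRepository extends org.springframework.data.jpa.repository.JpaRepository<Contact, Long> {
}

// in package example.app.repo.gemfire
public interface ContactRepository extends org.springframework.data.gemfire.repository.GemfireRepository<Contact, Long> {
}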
Keep in mind, Spring Data enforces a strict policy mode which prevents ambiguity if the application Repository definition (e.g. ContactRepository) is generic, meaning that the interface extends 1 of the common Spring Data interfaces: o.s.d.repository.Repository, o.s.d.repository.CrudRepository or perhaps o.s.d.repository.PagingAndSortingRepository, and the interface resides in the same package as the "scan" for both JPA and GemFire. This is the same for any Spring Data module that supports the Repository abstraction, including, but not limited to, MongoDB and Redis.
You must be very explicit in your declarations and intent. While it is generally not a requirement to extend store-specific Repository interface definitions (e.g. o.s.d.gemfire.repository.GemfireRepository), and rather extend a common interface (e.g. o.s.d.repository.CrudRepository), it is definitely advisable to put your different, per store Repository definitions in a separate package and configure the scan accordingly. This is good practice to limit the scan in the first place.
Some users are tempted to want a single, "reusable" Repository interface definition per application domain model type (e.g. Contact) for all the stores they persist the POJO to. For example, a single ContactRepository for both JPA and GemFire. This is ill-advised.
This stems from the fact that while most stores support basic CRUD and simple queries (e.g. findById(..)), though not all (so be careful), not all stores are equal in their query capabilities (e.g. JOINS) or function (e.g. Paging). For example, SDG does not, as of yet, support Paging.
So the point is, use 1 domain model type, but define a Repository per store clearly separated by package. Then you can configure the Spring Data Repository infrastructure accordingly. For instance, for JPA I have a configuration which points to the JPA-based ContactRepository using the ContactRepository class (which is type-safe and better than specifying the package by name using the basePackages attribute). Then, I do the same for the GemFire-based ContactRepository here.
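A hedged sketch of that type-safe style of configuration (collapsed into a single class here for brevity; the linked example keeps the JPA and GemFire configuration separate):

@Configuration
@EnableJpaRepositories(basePackageClasses = example.app.repo.jpa.ContactRepository.class)
@EnableGemfireRepositories(basePackageClasses = example.app.repo.gemfire.ContactRepository.class)
class RepositoryConfiguration {
}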
By following this recipe, then all is well and then you can inject the appropriate Repository (by type) into the service class that requires it. If you have a service class that requires both Repositories, then you must inject them appropriately, for example.
Hope this helps!
Reading about using Java Generics in the DAO layer, I have a question about applying this to Spring Data repositories. I mean, with Spring Data repositories, you have something like this:
public interface OrderRepository extends CrudRepository<Order,OrderPK>{
}
But if I have 10 other entities, I have to create 10 interfaces like the one above just to execute CRUD operations and so on, and I think this is not very scalable. Java Generics with DAOs is about creating one interface and one implementation and reusing them for all entities, but with Spring Data repositories I have to create one interface for each entity, so ...
You didn't really state a question, so I'll just add
Is this really true? And if so, why?
and answer it:
Yes, this is (almost) correct. Almost, because you should not create one repository per entity, but one repository per Aggregate Root. See http://static.olivergierke.de/lectures/ddd-and-spring/
Spring Data repositories offer various features for which Spring Data needs to know what entity it is dealing with. For example, query methods need to know the properties of the entity in order to convert the method name into a JPA-based query. So you have to pass that information to Spring Data at some point, and you also have to tell it which entities should be considered Aggregate Roots. The way you do that is by specifying the interface.
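For example, a hedged sketch building on the OrderRepository from the question (the customerName property is an assumption about the Order entity, purely to show the mechanics):

import java.util.List;

public interface OrderRepository extends CrudRepository<Order, OrderPK> {

    // Spring Data parses the method name against Order's properties and derives
    // a query roughly equivalent to "select o from Order o where o.customerName = ?1".
    List<Order> findByCustomerName(String customerName);
}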
Do you really need that? Well, if all you want is generic CRUD functionality, you can get that straight out of the box with JPA. But if you want query methods, pagination, simple native queries and much more, Spring Data is a nice way to avoid lots of boilerplate code.
(Please keep in mind that I'm biased)
I have read almost all the articles about the Repository pattern and its different implementations. Many of them are judged bad practice (e.g., exposing IQueryable<T> instead of IList<T>), which is why I'm still stuck and couldn't settle on the right one.
So:
Do I need the Repository pattern to apply IoC in my MVVM applications?
If yes, what is an efficient IRepository implementation for EF entities that is good practice and more testable?
How can I test my Repositories and UnitOfWork behavior? Unit tests against in-memory repositories? Integration tests?
Edit: Based on the answers, I added the first question.
Ayende Rahien has a lot of posts about the repository pattern (http://ayende.com/blog/search?q=repository) and why it is bad when you are using ORMs. I thought Repository was the way to go. Maybe it was, in 2004. Not now. ORMs already implement the repository; in the case of EF it is IDbSet, and DbContext is the UnitOfWork. Creating a repository to wrap EF or other ORMs adds a lot of unnecessary complexity.
Do integration testing of code that will interact with database.
The repository pattern adds an extra layer when you are using EF, so you should make sure that you need this extra level of indirection.
The original idea of the Repository pattern is to protect the layers above from the complexity of, and knowledge about, the structure of the database. In many ways EF's ORM model already protects the code from the physical implementation in the database, so the need for a repository is smaller.
There are 2 basic alternatives:
Let the business layer talk directly to the EF model
Build a data layer that implements the Repository pattern; this layer maps EF objects to POCOs
For tests:
When we use EF directly we use a transaction scope with rollback, so that the tests do not change the data.
When we use the repository pattern we use Rhino Mocks to mock the repository
We use both approaches: the Repository pattern gives a more clearly layered app and therefore perhaps more control, while using EF directly gives an app with less code and is therefore faster to build.
I've run into some trouble with the context in EF in ASP.NET MVC 2.
I thought the best way to improve database operations was to create a Repository. My repo class adds, deletes, and selects many items, so I don't need to write
using (var context = new <Name>Context(...)) { ... }
The Repository eliminates initializing a context for every operation, but it doesn't dispose of the context.
What is the best way to manage contexts? If I create another repository class and try to do an operation that needs objects from both contexts, there is a problem.
Is there any other or better way to implement the repository and manage contexts? Any interesting pattern?
A context is a unit of work, so you want one per web request.
Therefore, you should use constructor injection (i.e., a constructor argument) to supply a single context for all repositories, and dispose it at the end of the request.
Most DI frameworks will do this automatically.
Here is a nice post regarding the repository pattern on top of EF:
http://blogs.microsoft.co.il/blogs/gilf/archive/2010/01/20/using-repository-pattern-with-entity-framework.aspx
You might also check out posts regarding the Unit of Work pattern implementation:
http://blogs.microsoft.co.il/blogs/gilf/archive/2010/02/05/using-unit-of-work-pattern-with-entity-framework.aspx
http://devtalk.dk/2009/06/09/Entity+Framework+40+Beta+1+POCO+ObjectSet+Repository+And+UnitOfWork.aspx
Take a look at these blog posts as well:
http://initializecomponent.blogspot.com/2010/09/repository-like-facade-for-mocking.html
http://initializecomponent.blogspot.com/2010/09/repository-like-facade-for-mocking_12.html