Try as I might, I cannot seem to find a simple example of a SpringBoot application that uses Spring Data JDBC with a Postgres database, or how to generate Entity classes from a database, or vice versa if that's required, or even how to get a reference to a Data Source.
There are lots of examples using JPA.
There are a few examples spinning up an H2/HSQL on the fly.
There are a couple using Postgres with Spring but not Spring Boot, which means these examples have a number of extra steps.
I believe I know which dependencies are needed -- basically Postgres and the Spring Data JDBC starter, both available on start.spring.io -- and as far as data source properties go, the example in this link seems like it might work ...
spring.datasource.url=jdbc:postgresql://localhost:5432/shopme
spring.datasource.username=postgres
spring.datasource.password=password
But I cannot find how to declare a Repository class, or how to instantiate or get a reference to said Repository. If the docs are meant to explain this, I am afraid their virtues are lost on me. From the examples they link to, it looks like perhaps I can create a repository like this ...
interface CategoryRepository extends CrudRepository<Category, Long>, WithInsert<Category> {}
... and then get a reference to an implementation of my repository like this ...
@Autowired CategoryRepository repository;
... and I guess that will use my Postgres info from application.properties somehow.
None of that addresses Table schema => POJO generation (or vice versa). But even if I'm right about the above, this is my persistence layer. I'm not comfortable copy/pasting from some sample code, getting a good result (for now), and declaring it done. I'd rather be working from real information.
If I'm starting with valid Postgres connection info and I know what I want my Entities to look like ...
How do I capture my Postgres connection info in properties? (I suspect my properties example above is correct but that's just copy/paste from some link)
How do I write tables and then generate Entity classes, or the reverse? I prefer the former but I'll settle for either at this point.
How do I get a reference to a DataSource to my Postgres database? (I might never need one but I'd like to know how in case I do)
How do I define a repository class? (I believe I extend CrudRepository<AggRoot, IdType> if I'm happy with CrudRepo, but I'm hazy on this)
How do I instantiate my repo class with my postgres info / DataSource?
How do I get a reference to this repo?
I'm sure a lot of this would be easier if I was stronger with basic Spring, but I am learning that as I go.
Thanks so much!
Bean
I have pieced together some working code from various sources and copy/pastes. It does not feel like an answer so much as it feels like code that happens to work, for now, and I'm open to any suggestions, corrections, or improvements.
How do I capture my Postgres connection info in properties?
This appears to be covered in the Spring Boot docs, not the Spring Data docs. There's quite a gotcha around property names that's easy to overlook: Spring Boot 2 changed the default connection pool from Tomcat to Hikari, and when you bind a custom prefix to a DataSourceBuilder (as below), Hikari expects jdbc-url rather than url, so x.y.url= changes to x.y.jdbc-url=. (As far as I can tell, the standard spring.datasource.url property still works as-is; the rename only bites with a custom prefix like mine.) So my properties look like this:
app.datasource.jdbc-url=jdbc:postgresql://localhost:5432/mydb
app.datasource.username=admin
app.datasource.password=admin
How do I write tables and then generate Entity classes, or the reverse? I prefer the former but I'll settle for either at this point.
From what I can tell, in Spring Data JDBC you cannot do either. All I am going off of is something I read in a blog post from 2019 ... I'd love something definitive one way or the other.
How do I get a reference to a DataSource to my Postgres database? (I might never need one but I'd like to know how in case I do)
Using the subtly documented DataSourceBuilder seems to be the way to go. Even more subtly documented is the need to annotate your DataSourceBuilder method with the prefix you're using for your connection properties. Easiest is to declare the DataSourceBuilder method in your Application class. The good news is the declaration itself is very simple.
@Bean
@ConfigurationProperties(prefix = "app.datasource")
public DataSource dataSource()
{
    return DataSourceBuilder.create().build();
}
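If you ever do need the DataSource directly, one way is plain constructor injection into any Spring-managed component. This is a sketch under the assumption that the bean above exists; DataSourceProbe and canConnect are invented names, not any Spring API:

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.stereotype.Component;

// Illustrative only: any Spring-managed component can take the DataSource
// declared above via constructor injection and open connections from it.
@Component
public class DataSourceProbe
{
    private final DataSource dataSource;

    public DataSourceProbe(DataSource dataSource)
    {
        this.dataSource = dataSource;
    }

    public boolean canConnect()
    {
        // try-with-resources returns the connection to the pool when done
        try (Connection conn = dataSource.getConnection()) {
            return conn.isValid(2); // 2-second validation timeout
        } catch (SQLException e) {
            return false;
        }
    }
}
```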
How do I define a repository class? (I believe I extend CrudRepository<AggRoot, IdType> if I'm happy with CrudRepo, but I'm hazy on this)
Indeed, CrudRepository is the way to go. A lot of the examples I found were misleadingly complex: they add annotations because they are doing non-default stuff, but if you just want CRUD, this is all you need:
@Repository // not strictly required - Spring Data detects repository interfaces on its own, which is why some examples have it and some don't
public interface MyAggRootRepository extends CrudRepository<MyAggRoot, Long>
{
}
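For completeness, here is a minimal aggregate root to pair with the repository above. This is a sketch assuming a matching my_agg_root table; note that @Id here is Spring Data's annotation, not JPA's javax.persistence.Id:

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.relational.core.mapping.Table;

// Hypothetical aggregate root for the repository above. Spring Data JDBC
// maps fields to columns by name; no JPA annotations are involved.
@Table("my_agg_root")
public class MyAggRoot
{
    @Id
    private Long id;   // set by the database on insert
    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```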
How do I instantiate my repo class with my postgres info / DataSource?
How do I get a reference to this repo?
With a properly coded DataSourceBuilder bean as above, all you need to do is declare an @Autowired field for your repo and you're done.
@Autowired
MyAggRootRepository _repo;
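For what it's worth, typical calls against the injected repository look like this. These are standard CrudRepository methods; the getId() accessor is an assumption about the aggregate class:

```java
// Standard CrudRepository operations - no extra wiring needed beyond the
// @Autowired field above.
MyAggRoot root = new MyAggRoot();
_repo.save(root);                        // INSERT, or UPDATE if the id is already set
Iterable<MyAggRoot> all = _repo.findAll();
long count = _repo.count();
_repo.deleteById(root.getId());          // getId() is assumed here
```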
That appears to be everything. Once you know the steps there's not much to it:
a few lines in application.properties
a pretty trivial interface extending CrudRepository<T, ID>
a boilerplate DataSource-returning method using DataSourceBuilder (possibly with care taken to get the prefix right on the properties)
a simple @Autowired repository field
The lack of table or entity class generation means a bit more work, but it's one less thing to know and one less source of surprises to deal with, so I don't mind the tradeoff.
If anyone can correct the above, or point to a definitive reference rather than hazy memory of blog posts, feel free.
I just uploaded a basic example of using Spring Data JPA to my GitHub (sorry, there are a lot of lines in the application.properties; just ignore what's unnecessary)
When you use the spring-boot-starter-data-jpa dependency, it will set up everything related to the database for you. You don't need to write any boilerplate code.
You can use the @Repository annotation or not, depending on your code structure and requirements. Personally, I always use the annotation.
If you're using Eclipse, you can use the 'Generate Entities from Tables' wizard: specify a connector or driver, fill in your database credentials, and you're ready to go.
Related
I have a scenario in which I'm looking to register IntegrationFlow Spring beans based on the contents of a JPA database table.
For example, the table will look like:
@Entity
class IntegrationFlowConfig {
private long id;
private String local;
private String remote;
}
and I want an IntegrationFlow registered as a Spring bean for each entry found in the above table definition. When a row is added, a new bean is registered, and when a row is deleted, the corresponding bean is destroyed.
I've considered creating an EntityListener for the above entity, in which @PostPersist and @PostRemove would create/destroy IntegrationFlow beans via IntegrationFlowContext, but this solution seemed a bit clunky, and I was wondering if there is any existing functionality that solves the above problem in a more streamlined way. Perhaps some sort of row-mapping functionality that can map Spring beans to JPA database rows, etc.?
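For reference, the EntityListener idea described above might be sketched like this. This is an illustrative guess, not a recommendation: FlowRegistrar is an invented name, the getters are assumed on the entity, and wiring the local/remote values to message channels is a placeholder for whatever the real flows would do:

```java
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.context.IntegrationFlowContext;

// Hedged sketch: register or remove one IntegrationFlow per table row
// using IntegrationFlowContext's dynamic-registration API.
public class FlowRegistrar
{
    private final IntegrationFlowContext flowContext;

    public FlowRegistrar(IntegrationFlowContext flowContext)
    {
        this.flowContext = flowContext;
    }

    // Called from @PostPersist (via the entity listener) for a new row.
    public void register(IntegrationFlowConfig config)
    {
        // Placeholder flow: bridge the "local" channel to the "remote" one.
        IntegrationFlow flow = IntegrationFlows
                .from(config.getLocal())
                .channel(config.getRemote())
                .get();
        flowContext.registration(flow)
                .id("flow-" + config.getId())
                .register();
    }

    // Called from @PostRemove when the row is deleted.
    public void remove(long id)
    {
        flowContext.remove("flow-" + id);
    }
}
```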
Any help would be much appreciated!
thanks,
Monk
Well, such an idea has crossed my mind several times in the past. But even with an XML configuration easily serialized into a database and deserialized into an integration flow (an XML one) in its own child ApplicationContext, we still ended up with the problem that some beans have to be provided as Java code. Even if we opted for Groovy scripts, which can be parsed and loaded at runtime, some Java code would need to be compiled anyway. And in the end, when we released such a solution to a customer, the way their operators wrote those dynamic flows became very messy and error-prone.
You can definitely have some external configuration options and some conditional logic, but the code must still be compiled in advance; there is no way to let the logic (as opposed to the data) be serialized and deserialized at runtime.
Probably this is not the answer you are looking for, but there is no such serialization solution, and possibly there never will be, since (IMHO) it is an anti-pattern to have a dynamic application these days, when we can simply deal with short and simple microservices or even functions.
Reading about using Java Generics in DAO layer, I have a doubt applying this in spring data repositories. I mean, with spring data repositories, you have something like this:
public interface OrderRepository extends CrudRepository<Order, OrderPK> {
}
But if I have another 10 entities, I have to create 10 interfaces like the one above just to get CRUD operations and so on, and I don't think that is very scalable. The Java-generics-and-DAO approach is about creating one interface and one implementation and reusing them for all entities, but with Spring Data repositories I have to create one interface per entity, so ...
You didn't really state a question, so I'll just add one
Is this really true? And if so, why?
and answer it:
Yes, this is (almost) correct. Almost, because you should not create one repository per entity, but one repository per Aggregate Root. See http://static.olivergierke.de/lectures/ddd-and-spring/
Spring Data repositories offer various features for which Spring Data needs to know which entity it is dealing with. For example, query methods need to know the properties of the entity in order to convert the method name into a JPA-based query. So you have to pass that information to Spring Data at some point, and you also have to specify which entities should be considered Aggregate Roots. The way you do both is by specifying the interface.
Do you really need that? Well, if all you want is generic CRUD functionality, you can get that straight out of the box with JPA. But if you want query methods, pagination, simple native queries, and much more, Spring Data is a nice way to avoid lots of boilerplate code.
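To make the trade-off concrete, the "one generic DAO" idea from the question can be sketched in plain Java. This is an illustrative in-memory stand-in (all names invented, not Spring Data API): a single class handles CRUD for any type, but notice it knows nothing about the entity's properties, which is exactly the information Spring Data derives from each per-aggregate interface:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// One generic CRUD implementation reused for every entity type, keyed by
// an id-extractor function supplied at construction time.
class InMemoryCrudRepository<T, ID>
{
    private final Map<ID, T> store = new HashMap<>();
    private final Function<T, ID> idOf;

    InMemoryCrudRepository(Function<T, ID> idOf) { this.idOf = idOf; }

    T save(T entity) { store.put(idOf.apply(entity), entity); return entity; }
    Optional<T> findById(ID id) { return Optional.ofNullable(store.get(id)); }
    long count() { return store.size(); }
    void deleteById(ID id) { store.remove(id); }
}
```

This buys reuse, but nothing here could derive a `findByCustomerName` query from a method name; that is what the per-interface declaration gives Spring Data.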
(Please keep in mind that I'm biased)
I'm having issues trying to get MyBatis and Javers (with Spring) integrated and working. I've followed the instructions at http://javers.org/documentation/spring-integration/: I've set up the aspect, annotated my entity class and registered it with Javers, and annotated the MyBatis interface with @Repository and its appropriate methods with @JaversAuditable, but I still haven't gotten it to work. I've even set breakpoints in the Javers aspect, but nothing triggers.
I've also gone about it the other way, using a MyBatis plugin interceptor, as per http://www.mybatis.org/mybatis-3/configuration.html#plugins (using http://www.mybatis.org/spring/xref-test/org/mybatis/spring/ExecutorInterceptor.html as a basic example for commits). However, while it triggers, it doesn't do what I expected: it is basically just an around-aspect on the commit method, which takes a boolean rather than the entity (or entities) being committed, so there is nothing to pass to Javers. I suppose I could add an interceptor on the update/insert MyBatis methods and store the entities in a ThreadLocal or similar, so that when commit/rollback is called I could pass them to Javers as necessary, but that's messy.
I've got no clue where to go from here, unless someone can see something I've missed with one of those 2 methods.
So in my confusion, I realized that since MyBatis generates the concrete objects for the mapper interfaces, Spring never sees the creation of those objects; it simply has the final object registered as a bean in the context. Thus, Javers never has a chance to process the bean as it is created in order to do any proxying or whatnot as necessary.
So, silly me. I ended up creating a Spring Data @Repository layer that mostly just passes calls through to the mapper. On updates, though, I'm doing some extra bits, for which the DAO shim layer (as I'm calling it) works well.
I have a Spring-based system that uses Hibernate Search 3.4 (on top of Hibernate 3.5.4). Integration tests are managed by Spring, with the @Transactional annotation. At the moment, test data (entities that are to be indexed) is loaded by a Liquibase script; we use its Spring integration. It's very inconvenient to manage.
My new solution is to have test data defined as Spring beans and wire them as Resources, by name. This part works.
I tried to have these beans persisted and indexed in the setUp method of my test cases (and in the test methods themselves), but I failed. They get into the DB fine, but I can't get them indexed. I tried calling index() on FullTextEntityManager (with flushToIndexes), and I tried createIndexer().startAndWait().
What else can I do?
Or may be there is some better option of testing HS?
Thank You in advance
My new solution is to have test data defined as Spring beans and wire them as Resources, by name. This part works.
That sounds like a strange setup for a unit test. To be honest, I am not quite sure how you do this.
In Search itself, an in-memory database (H2) is used together with a Lucene RAM directory. The benefit of such a setup is that it is fast and makes it easy to avoid dependencies between tests.
I tried to have these beans persisted and indexed in the setUp method of my test cases (and in test methods themselves) but I failed. They get into the DB fine but I can't get them indexed.
If automatic indexing is enabled and the persisting of the test data occurs within a transaction, it should work. A common mistake in combination with Spring is to use the wrong transaction manager. The Hibernate Search forum has a lot of threads around this, for example this one: https://forum.hibernate.org/viewtopic.php?f=9&t=998155. Since you are not giving any concrete configuration or code examples, it is hard to give more specific advice.
I tried createIndexer().startAndWait()
That is also a good approach. I would recommend it if you want to insert not just a couple of test entities, but a whole set of data. In that case it can make sense to use a framework like DbUnit to insert the test data and then manually index it; createIndexer().startAndWait() is the right tool for that. Extracting all this loading/persisting/indexing functionality into a common test base class is the way to go. The base class can also be responsible for all the Spring bootstrapping.
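For reference, the mass-indexing call in the Hibernate Search 3.x JPA API looks roughly like this. A sketch only: entityManager is assumed to come from the test's Spring context, and in a test base class this would typically run once after the test data has been inserted:

```java
import javax.persistence.EntityManager;
import org.hibernate.search.jpa.FullTextEntityManager;
import org.hibernate.search.jpa.Search;

// Sketch: wrap the JPA EntityManager and rebuild the whole index so that
// externally inserted test data becomes searchable.
public class IndexRebuilder
{
    public static void reindex(EntityManager entityManager) throws InterruptedException
    {
        FullTextEntityManager ftem = Search.getFullTextEntityManager(entityManager);
        ftem.createIndexer().startAndWait(); // blocks until indexing completes
    }
}
```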
Again, to give more specific feedback you have to refine your question.
I have a completely different approach. When I write any queries, I want to write a complete test suite, but data creation has always been a pain (special mention to when a test customer gets corrupted and your whole test suite breaks).
To solve this I created Random-JPA. It's simple and easy to integrate. The whole idea is you create fresh data and test.
You can find the full documentation here
I have classes for entities like Customer, InternalCustomer, ExternalCustomer (with the appropriate inheritance) generated from an XML schema. I would like to use JPA (suggest a specific implementation in your answer if relevant) to persist objects of these classes, but I can't annotate them since they are generated, and when I change the schema and regenerate, the annotations will be wiped. Can this be done without using annotations, or even a persistence.xml file?
Also, is there a tool to which I can provide the classes (or schema) as input and have it give me the SQL statements to create the DB (or even create it for me)? It would seem that since I have a schema, all the information needed to create the DB should be in there. I am not talking about creating indexes or any tuning of the DB, just creating the right tables, etc.
thanks in advance
You can certainly use JDO in such a situation: dynamically generate the classes, the metadata, and any byte-code enhancement, and then persist at runtime, making use of the class loader in which your classes have been generated and enhanced. As per
http://www.jpox.org/servlet/wiki/pages/viewpage.action?pageId=6619188
JPA doesn't have such a metadata API unfortunately.
--Andy (DataNucleus)