Mark a field as transient for one data source but not another in Spring Data JPA

I've got an app that sends data to SQL Server, and we'd like to expand it to also write to another data source (possibly Amazon S3, but possibly a regular database). The issue is, this new database only needs a subset of the fields in my entity class.
Is there a way that I can mark a field as being transient for one datasource but not another? Or should I be doing something on the Repository level? I'm using Spring Data JPA, and had been using a Spring-generated JpaRepository.
public interface JobRepository extends JpaRepository<MyPojo, Long>{}

It is possible to create two different repository interfaces for two different data sources. In this case, you will need to create two different entities, one for each data source, and bind them in your services.
For Data Source A: AEntity, ARepository
For Data Source B: BEntity, BRepository
And in your services, you create a method:
public AEntity createAEntityFromBEntity(BEntity bEntity);
To be able to do this, you will need to mark one of your data sources as @Primary. Please check this link to see how to create two different data source connections with configuration details.
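A minimal sketch of that layout, assuming the hypothetical names from above (each type would live in its own file; getters/setters are omitted):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Service;

@Entity
public class AEntity {
    @Id
    private Long id;
    private String name;
    private String details; // full field set, persisted to data source A
}

@Entity
public class BEntity {
    @Id
    private Long id;
    private String name; // only the subset of fields data source B needs
}

public interface ARepository extends JpaRepository<AEntity, Long> {}
public interface BRepository extends JpaRepository<BEntity, Long> {}

@Service
public class JobService {
    // binds the two models by copying the shared subset of fields
    public AEntity createAEntityFromBEntity(BEntity bEntity) {
        AEntity aEntity = new AEntity();
        // copy the shared fields (name, etc.) from bEntity to aEntity here
        return aEntity;
    }
}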

Related

Can I use Spring Data JPA @Entity and Spring Data GemFire @Region together on the same POJO?

When I try to use the same POJO for Spring Data JPA integration with Spring Data GemFire, the repository always accesses the database with the POJO. But I want the repository to access data from GemFire, even though I added the annotations @EnableGemfireRepositories and @EnableEntityDefinedRegions.
I think it is because I added @Entity and @Region together on the same POJO class.
Can this be fixed? Do I need to separate it into 2 POJO classes, one for the database and one for GemFire?
Thanks
No, you do not need 2 separate POJOs. However, you do need 2 separate Repository interface definitions, 1 for JPA and a 2nd for GemFire. I have an example of such an implementation here, in the repository-example.
In the contacts-core module, I have an example.app.model.Contact class that is annotated with both JPA's @Entity annotation as well as SDG's @Region annotation, in addition to other annotations (e.g. Jackson).
I then create 2 Repository interfaces in the repository-example module, 1 for JPA, which extends o.s.d.jpa.repository.JpaRepository, and another for GemFire, which extends o.s.d.gemfire.repository.GemfireRepository. Notice too that these Repositories are separated by package (i.e. example.app.repo.jpa vs. example.app.repo.gemfire) in my example.
Keep in mind, Spring Data enforces a strict policy mode which prevents ambiguity if the application Repository definition (e.g. ContactRepository) is generic, meaning that the interface extends 1 of the common Spring Data interfaces: o.s.d.repository.Repository, o.s.d.repository.CrudRepository or perhaps o.s.d.repository.PagingAndSortingRepository, and the interface resides in the same package as the "scan" for both JPA and GemFire. This is the same for any Spring Data module that supports the Repository abstraction, including, but not limited to, MongoDB and Redis.
You must be very explicit in your declarations and intent. While it is generally not a requirement to extend store-specific Repository interface definitions (e.g. o.s.d.gemfire.repository.GemfireRepository), and rather extend a common interface (e.g. o.s.d.repository.CrudRepository), it is definitely advisable to put your different, per store Repository definitions in a separate package and configure the scan accordingly. This is good practice to limit the scan in the first place.
Some users are tempted to want a single, "reusable" Repository interface definition per application domain model type (e.g. Contact) for all the stores they persist the POJO to. For example, a single ContactRepository for both JPA and GemFire. This is ill-advised.
This stems from the fact that while most stores support basic CRUD and simple queries (e.g. findById(..)), not all stores are equal in their query capabilities (e.g. JOINs) or features (e.g. paging), so be careful. For example, SDG does not yet support paging.
So the point is, use 1 domain model type, but define a Repository per store clearly separated by package. Then you can configure the Spring Data Repository infrastructure accordingly. For instance, for JPA I have a configuration which points to the JPA-based ContactRepository using the ContactRepository class (which is type-safe and better than specifying the package by name using the basePackages attribute). Then, I do the same for the GemFire-based ContactRepository here.
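A minimal sketch of that type-safe scan configuration, assuming the package layout from the example (the @Configuration class names here are illustrative):

import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.repository.config.EnableGemfireRepositories;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;

@Configuration
// scan only the package containing the JPA-based ContactRepository
@EnableJpaRepositories(basePackageClasses = example.app.repo.jpa.ContactRepository.class)
class JpaRepositoryConfig { }

@Configuration
// scan only the package containing the GemFire-based ContactRepository
@EnableGemfireRepositories(basePackageClasses = example.app.repo.gemfire.ContactRepository.class)
class GemFireRepositoryConfig { }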
By following this recipe, then all is well and then you can inject the appropriate Repository (by type) into the service class that requires it. If you have a service class that requires both Repositories, then you must inject them appropriately, for example.
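A hedged sketch of such a service, injecting each repository by its fully-qualified type (the service class name is hypothetical):

import org.springframework.stereotype.Service;

@Service
class ContactService {

    private final example.app.repo.jpa.ContactRepository jpaRepository;
    private final example.app.repo.gemfire.ContactRepository gemfireRepository;

    // constructor injection disambiguates the two Repositories by type
    ContactService(example.app.repo.jpa.ContactRepository jpaRepository,
                   example.app.repo.gemfire.ContactRepository gemfireRepository) {
        this.jpaRepository = jpaRepository;
        this.gemfireRepository = gemfireRepository;
    }
}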
Hope this helps!

Is it good practice to use AccessBean or SQL to fetch data from OOTB tables in IBM WCS

I want to get data from multiple OOTB WCS tables for which there is no OOTB REST available. I am using multiple access beans in a data bean to get data from the tables. Is this good practice, or should we use ServerJDBCHelperAccessBean to make a single query with joins to hit the database? I understand that AccessBeans are cached, but there are also techniques to cache SQL.
Is there any other reason we should use AccessBean instead of ServerJDBCHelperAccessBean when fetching data from multiple tables? Or should we use ServerJDBCHelperAccessBean and get the data in a single SQL query with joins?
And which of the above approaches will be more expensive?
Thanks
Ankit
There is no hard and fast rule for choosing between the two methods above for database interactions. The developer has to make a logical choice.
AccessBeans
Caching is one of the advantages of access beans. It is a good performance improvement, achieved by caching the home objects, since the lookup of home objects is costly. Another point in favour of access beans is the handling of optimistic updates. Your case is to read data (not to update/insert), so you are safe here.
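For illustration, a hedged sketch of the usual generated access-bean pattern (OrderAccessBean and its setInitKey_*/refreshCopyHelper() methods follow the standard WCS convention, but verify the exact names against your version's Javadoc):

import com.ibm.commerce.order.objects.OrderAccessBean;

OrderAccessBean orderAb = new OrderAccessBean();
orderAb.setInitKey_orderId("10001"); // primary key for the finder
orderAb.refreshCopyHelper();         // fetches the row via the (cached) home object
// read columns from the locally cached copy
String total = orderAb.getTotalProduct();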
Session Bean
Like access beans, session beans are another way of reading data from the DB when you want to get data from multiple tables. The session bean must extend the com.ibm.commerce.base.helpers.BaseJDBCHelper class:
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.ejb.SessionBean;

public class TestSessionBean extends com.ibm.commerce.base.helpers.BaseJDBCHelper
        implements SessionBean {

    // EJB lifecycle methods (ejbActivate, ejbPassivate, ejbRemove,
    // setSessionContext) omitted for brevity.

    public Object fetchResults() throws javax.naming.NamingException, SQLException {
        try {
            // get a connection from the WebSphere Commerce data source
            makeConnection();
            PreparedStatement prepStatement = getPreparedStatement("sql to execute");
            ResultSet rs = executeQuery(prepStatement, false);
            // read the rows you need from the ResultSet before the
            // connection is closed in the finally block
            return rs;
        } finally {
            closeConnection();
        }
    }
}
Using ServerJDBCHelperAccessBean
This is used when you have to make a DB transaction outside of EJBs. Keep in mind that it is highly recommended to use EJBs for updates/deletes, to keep overall integrity.
In your case, as far as I understand, it is a select involving multiple tables, and you do not need the data to be strictly in sync (i.e. you are OK with missing a record that was updated nanoseconds earlier). Hence you can go ahead with the second or third approach.
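A rough sketch of the single-query alternative (treat the executeQuery(..) signature as an assumption; ServerJDBCHelperAccessBean's exact methods vary by WCS version, so check your Javadoc):

import java.util.Vector;
import com.ibm.commerce.base.objects.ServerJDBCHelperAccessBean;

ServerJDBCHelperAccessBean jdbcHelper = new ServerJDBCHelperAccessBean();
// one join query instead of several per-table access-bean lookups
Vector rows = jdbcHelper.executeQuery(
    "SELECT o.ORDERS_ID, m.MEMBER_ID FROM ORDERS o, MEMBER m WHERE o.MEMBER_ID = m.MEMBER_ID");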
A good reference:
http://deepakpadmakumar.blogspot.com.au/2012/05/session-beans-and-entity-beans-in-wcs.html

Creating Datasources Dynamically in a Spring Boot OpenJPA Application (Implementing Multitenancy with OpenJPA)

I am creating a Spring Boot application with OpenJPA.
My requirement is that I need to connect to multiple datasources dynamically and the datasource credentials are obtained at runtime by calling some rest-endpoints.
Here is the controller class:
@RestController
public class StationController {

    @Autowired
    BasicDataSource dataSource;
    // ...
}
I have a service which returns the JDBC URL depending on the customer name:
public String getDSInfo(String customername) {
    // code to get the datasource info (JDBC URL)
}
My questions are:
Is there a way in which I can create datasources at runtime, by getting datasource credentials from some other service (which takes the customer id and returns the customer-specific datasource)?
Since my application is a web-based application, many customers will be accessing it at the same time, so how do I create and handle so many different datasources?
NOTE:
The code will get information about the customer-specific data source only by calling some service at runtime, so I cannot hardcode the datasource credentials in the XML configuration file.
I found some implementations with Hibernate, but I am using Spring Boot with OpenJPA, so I need OpenJPA-specific help.
It sounds like you want a multi-tenancy solution.
Datasources are easy to create programmatically: just use a DataSourceBuilder with your connection details pulled in from a central source (e.g. a central config database or Spring Config Server).
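For example, a minimal sketch reusing the question's getDSInfo(..) lookup (DataSourceBuilder lives in org.springframework.boot.jdbc in Spring Boot 2; older versions have it under org.springframework.boot.autoconfigure.jdbc):

import javax.sql.DataSource;
import org.springframework.boot.jdbc.DataSourceBuilder;

public DataSource createDataSourceFor(String customerName) {
    String jdbcUrl = getDSInfo(customerName); // fetched at runtime, not hardcoded
    return DataSourceBuilder.create()
            .url(jdbcUrl)
            .username("...") // credentials likewise pulled from the central source
            .password("...")
            .build();
}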
Then you'll need to look at a multi-tenancy framework to tie the datasources back to clients.
See here:
https://www.youtube.com/watch?v=nBSHiUTHjWA
and here
https://dzone.com/articles/multi-tenancy-using-jpa-spring-and-hibernate-part
The video is a long watch but a good one. Basically, from memory: you have a map of customer data sources that allows an entityManager to pick up a datasource from the map, using a thread-scoped custom Spring scope of "customer" that is set when a user for a particular customer logs into your app.
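One common way to implement that map is Spring's AbstractRoutingDataSource; the sketch below swaps the video's custom "customer" scope for a simpler thread-local holder (TenantContext is hypothetical, not a Spring class):

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class TenantContext {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();
    public static void set(String customer) { CURRENT.set(customer); }
    public static String get() { return CURRENT.get(); }
}

public class CustomerRoutingDataSource extends AbstractRoutingDataSource {
    // Returns the key used to pick a DataSource from the map registered
    // via setTargetDataSources(Map); set the tenant on login, e.g.
    // TenantContext.set("customerA").
    @Override
    protected Object determineCurrentLookupKey() {
        return TenantContext.get();
    }
}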

Trying to use the Crate.io NoSQL database with an existing Spring Data / MySQL project

I'm attempting to add Crate.IO capability to an existing Spring Data/Eclipselink/MySql web application. For this specific use case, we want to persist data to both MySql AND Crate (for evaluation purposes) in the most painless way possible. I'm using the Spring-Data-Crate project in order to be able to use Spring Data Repositories with Crate.
I've been able to set up a separate Crate-specific entity manager with a filter to only utilize repos that implement CrateRepository. The problem I'm having is determining how to use the existing Spring Data/MySql entity classes with Crate (or derive from them).
1) If I annotate an existing Spring Data @Entity class with the Spring-Data-Crate @Table annotation, the mapping to the Crate DB will fail because EclipseLink/JPA adds hidden persistence fields, prefixed with an underscore, to entity objects, which is apparently not allowed by the spring-data-crate adapter.
2) I tried to use entity inheritance, with a base class that both the MySql and Crate entities extend, with only the MySql entity having the Spring Data @Entity annotation. Unfortunately, this causes Spring Data to lose visibility of the base class fields unless the base class is annotated with @MappedSuperclass. But adding this annotation reintroduces the hidden "_"-prefixed persistence properties to the derived Crate entity.
3) I could use separate entities entirely and have them implement a common interface, but I can't assign the interface as the type of the Spring Data Crate repository.
... Not sure where to go from here
Spring Data Crate adapter project - https://github.com/KPTechnologyLab/spring-data-crate
Spring Data Crate Tutorial - https://crate.io/a/using-sprint-data-crate-with-your-java-rest-application/
I'm Johannes from Crate.
We didn't test the use of Spring Data Crate in that manner, so we can't say whether this should or shouldn't work.
Sorry, Johannes

JPA lazy loading strategies for remote client using EclipseLink

1. Question
What are the known strategies and solutions for LAZY loading from the client side?
I was checking out http://wiki.eclipse.org/Introduction_to_EclipseLink_Sessions_(ELUG)#Remote_Sessions but I'm not sure if this is a solution to my problem, or how to use it.
2. Use-case
I'm developing a three tier application where my presentation layer (Eclipse RCP) is a remote client over network:
[ Eclipse RCP ] <-----(RMI)-----> [ [EJB 3] [JPA 2] [Mysql] ]
Now, I use JPA entities as my domain model, which I want to use in my client as well. I get the entities from Session beans over the network.
The problem happens when my JPA entities have LAZY fields. After serialization, and especially because my JPA provider (EclipseLink) is on the other side of the network, I need a strategy to load those LAZY fields from the client.
I'm going to have many entities: 30-40 maybe. A typical scenario would be when the user sees a list of SomethingModel objects which have many List fields, but those don't need to be shown in the list view, only when she wants to change a specific element.
3. Possible Solution
I came up with one solution, e.g.: I create proxy classes for my JPA entities on the client side. When I need a collection field from my domain model, the proxy class calls the remote EJB to populate the field.
class CarModelClient {

    CarModel model;

    public String getColor() {
        return model.getColor();
    }

    public List<Wheels> getWheels() {
        // populate the lazy field on demand via the remote EJB facade
        CarModelFacadeRemote carFacade = // get my remote EJB
        model.setWheels(carFacade.getWheels(model.getId()));
        return model.getWheels();
    }
}
Well, similar to that.
Thank you for your answers.
Don't try to be too smart and pretend that the client is on the server and the entity manager is always open. Consider the domain objects, at the client side, exactly as you would consider DTOs or JSON objects: objects containing some information coming from the server, serialized over the wire.
Document the service methods called from the client (the facade methods) to specify which associations are initialized and which are not in the returned entities. If you are on some "list" screen and want to see the detailed view of one of the elements of the list, for example, call another service which loads the entity again from the database (and thus gets fresh results), with probably other associations initialized in order to display more details about the entity.
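As a hedged illustration, reusing the question's hypothetical CarModel/wheels model, such a detail-view service could reload the entity with the needed association fetched eagerly:

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class CarModelFacade implements CarModelFacadeRemote {

    @PersistenceContext
    private EntityManager em;

    // JOIN FETCH initializes the lazy collection before serialization, so the
    // detached entity sent back over RMI is complete for the detail view.
    public CarModel findWithWheels(Long id) {
        return em.createQuery(
                "SELECT c FROM CarModel c LEFT JOIN FETCH c.wheels WHERE c.id = :id",
                CarModel.class)
            .setParameter("id", id)
            .getSingleResult();
    }
}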
Trying to dynamically initialize the lazy-loaded associations at the client just doesn't work: it's complex and inefficient, results in an obsolete and incoherent graph on the client, and doesn't take transactional isolation into account.