Connecting to multiple databases and selecting tables at runtime - Spring Boot, Java, PostgreSQL

I have a use case where I need to connect to two different databases (Postgres and Oracle). Postgres is already configured with JPA. I need to add one more database (Oracle). In the Oracle database I need to choose tables at runtime for insertion and deletion (since the tables are not fixed). Currently I'm passing the tables in my properties file as a list:
oracle:
  deletion:
    table:
      - tableName: user
        primaryKey: userId
        emailField: emailId
        deleteTableName: user_delete
      - tableName: customer
        primaryKey: customerId
        emailField: emailAddress
        deleteTableName: customer_delete
I've created a bean that reads all these properties and puts them in a list
@Bean("oracleTables")
@ConfigurationProperties("oracle.deletion.table")
List<Table> getAllTables() {
    return new ArrayList<>();
}
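For reference, the Table holder behind this binding is just a plain POJO along these lines (a sketch inferred from the YAML above; setters are required for the configuration binding to work, and getters are used later when looping over the list):

public class Table {
    private String tableName;
    private String primaryKey;
    private String emailField;
    private String deleteTableName;

    // standard getters and setters for all four fields,
    // needed for the @ConfigurationProperties binding
}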
I have a list of email addresses with me. For each of these tables I need to fetch the primary key based on the email address from the parent table (value in tableName) and insert data into the corresponding delete table (value in deleteTableName). Once that is done I need to delete the data from the actual table (value in tableName) based on the email address.
I'm planning to loop through the list of tables I have in my bean and perform fetch, insert and delete.
Sample snippet:
@Autowired
@Qualifier("oracleTables")
List<Table> tables;

public boolean processDelete(List<String> emails) {
    for (Table table : tables) {
        // fetch all the primary keys for the given emails from the main table (value in tableName)
        // insert into the corresponding delete table
        // delete from the main table
    }
    return true;
}
But the question I have is: should I go with JdbcTemplate or JpaRepository/Hibernate? Some help with the implementation, with a small sample or link, would also be appreciated.
The reasons for this question are:
1) Tables in my case are not fixed.
2) I need transaction management to roll back in case of failure while fetching, inserting or deleting.
3) I need to configure two databases.

Should I go with JdbcTemplate or JpaRepository/Hibernate?
Most definitely JdbcTemplate. JPA does not easily allow dynamic tables.
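As a rough illustration of what that can look like, here is a minimal sketch assuming the Table POJO from the question, an Oracle-specific JdbcTemplate bean named oracleJdbcTemplate (shown in the configuration sketch further down), and that each delete table can be filled with a single-column INSERT ... SELECT; the exact SQL and the delete-table layout are assumptions, not taken from the question:

import java.util.List;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OracleDeletionService {

    private final NamedParameterJdbcTemplate jdbc;
    private final List<Table> tables;

    public OracleDeletionService(@Qualifier("oracleJdbcTemplate") JdbcTemplate oracleJdbcTemplate,
                                 @Qualifier("oracleTables") List<Table> tables) {
        // wrap the plain template so the IN (:emails) list can be bound as a named parameter
        this.jdbc = new NamedParameterJdbcTemplate(oracleJdbcTemplate);
        this.tables = tables;
    }

    // runs against the Oracle transaction manager defined further down in this answer,
    // so a failure in any statement rolls back the whole loop
    @Transactional(transactionManager = "oracleTransactionManager")
    public void processDelete(List<String> emails) {
        for (Table table : tables) {
            MapSqlParameterSource params = new MapSqlParameterSource("emails", emails);

            // table and column names come from trusted configuration, so concatenating
            // them into the SQL is acceptable; the email values are bound as parameters
            jdbc.update("INSERT INTO " + table.getDeleteTableName()
                    + " SELECT " + table.getPrimaryKey()
                    + " FROM " + table.getTableName()
                    + " WHERE " + table.getEmailField() + " IN (:emails)", params);

            jdbc.update("DELETE FROM " + table.getTableName()
                    + " WHERE " + table.getEmailField() + " IN (:emails)", params);
        }
    }
}

If the primary keys are also needed in Java before the insert, jdbc.queryForList(select, params, Long.class) with the same SELECT and parameters returns them as a list.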
I need transaction management to roll back in case of failure while fetching, inserting or deleting
If you need transactions, you'll also need to define two separate transaction managers:
@Bean
public TransactionManager oracleTransactionManager() {
    var result = new DataSourceTransactionManager();
    ...
    result.setDataSource(oracleDataSource());
    return result;
}

@Bean
public TransactionManager postgresTransactionManager() {
    ...
}
Then, if you want declarative transactions, you need to specify the manager with which to run a given method:
@Transactional(transactionManager = "oracleTransactionManager")
public void doWorkInOracleDb() {
    ...
}
I need to configure two databases
Just configure two separate DataSource beans. Of course, you will actually need two separate JdbcTemplate beans as well.
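A minimal configuration sketch, assuming Spring Boot; the spring.datasource.postgres and spring.datasource.oracle prefixes are illustrative names for the two sets of connection properties, not Boot defaults:

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.TransactionManager;

@Configuration
public class DataSourceConfig {

    @Bean
    @Primary
    @ConfigurationProperties("spring.datasource.postgres")
    public DataSource postgresDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties("spring.datasource.oracle")
    public DataSource oracleDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public JdbcTemplate postgresJdbcTemplate(@Qualifier("postgresDataSource") DataSource ds) {
        return new JdbcTemplate(ds);
    }

    @Bean
    public JdbcTemplate oracleJdbcTemplate(@Qualifier("oracleDataSource") DataSource ds) {
        return new JdbcTemplate(ds);
    }

    @Bean
    public TransactionManager postgresTransactionManager(@Qualifier("postgresDataSource") DataSource ds) {
        return new DataSourceTransactionManager(ds);
    }

    @Bean
    public TransactionManager oracleTransactionManager(@Qualifier("oracleDataSource") DataSource ds) {
        return new DataSourceTransactionManager(ds);
    }
}

One caveat with binding DataSourceBuilder this way: with the default Hikari pool the URL property usually has to be called jdbc-url rather than url.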

Related

Delete Records from multiple tables in Spring Batch

I have a SQL script which performs delete operations on multiple tables based on, say, employee ids:
DELETE FROM EMP_ADDRESS where EMP_ID in (EMP_IDS);
DELETE FROM EMP_DETAILS where EMP_ID in (EMP_IDS);
DELETE FROM EMPLOYEE where EMP_ID in (EMP_IDS);
Is there a way to call the SQL script from Spring Batch by passing the employee ids? I tried an alternative approach where, in the writer, I get the ids and delete from the tables as below:
public class DeleteEmployeeData implements ItemWriter<EmployeeData> {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Override
    public void write(List<? extends EmployeeData> items) throws Exception {
        for (EmployeeData item : items) {
            jdbcTemplate.update(SQLConstants.DELETE_EMP_ADDRESS, item.getEmployeeId());
            jdbcTemplate.update(SQLConstants.DELETE_EMP_DETAILS, item.getEmployeeId());
            jdbcTemplate.update(SQLConstants.DELETE_EMPLOYEES, item.getEmployeeId());
        }
    }
}
This works, but I wanted to know if there is a better approach.
Your current approach with a chunk-oriented step looks good to me:
The reader reads IDs
A processor filters IDs
And a composite writer with two delegates: one to write XML and another one to delete items (a rough wiring sketch follows below)
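A rough wiring sketch for that composite writer, assuming Spring Batch's CompositeItemWriter, the DeleteEmployeeData writer from the question and some XML writer registered under an illustrative bean name employeeXmlWriter:

import java.util.Arrays;
import java.util.List;

import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.support.CompositeItemWriter;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EmployeePurgeStepConfig {

    @Bean
    public CompositeItemWriter<EmployeeData> employeeWriter(
            @Qualifier("employeeXmlWriter") ItemWriter<EmployeeData> xmlWriter,
            DeleteEmployeeData deleteWriter) {
        // delegates are called in order for every chunk: archive to XML first, then delete
        List<ItemWriter<? super EmployeeData>> delegates = Arrays.asList(xmlWriter, deleteWriter);
        CompositeItemWriter<EmployeeData> writer = new CompositeItemWriter<>();
        writer.setDelegates(delegates);
        return writer;
    }
}

The reader and processor stay exactly as they are; only the writer bean of the step changes.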

How can an id be found in a transaction-scoped persistence context if it's not in the database?

An example from Pro JPA:
@Stateless
public class AuditServiceBean implements AuditService {

    @PersistenceContext(unitName = "EmployeeService")
    EntityManager em;

    public void logTransaction(int empId, String action) {
        // verify employee number is valid
        if (em.find(Employee.class, empId) == null) {
            throw new IllegalArgumentException("Unknown employee id");
        }
        LogRecord lr = new LogRecord(empId, action);
        em.persist(lr);
    }
}

@Stateless
public class EmployeeServiceBean implements EmployeeService {

    @PersistenceContext(unitName = "EmployeeService")
    EntityManager em;

    @EJB
    AuditService audit;

    public void createEmployee(Employee emp) {
        em.persist(emp);
        audit.logTransaction(emp.getId(), "created employee");
    }
    // ...
}
And the text:
Even though the newly created Employee is not yet in the database, the
audit bean can find the entity and verify that it exists. This works
because the two beans are actually sharing the same persistence
context.
As far as I understand, the id is generated by the database. So how can emp.getId() be passed into audit.logTransaction() if the transaction has not been committed yet and the id has not been generated yet?
It depends on the GeneratedValue strategy. If you use something like the SEQUENCE or TABLE strategy, the persistence provider usually assigns the id to the entity (it keeps some reserved ids based on the allocation size) immediately after persist() is called.
But if you use the IDENTITY strategy, different providers may behave differently. For example, with the IDENTITY strategy Hibernate performs the insert statement immediately and fills in the id field of the entity.
https://thoughts-on-java.org/jpa-generate-primary-keys/ says:
Hibernate requires a primary key value for each managed entity and
therefore has to perform the insert statement immediately.
But in EclipseLink, if you use the IDENTITY strategy, the id will only be assigned after flushing. So if you set the flush mode to AUTO (or call the flush method) you will have the id after persist.
https://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Entities/Ids/GeneratedValue says:
There is a difference between using IDENTITY and other id generation
strategies: the identifier will not be accessible until after the
insert has occurred – it is the action of inserting that caused the
identifier generation. Due to the fact that insertion of entities is
most often deferred until the commit time, the identifier would not be
available until after the transaction has been flushed or committed.
In the implementation, UnitOfWorkChangeSet has a collection for new entities, which will have no real identity until inserted:
// This collection holds the new objects which will have no real identity until inserted.
protected Map<Class, Map<ObjectChangeSet, ObjectChangeSet>> newObjectChangeSets;
JPA - Returning an auto generated id after persist() is a related question about EclipseLink.
There are also good points at https://forum.hibernate.org/viewtopic.php?p=2384011#p2384011:
I am basically referring to some remarks in Java Persistence with
Hibernate. Hibernate's API guarantees that after a call to save() the
entity has an assigned database identifier. Depending on the id
generator type this means that Hibernate might have to issue an INSERT
statement before flush() or commit() is called. This can cause
problems at rollback time. There is a discussion about this on page
490 of Java Persistence with Hibernate.
In JPA persist() does not return a database identifier. For that
reason one could imagine that an implementation holds back the
generation of the identifier until flush or commit time.
Your approach might work fine for now, but you could run into troubles
when changing the id generator or JPA implementation (switching from
Hibernate to something else).
Maybe this is no issue for you, but I just thought I bring it up.
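To make the timing difference described above concrete, here is a small sketch (entity and generator names are invented for illustration) of the two mappings being contrasted:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
class SequencedLogRecord {
    // With SEQUENCE, the provider can pre-fetch values (allocationSize at a time),
    // so the id is normally assigned as soon as persist() is called.
    @Id
    @SequenceGenerator(name = "log_seq", sequenceName = "LOG_SEQ", allocationSize = 50)
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "log_seq")
    private Long id;
}

@Entity
class IdentityLogRecord {
    // With IDENTITY, the value only exists once the INSERT has run: Hibernate issues the
    // insert immediately on persist(), while EclipseLink defers it until flush or commit.
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
}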

JpaRepository save method returns a different id from the one inserted into the database

I'm using Spring Data (JpaRepository) + an Oracle 11g database.
Here's the code of my JUnit test:
@Test
public void testAjoutUtilisateur() {
    Utilisateur utilisateur = new Utilisateur();
    (...)
    utilisateur = repository.save(utilisateur);
    Utilisateur dbutilisateur = repository.findOne(utilisateur.getIdutilisateur());
    assertNotNull(dbutilisateur);
When I debug, I find that the "utilisateur" object returned by the repository.save method has an id like "2100", while the corresponding inserted row in the database has an id like "43".
I have an Oracle database with a sequence and a trigger to auto-increment the id of my "Utilisateur" table.
Here is the id definition in my Utilisateur entity:
@Entity
@NamedQuery(name="Utilisateur.findAll", query="SELECT u FROM Utilisateur u")
@SequenceGenerator(sequenceName="ID_UTILISATEUR_SEQ", name="ID_UTILISATEUR_SEQ")
public class Utilisateur implements Serializable {
    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="ID_UTILISATEUR_SEQ")
    private Long idutilisateur;
Where is the problem? Is it within the save method?
Thank you.
Edit:
I figured out that the problem was already solved by @jhadesdev's solution, and the data rows I was talking about had been inserted while the triggers were active.
Finally, I have to mention that by default the JUnit test seems not to insert data into the database (it inserts, then rolls back). To disable this behaviour we have to add the @TransactionConfiguration(defaultRollback=false) annotation to the test class.
For example (in my case):
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:context/dao-context.xml" })
@TransactionConfiguration(defaultRollback=false)
@Transactional
public class UtilisateurRepositoryTest {
Hope it can help someone.
The problem is that two separate mechanisms are in place to generate the key:
one at the Hibernate level, which calls a sequence and uses the value to populate the id column, sending it to the database as the insert key;
and another mechanism at the database level that Hibernate does not know about: the column is populated via a trigger.
Hibernate thinks that the insert was made with the value of the sequence, but in the database something else occurred. The simplest solution would probably be to remove the trigger mechanism and let Hibernate populate the key based on the sequence only.
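If the trigger is removed, the mapping only needs the sequence; a sketch of the usual setup (the allocationSize shown is an assumption and must match the sequence's INCREMENT BY, otherwise the ids drift apart again):

import java.io.Serializable;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
@SequenceGenerator(name = "ID_UTILISATEUR_SEQ", sequenceName = "ID_UTILISATEUR_SEQ", allocationSize = 1)
public class Utilisateur implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "ID_UTILISATEUR_SEQ")
    private Long idutilisateur;

    // ...
}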

EclipseLink “drop-and-create-tables”: no added column in inherited class, declared in super class with InheritanceType.TABLE_PER_CLASS

Into an already existing table-per-class inheritance structure I am adding a new column, type (I cut some of the code):
@Entity
@Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
public class Account {
    ......
    @Column // already existing column
    private String name; // get/set also applied

    @Column(length=20) // newly added column
    @Enumerated(EnumType.STRING) // get/set also applied
    private AccountType type;
    ..........
}

@Entity
public class User extends Account {
    ................ // some other already existing fields
}
In my persistence.xml file I am using the following DDL generation strategy:
<property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
When DDL generation runs, the newly added column type is successfully created in the Account table, BUT in the User table there is no such column at all (the strategy is TABLE_PER_CLASS).
I fixed it by dropping the database and creating it again; after that the current DDL generation was applied and type was also added as a column in User. Has anyone run into this kind of issue? Dropping and recreating the DB worked for me, but I am not sure that should be the strategy in similar cases in the future, especially for a production DB.
Thanks,
Simeon Angelov
DDL generation is for development, not production. The problem you are seeing is that when the table already exists, it cannot be re-created with the new field. Drop-and-create, or the "create-or-extend-tables" feature, will work if you are only adding to the tables, as described here: http://wiki.eclipse.org/EclipseLink/DesignDocs/368365
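For example, in persistence.xml the relevant property (value taken from the page linked above) would be:
<property name="eclipselink.ddl-generation" value="create-or-extend-tables"/>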

JPA Relation or Secondary Table

I am reengineering one of my projects with JPA; it was initially built on iBatis.
public class Entity {
    // ids and other stuff
    String locale;
    String text;
}
I was storing locale and text in a separate table, which can be achieved via a secondary table in JPA.
I am not able to create a secondary table with its own id besides the join id. How can I achieve it? If it is possible, that raises the following question:
How would I retrieve it back if I create an entity object with the locale set to the user's settings?
See http://en.wikibooks.org/wiki/Java_Persistence/Tables#Multiple_tables
Otherwise include your exact schema, model and issue.
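If the goal is only to keep locale and text in a table of their own that shares the owning entity's id, a sketch with @SecondaryTable looks like the following (table and column names are invented for illustration). Note that a secondary table cannot carry its own independent primary key, which is why a separate entity mapped with a @OneToMany is the alternative when independent ids are required:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.PrimaryKeyJoinColumn;
import javax.persistence.SecondaryTable;

@Entity
@SecondaryTable(name = "ENTITY_TEXT",
        pkJoinColumns = @PrimaryKeyJoinColumn(name = "ENTITY_ID"))
public class LocalizedEntity {

    @Id
    @GeneratedValue
    private Long id;

    // stored in the secondary table ENTITY_TEXT, joined on ENTITY_ID
    @Column(table = "ENTITY_TEXT")
    private String locale;

    @Column(table = "ENTITY_TEXT")
    private String text;
}

Retrieving by locale is then an ordinary JPQL query, e.g. SELECT e FROM LocalizedEntity e WHERE e.locale = :locale.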