Using PostgreSQL, Hibernate, and JPA, how to guarantee an instant update of a DB view that takes values from multiple tables when the tables are updated?

I'm using a DB view that pulls values from multiple tables in a PostgreSQL database, mapped by Hibernate and managed by JPA.
I want to join against the view right after updating a value in a table column that the view reads from.
My question is: how can I guarantee that the view is updated instantly? And does that only happen after Hibernate finishes the transaction?
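To illustrate, the flow I'm after looks roughly like the sketch below (the Order entity, the OrderSummaryView mapping of the view, and the field names are all made up); I'm assuming the view is a plain, non-materialized PostgreSQL view mapped to a read-only entity:

// Order is a regular table; OrderSummaryView maps a Postgres view that joins
// Order with other tables.
@Transactional
public OrderSummaryView updateAndReadBack(EntityManager em, long orderId) {
    Order order = em.find(Order.class, orderId);
    order.setStatus("SHIPPED");   // change a column the view reads from

    em.flush();                   // push the pending UPDATE to the DB now,
                                  // inside the same transaction

    // A non-materialized view is evaluated at query time, so this SELECT should
    // already see the flushed change within the same transaction.
    return em.createQuery(
            "select v from OrderSummaryView v where v.orderId = :id",
            OrderSummaryView.class)
        .setParameter("id", orderId)
        .getSingleResult();
}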

Related

Slow insert when using @SecondaryTable with Hibernate

When I use a secondary table, saving the entities takes twice as long as it does without a secondary table.
However, the inserts into the secondary table (PostgreSQL) take less than 1 ms according to the Postgres logs. So I guess it's something in Hibernate itself. Is there any known performance issue with secondary tables in Hibernate? I'm using Hibernate 2.1.
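For context, the kind of mapping in question looks roughly like the sketch below (entity and table names are made up, and it is shown with JPA annotations for brevity). With a secondary table, every save issues one INSERT per table, so there are two statements per entity rather than one:

// Hypothetical entity whose columns are split across two tables.
@Entity
@Table(name = "patient")
@SecondaryTable(name = "patient_details",
        pkJoinColumns = @PrimaryKeyJoinColumn(name = "patient_id"))
public class Patient {

    @Id
    @GeneratedValue
    private Long id;

    private String name;                      // stored in "patient"

    @Column(table = "patient_details")
    private String notes;                     // stored in "patient_details"

    // getters and setters omitted
}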

Can Spring-JPA work with Postgres partitioning?

We have a Spring Boot project that uses Spring-JPA for data access. We have a couple of tables where we create/update rows once (or a few times, all within minutes). We don't update rows that are older than a day. These tables (like an audit table) can get very large and we want to use Postgres' table partitioning features to help break up the data by month, so the main table always holds the current calendar month's data, but if a query requires retrieval from previous months it would somehow read it from the other partitions.
Two questions:
1) Is this a good idea for archiving older data but still leave it query-able?
2) Does Spring-JPA work with partitioned tables? Or do we have to figure out how to break up the query, do native queries, and concatenate the result set?
Thanks.
I have been working with Postgres partitioning with Hibernate & Spring JPA for some time, so I think I can try to answer your questions.
1) Is this a good idea for archiving older data but still leave it query-able?
If you are applying indexes and not re-indexing the tables frequently, then partitioning the data may result in faster queries.
You can also use Postgres' clustered index feature to fetch the data faster.
Because the tables with older data are not going to be updated, a clustered index will improve performance.
2) Does Spring-JPA work with partitioned tables? Or do we have to figure out how to break up the query, do native queries, and concatenate the result set?
Spring JPA works out of the box with partitioned tables. It retrieves the data from the master as well as the child tables and returns the concatenated result set.
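Nothing special is needed on the read side; a minimal sketch, assuming a hypothetical audit entity mapped to the master (parent) table:

// Hypothetical entity mapped to the master table; Postgres routes the query
// to the child partitions.
@Entity
@Table(name = "audit_log")
public class AuditLog {

    @Id
    private Long id;

    private LocalDateTime createdAt;

    private String payload;

    // getters and setters omitted
}

// A plain Spring Data repository: the generated SELECT runs against audit_log
// and the result set already contains the rows from all partitions.
public interface AuditLogRepository extends JpaRepository<AuditLog, Long> {
    List<AuditLog> findByCreatedAtBetween(LocalDateTime from, LocalDateTime to);
}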
Note: the issue with partitioned tables
The only issue you will face with a partitioned table is insertion.
Let me explain: when you partition a table, you create a trigger on the master table, and that trigger returns NULL. This is the root of the insertion issue with partitioned tables when using Spring JPA / Hibernate.
When you try to insert a row using Spring JPA or Hibernate, you will see the error below:
Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
To overcome this issue you need to override the implementation of the batching batcher.
In Hibernate you can provide a custom batcher factory implementation using the configuration below:
hibernate.jdbc.factory_class=path.to.my.batcher.factory.implementation
In Spring JPA you can achieve the same with a custom batch builder implementation using the configuration below:
hibernate.jdbc.batch.builder=path.to.my.batch.builder.implementation
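A rough skeleton of such a builder is shown below, assuming Hibernate 5.x internals (BatchBuilderImpl, BatchKey, JdbcCoordinator). The CustomBatchingBatch class is hypothetical: it would be a copy of Hibernate's BatchingBatch with the row-count verification relaxed, as in the referenced answer below.

import org.hibernate.engine.jdbc.batch.internal.BatchBuilderImpl;
import org.hibernate.engine.jdbc.batch.spi.Batch;
import org.hibernate.engine.jdbc.batch.spi.BatchKey;
import org.hibernate.engine.jdbc.spi.JdbcCoordinator;

// Registered via hibernate.jdbc.batch.builder=path.to.CustomBatchBuilder
public class CustomBatchBuilder extends BatchBuilderImpl {

    @Override
    public Batch buildBatch(BatchKey key, JdbcCoordinator jdbcCoordinator) {
        // CustomBatchingBatch (hypothetical) accepts the row count of 0 that is
        // reported when the partition trigger returns NULL, instead of throwing
        // "Batch update returned unexpected row count from update".
        return new CustomBatchingBatch(key, jdbcCoordinator, 50);
    }
}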
References:
Custom Batch Builder/Batch in Spring-JPA
Demo Application
In addition to @Anil Agrawal's answer:
If you are using Spring Boot 2 then you need to define the custom batcher using the property:
spring.jpa.properties.hibernate.jdbc.batch.builder=net.xyz.jdbc.CustomBatchBuilder
You do not have to break down the JDBC query with Postgres 11+.
If you execute a select on the main table with plain JDBC, the DB returns the aggregated results from the partitioned tables.
In other words, the work is done by the Postgres DB, so Spring JPA will simply get the result and map it to objects as if there were no partitioning.
For inserts to work in a partitioned table you need to make sure that your partitions are already created; I don't think Spring Data will create them for you.
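For example, something has to create next month's partition ahead of time. A sketch of one way to do it from the application, assuming Postgres 11+ declarative partitioning and made-up table names (scheduling must be enabled with @EnableScheduling):

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Creates the partition for the upcoming month so that inserts routed by
// Postgres always find a matching partition.
@Component
public class AuditPartitionCreator {

    private final JdbcTemplate jdbcTemplate;

    public AuditPartitionCreator(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Runs at midnight on the first day of every month.
    @Scheduled(cron = "0 0 0 1 * *")
    public void createNextMonthPartition() {
        LocalDate from = LocalDate.now().withDayOfMonth(1).plusMonths(1);
        LocalDate to = from.plusMonths(1);
        String partition = "audit_log_" + from.format(DateTimeFormatter.ofPattern("yyyy_MM"));

        jdbcTemplate.execute(
                "CREATE TABLE IF NOT EXISTS " + partition +
                " PARTITION OF audit_log" +
                " FOR VALUES FROM ('" + from + "') TO ('" + to + "')");
    }
}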

How to add a new table to the database using SQL Workbench

I was creating a MySQL database for medicines. I created a table and I need to add one more table. After creating it I tried to query the database from SQL Workbench, but it does not show the new table, even though it is present in the EER Model. How can I solve this problem?
Modeling is just the task of abstractly designing your schema and its objects (e.g. tables, views etc.). It does not actually create these objects. For that you have to forward engineer your model to a server (see the Database menu). Once done you can use the Synchronization feature to update either the model or the server (or both) with any changes made.
But keep in mind this is only for the objects, not for any data.

Update and select records in Entity Framework

I have multiple processes accessing the same database table. The table has a "TakenBy" column that is supposed to hold the ID of the taking process.
Entity Framework is my data access layer.
My question is how I can use my DataContext object to retrieve rows from the table above and have the "TakenBy" column updated at the same time.
This would allow me to avoid race conditions with the other processes, which also try to take the same records.
EF will not handle that for you. You must either use a stored procedure, or perform the update once you have loaded the record in your application and handle concurrency yourself (either the optimistic way, which means using a timestamp or row-version column, or the pessimistic way, which means a manual SQL query).

What are the differences in EF when using your own Insert, Update and Delete Functions?

I am looking into adding history tables to my database. The easiest way is to intercept all Insert, Update and Delete calls that EF makes and add in a merge that will also insert a history row into a history table.
Right now all my Entities just let EF figure out how to do the inserts, updates and deletes.
If I go and add in stored procedures (instead of the EF-generated statements), will EF still function the same on the business tier?
Or does it change how I have to work with my entities? If so, how?
Everything works the same; it is transparent.
Stored procedures need to return the rows affected, in order for EF to know whether the update succeeded. Additionally, if you do an update and need to map any property back to your entity (e.g. timestamps), you must select them in the sproc and then map them back in the EF designer (since you can only have one output parameter, and that should be the rows affected).
You might also consider using triggers on the DB to solve your issue, though.
Doing this with stored procedures means that you will write all inserts, updates and deletes yourself. It is like throwing 30% of the feature set (and 50% of the productivity) away. Create audit records in your application and save them together with the main records through EF.