I am using Hibernate 5.x and Oracle 12c.
I need to implement batching for a native query.
I have also set the batch size property in the config file, but inserts/updates are still executed one statement at a time and batching is not happening.
Does Hibernate/JPA support batching for native queries?
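As far as I know, hibernate.jdbc.batch_size only applies to entity inserts/updates flushed by the Session; a native query fired with executeUpdate() is sent immediately, one statement per call. The usual workaround is to drop down to plain JDBC batching through Session#doWork. A minimal sketch (the table, columns and the MyRow type are placeholders):

    Session session = entityManager.unwrap(Session.class);
    session.doWork(connection -> {
        try (PreparedStatement ps = connection.prepareStatement(
                "INSERT INTO my_table (id, name) VALUES (?, ?)")) {
            for (MyRow row : rows) {          // MyRow is a placeholder type
                ps.setLong(1, row.getId());
                ps.setString(2, row.getName());
                ps.addBatch();                // queue the statement instead of executing it
            }
            ps.executeBatch();                // one round trip per batch instead of per row
        }
    });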
I'm migrating a system that was written with JDBC templates to Spring Data JPA. Before, we wrote the SQL queries ourselves, but now that we're modernizing the system we're going to use Spring Data JPA for this. Due to restrictive factors, I can't introduce a database migration framework such as Flyway or Liquibase. Is there any way I can implement migrations using what I have in Spring JPA?
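If you can ship plain SQL scripts with the application, Spring's own ResourceDatabasePopulator can run them at startup. A rough sketch, assuming Spring Boot for the ApplicationRunner (script names are placeholders, and unlike Flyway/Liquibase nothing here records which scripts have already been applied; you'd have to track that yourself):

    @Bean
    public ApplicationRunner schemaScripts(DataSource dataSource) {
        return args -> {
            ResourceDatabasePopulator populator = new ResourceDatabasePopulator();
            // Scripts run in the order they are added; names are placeholders
            populator.addScript(new ClassPathResource("db/001_create_tables.sql"));
            populator.addScript(new ClassPathResource("db/002_add_indexes.sql"));
            populator.execute(dataSource);
        };
    }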
I have an external Oracle database for which I have access to only one view.
The view takes more than 40 minutes to return its results, a result set of around 50,000 records.
We have no control over optimizing the Oracle view.
I have to process the result set and persist it to a table in another Postgres database.
Is Spring Batch recommended for my requirement?
Yes, Spring Batch will work fine for this.
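A rough sketch of what that job could look like: a chunk-oriented step that streams the view through a cursor and writes to Postgres in batches. The view/table/column names and the ViewRow type are placeholder assumptions:

    public class ViewRow {
        private final long id;
        private final String payload;
        public ViewRow(long id, String payload) { this.id = id; this.payload = payload; }
        public long getId() { return id; }
        public String getPayload() { return payload; }
    }

    @Bean
    public JdbcCursorItemReader<ViewRow> oracleViewReader(DataSource oracleDataSource) {
        JdbcCursorItemReader<ViewRow> reader = new JdbcCursorItemReader<>();
        reader.setDataSource(oracleDataSource);
        reader.setSql("SELECT id, payload FROM the_view");
        reader.setFetchSize(1000);  // stream rows instead of buffering all 50k
        reader.setRowMapper((rs, i) -> new ViewRow(rs.getLong("id"), rs.getString("payload")));
        return reader;
    }

    @Bean
    public JdbcBatchItemWriter<ViewRow> postgresWriter(DataSource postgresDataSource) {
        JdbcBatchItemWriter<ViewRow> writer = new JdbcBatchItemWriter<>();
        writer.setDataSource(postgresDataSource);
        writer.setSql("INSERT INTO target_table (id, payload) VALUES (:id, :payload)");
        writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<>());
        return writer;
    }

    @Bean
    public Step copyStep(StepBuilderFactory steps,
                         JdbcCursorItemReader<ViewRow> reader,
                         JdbcBatchItemWriter<ViewRow> writer) {
        return steps.get("copyStep")
                .<ViewRow, ViewRow>chunk(1000)  // one transaction per 1000 rows
                .reader(reader)
                .writer(writer)
                .build();
    }

Since the view itself takes 40 minutes no matter what, the win here is batched writes and restartability, not a faster read.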
I have an ETL job built with Spring Batch, and the DAO layer uses Spring's JdbcTemplate. The issue is with loading numeric data types: when the batch runs over a large number of records, a good number of the numeric values (not all) are loaded incorrectly; the pattern is that the value gets multiplied by 10^scale.
I am using the batchUpdate method with a PreparedStatement. The driver is jconn4.jar from Sybase, version 7.0.7.
I print the values while setting them on the PreparedStatement, and I do not see the values being manipulated on the Java side.
Could someone please advise on what could be causing this.
Thanks in advance.
Edit: further info: Sybase version 15.7, Spring Core and JDBC version 4.2.6, Spring Batch version 3.0.4, Java version 8.
Also, has anyone used SybPreparedStatement from the jConnect library? I found a Sybase Infocenter page that recommends using it, particularly with numeric data types, but the documentation on how to use it is thin. Please share whether you have tried SybPreparedStatement, what the challenges were, and whether you managed to use it successfully.
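I can't confirm the jConnect behaviour, but one way to narrow it down is to pin the scale explicitly when binding, so that any multiply-by-10^scale effect can only come from the driver and not from the binding code. A sketch using JdbcTemplate (table/column names, the Row type, and the scale of 2 are placeholders):

    jdbcTemplate.batchUpdate(
        "INSERT INTO target (id, amount) VALUES (?, ?)",
        new BatchPreparedStatementSetter() {
            @Override
            public void setValues(PreparedStatement ps, int i) throws SQLException {
                Row row = rows.get(i);  // Row is a placeholder type
                ps.setLong(1, row.getId());
                // Force exactly the scale the column declares, e.g. NUMERIC(15,2)
                ps.setBigDecimal(2, row.getAmount().setScale(2, RoundingMode.HALF_UP));
            }
            @Override
            public int getBatchSize() {
                return rows.size();
            }
        });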
I use the MS Data Access Application Block to interact with the database, and I have seen that its performance is good. When I want to add 100 or more records, I send those records in XML format to a stored procedure and do a bulk insert from there. Now I have to use Entity Framework. I have never used EF before, so I am not familiar with how it works.
In another forum I asked "How does Entity Framework handle batch inserts and updates?" and got this answer:
From my experience, EF does not support batch insert or batch update.
What it does is issue an individual insert or update statement for each change, but it will wrap all of them in a transaction if you add all of your changes to the DbContext before calling SaveChanges().
Is it true that EF cannot handle batch insert/update? Does EF insert data in a loop in that case? If there are 100 records to commit at once, can EF not do it?
If that is wrong, please guide me on how to write code so that EF performs a batch insert/update, and tell me how to see what SQL it will generate.
If possible, please share sample code for batch insert/update with EF, and tell me which version of EF supports true batch operations. Thanks.
Yes, EF is not a bulk load/update tool.
You can of course add a few thousand entries and commit (SaveChanges).
But when you have serious volumes and speed is critical, use SQL directly.
See Batch update/delete EF5 as an example on the topic.
As there is no support for user-defined functions or stored procedures in Redshift, how can I achieve an UPSERT mechanism in Redshift, which is based on ParAccel, a PostgreSQL 8.0.2 fork?
Currently, I'm trying to achieve an UPSERT using an IF...THEN...ELSE... statement,
e.g.:
IF NOT EXISTS (SELECT ... WHERE (SELECT ...))
THEN INSERT INTO tblABC() SELECT ... FROM tblXYZ
ELSE UPDATE tblABC SET ... FROM tblXYZ WHERE ...
This gives me an error, since I'm writing the statement on its own, not inside a function or stored procedure.
So, is there any solution to achieve an UPSERT?
Thanks
You should probably read this article on upsert by depesz. You can't rely on SERIALIZABLE for this since, AFAIK, ParAccel doesn't support full serializability the way Pg 9.1+ does. As outlined in that post, you can't really do what you want purely in the DB anyway.
The short version is that even on current PostgreSQL versions that support writable CTEs it's still hard. On an 8.0-based ParAccel, you're pretty much out of luck.
I'd do a staged merge. COPY the new data to a temporary table on the server, LOCK the destination table, then do an UPDATE ... FROM followed by an INSERT INTO ... SELECT. Doing the data uploads in big chunks and locking the table for the upserts is reasonably in keeping with how Redshift is used anyway.
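A minimal sketch of that staged merge over plain JDBC; the table/column names, the S3 location and the IAM role are placeholders (on Redshift, COPY loads from S3):

    try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
         Statement st = conn.createStatement()) {
        conn.setAutoCommit(false);
        st.execute("CREATE TEMP TABLE stage (LIKE target)");
        st.execute("COPY stage FROM 's3://my-bucket/new-rows/' "
                 + "IAM_ROLE 'arn:aws:iam::123456789012:role/my-copy-role' CSV");
        st.execute("LOCK target");                             // serialize concurrent upserts
        st.execute("UPDATE target SET payload = stage.payload "
                 + "FROM stage WHERE target.id = stage.id");   // update existing rows
        st.execute("INSERT INTO target SELECT * FROM stage "
                 + "WHERE id NOT IN (SELECT id FROM target)"); // insert the new ones
        conn.commit();
    }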
Another approach is to coordinate the upserts externally, via something local to your application cluster: have all your tools take an "insert-intent lock" through an external coordinator before doing an insert. You want a distributed locking tool appropriate to your system. If everything runs inside one application server, it might be as simple as a synchronized singleton object, as in the sketch below.
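In the single-JVM case, that insert-intent lock can literally be one JVM-wide monitor; a sketch (the class and method names are illustrative):

    public final class UpsertCoordinator {
        public static final UpsertCoordinator INSTANCE = new UpsertCoordinator();
        private UpsertCoordinator() {}

        // All writers funnel through here, so no two threads can race
        // between the "does the row exist?" check and the insert.
        public synchronized void upsert(Runnable checkThenWrite) {
            checkThenWrite.run();
        }
    }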