Background
I have a problem with a JPA cascading batch update that I need to implement.
The update will take around 10,000 objects and merge them into the database at once.
The objects have an average depth of 5 objects and an average size of about 3 KB.
The persistence provider is Oracle TopLink.
This eats a large amount of memory and takes several minutes to complete.
I have looked around and I see three possibilities:
Looping over standard JPA merge() calls and flushing at certain intervals
Using JPQL
Using TopLink's own API (which I have no experience with whatsoever)
So I have a couple of questions:
Will I reduce the overhead of the standard merge by using JPQL instead? If I understand correctly, merge causes the entire object tree to be cloned when it is invoked. Is JPQL actually faster? Is there some trick to speeding up the process?
How do I do a batch merge using the TopLink API?
And I know this is subjective, but: does anyone have a best practice for doing large cascading batch updates in JPA/TopLink? Maybe something I didn't consider?
Related questions
Batch updates in JPA (Toplink)
Batch insert using JPA/Toplink
I'm not sure what you mean by "using JPQL". If you can express your update logic as a JPQL bulk update statement, it will be significantly more efficient, because the update runs directly in the database without loading and cloning the object tree.
Definitely split your work into batches. Also ensure you are using batch writing and sequence pre-allocation.
See http://java-persistence-performance.blogspot.com/2011/06/how-to-improve-jpa-performance-by-1825.html
My question is about understanding the following two procedures (particularly their performance and code logic) that I used to collect trade data from the US Census Bureau API. I have already collected the data, but I ended up writing two different ways of requesting and saving it, and my questions are about those two approaches.
A summary of my final questions comes at the bottom.
First approach: npm request and MongoDB to save the data
I limited the concurrency with tiny-async-pool (which caps how many calls of a given function run at once) so that I would not request too much at once, hit timeouts, or overload my database with queries. Simply put, the bottleneck was the database: the API requests returned fairly quickly (1-15 seconds depending on body size), but saving each array item to its own MongoDB document took 100 ms to 700 ms (the returned data was a nested array, sometimes a few hundred items and sometimes over one hundred thousand, with at most 10 values per item). To avoid redoing work after errors, I also checked the database before each request to see whether that query had already been completed. In the end I did not follow this method, because it was very error prone and susceptible to timeouts when the data was large (I even set the timeout to 10 minutes in the request options).
Second approach: npm request and saving the data to CSV
I used the same approach as the first method for the requests and concurrency; however, I saved each query's results to its own CSV file. To avoid redoing successful queries after an error, I also checked whether the file already existed and, if so, skipped that query. This approach was error free: I ran it, and after a few hours all the data was saved. Writing to CSV was insanely fast, much faster than using MongoDB.
Final summary and questions
My end goal was to get the data in the easiest manner possible. I used JavaScript because that's where I learned API requests and async operations, even though I will do most of my data analysis with Python and pandas. I first tried the database method mostly because I thought it was the right way and I wanted to improve my database CRUD skills. After countless hours of refactoring code and trying new techniques, I still could not get it to work properly. I resorted to the CSV method, which was a) much less code to write, b) fewer checks, c) faster, and d) more reliable.
My final questions are these:
Why was the CSV approach better than the database approach? Are there any counterarguments, or different approaches you would have used?
How do you handle bottlenecks and concurrency in your applications with regard to APIs and database operations? Do your techniques vary between production environments and personal use cases (in my case I just needed the data, and a few hours of waiting was fine)?
Would you have used a different programming language or different package/module for this data collection procedure?
We're running a web application that serves around 700K RPM at peak times, with each request usually updating 2 documents in our database.
Our product's constraint is to update these documents immediately (no lazy data dump), and we were wondering whether it's more effective to use a batch operation for the 2 updates, or just 2 separate update calls.
Thanks
A batch will save you one round trip for a second update. It might not make a big difference, but it's worth trying.
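For example, assuming the database is MongoDB and you are on its .NET driver (the collection name, filters, and update below are made up for illustration), the two variants look like this; the bulk call carries both updates in a single round trip:

using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb://localhost:27017");
var coll = client.GetDatabase("app").GetCollection<BsonDocument>("docs");

var filter1 = Builders<BsonDocument>.Filter.Eq("_id", "doc-1");
var filter2 = Builders<BsonDocument>.Filter.Eq("_id", "doc-2");
var update = Builders<BsonDocument>.Update.Inc("views", 1);

// Two separate calls: two network round trips per request.
coll.UpdateOne(filter1, update);
coll.UpdateOne(filter2, update);

// One bulk call: both updates travel together in a single round trip.
coll.BulkWrite(new WriteModel<BsonDocument>[]
{
    new UpdateOneModel<BsonDocument>(filter1, update),
    new UpdateOneModel<BsonDocument>(filter2, update)
});

The database still performs two updates either way; the bulk call only removes one network round trip per request, so measure both variants under your real load before committing to one.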
I'm trying to optimize ADO.NET (.NET 4.5) data access with the Task Parallel Library (.NET 4.5). For example, when selecting 1,000,000,000 records from a database, how can we use the machine's multicore processors effectively with the Task Parallel Library? If anyone has found useful sources to get some ideas, please post :)
The following applies to all DB access technologies, not just ADO.NET.
Client-side processing is usually the wrong place to solve data access problems. You can achieve several orders of magnitude improvement in performance by optimizing your schema, creating proper indexes, and writing proper SQL queries.
Why transfer 1M records to a client for processing, over a limited network connection with significant latency, when a proper query could return the 2-3 records that matter?
RDBMS systems are designed to take advantage of available processors, RAM and disk arrays to perform queries as fast as possible. DB servers typically have far larger amounts of RAM and faster disk arrays than client machines.
What type of processing are you trying to do? Are you perhaps trying to analyze transactional data? In that case you should first extract the data to a reporting database or, better yet, an OLAP database. A star schema with proper indexes and precalculated analytics can be 1,000 times faster than an OLTP schema for analysis.
Improved SQL coding can also yield a 10x-50x improvement or more. A typical mistake by programmers not accustomed to SQL is to use cursors instead of set operations to process data. This usually leads to horrendous performance degradation, on the order of 50x or worse.
Pulling all the data to the client to process it row by row is even worse. This is essentially the same as using cursors, except the data also has to travel over the wire and processing has to make do with the client's often limited memory.
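As a rough sketch of the set-based point (the table, column, and connection string are placeholders), a single server-side UPDATE replaces what a cursor or client-side loop would otherwise do row by row:

using System;
using System.Data.SqlClient;

var connectionString = "Server=.;Database=Sales;Integrated Security=true";

// Set-based: the server applies the change to every matching row in one statement.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "UPDATE Orders SET Status = 'Archived' WHERE OrderDate < @cutoff", conn))
{
    cmd.Parameters.AddWithValue("@cutoff", new DateTime(2020, 1, 1));
    conn.Open();
    Console.WriteLine(cmd.ExecuteNonQuery() + " rows archived server-side.");
}

// The cursor-like alternative (selecting every row, shipping it to the client,
// changing it, and writing it back one at a time) does the same work with a
// network round trip per row, which is where the 50x-and-worse degradation comes from.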
The only place where asynchronous processing offers any advantage is when you want to fire off a long operation and execute code when processing finishes. ADO.NET already provides asynchronous operations using the APM model (BeginExecute/EndExecute). You can use the TPL to wrap this in a task to simplify programming, but you won't get any performance improvement.
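A minimal sketch of that wrapping (the connection string and query are placeholders); it frees the calling thread while the server works, but the query itself runs no faster:

using System;
using System.Data.SqlClient;
using System.Threading.Tasks;

// "Asynchronous Processing=true" is only required for the APM methods before .NET 4.5.
var conn = new SqlConnection("Server=.;Database=Sales;Integrated Security=true;Asynchronous Processing=true");
conn.Open();
var cmd = new SqlCommand("SELECT TOP 3 * FROM Orders ORDER BY Total DESC", conn);

// Wrap the APM pair (BeginExecuteReader/EndExecuteReader) in a Task.
Task<SqlDataReader> readerTask = Task<SqlDataReader>.Factory.FromAsync(
    cmd.BeginExecuteReader, cmd.EndExecuteReader, null);

readerTask.ContinueWith(t =>
{
    using (var reader = t.Result)
    {
        while (reader.Read())
        {
            // Process the few rows the server returned.
        }
    }
    cmd.Dispose();
    conn.Dispose();
});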
It could be that your problem is not suited to database processing at all. If your algorithm requires that you scan the entire dataset multiple times, it would be better to extract all the data to a suitable file format in one go, and transfer it to another machine for processing.
I have several performance issues in my website.
I'm using ASP.NET MVC 2 and Entity Framework 4.0. I bought Entity Framework Profiler to see what kind of SQL EF generates.
For example, some pages take between 3 and 5 seconds to open. That is too much for my client.
To see whether it was a performance problem with the SQL generated by EF, I used the profiler and copied the generated SQL into SQL Server Management Studio to check the execution plan and the SQL statistics. The results came back in less than a second.
Now that I have ruled out the SQL query itself, I suspect EF's query-building step.
I followed the MSDN walkthrough step by step to pre-generate my views, but I didn't see any performance gain.
How can I be sure that my queries use these pre-generated views?
Is there anything else I can do to increase the performance of my website?
thanks
First of all, keep in mind that the pre-compiled queries still take just as long (in fact a little longer) the first time they are run, because the queries are compiled the first time they are invoked. After the first invocation, you should see a significant performance increase on the individual queries.
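For reference, this is roughly what an explicitly compiled query looks like in EF 4 (the context, entity, and property names are placeholders); as noted above, the first call still pays the compilation cost:

using System;
using System.Data.Objects;
using System.Linq;

public static class Queries
{
    // EF 4 compiled query: the LINQ-to-Entities translation happens on the first
    // invocation and is cached for later calls, skipping the query-building step.
    // MyEntities (an ObjectContext), Product, and CategoryId are made-up names.
    public static readonly Func<MyEntities, int, IQueryable<Product>> ProductsByCategory =
        CompiledQuery.Compile((MyEntities ctx, int categoryId) =>
            ctx.Products.Where(p => p.CategoryId == categoryId));
}

// Usage: var products = Queries.ProductsByCategory(context, 5).ToList();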
However, you will find the best answer to all performance questions is: figure out what's taking the most time first, then work on improving in that area. Until you have run a profiler and know where your system is blocking, any time you spend trying to speed things up is likely to be wasted.
Once you've determined what's taking the most time, there are a lot of possible techniques to use to speed things up:
Caching data that doesn't change often (see the sketch after this list)
Restructuring your data accesses so you pull the data you need in fewer round trips.
Ensuring you're not pulling more data than you need when you do your database queries.
Buying better hardware.
... and many others
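For the caching point, a minimal sketch with System.Runtime.Caching (the cache key and the loader method are hypothetical stand-ins for your own expensive query):

using System;
using System.Runtime.Caching;

public static class LookupCache
{
    // Keep rarely-changing data in memory so repeated page hits skip the database.
    public static object GetLookupData()
    {
        var cache = MemoryCache.Default;
        var data = cache.Get("lookup-data");
        if (data == null)
        {
            data = LoadLookupDataFromDb();
            cache.Set("lookup-data", data,
                new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10) });
        }
        return data;
    }

    private static object LoadLookupDataFromDb()
    {
        return new object(); // run the expensive EF query here
    }
}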
One last note: In Entity Framework 5, they plan to implement automatic query caching, which will make precompiling queries practically useless. So I'd only recommend doing it where you know for sure that you'll get a significant improvement.
I'm setting up a new application using Entity Framework Code First and I'm looking at ways to reduce the number of round trips to the SQL Server as much as possible.
When I first read about the .Local property here I got excited about the possibility of bringing down entire object graphs early in my processing pipeline and then using .Local later without ever having to worry about incurring the cost of extra round trips.
Now that I'm playing around with it, I'm wondering if there is any way to pull down all the data I need for a single request in one round trip. If, for example, I have a web page with a few lists on it (news, events, and discussions), is there a way to pull the records from their 3 unrelated source tables into the DbContext in one single round trip? Do you all out there on the interweb think it's perfectly fine when a single page makes 20 round trips to the db server? I suppose with a proper caching mechanism in place this issue could be mitigated.
I did run across a couple of attempts at returning multiple result sets from EF queries in one round trip, but I'm not sure the complexity and maturity of these kinds of solutions is worth the payoff.
In general, when composing data sets to be passed to MVC controllers, do you think it's best to simply make a separate query for each set of records you need and then deal with performance later in the caching layer, using either the EF Caching Provider or ASP.NET caching?
It is completely OK to make several DB calls if you need them. If you are afraid of multiple round trips, you can either write a stored procedure that returns multiple result sets (this doesn't work with default EF features) or execute your queries asynchronously (run multiple disjoint queries at the same time). Loading unrelated data with a single LINQ query is not possible.
Just one more note: if you decide to use the asynchronous approach, make sure you use a separate context instance in each asynchronous execution. Asynchronous execution uses a separate thread, and the context is not thread-safe.
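A small sketch of that, assuming .NET 4 tasks and made-up MyDbContext, News, and Events types; each task gets its own context instance:

using System.Linq;
using System.Threading.Tasks;

// Run two independent queries in parallel; DbContext is not thread-safe,
// so each task creates and disposes its own context.
var newsTask = Task.Factory.StartNew(() =>
{
    using (var ctx = new MyDbContext())
    {
        return ctx.News.OrderByDescending(n => n.PublishedOn).Take(10).ToList();
    }
});

var eventsTask = Task.Factory.StartNew(() =>
{
    using (var ctx = new MyDbContext())
    {
        return ctx.Events.OrderBy(e => e.StartsOn).Take(10).ToList();
    }
});

Task.WaitAll(newsTask, eventsTask);
var news = newsTask.Result;
var upcomingEvents = eventsTask.Result;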
I think you are doing a lot of work for little gain if you don't already have a performance problem. Yes, pay attention to what you are doing and don't make unnecessary calls, but the actual connection and over-the-wire overhead for each query is usually really low, so don't worry about it.
Remember "Premature optimization is the root of all evil".
My rule of thumb is that executing a call for each collection of objects you want to retrieve is ok. Executing a call for each row you want to retrieve is bad. If your web page requires 20 collections then 20 calls is ok.
That being said, reducing this to one call would not be difficult if you use the Translate method. Code something like this should work:
// Execute one command that returns multiple result sets.
var reader = GetADataReader(sql);

// Materialize each result set before moving the reader to the next one.
var firstCollection = context.Translate<Whatever1>(reader).ToList();
reader.NextResult();
var secondCollection = context.Translate<Whatever2>(reader).ToList();
// ...and so on for any further result sets.
The big downside to doing this is that if you put your SQL into a stored proc, then your stored procs become very specific to your web pages instead of being more general purpose. This isn't the end of the world as long as you have good access to your database. Otherwise you could just define your SQL in code.