I have a report that produces around 18,000 pages when exported and contains 600K rows of records. It throws an out of memory error when it runs. I have applied virtualization, but it's not working. I also tried increasing the memory size of the Tomcat server, but after increasing it the server does not start.
From my experience, you don't have enough RAM on your server.
Is it absolutely necessary to display the report as a web page? The feedback we get from our customers is that they never want to browse through so many pages. It can be better to export the data directly into an Excel file, which gives them many options for working with it.
One solution can be to put more records on one page, which results in fewer pages being generated. But in this case, with the amount of RAM you have, I am not sure it will help.
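If the report does have to be generated in one go, the usual way to keep the heap flat is a disk-backed virtualizer plus a direct export (e.g. to XLSX) instead of paging through 18,000 pages in the viewer. This is only a minimal sketch, assuming a reasonably recent JasperReports API; the report path, data source and output file names are placeholders:

```java
import java.util.HashMap;
import java.util.Map;

import net.sf.jasperreports.engine.JRParameter;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.export.ooxml.JRXlsxExporter;
import net.sf.jasperreports.engine.fill.JRFileVirtualizer;
import net.sf.jasperreports.export.SimpleExporterInput;
import net.sf.jasperreports.export.SimpleOutputStreamExporterOutput;

public class BigReportExport {

    public static void main(String[] args) throws Exception {
        // Keep at most 100 report pages in memory; the rest are swapped to temporary files on disk.
        JRFileVirtualizer virtualizer =
                new JRFileVirtualizer(100, System.getProperty("java.io.tmpdir"));
        try {
            Map<String, Object> params = new HashMap<String, Object>();
            params.put(JRParameter.REPORT_VIRTUALIZER, virtualizer);

            // "report.jasper" and getConnection() are placeholders for your own report and data source.
            JasperPrint print = JasperFillManager.fillReport("report.jasper", params, getConnection());

            // Filling is finished; pages will only be read from here on.
            virtualizer.setReadOnly(true);

            // Export straight to XLSX instead of rendering the report as web pages.
            JRXlsxExporter exporter = new JRXlsxExporter();
            exporter.setExporterInput(new SimpleExporterInput(print));
            exporter.setExporterOutput(new SimpleOutputStreamExporterOutput("report.xlsx"));
            exporter.exportReport();
        } finally {
            // Remove the temporary page files.
            virtualizer.cleanup();
        }
    }

    private static java.sql.Connection getConnection() {
        // Placeholder: wire up your own JDBC data source here.
        throw new UnsupportedOperationException("provide a JDBC connection");
    }
}
```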
I have an e-commerce website on Magento 2.2.2 and it keeps going down almost every day. Whenever it goes down, users get "site took too long to respond" and it never loads. To get the website working again I have to restart the server, and then it works.
Total space on the server is 50GB, of which the whole website is around 18GB (11GB of media files, plus vendor files etc.). Here are the things I cannot figure out:
a.) The server shows that 33GB has been used, although it should show only 18GB. I have checked everywhere and I can't find what is consuming the additional 15GB of space. The complete HTML folder is only 18GB.
b.) When I checked the log files, they show the following:
WARNING: Memory size allocated for the temporary table is more than 20% of innodb_buffer_pool_size. Please update innodb_buffer_pool_size or decrease batch size value (which decreases memory usages for the temporary table). Current batch size: 100000; Allocated memory size: 280000000 bytes; InnoDB buffer pool size: 1073741824 bytes.
I have already set innodb_buffer_pool_size to 2GB, but this problem still keeps coming up.
The server is an Amazon EC2 instance and Magento is in production mode. Will allocating 100GB instead of 50GB solve the problem?
I increased innodb_buffer_pool_size to 10GB and the logs no longer show the error, but the server still goes down every day. Since our server has only 4GB of RAM, could that be the main cause? Everyone seems to suggest at least 8GB of RAM.
Try the things below.
Magento 2 has large log files and a caching system; the files in your var folder may be what is growing.
You should also check whether your site has more than 3000 products with large product images, all stored on the server itself.
My suggestion: if your site has that many products, as mentioned above, you should use a CDN for better performance, so that all images are served by a third party.
Next, you can set up Cloudflare to avoid downtime errors on the customer side; you can have it serve your index page while the server is down. And obviously you should write a script that restarts the site automatically when it goes down.
On the server side, check the PHP memory limit; it is better to give it 2G.
On the MySQL side: check whether there are sleeping queries. If they come from your custom extensions, ask your developer to optimize the code.
For example, the code may be loading a whole collection just to fetch a single item.
You can use a tool like New Relic.
If everything is fine on the development side, try to optimize things on the server side: memory limits, killing stuck MySQL queries, etc.
Meanwhile, Magento is a big platform for the e-commerce sector, so it covers a lot of ground by default. It is better to remove unwanted modules from your active site, e.g. disable the core modules you are not using.
For an average site, use 16GB of RAM.
Did you restart MySQL for the change to take effect?
Also, you need to set that buffer to 20971520000, which is around 20GB.
Magento uses a lot of sessions and cache.
I am developing an app for Series 40, targeting the Nokia Asha 305. On some pages of the app, I show tables filled with a lot of data (21 rows by 10 columns, i.e. about 210 Labels). When I show that table, memory use rises a lot, and when I try to show the table again, I get an OutOfMemoryException.
Are there any guidelines I can follow for efficient memory management?
Here are some images from my memory diagram.
Before showing the table:
When I show the table:
After going back from the table Form:
Memory shouldn't rise noticeably for that amount of data. I doubt a table like this would take more than 200KB of memory, unless you use images in some of the labels, in which case it will take more.
A Component in Codename One shouldn't take more than 1KB, since it doesn't have many fields within it; memory would only balloon if you had very long strings in every component (which I doubt, since they wouldn't be visible in a 200-component table).
You might have a memory leak, which would explain why the RAM isn't reclaimed, although it's hard to say from your description.
We had an issue with EncodedImages in LWUIT 1.5 that we fixed in Codename One; however, as far as I understood, the Nokia guys removed the use of EncodedImages in Codename One resources, which would really balloon memory usage for images.
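If the 210 Labels are being rebuilt from scratch on every navigation (and the previous Form stays reachable somewhere), caching and reusing a single Form is often enough to stop the growth. A minimal sketch, assuming the LWUIT for Series 40 API; the class and method names are made up for illustration:

```java
import com.sun.lwuit.Form;
import com.sun.lwuit.Label;
import com.sun.lwuit.layouts.GridLayout;

// Hypothetical helper: builds the 21x10 table Form once and reuses it, so navigating
// back and forth does not allocate a new set of 210 Labels each time, and the old
// set does not stay reachable alongside the new one.
public class DataTableScreen {

    private Form tableForm; // cached instance, reused on every show()

    public void show(String[][] data) {
        if (tableForm == null) {
            tableForm = new Form("Data");
            tableForm.setLayout(new GridLayout(data.length, data[0].length));
            for (int r = 0; r < data.length; r++) {
                for (int c = 0; c < data[r].length; c++) {
                    tableForm.addComponent(new Label(data[r][c]));
                }
            }
        }
        tableForm.show();
    }

    // Call this when the table is no longer needed so the Labels can be garbage collected.
    public void dispose() {
        tableForm = null;
    }
}
```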
We run a downline report that gathers everyone in the downline of the person who is logged in. Some of our clients run this with no problem, as it returns fewer than 100 records.
For some clients, however, it returns 4,000 - 6,000 rows, which comes out to about 8 MB worth of information. I actually had to raise the buffer limit on my development machine to handle the large request.
What are some of the best ways to store this large piece of data and help prevent it from being run multiple times consecutively?
Can it be stored in a cookie?
Session is out of the question, as this would eat up way too much memory on the server.
I'm open to pretty much anything at this point; I'm trying to streamline the old process into a much quicker, more efficient one.
Right now it loads the entire recordset and loops through it, building the data out into return_value cells.
Would it be better to turn this into a jQuery/AJAX call?
The only main requirements are:
Classic ASP
jQuery/JavaScript
T-SQL
Why not change the report to be paged? Phase 1: run the entire query, but have the page display only the right set of rows based on the selected page; now your response buffer problem is fixed. Phase 2: move the paging into the query using Row_Number(); now your database usage problem is fixed. Phase 3: offer the user the option of "display to screen" (using the above) or "export to CSV", where you can most likely export all the data, since CSV is nice and compact.
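To illustrate phase 2, the paged query looks roughly like the SQL below. It is wrapped in a small JDBC sketch only to keep the example self-contained; in your stack the same T-SQL would be issued from classic ASP/ADO, and the Downline table and its columns are placeholders:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class DownlinePaging {

    // T-SQL paging with ROW_NUMBER(): only the rows for the requested page
    // ever leave the database, so the response buffer stays small.
    private static final String PAGE_SQL =
        "SELECT Id, LastName, FirstName FROM ("
      + "  SELECT ROW_NUMBER() OVER (ORDER BY LastName, Id) AS rn,"
      + "         Id, LastName, FirstName"
      + "  FROM Downline WHERE SponsorId = ?"
      + ") AS paged WHERE rn BETWEEN ? AND ?";

    public static List<String[]> fetchPage(Connection con, int sponsorId, int page, int pageSize)
            throws SQLException {
        List<String[]> rows = new ArrayList<String[]>();
        PreparedStatement ps = con.prepareStatement(PAGE_SQL);
        try {
            ps.setInt(1, sponsorId);
            ps.setInt(2, (page - 1) * pageSize + 1); // first row of the requested page
            ps.setInt(3, page * pageSize);           // last row of the requested page
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                rows.add(new String[] {
                    rs.getString("Id"), rs.getString("LastName"), rs.getString("FirstName")
                });
            }
        } finally {
            ps.close();
        }
        return rows;
    }
}
```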
Using a cookie seems unwise, given the responses to the question What is the maximum size of a web browser's cookie's key?.
I would suggest using ASP to create a file on the web server and writing the data to that file. When the user requests the report, you can then determine whether "enough time" has passed for it to be worth running the report again, or whether the cached version is sufficient. The user's login details could presumably be used for naming the file, or the Session.SessionID, or you could store something new in the user's session. The advantage of using their login is that your cached report can outlive a user's session.
Taking Brian's answer further: query the page count, which would be records returned / items per page, rounded up. Then join the results of each page query on the client side. Pages start at an offset provided through the query. Now you have the full amount on the client without overflowing your buffer, and it can be tailored to an interface and a user option (display x per page).
I activated the Wicket DebugBar in order to trace my session size. When I navigate the web site, the indicated session size is stable at about 25k.
At the same time, the pagemap serialized on disk continuously grows by about 25k for each page view.
What does that mean? From what I understand, the pagemap on disk keeps all the pages. But why does the session always stay at about 25k?
What is the impact on a big website? If I have 1000 parallel web sessions, will the web server need 25MB to hold them and the disk 250MB (10 pages * 25k * 1000)?
I will run some load tests to check.
The debug bar value is telling you the size of your session in memory. As you browse to another page, the old page is serialized to the session store. This provides, among other things, back button support without killing your memory footprint.
So, to answer your first question, the size on disk grows because it is holding historical data while your session stays about the same because it is holding active data.
To answer your second question: it's been some time since I looked at it, but I believe the disk session store is capped at 10MB or so. Furthermore, you can change the behavior of the session store to meet your needs, but that's a whole different discussion.
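For reference, in Wicket 1.5+ those limits are exposed through the store settings and can be tuned in Application#init(). A rough sketch, assuming the 1.5+ IStoreSettings API (the application and home page classes are placeholders; 1.4 exposes different settings):

```java
import org.apache.wicket.Page;
import org.apache.wicket.protocol.http.WebApplication;
import org.apache.wicket.util.lang.Bytes;

// Rough sketch: capping the per-session disk page store and the in-memory page cache.
public class MyApplication extends WebApplication {

    @Override
    public Class<? extends Page> getHomePage() {
        return HomePage.class; // placeholder home page class
    }

    @Override
    protected void init() {
        super.init();
        // Store at most ~10MB of serialized pages per session on disk.
        getStoreSettings().setMaxSizePerSession(Bytes.megabytes(10));
        // Keep only the last few pages in the in-memory cache.
        getStoreSettings().setInmemoryCacheSize(5);
    }
}
```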
See this Wiki page, which describes the storage mechanisms in Wicket 1.5. It is a bit different from 1.4, but there is no such document for 1.4.
Update: the Wiki page has been moved to the guide: https://ci.apache.org/projects/wicket/guide/7.x/guide/internals.html#pagestoring
We're using Crystal Reports 11 through its web server. When we run a report, it executes the SQL query and displays the first page of the report in the Crystal web report viewer.
When you hit the next-page button, it reruns the SQL query and displays the next page.
How do we get the requerying of the data to stop?
We also have multiple people running the same reports at the same time (it is a web server, after all), and we don't want to cache data between different instances of the same report; we only want to cache the data within each single instance of the report.
Pagination is not only a presentation concern. Its single most important advantage is lazy loading of data, so that in theory, depending on the given filters, you load only what you need.
Just imagine having millions of records in your DB and loading all of them. First of all, it's going to be a lot slower; second, you're fetching a lot of data you don't really need. Web models nowadays are based on lazy loading rather than bulk loading. Think about Google App Engine: you can't retrieve more than 1000 records from the Google Datastore in a given transaction, and you know that if you even try to display them all, your browser will die.
I'll close with a question - do you have a performance issue of any kind?
If so, you probably think this change will make things better, but it probably won't: you'll reduce the load on the server, but each single query will be much more resource-consuming.
If not, my advice is to leave it alone! :)