I am developing an app for S40, targeting the Nokia Asha 305. Some pages of the app show tables filled with a lot of data (21 rows by 10 columns, i.e. 210 Labels). When I show such a table, memory usage rises sharply, and when I try to show the table again I get an OutOfMemoryException.
Are there any guidelines I can follow for efficient memory management?
Here you can find some images from my memory diagram.
Before showing the table:
When I show the table:
After going back from the table form:
Memory shouldn't rise noticeably for that amount of data. I doubt a table like this would take more than 200KB of memory, unless you use images in some of the labels, in which case it will take more.
A Component in Codename One shouldn't take more than 1KB, since it doesn't have many fields within it, unless you have very long strings in every component (which I doubt, since they wouldn't be visible in a 200-component table).
You might have a memory leak, which would explain why the RAM isn't collected, although it's hard to say from your description.
We had an issue with EncodedImages in LWUIT 1.5 which we fixed in Codename One; however, as far as I understood, the Nokia guys removed the usage of EncodedImages in Codename One resources, which would really balloon memory usage for images.
Related
I have a requirement where we need to show around 24k records with 84 columns in one go, as the user wants filtering on the entire set of data.
So can we have a virtual scrolling mechanism with ag-grid without lazy loading? If so, could you please explain how? Any examples are most welcome for reference.
Having tried this sort of thing with a similar number of rows and columns, I've found that it's just about impossible to get reasonable performance, especially if you are using things like "framework" renderers. And if you enable grouping, you're going to have a bad time.
What my team has done to enable filtering and sorting across an entire large dataset includes:
We used the client-side row model - the grid's simplest mode
We only load a "page" of data at a time. This involves trial and error with a reasonable sample of data and the actual features that you are using to arrive at the maximum page size that still allows the grid to perform well with respect to scrolling / rendering.
We implemented our own paging. This includes display of a paging control, and fetching the next/previous page from the server. This obviously requires server-side support. From an ag-grid point of view, it is only ever managing one page of data. Each page gets completely replaced with the next page via round-trip to the server.
We implemented sorting and filtering on the server side. When the user sorts or filters, we catch the event, and send the sort/filter parameters to the server, and get back a new page. When this happens, we revert to page 0 (or page 1 in user parlance).
This fits in nicely with support for non-grid filters that we have elsewhere in the page (in our case, a toolbar above the grid).
We only enable grouping when there is a single page of data, and encourage our users to filter their data to get down to one page of data so that they can group it. Depending on the data, page size might be as high as 1,000 rows. Again, you have to arrive at page size on a case-by-case basis.
So, in short, when we have the need to support filtering/sorting over a large dataset, we do all of the performance-intensive bits on the server side.
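The pattern described above (server does the filtering and sorting, the grid only ever holds one page) can be sketched roughly as follows. This is an illustrative Python sketch, not ag-grid code; the field names, page size, and record shape are all made up:

```python
# Server-side paging sketch: filter, then sort, then slice one page.
# The client grid only ever receives PAGE_SIZE rows at a time.
PAGE_SIZE = 1000  # arrived at by trial and error per dataset, as noted above

def query_page(rows, page, sort_key=None, descending=False, predicate=None):
    """Return a single page of `rows` after applying filter and sort."""
    if predicate:
        rows = [r for r in rows if predicate(r)]
    if sort_key:
        rows = sorted(rows, key=lambda r: r[sort_key], reverse=descending)
    total = len(rows)
    start = page * PAGE_SIZE
    return {
        "rows": rows[start:start + PAGE_SIZE],
        "totalRows": total,                    # lets the client render paging controls
        "totalPages": -(-total // PAGE_SIZE),  # ceiling division
    }

# Example: 24,000 hypothetical records, filtered and sorted server-side;
# when the user changes sort/filter we revert to page 0, as described above.
data = [{"id": i, "region": "EU" if i % 2 else "US"} for i in range(24_000)]
result = query_page(data, page=0, sort_key="id", descending=True,
                    predicate=lambda r: r["region"] == "EU")
```

In a real deployment the filter/sort would typically be pushed into the database query itself rather than done in application memory, but the contract with the grid is the same: one page in, one page out.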
I'm sure that others will argue that ag-grid has a lot of advanced features that I'm suggesting that you not use. And they would be correct, for small-to-medium sized datasets, but when it comes to handling large datasets, I've found that ag-grid just can't handle it with reasonable performance.
I have a report which should give around 18,000 pages after exporting and has 600K rows of records. It gives an out-of-memory error when run. I have applied virtualization but it's not working. I also tried increasing the memory size in the Tomcat server, but after increasing the size the server won't start.
From my experience, you do not have enough RAM on your server.
Is it absolutely necessary to display the report as a web page? The feedback from our customers is that they never want to browse through so many pages. It can be better to export the data directly into an Excel file, where they have many options for working with it.
One solution can be to have more records on one page, which leads to generating fewer pages. But in this case, given your RAM, I am not sure it would help.
How to control the number of rows in a JasperReport
I have a table view where I need to show around 10,000 rows (data stored locally in SQLite).
But when we jump to that view, the app gets blocked (freezes) as it loads all those rows.
Is there any way to load the data without freezing the UI?
P.S. Our app was rejected; could this be the reason?
Thanks
EDIT:
They rejected it and gave the following info:
"Hello.
We noticed your app lacks native iOS functionality.
Please check out the videos for app design information: "Getting Started video: The Ingredients of Great iPhone Apps" and "iPhone User Interface Design," available on the iOS Developer Center (http://developer.apple.com/devcenter/ios), and the iOS Human Interface Guidelines (http://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/MobileHIG.pdf), in particular the sections "Great iOS Apps Embrace the Platform and HI Design Principles" (http://developer.apple.com/library/ios/#documentation/UserExperience/Conceptual/MobileHIG/Introduction/Introduction.html) and "Human Interface Principles" (http://developer.apple.com/library/ios/#documentation/UserExperience/Conceptual/MobileHIG/Principles/Principles.html%23//apple_ref/doc/uid/TP40006556-CH5-SW1)."
With ten thousand rows you have no choice but to load your data on demand. Otherwise, your app has no chance of performing at a decent level. A freeze like the one you describe would definitely be enough to get your app rejected.¹
This approach is rather wasteful with memory, too, because out of ten thousand rows you need at most two dozen at any given time.
A reasonably simple way to speed things up is to prepare an NSCache for your pages of data (say, ten items per page), add code that gets the total count of records, and modify the code that retrieves the data to read records from a single page (use LIMIT/OFFSET). When your table shows rows, it should try getting the page containing the row from the cache. If the page is not there, it should load it from SQLite and put it in the cache. Using pages minimizes the number of round trips to the database; using the cache lets you keep memory usage optimal.
¹ It does not look like your app has been rejected for a freeze this time around, but once you fix the "lacking native iOS functionality", the freeze will trigger another rejection.
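The paging-plus-cache scheme above can be sketched language-neutrally. The following uses Python's sqlite3 module in place of the iOS APIs (a dict stands in for NSCache); the table and column names are hypothetical:

```python
# Sketch of on-demand page loading: a row lookup loads (and caches)
# only the page of 10 records containing it, via LIMIT/OFFSET.
import sqlite3

PAGE_SIZE = 10
page_cache = {}   # stands in for NSCache; evict entries under memory pressure

def row_at(conn, index):
    """Return the row at `index`, loading its page on demand."""
    page = index // PAGE_SIZE
    if page not in page_cache:
        cur = conn.execute(
            "SELECT id, name FROM items ORDER BY id LIMIT ? OFFSET ?",
            (PAGE_SIZE, page * PAGE_SIZE))
        page_cache[page] = cur.fetchall()
    return page_cache[page][index % PAGE_SIZE]

# Demo with an in-memory database of 10,000 rows
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, f"row {i}") for i in range(10_000)])
row = row_at(conn, 4242)   # faults in page 424 only, not all 10,000 rows
```

On iOS the same lookup would run inside `tableView:cellForRowAtIndexPath:`, with NSCache evicting cold pages automatically under memory pressure.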
Check this demo for lazy loading of all data along with images.
This sample demonstrates a multi-stage approach to loading and displaying a UITableView. It begins by loading the relevant text from an RSS feed so the table can load as quickly as possible, and then downloads the images for each row asynchronously so the UI is more responsive.
I'm making a single page application, and one approach I am considering is keeping all of the templates as part of the single-page DOM tree (basically compiling server-side and sending in one page). I don't expect each tree to be very complicated.
Given this, what's the maximum number of nodes on the tree before a user on a mediocre computer/browser begins to see performance degradation? For example, 6 views stored as 6 hidden nodes each with 100 subnodes of little HTML bits.
Thanks for the input!
The short of it is, you're going to hit a bandwidth bottleneck before you'd ever hit a DOM size bottleneck.
Well, I don't have any mediocre machines lying around. The only way to find out something like that is to test it. It will be different for every browser, every CPU.
Is your application JavaScript?
If yes, you should consider loading only the templates you need using XHR, as you're going to be more concerned with load time on mobile than with performance on a crappy HP from 10 years ago.
I mean, what you describe should be technically reasonable for any machine of this decade, but you should not load that much junk up front.
A single page application doesn't necessitate bringing all the templates down at once. For example, your single page can have one or more content divs which are replaced at will dynamically. If you're thinking about something like running JSON objects through a template to generate the HTML, the template can remain in the browser cache, the JSON itself stays in memory, and you can regenerate the HTML without any issue and avoid the DOM-size issue.
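The "keep the template and the JSON, regenerate markup on demand" idea can be illustrated in a few lines. This Python sketch uses `string.Template` as a stand-in for whatever client-side templating library you'd actually use; the view names and data are invented:

```python
# Regenerate HTML from in-memory data instead of keeping six hidden
# DOM trees around: the template is parsed once, the data stays as
# plain objects, and markup exists only for the visible view.
from string import Template

view_template = Template("<li>$name: $count</li>")   # cached once
state = [{"name": "inbox", "count": 3}, {"name": "sent", "count": 12}]

def render(items):
    """Rebuild the view's markup on demand from the current state."""
    return "<ul>" + "".join(view_template.substitute(i) for i in items) + "</ul>"

html = render(state)   # inserted into the content div when the view is shown
```

The trade-off is a little CPU on each view switch in exchange for a DOM that only ever holds the active view.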
What's the best way to store a very large image for an iOS app? I want an app to be able to view images that might be hundreds of megabytes, perhaps as much as a gigabyte as jpeg. I need to be able to store the image and retrieve selected areas for display.
Currently the images are cut into 512x512 pixel tiles and stored as jpeg files in a directory tree with tens of thousands of tiles (actually an image pyramid including downsamples).
Ignoring the question of displaying the image, I'm interested in the most efficient, manageable way to store this data on the device: as files (like they currently are), in an SQLite database, or something else?
Second part of the question: is there a limit to the amount of data an app can store, or can an app keep importing data up to the storage limit of the device? I'm asking here about data that an app imports after it's installed.
The solution to this is to pre-tile the enormous image so the tiles can be quickly retrieved from the file system on an as-needed basis. One problem with very large images is that most solutions require the whole image to be rendered into a context, consuming vast amounts of memory. On a system like iOS, where memory is limited, the way to solve this is to use a library like libjpeg or libjpeg-turbo to render the image a line at a time, then save the pixels into a raw file. The downside of doing this directly is that when you need one tile, you have to jump all over the file finding each row of that tile. Thus a better solution is to not only scan incrementally, but to tile incrementally too. You can use mmap to map just the area of the file you need, so you can really minimize memory consumption. That said, you can thrash the Unified Buffer Queue on iOS so badly that the app crashes, or even the whole system!
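To make the mmap idea concrete, here is a minimal sketch (in Python, for illustration only) of pulling one 512x512 tile out of a pre-tiled raw file. It assumes a hypothetical on-disk layout where tiles are stored contiguously in row-major order, so a tile fetch maps only the pages it needs:

```python
# Read one tile from a pre-tiled raw file via mmap, so only the
# needed region of the file is faulted into memory.
import mmap

TILE = 512                      # tile edge in pixels
BPP = 4                         # bytes per pixel (RGBA)
TILE_BYTES = TILE * TILE * BPP  # one tile stored contiguously

def read_tile(path, tiles_per_row, tx, ty):
    """Map just the region holding tile (tx, ty) and return its bytes."""
    offset = (ty * tiles_per_row + tx) * TILE_BYTES
    # mmap offsets must be page-aligned: align down, then skip the slack
    align = offset % mmap.ALLOCATIONGRANULARITY
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), TILE_BYTES + align,
                       offset=offset - align, access=mmap.ACCESS_READ)
        try:
            return mm[align:align + TILE_BYTES]
        finally:
            mm.close()

# Demo: write a tiny 2x2 grid of tiles (each filled with its index),
# then fetch tile (1, 0) without reading the rest of the file.
with open("pyramid_level0.raw", "wb") as f:
    for i in range(4):
        f.write(bytes([i]) * TILE_BYTES)
tile = read_tile("pyramid_level0.raw", tiles_per_row=2, tx=1, ty=0)
```

A real image pyramid would repeat this layout per downsample level; the key point is that fetching a tile is a single contiguous mapped read rather than a scatter of seeks.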
If you are curious about how to implement the above solution, there is a freely available project on github - PhotoScrollerNetwork - that does all the above.
A sample from Apple: PhotoScroller
What about splitting it into parts? Then it can be gathered by your application as needed.