I have an ag-Grid with hundreds of rows, but since screen space is limited, we show only about 20 at a time. As the user scrolls down, the remaining records are loaded asynchronously.
The ag-Grid documentation says that "exportDataAsCsv(params): Does the full export." (https://www.ag-grid.com/javascript-grid-export/)
But the problem is that it doesn't export all the records unless they have been fully loaded into the grid.
Is there any way to download all the records without scrolling to the end? I expect all rows to be exported without having to reach the end of the grid.
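For reference, this is roughly how the grid and the export are wired up (a minimal sketch; the column fields, the #myGrid/#btnExport elements, and myDatasource are placeholders for my real setup):

```javascript
const gridOptions = {
  columnDefs: [{ field: 'name' }, { field: 'amount' }],
  rowModelType: 'infinite',  // rows arrive asynchronously as the user scrolls
  datasource: myDatasource,  // placeholder for the async datasource feeding the grid
};

new agGrid.Grid(document.querySelector('#myGrid'), gridOptions);

document.querySelector('#btnExport').addEventListener('click', () => {
  // Per the docs this should be "the full export", but in practice only the
  // rows already loaded into the grid end up in the CSV.
  gridOptions.api.exportDataAsCsv();
});
```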
Thanks for reading
So I have this web page that contains literally all the rows of a table.
At the moment the table has at most about a thousand rows. Even so, the whole page takes at least 7-8 seconds to load, which is unacceptable.
My current candidate solutions are:
1) Cache the table on the server side using the Play Framework cache.
2) Cache the table on the client side by putting it in a separate HTML page, using htaccess, and loading it in an iframe.
3) Load the table partially, or not at all, until the user asks for it. From what I see, most of our users use the search bar to filter the table anyway, so what's the point of loading the whole thing? Then again, I can't speak for everyone.
I'm using Play Framework 1 and Hibernate.
I've tried rewriting the query by selecting just the required fields rather than returning all fields (hibernate.findAll), but that doesn't seem to improve the load time.
I've used the Play cache and the load time was halved, but I've read some forum posts arguing against caching, saying caches are hard to manage. The cached version looks roughly like the sketch below.
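A minimal sketch of the caching pattern, assuming Play 1's Cache API; the Items controller, Item model, cache key, and 10-minute expiry are placeholders:

```java
import java.util.List;
import play.cache.Cache;
import play.mvc.Controller;

public class Items extends Controller {

    public static void list() {
        // Try the cache first; fall back to the database and keep the result for 10 minutes.
        List<Item> items = Cache.get("items.all", List.class);
        if (items == null) {
            items = Item.findAll();
            Cache.set("items.all", items, "10mn");
        }
        render(items);
    }
}
```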
Any suggestions?
The time spent loading needs to be broken into its component pieces. If you are concerned about query speed and index usage, you will want to determine the raw query being executed. Then you can run an EXPLAIN ANALYZE command against it to see what is actually happening.
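For example, assuming PostgreSQL and a hypothetical items table: in Play 1 you can turn on SQL logging with jpa.debugSQL=true in application.conf, lift the raw query from the log, and feed it to the planner:

```sql
-- Hypothetical query lifted from Hibernate's SQL log; run it in psql.
EXPLAIN ANALYZE
SELECT id, name, price
FROM items
ORDER BY name;
-- The plan output shows whether an index is used and which step eats the time.
```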
When using NatTable with a selection layer and a huge number of columns (1 million+), selecting a row takes an extremely long time (20+ seconds) or outright crashes my application. Is there a better way to handle selection over large amounts of data, or perhaps a way to select the entire range but only visually mark the visible columns as selected, updating the marks as the table is scrolled?
It turns out that this really is a performance leak in NatTable. Interestingly, it has existed in that form for a long time and nobody noticed it until now.
I have created a ticket [1] and am working on a fix.
Until then, you could try to remove or replace the "bad guys" in your composition (a stripped-down stack is sketched after the list). If that is not possible, you will need to wait for the fix.
ColumnReorderLayer: if you don't need column reorder support, remove it from your layer stack (when talking about millions of columns, I suppose reordering is not a required feature)
ColumnHideShowLayer: if you don't need to support hiding columns, remove it from your layer stack. Not sure if you need it for your use case of showing millions of columns.
SelectionModel: I don't know your data model, but maybe the PreserveSelectionModel performs slightly better at the moment. Or have a look at the proposed fix attached to the ticket (once it is uploaded) and use a local version of that fix in your environment by creating a custom ISelectionModel implementation based on the fix.
[1] https://bugs.eclipse.org/bugs/show_bug.cgi?id=509685
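A minimal sketch of a body layer stack without ColumnReorderLayer and ColumnHideShowLayer; the factory class and variable names are placeholders, and the data provider is whatever feeds your million columns:

```java
import org.eclipse.nebula.widgets.nattable.NatTable;
import org.eclipse.nebula.widgets.nattable.data.IDataProvider;
import org.eclipse.nebula.widgets.nattable.layer.DataLayer;
import org.eclipse.nebula.widgets.nattable.selection.SelectionLayer;
import org.eclipse.nebula.widgets.nattable.viewport.ViewportLayer;
import org.eclipse.swt.widgets.Composite;

public class SlimGridFactory {

    // Compose only the layers that are actually needed: data -> selection -> viewport.
    public static NatTable create(Composite parent, IDataProvider bodyDataProvider) {
        DataLayer bodyDataLayer = new DataLayer(bodyDataProvider);
        SelectionLayer selectionLayer = new SelectionLayer(bodyDataLayer);
        ViewportLayer viewportLayer = new ViewportLayer(selectionLayer);
        return new NatTable(parent, viewportLayer);
    }
}
```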
I have a table that loads data from a web service returning JSON. I load more data as the user scrolls down, since the database I am querying holds quite a bit of data.
My question is: would it be feasible to implement the right-side alphabetical index on such a table, and how could this be done? It is certainly possible if I load in ALL the data, sort it locally, populate the index, and cache the data for every subsequent launch. But what if the table grows to 10K rows of data or more? Loading the data on the application's first launch is one option.
In terms of performance and usability, does anyone have recommendations for what is possible here?
I don't think you should download all the data just to build those indexes; it would slow down refreshes and might cause memory problems.
But if you think the indexes could make a real difference, you can add some features to your server API. I would either add a separate API call such as get_indexes, or add a POST parameter get_indexes that attaches an array of indexes to the response of any call that sets it.
You should also be ready to handle the cases where the user taps an index entry before any data has been downloaded, or stresses your app by rapidly scrubbing the index up and down; the sketch below shows the shape of it.
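A minimal Swift sketch of feeding a server-provided index into the table's side index; the get_indexes endpoint, the loadRows helper, and the single-section layout are assumptions:

```swift
import UIKit

class ContactsViewController: UITableViewController {
    // Letters returned by the hypothetical get_indexes call, e.g. ["A", "B", ..., "Z"]
    var indexTitles: [String] = []

    // Supplies the right-side alphabetical index without downloading any rows.
    override func sectionIndexTitles(for tableView: UITableView) -> [String]? {
        return indexTitles
    }

    // When the user taps a letter, request just the rows for that letter.
    override func tableView(_ tableView: UITableView,
                            sectionForSectionIndexTitle title: String,
                            at index: Int) -> Int {
        loadRows(startingWith: title)
        return 0 // single-section table in this sketch
    }

    func loadRows(startingWith letter: String) {
        // Hypothetical server call, e.g. GET /items?starts_with=W, then reload the table.
    }
}
```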
First see how big the data download is. If the server can gzip the data, it may be surprisingly small - JSON zips very well because of the duplicated keys.
If it's too big, I would recommend modifying the server if possible to let you specify a starting letter. That way, if the user hits the "W" in the index you should be able to request all items that begin with "W".
It would also be helpful to get a total record count from the server so you can know how many rows are in the table ahead of time. I would also return a "loading..." string for each unknown row until the actual data comes down.
I am building an iPhone application. The database holds 5000 records, of which I display only 50 in the app. Would there be any memory issue if I initially create 5000 empty cells in the view, even though I am displaying only 50 rows of data?
If you build your table appropriately, you will only ever use a handful, perhaps a dozen, actual UITableViewCell objects, which are constantly recycled as rows come on screen.
Even 50 would be safe.
Having 5000 data objects in memory with 50 UITableViewCells should be pretty acceptable.
Especially if those data objects are small, or you are allowing CoreData to do some work for you with managing your data set.
The important thing is DO NOT MAKE 5000 TABLE CELL VIEWS. That is extremely poor practice.
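A minimal sketch of the recycling pattern, assuming a "RecordCell" identifier registered in the storyboard; the controller and property names are placeholders:

```swift
import UIKit

class RecordsViewController: UITableViewController {
    var records: [String] = [] // the 5000 model objects (or a lazily loaded window of them)

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return records.count // reporting 5000 rows does NOT create 5000 cell views
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        // Reuses one of the handful of off-screen cells instead of allocating a new one.
        let cell = tableView.dequeueReusableCell(withIdentifier: "RecordCell", for: indexPath)
        cell.textLabel?.text = records[indexPath.row]
        return cell
    }
}
```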
The iPhone has a limited amount of memory, so you should always be careful to display only the data that is necessary for that view. You can implement infinite scrolling: when the user scrolls to the bottom of the screen, trigger an event and load the next 25-50 records.
http://nsscreencast.com/episodes/8-automatic-uitableview-paging
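A minimal sketch of that trigger, assuming a paged data source; loadNextPage and the property names are placeholders:

```swift
import UIKit

class PagedTableViewController: UITableViewController {
    var records: [String] = []
    var isLoadingPage = false

    override func tableView(_ tableView: UITableView,
                            willDisplay cell: UITableViewCell,
                            forRowAt indexPath: IndexPath) {
        // When the last loaded row is about to appear, fetch the next batch.
        if indexPath.row == records.count - 1 && !isLoadingPage {
            isLoadingPage = true
            loadNextPage()
        }
    }

    func loadNextPage() {
        // Hypothetical async fetch of the next 25-50 records;
        // append to `records`, reload the table, then reset isLoadingPage.
    }
}
```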
One thing you'll quickly learn with the canonical way of handling tables is that regardless of the size of your model (i.e., the number of rows you intend to create), only a handful of cells are actually created, so the memory footprint remains low.
In essence, the UITableView initially creates and renders a screenful of rows (plus a few more for good measure). When you begin scrolling down, the controller recognises that it needs to draw a new row, but it also realises that rows at the top of the table have disappeared from view. So rather than create a whole new cell, it simply takes one of the cells no longer in view and reconfigures it with the new info. No matter how many rows your table has, only those few cells live in memory.
So in your case, the memory bottleneck will likely be the model that is feeding the cell configuration. If you loaded all 5000 rows into memory at once, that could be slow and memory-hungry. But there is help at hand: the table controller gives you a hint that basically tells you it wants to set up the *n*th row. So your model can be more targeted and only load the data you need. For example, since you know the 15th row is being rendered, go and grab the 15th row from your database rather than preloading the entire model up-front (sketched below).
This is the approach I've used to create apps with many more than 5000 rows without the need for paging. Of course, how your users navigate will depend on your dataset.
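A minimal sketch of that targeted loading, with a hypothetical single-row fetchRecord lookup standing in for the real database query:

```swift
import UIKit

class OnDemandViewController: UITableViewController {
    let rowCount = 5000 // total known up-front; rows are fetched one at a time

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return rowCount
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
        // Fetch just the record the table asks for instead of preloading all 5000.
        cell.textLabel?.text = fetchRecord(at: indexPath.row)
        return cell
    }

    func fetchRecord(at index: Int) -> String {
        // Hypothetical single-row database lookup, e.g. SELECT ... LIMIT 1 OFFSET index
        return "Record \(index)"
    }
}
```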
I've exported a cross-tab to Excel (NOT data-only) and noticed that after every so many rows, it automatically repeats the column headers.
This makes it difficult to sort and filter. I don't understand why the column headers are being repeated, and there doesn't seem to be an option to disable this.
Anyone run into this issue before and managed to resolve it?
EDIT: it appears that Crystal is automatically adding page breaks to the export. One solution I've found is to dissociate the page size from the printing page size and then set the vertical length to be ridiculously large. Of course, the page size is still hardcoded, so with a sufficiently large data set this would still break, but fortunately I have an upper bound on how large the data set can get.
Did you set these Excel export options?