I have to make an RPC call when my Presenter is revealed. The call returns a String[] with a large amount of data, and it is very, very slow: it takes about 1 minute to finish.
After some tests, I discovered that ListBox.addItem() accounts for over 30% of that time. That is a huge cost for just adding Strings to a widget.
What can I do to minimize this time, assuming that I need to load everything when my Presenter is revealed?
Things that I've already tried:
Putting my query inside a View (didn't help much).
Having the server read a text file instead of querying the DB (worse than the View approach).
Using the Collections classes ArrayList, Vector, ... (Vector reduced the time by 5%).
I've also noticed that GWT provides LightweightCollections to improve the use of collections on the client side (that's my next step).
But what can I do about the ListBox?
Too much choice is no choice.
You will not be able to tune the GWT ListBox/ValueListBox to display such a huge amount of data (I am guessing entries in the 1000's, considering the 20 seconds, i.e. 30% of 1 minute). The GWT ListBox is meant for selection: you cannot expect a user to look at 1000's of values, scroll, and then select. It's a user-interaction nightmare.
The right approach is to use an asynchronously loaded SuggestBox for such huge data. With a SuggestBox you can filter and display far fewer choices based on the user's input keys.
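For instance, here is a minimal sketch of a SuggestBox backed by a custom SuggestOracle; the ItemServiceAsync interface and its suggest() method are hypothetical placeholders for your own RPC:

public class RpcSuggestOracle extends SuggestOracle {
    private final ItemServiceAsync service; // hypothetical RPC service

    public RpcSuggestOracle(ItemServiceAsync service) {
        this.service = service;
    }

    @Override
    public void requestSuggestions(final Request request, final Callback callback) {
        // Ask the server only for entries matching the typed query.
        service.suggest(request.getQuery(), request.getLimit(),
            new AsyncCallback<List<String>>() {
                public void onFailure(Throwable caught) { /* log and ignore */ }

                public void onSuccess(List<String> result) {
                    List<Suggestion> suggestions = new ArrayList<Suggestion>();
                    for (final String s : result) {
                        suggestions.add(new Suggestion() {
                            public String getDisplayString() { return s; }
                            public String getReplacementString() { return s; }
                        });
                    }
                    callback.onSuggestionsReady(request, new Response(suggestions));
                }
            });
    }
}

// Usage: only matching entries ever cross the wire.
SuggestBox box = new SuggestBox(new RpcSuggestOracle(service));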
If using a SuggestBox is not feasible, you should give CellList from the Cell Widgets a try (they might show better performance): https://developers.google.com/web-toolkit/doc/latest/DevGuideUiCellWidgets
I am not sure, but also give GWTChosen a try: http://jdramaix.github.com/gwtchosen/
I am using a DataGrid together with a ListDataProvider to display various row data in my application. For most cases, it was fine to fetch everything from the server at once.
Now, it appears to be necessary to fetch data in pagination steps. This means my RPC call returns 10 items each time, plus the total count of possible results.
The total count is used to set up the attached SimplePager by manually calling datagrid.setRowCount(totalCount, true) after having set the row data. "After" is important here,
because setRowData also triggers a setRowCount call with the concrete number of items (always 10 in my case).
The problem is that after the row count has been set manually, another participant, a ScheduledCommand, triggers a flushCommand, which in turn triggers a setRowCount call that
sets the count back to 10. The consequence: the pager shows 1-10 of 10 and the pager controls are disabled.
How can I enforce a certain rowCount even if the ListDataProvider only has 10 items each time?
You might suggest using an AsyncDataProvider. However, there is already a quite complex generic design (an AbstractTablePresenter<DTO, ...> implementing all the logic to fetch data and push it to a generic display)
which is backed by ListDataProviders. It's hard to explain, but I would actually prefer to keep using the ListDataProvider.
For my use case, the easiest fix was to subclass my AbstractTablePresenter for the on-demand case and use an AsyncDataProvider, which brings all the features I need. The damage to my design was less severe than expected (patting myself on the back ;-) ).
I tried subclassing ListDataProvider first, but the relationships between the data, the rowCount, the rowCountEvents and the attached pager objects are so manifold that you would end up overriding most of the methods of ListDataProvider and of your pager implementation.
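For reference, a minimal sketch of the AsyncDataProvider route; tableService and PageResult are hypothetical placeholders for your own RPC interface and its paged return type:

AsyncDataProvider<MyDto> provider = new AsyncDataProvider<MyDto>() {
    @Override
    protected void onRangeChanged(HasData<MyDto> display) {
        final Range range = display.getVisibleRange();
        tableService.fetchPage(range.getStart(), range.getLength(),
            new AsyncCallback<PageResult<MyDto>>() {
                public void onFailure(Throwable caught) { /* handle error */ }

                public void onSuccess(PageResult<MyDto> page) {
                    // The exact total count is what keeps the pager controls enabled.
                    updateRowCount(page.getTotalCount(), true);
                    updateRowData(range.getStart(), page.getItems());
                }
            });
    }
};
provider.addDataDisplay(dataGrid);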
We are building our new next-generation server for a medium-sized back-office application.
We have already decided that we would like to use a Java framework for the client side (GWT / Vaadin / ZK).
What we would like now is to create a proof-of-concept example with each technology.
Our back-office UI is pretty standard: we have tables / grids with filters that should show entries straight from the DB.
The problem is that we have a huge number of rows in each table (1M minimum),
which means we must use load-on-demand tables.
My question is: how do I implement a load-on-demand table for my big tables? I looked around and saw the following concept again and again:
you create a container, you populate it with data, and the data is displayed on the client side.
The problem is that I tried this naive way of populating the containers with 1M entries, and it was awful. Are there any built-in on-demand containers?
Any code examples / references will be a huge help!
You would want to use the GWT CellTable, which has the AsyncDataProvider that lets you handle the user's paging and sorting events by grabbing data from your server.
It also provides an alternative, the ListDataProvider, which lets you fetch your data as a list of objects and then set that data on your table. If you use ListDataProvider, you have to define how to sort your objects with Comparators, and the table will handle sorting and paging against that list.
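A short sketch of the ListDataProvider route, assuming a hypothetical MyRow type with a getName() field and a nameColumn already added to the table:

ListDataProvider<MyRow> provider = new ListDataProvider<MyRow>(rows);
provider.addDataDisplay(table);

// In-memory sorting: the ListHandler sorts provider.getList() with your Comparator.
ColumnSortEvent.ListHandler<MyRow> sortHandler =
    new ColumnSortEvent.ListHandler<MyRow>(provider.getList());
sortHandler.setComparator(nameColumn, new Comparator<MyRow>() {
    public int compare(MyRow a, MyRow b) {
        return a.getName().compareTo(b.getName());
    }
});
table.addColumnSortHandler(sortHandler);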
Google "gwt celltable asyncdataprovider example" for more examples and tutorials.
Vaadin has a nice concept of lazily loading data in most of its components.
For example, tables, lists, dropdowns, etc. have that concept.
The only thing you really need to know at the start is the total number of rows.
Everything else can then be handled on demand.
For example, the Table component initially loads only about 30 rows (this can be customized)
and then fetches rows as needed (or rather, they are usually fetched just before the user scrolls to the next rows).
An example is this demo:
http://demo.vaadin.com/dashboard/#!/transactions
How you retrieve the data from your backend depends on the technology used,
but Vaadin has working concepts where you don't need to load all 1M rows into memory;
it will handle the fetch-on-demand as the rows need to be displayed.
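In newer Vaadin versions (8 and later) the same idea is expressed directly with a callback-based DataProvider. A minimal sketch, assuming a hypothetical backendService with paged fetch/count methods and a Transaction row type:

Grid<Transaction> grid = new Grid<>();
grid.setDataProvider(DataProvider.fromCallbacks(
    // Fetch only the requested window of rows.
    query -> backendService.fetch(query.getOffset(), query.getLimit()).stream(),
    // The total row count is the one thing Vaadin needs up front.
    query -> backendService.count()));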
I have a case where my DataGrid might contain thousands of rows of data, but my page size is only 50. So I want to bring only that much data from the server to the client, and load rows of data as and when required. Does GWT support this out of the box?
I tried using PageSizePager, but then realized that the data had already been sent to the client, which defeats what I am trying to achieve.
Thanks.
Indeed. The aim (among others) of DataGrid, as well as of any other cell widget, is to display large data sets as fast as possible by using pagination and cells.
I suggest you start with the official docs about CellTable (the same applies to DataGrid) and about pagination/data retrieval.
You'll probably need an AsyncDataProvider to asynchronously fetch data, and an AbstractPager (say, SimplePager) to force the retrieval of more data.
You can also decide to extend the current range of rows (say, for infinite scroll), instead of having multiple pages (by using KeyboardPagingPolicy.INCREASE_RANGE).
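Wiring these together takes only a few lines; a sketch, assuming the dataGrid is already backed by an AsyncDataProvider:

SimplePager pager = new SimplePager(SimplePager.TextLocation.CENTER);
pager.setDisplay(dataGrid); // page changes now drive the provider's onRangeChanged()

// Alternative: grow the visible range instead of paging (infinite scroll).
dataGrid.setKeyboardPagingPolicy(KeyboardPagingPolicy.INCREASE_RANGE);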
In general, the question is too broad and can hardly be answered in a few lines (or without more specifics). The bits are all in the links above, and don't forget to have a look at the cell widget samples in the showcase, as they cover almost everything you will need.
Hope this gets you started.
I am building an API over HTTP that fetches many rows from PostgreSQL with pagination. In ordinary cases, I would implement such pagination with a naive OFFSET/LIMIT clause. However, there are some special requirements in this case:
There are so many rows that I believe users cannot reach the end (imagine a Twitter timeline).
Pages do not have to be randomly accessible, only sequentially.
The API should return a URL which contains a cursor token that points to the page of continuous chunks.
Cursor tokens do not have to exist permanently, only for some time.
The ordering fluctuates frequently (like Reddit rankings), yet continuous cursors should keep their ordering consistent.
How can I achieve this? I am ready to change my whole database schema for it!
Assuming it's only the ordering of the results that fluctuates and not the data in the rows, Fredrik's answer makes sense. However, I'd suggest the following additions:
Store the id list in a PostgreSQL table using the array type rather than in memory. Doing it in memory, unless you carefully use something like Redis with auto-expiry and memory limits, is setting yourself up for a memory-consumption DoS attack. I imagine it would look something like this:
create table foo_paging_cursor (
    cursor_token uuid primary key, -- a uuid is probably best, or a timestamp (see below)
    result_ids integer[],          -- or text[] if you have non-integer ids
    expiry_time timestamp
);
You need to decide whether the cursor_token and result_ids can be shared between users, to reduce your storage needs and the time needed to run the initial query per user. If they can be shared, choose a cache window, say 1 or 5 minutes; upon a new request, create the cache_token for that time period and then check whether the result ids have already been calculated for that token. If not, add a new row for that token. You should probably add a lock around the check/insert code to handle concurrent requests for a new token.
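A sketch of that check/insert in Java/JDBC, assuming the table above, a UUID token, and a hypothetical runRankingQuery() that produces the freshly ordered ids:

void ensureCursor(Connection conn, UUID token) throws SQLException {
    conn.setAutoCommit(false);
    try (Statement st = conn.createStatement()) {
        // An advisory lock serializes concurrent requests for a new token;
        // it is released automatically at commit.
        st.execute("SELECT pg_advisory_xact_lock(42)");
    }
    try (PreparedStatement check = conn.prepareStatement(
            "SELECT 1 FROM foo_paging_cursor WHERE cursor_token = ?")) {
        check.setObject(1, token);
        try (ResultSet rs = check.executeQuery()) {
            if (!rs.next()) {
                Integer[] ids = runRankingQuery(conn); // hypothetical: the expensive ordered query
                try (PreparedStatement ins = conn.prepareStatement(
                        "INSERT INTO foo_paging_cursor (cursor_token, result_ids, expiry_time) "
                        + "VALUES (?, ?, now() + interval '5 minutes')")) {
                    ins.setObject(1, token);
                    ins.setArray(2, conn.createArrayOf("integer", ids));
                    ins.executeUpdate();
                }
            }
        }
    }
    conn.commit();
}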
Have a scheduled background job that purges old tokens/results, and make sure your client code can handle any errors related to expired/invalid tokens.
Don't even consider using real db cursors for this.
Keeping the result ids in Redis lists is another way to handle this (see the LRANGE command), but be careful with expiry and memory usage if you go down that path. Your Redis key would be the cursor_token, and the ids would be the members of the list.
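With Jedis, for example, that could look roughly like this (token and ids as defined above; page size of 50 as in Fredrik's answer):

Jedis jedis = new Jedis("localhost");
String key = "cursor:" + token;
for (Integer id : ids) {
    jedis.rpush(key, id.toString()); // the ids become list members
}
jedis.expire(key, 300); // bound memory use: a 5-minute window, as above

// Page n (1-based, 50 rows per page); LRANGE is inclusive on both ends.
List<String> page = jedis.lrange(key, (n - 1) * 50, n * 50 - 1);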
I know absolutely nothing about PostgreSQL, but I'm a pretty decent SQL Server developer, so I'd like to take a shot at this anyway :)
How many rows/pages do you expect a user to browse through per session, at most? For instance, if you expect a user to page through a maximum of 10 pages per session (each page containing 50 rows), you could take that maximum and set up the web service so that when the user requests the first page, you cache 10*50 rows (or just the IDs of the rows, depending on how much memory/how many simultaneous users you have).
This would certainly help speed up your web service, in more ways than one. And it's quite easy to implement, too. So:
When a user requests data for page #1: run a query (complete with ORDER BY, joins, etc.), store all the IDs in an array (but a maximum of 500 IDs), and return the data rows that correspond to the IDs in the array at positions 0-49.
When the user requests pages #2-10: return the data rows that correspond to the IDs in the array at positions (page-1)*50 through page*50-1 (as in the snippet below).
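In code, the slice arithmetic is simply (a sketch; page is 1-based and cachedIds is the id list from step 1):

int from = (page - 1) * 50;                      // inclusive
int to = Math.min(page * 50, cachedIds.size());  // exclusive
List<Integer> pageIds = cachedIds.subList(from, to);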
You could also bump up the numbers; an array of 500 ints only occupies about 2 KB of memory, but it also depends on how fast you want your initial query/response to be.
I've used a similar technique on a live website, and when the user continued past page 10, I just switched to plain queries. I guess another solution would be to keep expanding/filling the array (running the query again, but excluding already-included IDs).
Anyway, hope this helps!
A SqlDataReader is said to be a faster way to process the results of a stored procedure. What are some of the advantages/disadvantages of using a SqlDataReader?
I assume you mean "instead of loading the results into a DataTable"?
Advantages: you're in control of how the data is loaded. You can ask for specific data types, and you don't end up loading the whole set of data into memory all at the same time unless you want to. Basically, if you want the data but don't need a data table (e.g. you're going to populate your own kind of collection) you don't get the overhead of the intermediate step.
Disadvantages: you're in control of how the data is loaded, which means it's easier to make a mistake and there's more work to do.
What's your use case here? Do you have a good reason to believe that the overhead of using a normal (or strongly typed) data table is significantly hurting performance? I'd only use SqlDataReader directly if I had a good reason to do so.
The key advantage is obviously speed - that's the main reason you'd choose a SQLDataReader.
One potential disadvantage not already mentioned is that the SqlDataReader is forward-only, so you can only go through the records once, in sequence; that's one of the things that allows it to be so fast. In many cases that's fine, but if you need to iterate over the records more than once, or to add/edit/delete data, you'll need to use one of the alternatives.
It also remains connected until you've worked through all the records and closed the reader (of course, you can opt to close it earlier, but then you can't access any of the remaining records). If you're going to perform any lengthy processing on the records as you iterate over them, you may find that you impact other connections to the database.
It depends on what you need to do. If you get back a page of results from the database (say, 20 records), it would be better to use a data adapter to fill a DataSet and bind that to something in the UI.
But if you need to process many records, one at a time, use a SqlDataReader.
Advantages: Faster, less memory.
Disadvantages: Must remain connected, must remember to close the reader.