Kendo UI AutoComplete + virtualization + remove duplicate values

I want to remove the already-added items from the AutoComplete list after server-side filtering has completed. I achieved this using the data method, removing the duplicate items from the data received from the server, and it works fine when virtualization is off.
As soon as I enabled virtualization, the filter request started firing against the server in an infinite loop. I suspect this happens because, after we remove some items, the page contains fewer records than the page size, while the reported total number of records stays larger than what the pages can actually deliver.
Is there any way to achieve the desired behavior?
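A minimal sketch of the duplicate-removal step described above, written as plain filtering logic independent of the Kendo DataSource (the function and field names are hypothetical). The key assumption: when virtualization is on, the reported total must shrink along with the filtered data, otherwise the widget keeps requesting pages it can never fill.

```javascript
// Remove already-selected values from one page of server results and
// adjust the reported total so virtualization paging stays consistent.
// `page` is { data: [...], total: n } as returned by the server;
// `selected` is the array of values already added to the widget.
function dedupePage(page, selected) {
  var chosen = new Set(selected);
  var filtered = page.data.filter(function (item) {
    return !chosen.has(item.value);
  });
  return {
    data: filtered,
    // Shrink the total by the number of items removed, so the widget
    // does not keep re-requesting "missing" records in a loop.
    total: page.total - (page.data.length - filtered.length)
  };
}
```

In a Kendo DataSource this logic would typically live in schema.parse or a requestEnd handler; the essential point is that the total the DataSource sees must match the filtered data, which is the likely cause of the infinite request loop described above.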

Powerful and cheap array algorithms

This is how text messages are normally stored in Firebase Realtime Database.
I am not fond of the idea that every time someone joins a group chat, they would need to download the entire history of, e.g., 20,000 text messages. Naturally, users wouldn't swipe all the way up to the very first message. However, in Firebase Realtime Database, storing all messages under a single parent node causes all of them to be downloaded as soon as a user queries that node (to join the group chat).
One possible efficiency solution:
Add a second parent node that stores older text messages. E.g., the latest 500 messages are kept under the main messages parent node and the remaining 19,500 old messages under a different parent node. However, the old-messages node would also need to be updated with newly aged-out messages, and we would then need to download all 19,500 old messages as a consequence.
Perhaps the ideal case is to create up to N parent nodes that each store packets of 300 messages. However, what consequences would there be from excessively creating parent nodes?
What efficiency solutions are recommended for a problem like this? Is there some technique I am forgetting or unaware of?
Just sort the list by date descending and limit to the last N messages. Or save the "inverse" date and sort on that. You can read more about it here.

Growing Tables with aggregations

I'm looking at creating a table that could potentially be loaded with hundreds of rows. I was hoping to use the growing option that the tables provide. I have a few questions.
If I have an aggregation that totals all the rows in a column, will it be the total of all rows or only of those that have been loaded? Or can this be controlled with a setting?
Similarly, will the select-all feature tick every row, even the ones not yet loaded, or only the loaded rows? Again, is this something I can configure?
This is my first time really using any of the UI5 table controls, and the SAP documentation says this, which I didn't really understand:
"Show Aggregations
Show aggregations (such as totals) on the table footer (sap.m.Column, aggregation: footer).
Do not show aggregations in “growing” mode. It is not clear, if an aggregation will only aggregate the items loaded into the front end, or all items."
For growing tables, by default all actions and aggregations are processed only for the data already loaded. Your citation from SAP means that it is not clear to the end user whether the aggregated data refers to the visible data or to all data.
If you want to implement something like "Select all" or "Delete all", it would be better to implement this in the back end. From the guidelines of sap.m.List:
In multiple selection mode, users can (de)select all items using the shortcut CTRL+A. This only affects items that have already been loaded to the front-end server. All other items are not (de)selected before they are loaded (for example, items added via lazy loading with growingScrollToLoad). This conflicts with the guideline that all items the user can reach by scrolling must be (de)selected.
To process all items, listen to the selectionChange event and to its flag selectAll. This indicates whether CTRL+A was triggered. As soon as an action is triggered, process the items accordingly. Depending on the number of items, consider processing them in the back end.
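A sketch of the selectionChange handling described above. The event object is mocked here rather than coming from a running UI5 app, and handleSelectionChange is a hypothetical name; the selectAll event parameter is the one the guideline refers to.

```javascript
// Decide how to process a (de)selection in a growing sap.m.Table.
// With CTRL+A only the already-loaded rows get selected, so when the
// selectAll flag is set, delegate the operation to the back end,
// which can cover rows that were never loaded into the front end.
function handleSelectionChange(oEvent) {
  if (oEvent.getParameter("selectAll")) {
    // User intent is "all items", including unloaded rows.
    return "backend";
  }
  // Only individual, already-loaded rows are affected.
  return "frontend";
}
```

In a real app this handler would be attached via oTable.attachSelectionChange(handleSelectionChange), and the "backend" branch would trigger a server-side operation over the full data set.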

Kibana showing different data on different visualizations for same condition or query

I have set up Kibana monitoring for server S. I am getting logs, the fields are set, and everything seems to be working fine. Now the pie chart I made for server S response codes shows 519 hits for 404, while on the server there are only 117 404 hits for today. I already made sure that I am looking at today's data only, and at server S only, no other server.
To drill down into what is going wrong, I made a data table. When I don't add the timestamp field, or disable it, the number of 404 response codes is shown as:
Kibana-error-1
When I add the timestamp field, the 404s show up only on the 3rd page and nowhere else, like this:
kibana-error-2
These 404 counts match neither the server nor each other across visualizations. Please help me understand where the problem lies and how to resolve it.
The problem lies in your bucket configuration, where you have split by rows. If you click the Split Rows bucket, you can see a Size setting, which is specified as 5. With the order set to descending, this returns only the top 5 counts per response for each timestamp.
So the 2nd image you attached shows only the top 5 counts per timestamp for each corresponding response.
Hence, for every response code such as 404, 200, 300, 301, etc. (as received in image 1), you are getting only the top 5 counts per timestamp in the 2nd image.
Note: because the size is also set to 5 in image 1, it displays only the top 5 responses by count. There could be more responses, which you can check by changing the size from 5 to 10.
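Under the hood, a split-rows bucket is an Elasticsearch terms aggregation, and the Size setting maps to its size parameter. A sketch of the equivalent query, assuming a hypothetical response.keyword field:

```json
{
  "aggs": {
    "response_codes": {
      "terms": {
        "field": "response.keyword",
        "size": 10,
        "order": { "_count": "desc" }
      }
    }
  }
}
```

Raising size from 5 to 10 lets less frequent response codes, such as the "missing" 404s, appear in the table instead of being cut off.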

PowerApps datasource to overcome 500 visible or searchable items limit

For PowerApps, what data sources other than SharePoint lists are accessible via PowerShell?
There are actually two issues I am dealing with. The first is dynamic updating, and the second is the 500-item limit that SharePoint lists are subject to.
I need to dynamically update my data source, which I currently do with PowerShell. My data source is not static, and updating records by hand is time-consuming and error-prone. The driving force behind my question is this: the SharePoint list view threshold is 5,000 records, but you are limited to 500 visible and searchable records when using SharePoint lists in a gallery, and my data source contains more than 500 but fewer than 1,000 records. Any items beyond the 500th record that should match the filter criteria will not be found. So SharePoint lists are not an option for me until that limitation is remediated.
Reference: https://powerapps.microsoft.com/en-us/tutorials/function-filter-lookup/
To your first question: PowerShell can be used for almost anything on the Microsoft stack. You could use SQL Server, Dynamics 365, SharePoint, or Azure, and in the future there will be an SDK for the Common Data Service. There are a lot of connectors, and PowerShell can work with a good majority of them.
Note that working with these data sources through PowerShell is independent of PowerApps. PowerApps just takes whatever data the connector gives it, so if something updates the data in the background (PowerShell, a cron job, etc.), PowerApps won't see the change until the data source is refreshed. To get a dynamic list of items, you can use a Timer control and a Refresh function on your data source to update the list every ~5-20 seconds.
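As a sketch, the periodic refresh described above only needs a few properties on a Timer control; the data source name MyDataSource is hypothetical:

```
// Timer control properties
Duration: 10000              // fire every 10 seconds (milliseconds)
Repeat: true
AutoStart: true
OnTimerEnd: Refresh(MyDataSource)
```

Each time the timer elapses, Refresh re-fetches the data source, so background changes made by PowerShell show up in the gallery within one timer cycle.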
To your second question about SharePoint: an article came out around the time you asked this about working with large lists. I wouldn't say it completely answers your question, but it states that using the "Filter" function on basic column types may work for you:
...if you’d like to filter the set of items that you are showing in the gallery control, you will make use of a “Filter” expression, rather than the “Search” expression, which is the default that existing apps used. With our changes, SharePoint connector now supports “equals” type of queries on columns that support filtering (Single line of text, choice, numbers, dates and people), so make sure that the columns and the expressions you use are supported and watch for the same warning to avoid reverting back to the top 500 items.
It also notes that if you want to pull from a list larger than the 5k threshold, you would need to use indexes. I have not fully tested this yet, but it seems this could potentially solve your problem.
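The delegable "equals" query from the quoted article can be sketched like this; the list, column, and control names are hypothetical:

```
// Delegated to SharePoint: evaluated server-side, so it can match
// items beyond the first 500
Filter(MyList, Status = "Active")

// Search is not delegable for the SharePoint connector, so it only
// looks at the first batch of items pulled into the app
Search(MyList, TextSearchBox.Text, "Title")
```

The practical rule is to stick to Filter with simple equality on the supported column types (single line of text, choice, number, date, person) and watch for the delegation warning in the formula bar.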

jQuery Auto Complete - Performance Issue

We are using the plugin https://goodies.pixabay.com/jquery/tag-editor/demo.html for our autocomplete feature. We load the source with 3,500 items. Performance becomes very poor once the user starts typing: the autocomplete shows the filtered results only after 6 to 8 seconds.
What alternative approaches can we take for up to 4,000 items in an autocomplete?
Appreciate your response!
Are you using the minLength option of autocomplete?
On their homepage, they have something like this:
$('#my_textarea').tagEditor({ autocomplete: { 'source': '/url/', minLength: 3 } });
This effectively means that the user has to enter at least 3 characters before autocomplete is triggered. Doing so will usually reduce the number of results from the autocomplete to a saner count (maybe 20-30).
However, this might not necessarily be your problem. First you should figure out whether it's your server that is slow to respond (you can use your browser's developer tools to see how long the request takes to complete).
If the request takes 6-8 seconds, then you will have to optimize your server-side code. On the other hand, if the response is quick but tagEditor needs a long time to build the suggestion list, then the problem is that it might not be optimized for so many suggestions. In that case, the ultimate solution would be to rewrite the autocompletion module yourself, or patch the existing one to scale better to your needs.
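If the server responds quickly and the bottleneck is building a huge suggestion list, a common approach is to filter locally and cap how many suggestions are handed to the widget at all. A sketch of that idea, independent of any particular plugin (function name hypothetical):

```javascript
// Filter a large in-memory list by substring match and cap the number
// of suggestions actually handed to the widget. Stops scanning as soon
// as enough matches are found.
function suggest(items, term, maxResults) {
  var needle = term.toLowerCase();
  var out = [];
  for (var i = 0; i < items.length && out.length < maxResults; i++) {
    if (items[i].toLowerCase().indexOf(needle) !== -1) {
      out.push(items[i]);
    }
  }
  return out;
}
```

Even with 4,000 items, a single linear pass like this is fast; it is usually the DOM work of rendering thousands of suggestions that takes seconds, so capping maxResults at 20-30 (which the minLength advice above also indirectly achieves) tends to fix the lag.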
Do you go back to the server every time the user types something, to get the matching results?
I am using Spring with Ehcache, which loads all the items from the database into the server cache when the server starts. Whenever the user types, the cached data is used, which returns the results within a few milliseconds. Someone else recommended this to me. Below is an example of it:
http://www.mkyong.com/ehcache/ehcache-hello-world-example/
I am using the jQuery autocomplete feature with 2,500 items without any issue.
Here is a link where it is being used: http://www.all4sportsonline.com