We are using the plugin https://goodies.pixabay.com/jquery/tag-editor/demo.html for our autocomplete feature. We load the source with 3500 items. Performance becomes very poor once the user starts typing: the autocomplete shows the filtered results only after 6 to 8 seconds.
What alternate approaches can we take for up to 4000 items in the autocomplete?
Appreciate your response!
Are you using the minLength option of the autocomplete?
On their homepage, they have something like this:
$('#my_textarea').tagEditor({ autocomplete: { 'source': '/url/', minLength: 3 } });
This effectively means that the user has to enter at least 3 characters before the autocomplete kicks in. Doing so will usually reduce the number of results from the autocomplete to a more sane count (maybe 20-30).
However, this might not necessarily be your problem. First you should figure out whether it's your server that has a problem responding quickly (you can use your browser's developer toolbar to see how long the request takes to complete).
If the request takes 6-8 seconds, then you will have to optimize your server's code. On the other hand, if the response is quick but tagEditor needs a long time to build the suggestion list, then the problem is that it might not be optimized for so many suggestions. In that case, the ultimate solution would be to rewrite the autocompletion module yourself or patch the existing one to better scale to your needs.
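If it turns out the server round trip itself is the bottleneck, another option is to load the whole list once and filter it in the browser; with only 3500-4000 items that filtering should take a few milliseconds. A rough sketch (assuming the plugin passes its autocomplete options straight on to jQuery UI autocomplete, as the demo suggests; the URL and the cap of 20 suggestions are placeholders):
$.getJSON('/url/', function (items) {            // items: an array of strings
  $('#my_textarea').tagEditor({
    autocomplete: {
      minLength: 3,                               // wait for at least 3 characters
      source: function (request, response) {
        var term = request.term.toLowerCase();
        var matches = $.grep(items, function (item) {
          return item.toLowerCase().indexOf(term) !== -1;
        });
        response(matches.slice(0, 20));           // cap the suggestion list
      }
    }
  });
});
If the list is already filtered down to 20 suggestions and it is still slow, the problem is almost certainly in how the suggestion list is rendered.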
Do you go back to the server every time the user types in something to get the matching results?
I am using Spring Ehcache, which gets all the items from the database and stores them in the server cache when the server is started. Whenever the user types, the cached data is used, which returns the results within a few milliseconds. Someone else recommended this to me. Below is an example of it:
http://www.mkyong.com/ehcache/ehcache-hello-world-example/
I am using the jQuery autocomplete feature with 2500 items without any issue.
Here is a link where it is being used: http://www.all4sportsonline.com
I'm currently on a Looker project. In the model, one of the explores has many joins that pull in a lot of information, and as the customer says, this is just the beginning. I don't understand how to lighten it; persistence doesn't help, and I tried creating a datagroup.
Now I've picked up PDTs, but I don't understand how they can be implemented if I don't have a query. Is it only possible through LookML to pull out a derived table, or the SQL that Looker has generated based on what we have in the Look?
The Look looks like a regular table with 8 fields, and it takes a long time to open. I tried commenting out unused dimensions and measures, and I tried deleting the joins we do not depend on, but nothing comes of it... a fresh perspective may be needed.
I hope you can help me.
What I have is some metric data I am collecting. I want to send out an alert when I reach a specific error rate on these metrics.
To make it clear, my data looks something like this:
Timestamp
value (the runtime of a query)
state (error, success)
api-endpoint called
I have a Grafana board doing some calculations, which outputs something like this:
error-rate
api-endpoint
number of calls made to the api endpoint
Fine for now - since I can read this out of my Grafana board, I am able to send some error messages/warnings if the error rate is too high. Works like a charm. But now comes the point:
If the first two (e.g.) calls to a specific API fail, I will instantly receive an alarm sent by Grafana. I do not want that!
Is it possible - and if so, how? - to alert me ONLY if this specific request was executed at least 5 times? It is no problem if this is a generic alert like "hey, something is wrong!" - but I need to figure out whether the request triggering the alarm with a 50-100% error rate was executed at least a specific number of times before alarming.
It has to be done based on tags/fields; I do not want to add a separate query for each of my 35+ APIs (and the number is growing).
Any ideas, anybody?
Using Grafana 8.0
Using InfluxDb 1.8 (with Flux enabled)
Just to make it clear, in case anyone ever wants to do the same: Flux is king - you can use its filter functionality to base the data only on datasets where the value count is greater than X.
Yes: Flux is damn hot. I've loved it for a few months now.
For PowerApps, what data sources, other than SharePoint lists, are accessible via PowerShell?
There are actually two issues that I am dealing with. The first is dynamic updating, and the second is the 500-item limit that SharePoint lists are subject to.
I need to dynamically update my data source, which I am currently doing with PowerShell. My data source is not static, and updating records by hand is time-consuming and error-prone. The driving force behind my question is that the SharePoint list view threshold is 5,000 records; however, you are limited to 500 visible and searchable records when using SharePoint lists in the gallery view, and my data source contains more than 500 but fewer than 1,000 records. If any items beyond the 500th record should match the filter criteria, they will not be found. So SharePoint lists are not an option for me until that limitation is remediated.
Reference: https://powerapps.microsoft.com/en-us/tutorials/function-filter-lookup/
To your first question: PowerShell can be used for almost anything on the Microsoft stack. You could use SQL Server, Dynamics 365, SharePoint, Azure, and in the future there will be an SDK for the Common Data Service. There are a lot of connectors, and PowerShell can work with a good majority of them.
Take note that working with these data structures through PowerShell is independent of PowerApps; PowerApps just takes the data that the data connector gives it, even if you have something updating the data in the background (PowerShell, a cron job, etc.). In order to get a dynamic list of items, you can use a Timer control and a Refresh function on your data source to update the list every ~5-20 seconds.
To your second question about SharePoint, there is an article that came out around the time you asked this regarding working with large lists. I wouldn't say it completely solves your question, but it seems to state that using the "Filter" function on basic column types would possibly work for you:
...if you’d like to filter the set of items that you are showing in the gallery control, you will make use of a “Filter” expression, rather than the “Search” expression, which is the default that existing apps used. With our changes, SharePoint connector now supports “equals” type of queries on columns that support filtering (Single line of text, choice, numbers, dates and people), so make sure that the columns and the expressions you use are supported and watch for the same warning to avoid reverting back to the top 500 items.
It also notes that if you want to pull from a list larger than the 5k threshold, you would need to use indexes. I have not fully tested this yet, but it seems this could potentially solve your problem.
So I've started learning Access out of necessity, as the person who was in charge of it passed away and someone had to keep it going.
I noticed a very bad (at least IMO) behavior in all the databases he created: every single form was bound directly to a table or saved query. This way, if the user opened a form, he had to complete all the steps he was supposed to do, because if he closed the form mid-process (or the computer froze, or anything of the sort), the actual data would be compromised, as it would be half complete. This oftentimes broke everything in the process chain, rendering subsequent steps impossible to perform and forcing me to correct data manually, directly in the tables.
As I've started upgrading his stuff and developing my own, I've been trying to learn ways to allow the data to be edited in the form only, making it possible to cancel the process at any time or save all the changes at once in the end.
If the edits were simple, I discovered that I could create a recordset, copy the relevant data to unbound fields on the form and, in the end, if the user chose to, copy the data from the form fields back to the recordset.
Other times more complex solutions were required, as I would need to edit several pieces of data at once in continuous forms, "save" them, run more code, maybe add fields to hold the information originating from that processing, and so on. I then learned about using temporary tables, but did not like it, since it tended to bloat the db. I even went on to create temporary databases during code execution that would host the temporary tables and be destroyed in the end, but that added too much unnecessary complexity.
Nowadays I'm using disconnected ADO recordsets to hold the temporary data and fields. It works but has its limitations.
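For reference, this is roughly the pattern I mean - a minimal sketch (it needs a reference to the Microsoft ActiveX Data Objects library, and the field names are just illustrative):
Private Sub Form_Load()
    ' Build a fabricated, disconnected recordset held only in memory
    Dim rs As New ADODB.Recordset
    rs.CursorLocation = adUseClient
    rs.Fields.Append "OrderID", adInteger
    rs.Fields.Append "Qty", adInteger
    rs.Fields.Append "Notes", adVarWChar, 255, adFldIsNullable
    rs.Open

    rs.AddNew
    rs!OrderID = 1
    rs!Qty = 5
    rs.Update

    Set Me.Recordset = rs    ' bind the form to the in-memory data
End Sub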
So I'm wondering: what is the best way you guys - much more experienced than me - use to approach this kind of scenario? Are in-memory ADO recordsets really the best way to go about it?
I think you are mixing up two things that a form does that have completely different requirements: editing existing records (and bound forms are great for that) and creating new records (where using a straight bound form can result in incomplete records being created). The way to approach it depends on many things, but mainly on how much data is necessary for a new record to be considered "complete".
I usually do one of the following things:
Create an unbound popup modal form for adding new records where only the necessary fields are present. Once complete, it loads the new record into the main form for further editing.
Use the above method, except the form is not a popup one but a set of unbound fields in the footer or header of the main form.
Let the user create new records but bind validation to the OnClose event of the form (and/or other events appropriate to your situation) that deletes the half-filled record if it does not validate.
Let users create new records in the bound form but have a 'cleanup' routine called either on a schedule or based on user actions, as sketched below.
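For the last option, a minimal sketch of such a cleanup routine (the table and field names are illustrative):
Public Sub CleanupIncompleteRecords()
    ' Remove new records that were left without their required fields filled in
    CurrentDb.Execute _
        "DELETE FROM tblOrders WHERE CustomerID Is Null OR OrderDate Is Null", _
        dbFailOnError
End Sub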
Ultimately if your business process requires the majority of fields to be manually added/edited every time a new record is added or edited, you are better off using an unbound form.
This way, if the user opened a form, he had to complete all the steps he was supposed to do, because if he closed the form mid-process (or the computer froze, or anything of the sort), the actual data would be compromised, as it would be half complete.
No: if the computer freezes, then no data is saved to the table. This is the same as if you had used a disconnected recordset and an unbound form.
If you use the form's BeforeUpdate event with some verification code that does a simple Cancel = True, then the form's data is not saved, nor is the table updated. Again, if you used a disconnected recordset and the user closes the form, you have to test the data - and again you can either choose to write out the data or not - in effect there is ZERO difference between a form bound to a table and a disconnected, unbound form.
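For example, a minimal sketch of such a BeforeUpdate check (the control names here are just illustrative):
Private Sub Form_BeforeUpdate(Cancel As Integer)
    ' Verify the record before Access writes it out
    If IsNull(Me.CustomerID) Or IsNull(Me.OrderDate) Then
        MsgBox "Please enter a customer and an order date before saving."
        Cancel = True    ' nothing is written to the table
    End If
End Sub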
If the edits were simple, I discovered that I could create a recordset, copy the relevant data to unbound fields on the form and, in the end, if the user chose to, copy the data from the form fields back to the recordset.
No, you don't need to do the above. The above achieves nothing; it only racks up additional development hours and increases the cost of the application. In nearly all cases, unbound forms increase development costs compared with a simple form bound to a table. So the original developer had the correct idea. You can control the update of the underlying table in nearly all cases to achieve the required verification. Forms only save and write out the data if the developer allows them to.
So bound Access forms will no more (or less) write incomplete data out to a table than unbound ones, provided you place verification code in the form's BeforeUpdate event. A half-filled bound form and a half-filled unbound form with a disconnected recordset BOTH will not write their data if the computer freezes.
And BOTH types of forms will not write data out to the table until your verification code has completed.
Access is not designed for unbound forms; tools like VB.NET, or even VB6, had a whole bunch of cool wizards and support for unbound forms. In Access, we don't have those wizards. And when you use unbound forms, you lose tons of form events. You in effect get the worst of both worlds, since you lose the use of form events and have no wizards or support for unbound forms. Even just the several delete-record events we have are rather amazing.
You lose the use of Me.Dirty, On Insert, Me.NewRecord, the form's AfterUpdate event - the list of features you toss out and lose is huge. And if you want a button that writes data to the table (such as a save button on the form), then just go:
If Me.Dirty = True Then
    Me.Dirty = False   ' this forces your verification code to run
End If
There are a FEW use cases in which unbound forms will benefit you, but they will cost you rather a lot in terms of development time.
Contrived example:
{
  productName: 'Lost Series 67 DVD',
  availableFrom: '19/May/2011',
  availableTo: '19/Sep/2011'
}
The view storeFront/currentlyAvailableProducts basically checks whether the current datetime is within availableFrom-availableTo and, if so, emits the doc.
I would like to force a view to regenerate at 1am every night, i.e. process/map all docs.
At first I had a simple Python script scheduled via crontab that touched each document, hence causing a new revision and the view to update. However, since CouchDB is append-only, this wasn't very efficient - i.e. loads of unnecessary IO and disk space usage followed by compaction; very resource-wasteful on all fronts.
The second solution was to push the view definition again via couchapp push; however, this meant the view was unavailable (or partially unavailable) for several minutes, which was also unacceptable.
Are there any other solutions?
Will's answer is great; but just to get the consensus viewpoint represented here:
Keep one view, and query it differently every day
Determine your time-slice size, for example one day.
Next, for each document, you emit once for every time slice (day) that it is available. So if a document is available from 19 May to 21 May (inclusive), your emit keys would be:
"2011-05-19"
"2011-05-20"
"2011-05-21"
Once that is computed for every document, to find the docs available on a certain day (e.g. today), just query the view with ?key="2011-05-18".
You never have to update or re-run your views.
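A minimal sketch of such a map function (assuming availableFrom/availableTo are stored in, or first converted to, a format the view server's Date constructor can parse):
function (doc) {
  // Emit one key per day in the document's availability window
  if (!doc.availableFrom || !doc.availableTo) return;
  var day = new Date(doc.availableFrom);
  var end = new Date(doc.availableTo);
  while (day <= end) {
    var m = ('0' + (day.getMonth() + 1)).slice(-2);
    var d = ('0' + day.getDate()).slice(-2);
    emit(day.getFullYear() + '-' + m + '-' + d, null);   // "YYYY-MM-DD"
    day.setDate(day.getDate() + 1);
  }
}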
If you must never change your query URL for some reason, you might be able to use a _show function to 302 (temporary) redirect to today's correct query.
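Such a _show function could look roughly like this (untested; the design document and view names are made up):
function (doc, req) {
  // Redirect to today's key on the availability view
  var now = new Date();
  var m = ('0' + (now.getMonth() + 1)).slice(-2);
  var d = ('0' + now.getDate()).slice(-2);
  var key = now.getFullYear() + '-' + m + '-' + d;
  return {
    code: 302,
    headers: {
      'Location': '/db/_design/storeFront/_view/availableByDay?key="' + key + '"'
    }
  };
}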
So your view is not being updated automatically I take it?
New and changed documents are not being added on the fly?
Oh I see, you're cheating. You're using "out of document" information (i.e. the current date) during view creation.
There's no view renaming, but if you were desperate you could use URL rewriting.
Simply create a design document "each day": /db/_design/today05172011
Then use some url rewriting to change: GET /db/_design/today/_view/yourview
to: GET /db/_design/today051711/_view/yourview
Create the view at 11pm server time (tweak it so that "now" is "tomorrow", or whatever).
Then add some more cleanup code to later delete the older views.
This way your view builds each night as you like.
Obviously you'll need to front Couch with some other web server/proxy to pull this off.
It's elegant, and inelegant, at the same time.