Any tips or best practices for adding a new item to a history while maintaining a maximum total number of items? - iphone

I'm working on some basic logging/history functionality for a Core Data iPhone app. I want to maintain a maximum number of history items.
My general plan is to ignore the maximum when adding a new item and enforce it whenever I need to fetch all the items anyway (e.g. for searching or browsing the history). Alternatively, I could do it when adding a new item: fetch the current items, add the new one, and delete the oldest one if we're at the maximum. The second way seems less efficient, since I would be fetching all the items when I otherwise wouldn't need to.
So, the questions:
Which way is better? Is there an even better way to do this that I'm not considering?
How many items would be a reasonable maximum? The history is used for text field autocompletion, so more items means better usability, unless the number of items is so huge that it's slowing stuff down.
Thanks!

Whichever method is easier to implement is the right one. You shouldn't bother with a more efficient, more complicated implementation unless it proves necessary.
If these objects are in a to-many relationship of some kind, I'd use the relationship to manage the maximum number. (Override add<Whatever>Object: and delete the extraneous items there.)
If you're just fetching them, then that's really your only opportunity to filter them out. If you're using an NSArrayController, you might be able to implement a subclass that detects when new objects are added and chops off the extra ones.
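To make the "enforce it when fetching" idea concrete, here is a minimal sketch. It is deliberately not Core Data code: HistoryStore, fetchAllSortedByDateDesc, and MAX_HISTORY_ITEMS are hypothetical names shown in TypeScript, but the same trim-after-fetch shape applies whether you put it in a fetch helper or in an overridden add accessor.

```typescript
interface HistoryItem {
  text: string;
  createdAt: Date;
}

// Hypothetical persistence layer standing in for the Core Data stack.
interface HistoryStore {
  fetchAllSortedByDateDesc(): HistoryItem[]; // newest first
  delete(item: HistoryItem): void;
}

const MAX_HISTORY_ITEMS = 200; // tune to taste

// Fetch the history (which you were going to do anyway for autocompletion)
// and drop anything past the cap while the full list is already in hand.
function fetchHistoryEnforcingMax(store: HistoryStore): HistoryItem[] {
  const items = store.fetchAllSortedByDateDesc();
  for (const extra of items.slice(MAX_HISTORY_ITEMS)) {
    store.delete(extra);
  }
  return items.slice(0, MAX_HISTORY_ITEMS);
}
```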

If the items are added manually by the user, then you can safely use the clean-up-later method. With text data, a user won't enter more than a few hundred items at most, and text data takes up very little room. If the items are added by software, you have to check every so many entries or risk overflowing the maximum.
You might not want to spend a lot of time on this. An autocomplete history is not that big, usually just a few hundred entries. I would write it the simplest way, with cleanup later, and only fiddle with it if you hit a definite performance bottleneck.
Remember, premature optimization is the root of all programming evil. That and the dweebs in marketing.

Related

When and how to load data for an infinite list when page/index of data is hard to know?

I'm writing a Flutter web/mobile calendar application / todo list, the main feature of which is a long list of items (appointments, tasks, and the like).
Much of the time, the list won't be longer than a few hundred items. But it should be able to scale to thousands of items, as many as the user wants, which isn't hard to reach if the user creates many repeating items. I'm using ReorderableListView.builder to virtualize the list. The part I'm struggling with is when to load the list data and how to store it.
Important: When the user picks a day, the list can jump to somewhere in the middle... and can scroll back to the top of the list.
The easiest thing to do would be to just load all the data up to the user's position. But that means storing a potentially very large list in memory at best, and in the web application it means requesting far more data than is really needed.
A good summary of the problem might be: jumping to a particular day is more challenging than jumping to a known index in the list. It's not easy to know what an item's index would be in a fully constructed version of the list without fully constructing the list up to that item. Yes, you can get the items at a particular date, but what if you wanted to get fifty items before a particular date and fifty items after it (useful for keeping scrolling smooth)? There could be a huge gap, or there could be a whole ton of items all clustered on one day.
A complication is that not all items in the list are items in the database, for example day headers. The day headers need to behave like regular items and not be attached to other items when it comes to the reordering drag animation, yet storing them as records in the database feels wrong.
This answer is my own work in progress; I'm open to better answers and corrections.
I like the answer here (Flutter: Display content from paginated API with dynamic ListView) and would like to do something like it.
I would do this for the web app, where there are HTTP bottlenecks, and also for the mobile app, even when all the data is in a local database. Why keep the entire list in memory when you don't have to? (I have very little mobile development experience, so please correct me if I'm wrong.)
Problem:
Jumping to a particular day is more challenging than jumping to a known index in the list. It's not easy to know what an item's index would be in a fully constructed version of the list without fully constructing the list up to that item.
Solution I'm leaning toward:
ReferencesList + keyValueStorage solution
Store the item IDs in order as a JSON list in NoSQL storage, along with key-value pairs of all the items keyed by ID. The ordered list includes references to items, with day headings included and represented by something like 'dh-2021-05-21' or its epoch time. This list is very lightweight, just a string per item, so you don't need to worry about holding it all in memory. Item data can be pulled out of storage whenever an item is built. (Sembast or Hive for this? Hive, and here's why: https://bendelonlee.medium.com/hive-or-sembast-for-nosql-storage-in-flutter-web-fe3711146a0 )
When a date is jumped to, run a binary search on this list to find its exact position and scroll there. You can easily preload, say, 30 items before and 50 items after that position.
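To make the binary-search step concrete, here is a sketch in TypeScript (the logic translates directly to Dart). It assumes each entry in the references list carries a sortable epoch-day value alongside its ID, which is an extra assumption on top of the list described above; the day-header IDs like 'dh-2021-05-21' already encode that date.

```typescript
// One entry in the ordered references list: either a real item or a day header.
// `epochDay` is an assumed field: days since 1970-01-01, in the list's sort order.
interface ListRef {
  id: string;        // item id, or a header id like "dh-2021-05-21"
  epochDay: number;
}

// Lower-bound binary search: index of the first entry on or after `targetDay`.
function indexForDay(refs: ListRef[], targetDay: number): number {
  let lo = 0;
  let hi = refs.length; // search range is [lo, hi)
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (refs[mid].epochDay < targetDay) {
      lo = mid + 1;
    } else {
      hi = mid;
    }
  }
  return lo;
}

// Preload a window of IDs around the jumped-to position (30 before, 50 after).
function preloadWindow(refs: ListRef[], targetDay: number): string[] {
  const idx = indexForDay(refs, targetDay);
  return refs
    .slice(Math.max(0, idx - 30), Math.min(refs.length, idx + 50))
    .map(r => r.id);
}
```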
Other solutions:
SplayTreeMap + QuerySQLByDate Solution:
When jumping, since you don't know the index, insert it into a new SplayTreeMap at an arbitrarily high index, say 100 * number_of_list_items_in_database just to be safe. Run two queries, one ascending from the scrolled-to date and one descending from it. Cache these queries and return them in paged form. Should the user reach the beginning of the list, simply prevent them from scrolling further up manually, with a ScrollController or something like it.
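A rough sketch of the two-query, cached-pages part of this idea, again in TypeScript rather than Dart; DayQueries and its two methods are hypothetical stand-ins for the real database calls:

```typescript
interface Item { id: string; epochDay: number; }

// The two directional queries, passed in so the sketch stays database-agnostic.
// (Hypothetical: wire these to whatever storage you actually use.)
interface DayQueries {
  ascendingFrom(day: number, page: number, pageSize: number): Promise<Item[]>;
  descendingFrom(day: number, page: number, pageSize: number): Promise<Item[]>;
}

const PAGE_SIZE = 50;

// Pages are cached by a signed index relative to the jumped-to day:
// 0, 1, 2, ... going forward in time; -1, -2, ... going backward from it.
function makePager(queries: DayQueries, anchorDay: number) {
  const cache = new Map<number, Item[]>();
  return async function page(n: number): Promise<Item[]> {
    const hit = cache.get(n);
    if (hit) return hit;
    const items = n >= 0
      ? await queries.ascendingFrom(anchorDay, n, PAGE_SIZE)
      : await queries.descendingFrom(anchorDay, -n - 1, PAGE_SIZE);
    cache.set(n, items);
    return items;
  };
}
```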

MongoDB default items with user overwrite

The problem seems very simple, but every way I approach the solution seems like a poor implementation, with either duplicated content or messy data.
The Problem
I want to provide an option for “overwrites” per user on “default” items. Basically, I have a MongoDB database with a collection containing items with the following information:
ID
Name
Icon
Description
There is a set of 20-30 items in this collection, which every user of the app views.
Most users will be happy with the defaults, but if a user wishes to, say, change the icon or the name of an item, how do I handle this “overwrite” of the “default” item for just that single user?
Possible solutions
My thought is to implement one of the following options, but they all seem a little wrong (I have noted my concerns with each):
For each “overwritten” item, add a duplicate item to the collection with the changes and a user_id field linking the user - this seems like a bit of duplicated content, as the user might only change the icon and not the name/description. Also, if the name of the default item is changed in the future, how do you handle that, and how do you know that this item must replace one of the “defaults” for just that user? I worry it will also be a bit of a performance issue when looking up the items and then replacing the changed ones.
Having all the items duplicated per user in the same collection - very much duplication of content, but it might be the best-performing option; it could cause issues in the future if new “default” items need to be added or default options need changing.
A collection per user - same as the previous. This option seems all kinds of wrong, but maybe I’m just new to this and it is actually the best option.
A collection containing overwrites - this seems like a good idea, but equally a bad one due to the extra looking up and comparing. If everything is changed, then why not just have all-new items rather than, effectively, a find-and-replace? (A sketch of this approach appears after this question.)
Reason for wanting to get this right
Maybe I’m overthinking this, but it seems like I will face this issue a lot, and I think I need to get it right to avoid future issues with performance, data management, and updates to the default items.
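To illustrate the last option above (a collection containing overwrites), here is a sketch using the Node.js MongoDB driver in TypeScript. The collection names (items, item_overrides), the database name, and the field layout are assumptions for illustration, not a recommended schema:

```typescript
import { MongoClient, ObjectId } from "mongodb";

interface Item {
  _id: ObjectId;
  name: string;
  icon: string;
  description: string;
}

// Only the changed fields are stored per user, keyed by the default item's id.
interface ItemOverride {
  itemId: ObjectId;
  userId: string;
  changes: Partial<Pick<Item, "name" | "icon" | "description">>;
}

// Return the 20-30 default items with any per-user changes merged on top.
async function itemsForUser(client: MongoClient, userId: string): Promise<Item[]> {
  const db = client.db("app"); // assumed database name
  const defaults = await db.collection<Item>("items").find().toArray();
  const overrides = await db
    .collection<ItemOverride>("item_overrides")
    .find({ userId })
    .toArray();

  // Index the overrides by item id, then spread them over the defaults.
  const byItemId = new Map(
    overrides.map(o => [o.itemId.toHexString(), o.changes] as const)
  );
  return defaults.map(item => ({
    ...item,
    ...(byItemId.get(item._id.toHexString()) ?? {}),
  }));
}
```

The override documents stay small (only the changed fields), a later edit to a default's name flows through automatically unless that field was overridden, and the merge is one extra indexed query per user rather than a per-item lookup.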

Database and item orders (general)

I'm currently experimenting with a Node.js-based app where I put in a list of books and they get posted to a forum automatically every x minutes.
Now my question is about the order in which these things are posted.
I use MongoDB (not sure if this changes the question or not) and I just add a new entry for every item to be posted. Normally, things are posted in the exact order I add them.
However, for the web interface of this experimental thing, I made a re-ordering interaction where I can simply drag and drop elements to reorder them.
My question is: how can I reflect this change to the database?
Or more in general terms, how can I order stuff in general, in databases?
For instance, if I drag the 1000th item to 1st position, every entry between 1 and 1000 needs to be edited in the database. This does not seem like a valid and proper solution to me.
Any enlightenment is appreciated.
An elegant way might be lexicographic sorting. Introduce a String attribute for each item. Make the initial length of the values large enough to accommodate the estimated number of items. E.g., if you expect 1000 items, let the keys be baa, bab, bac, ... bba, bbb, bbc, ...
Then, when an item is moved from where it is to another place between two items, assign a value to the sorting attribute of the moved item that is roughly midway (lexicographically) between those two items' values. So to move an item between dei and dej, give it the value deim. To move an item between fadd and fado, give it the value fadi.
Keys starting with a are intentionally left unused, to leave space for elements that get dragged before the first one. Never use the key a itself, as it would be impossible to move an element before it.
Of course, the characters used may vary according to the sort order provided by the database.
This solution should work fine as long as elements are not reordered extremely frequently. In a worst case scenario, this may lead to longer and longer attribute values. But if the movements are somewhat equally distributed, the length of values should stay reasonable.
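A minimal sketch of the "pick a key between two keys" step, in TypeScript and restricted to lowercase a-z (an assumption; adjust the alphabet to your database's collation):

```typescript
const ALPHABET = "abcdefghijklmnopqrstuvwxyz";

// Return a key that sorts strictly between `a` and `b` (lowercase a-z only).
// Assumes a < b lexicographically and that b is not just a followed by 'a's
// (the one case where no in-between key exists).
function keyBetween(a: string, b: string): string {
  let result = "";
  for (let i = 0; ; i++) {
    const ca = i < a.length ? ALPHABET.indexOf(a[i]) : 0;               // pad a with 'a' (lowest)
    const cb = i < b.length ? ALPHABET.indexOf(b[i]) : ALPHABET.length; // pad b with one past 'z'
    if (cb - ca > 1) {
      // There is room at this position: pick the middle character and stop.
      return result + ALPHABET[Math.floor((ca + cb) / 2)];
    }
    // No room here: copy a's character and look one position deeper.
    result += ALPHABET[ca];
  }
}

// keyBetween("fadd", "fado") === "fadi"
// keyBetween("dei", "dej") gives "dein" (any key between "dei" and "dej" works)
```

On a reorder you then update only the moved item's key, instead of renumbering everything below it.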

dijit.form.select dropdown is very slow

Loading 3,000 values into a dijit.form.Select control takes a long time; the browser hangs even for 500 values. How can I overcome this problem?
Any assistance would be really appreciated.
Thanks,
Karthihck k.
Loading 3,000 of anything into a web page is always going to be slow.
Although there are twisted ways to overcome this limitation, it may not be worth it: your user is definitely not going to like scrolling through 3,000 items to pick one!
I'd suggest you break this drop-down list into two (or three) levels, with no more than 20-30 choices at each level. In one of my own projects with thousands of list items, I had to go with four levels, otherwise performance got abysmal.
If you only have one long list to work with, consider breaking it down by the starting letter into 26 groups, like a phone list. At least you'll have only 100-200 in each group.
Now, if you really really want to load such a long list, consider not using dijit.form.Select as it is just a simple wrapper for the <select> tag. You're essentially inserting one <option> tag at a time, killing performance. You have two choices:
Create the list of <option> tags yourself off-line, then insert them into the <select> element in one go (see the sketch at the end of this answer).
Consider dijit.form.FilteringSelect instead.
Now, I definitely don't endorse doing the above. You've been warned!
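For the first of those two choices, here is a minimal dijit-free sketch in plain TypeScript/DOM terms; fillSelect, Choice, and the assumption that labels are plain text are illustrative, not dijit API:

```typescript
interface Choice {
  value: string;
  label: string; // assumed to be plain text, not markup
}

// Build all of the <option> markup off-DOM, then write it into the <select>
// element with a single DOM operation instead of 3,000 separate insertions.
function fillSelect(selectNode: HTMLSelectElement, items: Choice[]): void {
  selectNode.innerHTML = items
    .map(item => `<option value="${item.value}">${item.label}</option>`)
    .join("");
}
```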

What are some of the advantages/disadvantages of using SQLDataReader?

SqlDataReader is a faster way to process the results of a stored procedure. What are some of the advantages/disadvantages of using SqlDataReader?
I assume you mean "instead of loading the results into a DataTable"?
Advantages: you're in control of how the data is loaded. You can ask for specific data types, and you don't end up loading the whole set of data into memory all at the same time unless you want to. Basically, if you want the data but don't need a data table (e.g. you're going to populate your own kind of collection) you don't get the overhead of the intermediate step.
Disadvantages: you're in control of how the data is loaded, which means it's easier to make a mistake and there's more work to do.
What's your use case here? Do you have a good reason to believe that the overhead of using a normal (or strongly typed) data table is significantly hurting performance? I'd only use SqlDataReader directly if I had a good reason to do so.
The key advantage is obviously speed - that's the main reason you'd choose a SQLDataReader.
One potential disadvantage not already mentioned is that the SQLDataReader is forward only, so you can only go through the records once in sequence - that's one of the things that allows it to be so fast. In many cases that's fine but if you need to iterate over the records more than once or add/edit/delete data you'll need to use one of the alternatives.
It also remains connected until you've worked through all the records and close the reader (of course, you can opt to close it earlier, but then you can't access any of the remaining records). If you're going to perform any lengthy processing on the records as you iterate over them, you may find that you impact other connections to the database.
It depends on what you need to do. If you get back a page of results from the database (say 20 records), it would be better to use a data adapter to fill a DataSet, and bind that to something in the UI.
But if you need to process many records, 1 at a time, use SqlDataReader.
Advantages: Faster, less memory.
Disadvantages: Must remain connected, must remember to close the reader.