Swap the order of items in a SQLite database - iphone

I retrieve an ordered list of items from a table of items in a SQLite database. How can I swap the ids so that the order of two items in the table is swapped?

The id shouldn't determine position or ordering. It should be an immutable identifier.
If you need to represent order in the database, you need to create a separate orderNumber column. A couple of options are to (1) have values that span a range or (2) have a pointer to the next item (like a linked list).
For ranges: spanning a range helps you avoid rewriting the orderNumber column for all items after the insert point. For example, the first insert gets 1, the second insert gets the maximum of the range, and a third item inserted between the first and second gets the mid-range number; when you reposition an item, you assign it the mid-point of the two items it lands between. One downside is that if the list gets enough churn (minimized by a large span) you may have to rebalance the ranges. The upside of this solution is that you can get the ordered list just by ordering by this column in the SQL statement.
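Since the original question was about swapping two items, here is a minimal sketch of that approach in SQLite; the table name items, the column orderNumber, and the ids are assumptions, and :orderA/:orderB are the two orderNumber values read back beforehand:

-- add an explicit ordering column instead of overloading id
ALTER TABLE items ADD COLUMN orderNumber INTEGER;

-- read the list back in order
SELECT * FROM items ORDER BY orderNumber;

-- swap two items by exchanging their orderNumber values
BEGIN;
UPDATE items SET orderNumber = :orderB WHERE id = :idA;
UPDATE items SET orderNumber = :orderA WHERE id = :idB;
COMMIT;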
For the linked list: if the table has a next column that points to the id of the item that comes after it in order, you only need to update a couple of rows to insert something. The upside is that it's simple. The downside is that you can't order in the SQL statement; you're relying on the code that reads the list to sort it.
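A rough sketch of the linked-list variant, with an assumed next_id column standing in for the "next" pointer described above (the ids are made up):

-- items 10 -> 20 currently; to insert item 30 between them, update two rows
UPDATE items SET next_id = 30 WHERE id = 10;
UPDATE items SET next_id = 20 WHERE id = 30;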
One other variation is that you could pull the ordered-list data out of that table altogether. For example, you could have an ordered-list table with listid, itemid, and orderedNumber columns. That allows you to keep one or several logical ordered lists of the items in the table it references.
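A small sketch of that variation, with assumed table and column names (ordered_list, items):

CREATE TABLE ordered_list (
    listid        INTEGER,
    itemid        INTEGER,
    orderedNumber INTEGER
);

-- read back one particular logical list, in order
SELECT i.*
FROM ordered_list ol
JOIN items i ON i.id = ol.itemid
WHERE ol.listid = 1
ORDER BY ol.orderedNumber;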
Some other references:
How to store ordered items which often change position in DB
Best way to save a ordered List to the Database while keeping the ordering
https://dba.stackexchange.com/questions/5683/how-to-design-a-database-for-storing-a-sorted-list

Related

Sqlalchemy way to handle select based on outdated data

Assume I have a table sets with a field filters containing an array of key-value mappings, a table items against which a select query must be run to extract rows based on those filters, and an association table for the M:M relations linking each set with each item. I am looking for a method or mechanism to cancel the select query if sets.filters was updated in the meantime; otherwise the M:M relation will be built incorrectly, because it is based on filters that have not yet been refreshed.
The concrete scenario in which the problem occurs is:
1. Receive a file with item data, parse it, and insert into items, returning the new relevant ids (primary keys here);
2. After the insertion, select the filters from the relevant sets;
3. Take the item ids and select from items using those filters;
4. Update the M:M association table for all the items returned at step 3.
Unfortunately, between steps 3 and 4 (or even earlier), an API call updates one of the sets rows, changing its filters. As a result the M:M table is invalid, because one filter was changed (say the filters contained an expression like weight <= 100 kilos, but after the update it became weight <= 50 kilos, so any new items with weight greater than 50 obviously should not have their ids in the M:M table).
Is there an efficient way to cancel the select query from items during the transaction? Or maybe there is a more robust query to use. My idea is to roll back the changes after the fact by checking the sets.modified_at column, but that seems like extra work that wastes disk and CPU time.
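One way to picture the modified_at idea from the last paragraph, sketched in plain SQL; the table names sets, items, and set_items, the placeholder parameters, and the weight filter are assumptions taken from the question, not a worked-out solution:

BEGIN;
-- step 2: read the filters and remember when the set was last modified
SELECT filters, modified_at FROM sets WHERE id = :set_id;
-- step 3: select the freshly inserted items that match those filters
--         (weight <= 100 stands in for whatever the filters contain)
SELECT id FROM items WHERE id = ANY(:new_item_ids) AND weight <= 100;
-- step 4: build the M:M rows for the matching items
INSERT INTO set_items (set_id, item_id)
SELECT :set_id, id FROM items WHERE id = ANY(:new_item_ids) AND weight <= 100;
-- before committing, re-read sets.modified_at in the application and compare it
-- with the value read at step 2; if it changed, ROLLBACK instead of COMMIT
COMMIT;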

how to paginate ordering by non-distinct / non-unique values in PostgreSQL?

How can I properly page by ordering on a column that could possibly have repeated values? I have a table called posts, which has a column that holds the number of likes of a certain post, called num_likes, and I want to order by num_likes DESC. But the image below shows a problem that I run into - the new row inserted between the two pages causes repeated data to be fetched.
This link here explains the problem and gives the solution of keyset pagination, but from what I've seen, that only works if the column that the rows are being sorted on is distinct/unique. How would I do this if that is not the case?
You can easily make the sort key unique by adding the primary key to it.
You don't have to display the primary key to the user, just use it internally to tell “equal” rows apart.
For querying and indexing, you can make use of PostgreSQL's ability to compare like this: (num_likes, id) >= (4, 325698).
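A minimal sketch of what that could look like for the posts table described in the question, assuming id is the primary key, a page size of 10, and that (4, 325698) are the num_likes and id of the last row on the previous page; with a descending sort the comparison flips to <:

SELECT id, num_likes
FROM posts
WHERE (num_likes, id) < (4, 325698)   -- everything strictly after the last row seen
ORDER BY num_likes DESC, id DESC
LIMIT 10;

-- a plain btree on (num_likes, id) can be scanned backwards to serve the DESC order
CREATE INDEX posts_likes_id_idx ON posts (num_likes, id);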

Merging Data in FMPro with modification of ID values

We are attempting to merge multiple datasets created in FileMaker Pro.
These datasets have multiple tables, and each entry within each table has a local ID that is used to relate entries between tables. The local ID values for all the entries were serially generated, but some of the ID values are repeated between the different datasets, even though the records they identify are not equivalent.
How can the ID values be updated in the data that is being imported to remove these overlaps without destroying the relationships that depend on them?
If you have access to the original database, you can try to migrate the IDs over to UUIDs or something else unique before exporting. This has to be done manually, either by cut/paste by hand or by a script.
Such a script would have to do the following:
Loop through the parent records.
For each record, go to the related records.
Generate a UUID with the Get(UUID) function and put it in a variable.
Replace the parent ID in the related records with this variable.
Return to the parent record and replace the record ID with the variable.
Move to the next record.
Repeat until all records have been updated.

Data type for storing long lists

What is the correct way of storing large lists in PostgreSQL?
I currently have a "user" table, where the "followers" column stores a list of the followers that that user has. This list is stored in JSON, and every time the server wants to add a new user to that list, it retrieves it from the database, appends the new user, and then replaces the old list with the new list.
The problem is that these lists tend to get quite lengthy, which might affect performance. Is it possible to simply append to the list directly via SQL without retrieving it and rewriting it later?
Use a separate table for followers. The table should have at least two columns: userid and followerid. And it's good practice to have a primary key for this table as well, so let's give it a "ufid".
You can do a select to get all the elements and compute the JSON string if your application needs it. But do not work with JSON or any other string representation of the list, as it defeats the purpose of a relational database.
To add a new follower, simply add a new record to the followers table with the userid and followerid; deletes and updates are likewise done at the record level, without touching the other records.
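A minimal sketch of that approach; the table name followers and the example ids are assumptions, and json_agg is only there for when the application still wants a JSON list:

CREATE TABLE followers (
    ufid       serial PRIMARY KEY,
    userid     integer NOT NULL,
    followerid integer NOT NULL
);

-- adding a follower is one small insert, no long list to rewrite
INSERT INTO followers (userid, followerid) VALUES (42, 1001);

-- removing one is a single-row delete
DELETE FROM followers WHERE userid = 42 AND followerid = 1001;

-- rebuild the JSON list only when the application actually needs it
SELECT json_agg(followerid) FROM followers WHERE userid = 42;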
If followers is a list of integers which are primary keys to their accounts, make it an integer array int[]. If they are usernames or other words, go with a string array character varying[].
To append to an array column you can do this:
UPDATE the_table SET followers = followers || new_follower WHERE id = user;
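For a concrete (assumed) example with an int[] column on a users table and integer follower ids:

ALTER TABLE users ADD COLUMN followers int[] DEFAULT '{}';

-- append follower 1001 for user 42 without reading the list back first
UPDATE users SET followers = followers || 1001 WHERE id = 42;
-- or, equivalently, with the built-in function
UPDATE users SET followers = array_append(followers, 1001) WHERE id = 42;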

iPhone Dev - Trying to access every row of a sqlite3 table sequentially

This is my first time using SQL at all, so this might sound basic. I'm making an iPhone app that creates and uses a sqlite3 database (I'm linking against the libsqlite3.dylib library as well as importing "sqlite3.h"). I've been able to correctly create the database and a table in it, but now I need to know the best way to get stuff back from it.
How would I go about retrieving all the information in the table? It's very important that I be able to access each row in the order that it is in the table. What I want to do (if this helps) is get all the info from the various fields in a single row, put all that into one object, and then store the object in an array, and then do the same for the next row, and the next, etc. At the end, I should have an array with the same number of elements as I have rows in my sql table. Thank you.
My SQL is rusty, but I think you can use SELECT * FROM myTable and then iterate through the results. You can also use a LIMIT/OFFSET(1) structure if you do not want to retrieve all elements at once from your table (for example due to memory concerns).
(1) Note that this can perform unexpectedly badly, depending on your use case. Look here for more info...
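A small sketch of what the LIMIT/OFFSET pattern could look like, assuming a table named myTable and pages of 50 rows:

SELECT * FROM myTable LIMIT 50 OFFSET 0;    -- first page
SELECT * FROM myTable LIMIT 50 OFFSET 50;   -- second page, and so on
-- without an ORDER BY the page boundaries are not guaranteed to be stable,
-- and large offsets get progressively slower (the caveat in the footnote above)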
How would I go about retrieving all the information in the table? It's very important that I be able to access each row in the order that it is in the table.
That is not how SQL works. Rows are not kept in the table in a specific order as far as SQL is concerned. The order of rows returned by a query is determined by the ORDER BY clause in the query, e.g. ORDER BY DateCreated, or ORDER BY Price.
However, SQLite has a rowid virtual column that can be used for this purpose. It reflects the sequence in which the rows were inserted, except that it might change after a VACUUM. If you make it an INTEGER PRIMARY KEY, it should stay constant.
order by rowid
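A hedged sketch of what that could look like; myTable and its columns are placeholders standing in for the question's table:

CREATE TABLE myTable (
    id    INTEGER PRIMARY KEY,   -- aliases rowid, so it survives a VACUUM unchanged
    name  TEXT,
    price REAL
);

-- rows come back in insertion order
SELECT * FROM myTable ORDER BY rowid;
-- equivalently: ORDER BY id, since id is the INTEGER PRIMARY KEY here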