What is the best way to reorder 1000 rows at one shot - drag-and-drop

I have 1000 records in a table with a position field running from 1 to 1000. Now I want to implement reorder functionality for those 1000 records. Suppose I move the 1000th record to 1st position: then the 1st record should move to 2nd position, the 2nd record to 3rd position, and the 999th record to 1000th position.
NOTE: I am showing 20 records per page.
I have implemented the reorder functionality using the jqGrid drag-and-drop plugin. With this technique it is very simple to update the positions of 20 records at once: on the MySQL side, I fire 20 update queries to update those 20 records' positions.
Now I want to have a textbox in the position column which holds the current record's position, so that the user can move any record to any position by entering the position number in the text field instead of dragging and dropping. Suppose I am on the 50th page and want to move the 1000th record to 1st position: I will enter 1 in the 1000th record's position textbox. Once I enter the position number, the reorder logic should run as described in the first paragraph.
Now, can anyone tell me how I can update 1000 records at once, and what the MySQL load will be? What is the best way to achieve this functionality?
NOTE: I don't want to fire 1000 update queries (to avoid a MySQL deadlock) as I did for the drag-and-drop functionality.
Thanks in advance for any help.

Firing hundreds of individual updates is overkill.
Try something like this:
UPDATE Records SET
SequenceNumber = SequenceNumber + 1
WHERE SequenceNumber >= #LowBound AND SequenceNumber <= #UpperBound;
UPDATE Records SET
SequenceNumber = #LowBound
WHERE ID = #SelectedId;
Here #LowBound is the row's new position and #UpperBound its old one. The first statement shifts every row in that range down by one (the moved row is shifted too, but the second statement immediately overwrites it with its new position). For a move in the opposite direction, to a higher position number, shift with SequenceNumber - 1 and set the moved row to the upper bound instead.
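The shift-then-place logic above can be sketched end to end with Python's sqlite3 module standing in for MySQL (the table layout is a scaled-down invention, 10 rows instead of 1000; the same two UPDATE statements apply per move):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical stand-in for the real Records table.
cur.execute("CREATE TABLE Records (ID INTEGER PRIMARY KEY, SequenceNumber INTEGER)")
cur.executemany("INSERT INTO Records VALUES (?, ?)",
                [(i, i) for i in range(1, 11)])

def move_to(cur, row_id, new_pos):
    """Move one row to new_pos, shifting the rows in between by one."""
    (old_pos,) = cur.execute(
        "SELECT SequenceNumber FROM Records WHERE ID = ?", (row_id,)).fetchone()
    if new_pos < old_pos:
        # Moving earlier: rows from new_pos up to old_pos - 1 shift down.
        cur.execute("UPDATE Records SET SequenceNumber = SequenceNumber + 1 "
                    "WHERE SequenceNumber >= ? AND SequenceNumber < ?",
                    (new_pos, old_pos))
    else:
        # Moving later: rows between old_pos + 1 and new_pos shift up.
        cur.execute("UPDATE Records SET SequenceNumber = SequenceNumber - 1 "
                    "WHERE SequenceNumber > ? AND SequenceNumber <= ?",
                    (old_pos, new_pos))
    cur.execute("UPDATE Records SET SequenceNumber = ? WHERE ID = ?",
                (new_pos, row_id))

move_to(cur, 10, 1)  # move the last record to the first position
print(cur.execute("SELECT ID FROM Records ORDER BY SequenceNumber").fetchall())
```

Whatever the table size, each user move costs two statements instead of N, which is why the 1000-row case is no heavier than the 20-row one.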

Related

Adding values to each row in Swift but not when row is dequeued & reused

I'm getting info from a GPS service that returns an array of addresses, each containing a duration in seconds. So from point A to point B the duration is 250 seconds, and from point B to C it is 100 seconds. The service does not add the times together into a complete route time; I have to do that on my own. In the example above I would want to see 250 on the first row and 350 on the 2nd row, etc. So I add the values together from row to row right inside my table to get the true running total for each address, grabbing each address's value from the route's array based on my table's indexPath.row. I populate that into the table and it works beautifully, as so:
let secondsToAdd = Int(self.thisRoute.ttaForSubleg(UInt(indexPath.row)).duration)
duration += secondsToAdd
However, when the table is scrolled up and down and cells are dequeued and reused, the values get added again and again. I need to reset the duration to 0 somewhere, as so:
duration = 0
but I'm stumped as to where!
Without another solution, my only way to do this, I think, would be to build and maintain an array separately, outside the table, instead of building it inside the table as I go along. If somebody has a better solution, please post!

Force sap.m.List to load item

I have a list with growingThreshold set to 50, and I have 100 records. The function list.getItems() returns only the first 50 due to the threshold.
My question is: Is there a way to force the list to load a specific item? For example, I know that there are 100 records and I want to select the 61st record.
Thanks.

Is there any way to avoid PostgreSQL placing the updated row as the last row?

Well, my problem is that each time I update a row, that row moves to the last place in the table, regardless of where it was placed before.
I've read in this post Postgresql: row number changes on update that rows in a relational table are not sorted. Then why, when I execute select * from table;, do I always get the same order?
Anyway, I don't want to start a discussion about that; I just want to know whether there is any way to keep an UPDATE statement from moving the row to the last place.
Edit for more info:
I don't really want to get all the results at once. I have programmed 2 buttons in Java, next and previous, and, being still a beginner, the only way I had to get the next or the previous row was to use select * from table limit 1 with offset num++ or offset num-- depending on the button clicked. So when I execute the update, I lose the initial order (insertion order).
Thanks.
You could make some space in your tables for updates: change the fillfactor from the default of 100% (which leaves no space for updates on a page) to something lower, creating room for updated rows.
From the manual (create table):
fillfactor (integer)
The fillfactor for a table is a percentage
between 10 and 100. 100 (complete packing) is the default. When a
smaller fillfactor is specified, INSERT operations pack table pages
only to the indicated percentage; the remaining space on each page is
reserved for updating rows on that page. This gives UPDATE a chance to
place the updated copy of a row on the same page as the original,
which is more efficient than placing it on a different page. For a
table whose entries are never updated, complete packing is the best
choice, but in heavily updated tables smaller fillfactors are
appropriate. This parameter cannot be set for TOAST tables.
But without an ORDER BY in your query, there is no guarantee that a result set will be sorted the way you expect it to be sorted. No fill factor can change that.
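The dependable fix for the next/previous buttons is therefore an explicit ORDER BY over a stable key, such as a serial column recording insertion order, so that LIMIT 1 OFFSET n always refers to the same logical row no matter where the updated tuple physically lands. A minimal sketch, using sqlite3 as a stand-in for PostgreSQL (table and column names are invented; the SQL is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# "id" stands in for a PostgreSQL serial column capturing insertion order.
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])

def fetch_row(cur, offset):
    # ORDER BY makes the offset deterministic, whatever the physical row order.
    return cur.execute(
        "SELECT name FROM t ORDER BY id LIMIT 1 OFFSET ?", (offset,)).fetchone()

cur.execute("UPDATE t SET name = 'a2' WHERE id = 1")  # update the first row
print(fetch_row(cur, 0))  # still the first inserted row: ('a2',)
```

With the ORDER BY in place, the fillfactor question becomes a pure performance tweak rather than a correctness requirement.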

Tableau Future and Current References

Tough problem I am working on here.
I have a table of CustomerIDs and CallDates. I want to measure whether there is a 'repeat call' within a certain period of time (up to 30 days).
I plan on creating a parameter called RepeatTime which is a range from 0 - 30 days, so the user can slide a scale to see the number/percentage of total repeats.
In Excel, I have this working. I sort CustomerID in order and then sort CallDate from earliest to latest. I then have formulas like:
=IF(AND(CurrentCustomerID = FutureCustomerID, FutureCallDate - CurrentCallDate <= RepeatTime), 1,0)
CurrentCustomerID is the current row, and FutureCustomerID is the following row (so the formula checks whether the customer ID is the same).
FutureCallDate is the following row and CurrentCallDate is the current row; subtracting the current call date from the future one measures the time in between.
The goal is to be able to see, dynamically, how many customers called in for a specific reason within maybe 4 hours or 1 day or 5 days, etc. All of the way up until 30 days (this is our actual metric but it is good to see the calls which are repeats within a shorter time frame so we can investigate).
I had a similar problem, see here for detailed version Array calculation in Tableau, maxif routine
In your case, that is basically the same problem as mine, so you could apply that solution; but since I find the one below easier to understand, here is what I would do:
1) Create a calculated field called RepeatTime:
DATEDIFF('day', LOOKUP(MAX(CallDates), -1), MAX(CallDates))
This will calculate how many days have passed since the previous call (DATEDIFF takes the earlier date first). You can wrap it in IFNULL so the first entry doesn't return Null.
2) Drag CustomersID, CallDates and RepeatTime to the worksheet (can be on the marks tab, don't need to be on rows or column).
3) Configure the table calculation of RepeatTime: Compute using Advanced..., partitioning by CustomersID, addressing CallDates.
Also sort by field CallDates, Maximum, Ascending.
This will guarantee the table calculation works properly.
4) Now you have a base that you can use for what you need. You can either export it to csv or mdb and connect to it.
The best approach, actually, is to have this RepeatTime field calculated outside Tableau, on your database, so it's already there when you connect to it. But this is a way to use Tableau to do the calculation for you.
Unfortunately there's no way to do this directly against your database.
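If you do precompute the repeat flag outside Tableau, the per-customer logic is only a few lines in any scripting language. A sketch in Python, flagging a call as a repeat of the previous call by the same customer (the dates and the 30-day threshold are example values, and the rows are assumed pre-sorted by customer, then date, as in the Excel approach):

```python
from datetime import date

# Hypothetical (CustomerID, CallDate) rows, sorted by customer then date.
calls = [
    ("A", date(2023, 1, 1)),
    ("A", date(2023, 1, 10)),  # 9 days after the previous A call -> repeat
    ("A", date(2023, 3, 1)),   # 50 days later -> not a repeat
    ("B", date(2023, 1, 5)),
]

def repeat_flags(calls, repeat_days=30):
    """1 if the previous call is from the same customer within repeat_days."""
    flags = []
    for i, (cust, day) in enumerate(calls):
        prev = calls[i - 1] if i > 0 else None
        is_repeat = (prev is not None and prev[0] == cust
                     and (day - prev[1]).days <= repeat_days)
        flags.append(1 if is_repeat else 0)
    return flags

print(repeat_flags(calls))  # [0, 1, 0, 0]
```

Summing the flags per period then gives the repeat count, and repeat_days plays the role of the RepeatTime parameter.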

db2: select from table without replacement

Hi, I would just like to ask a simple question: is there a way in DB2 to select a row from a table (whether based on a join or by selecting a random row), and then select from the same table again in such a way that the last row, or any previously chosen row, cannot be selected?
I am thinking I would have to loop my code through each row in the table and delete each row as I select it, but I would be interested if anyone has an alternative solution. No code needed; just describe another approach.
Thanks,
Arron
The simplest way of doing this is to declare a cursor to select all rows from the table then process the
cursor one row at a time. Each row will be selected exactly 1 time (this is pretty much what a cursor is all about).
I suspect that is not the answer you were looking for. You most likely have at least two other constraints on this
selection problem:
You do not want, or cannot, have a single cursor open until the entire table has been processed
You want to have some sort of "randomness" with respect to the order in which rows are selected
The problem of not being able to open and process the entire table under a single cursor can be solved by
maintaining some sort of "state" information between selections. The "state" can be used to determine whether a row
is still eligible for selection on subsequent inquiries. You might add another column to the table to hold the "selected"
state of that row. When a row is inserted its "selected" state is set to "no". On each select operation the state
of the selected row is updated to "yes". The predicate to select new rows then needs SELECT_STATE = 'no'
added to it to disqualify previously selected rows. If you cannot change the structure of the table you are selecting
from, then add a second table having the same primary key as the selection table plus the "selected" indicator then join
these tables to obtain the required state information.
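The "selected" state column can be sketched quickly with Python's sqlite3 module (the table layout is invented; in DB2 the same UPDATE/SELECT pair applies):

```python
import sqlite3
import random

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Each row starts out unselected.
cur.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, "
            "selected TEXT DEFAULT 'no')")
cur.executemany("INSERT INTO items (id) VALUES (?)", [(i,) for i in range(1, 6)])

def pick_without_replacement(cur):
    """Pick a random not-yet-selected row and mark it selected."""
    eligible = cur.execute(
        "SELECT id FROM items WHERE selected = 'no'").fetchall()
    if not eligible:
        return None  # every row has been handed out
    (chosen,) = random.choice(eligible)
    cur.execute("UPDATE items SET selected = 'yes' WHERE id = ?", (chosen,))
    return chosen

picks = [pick_without_replacement(cur) for _ in range(5)]
print(sorted(picks))  # each of the 5 rows appears exactly once
```

Because the state lives in the table, the selection can resume correctly even if the program restarts between picks, which is what makes this preferable to holding one cursor open.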
Another approach is to delete a row once it has been selected.
These or some similar type of state management can be used to solve the selection eligibility problem.
If you need to introduce randomness into the selection process (i.e. make it difficult to guess what
the next row to be selected will be), then you have a very different problem to solve. If this is the case,
please ask a new question outlining the approximate size of your table (how many rows) and what the key structure
is (e.g. a number between 1 and 100000, a 30-character name, etc.).
You can use a cursor with the 'delete where current of' feature, called a positioned delete. For more information:
http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.sql.ref.doc/doc/r0000939.html
http://mysite.verizon.net/Graeme_Birchall/cookbook/DB2V97CK.PDF page 55
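A positioned delete itself is cursor-specific DB2 syntax, but the select-then-remove pattern it implements can be sketched generically; here sqlite3 is used, deleting each fetched row by its primary key in place of WHERE CURRENT OF (table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE pool (id INTEGER PRIMARY KEY, payload TEXT)")
cur.executemany("INSERT INTO pool VALUES (?, ?)",
                [(i, f"row{i}") for i in range(1, 4)])

def take_one(cur):
    """Fetch one row and delete it so it can never be selected again."""
    row = cur.execute("SELECT id, payload FROM pool ORDER BY id LIMIT 1").fetchone()
    if row is None:
        return None  # the pool is exhausted
    # Stands in for DELETE ... WHERE CURRENT OF <cursor> in DB2.
    cur.execute("DELETE FROM pool WHERE id = ?", (row[0],))
    return row[1]

taken = [take_one(cur) for _ in range(4)]
print(taken)  # ['row1', 'row2', 'row3', None]
```

Unlike the state-column approach, this destroys the source data, so it is only appropriate when the table is a disposable work queue rather than the system of record.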