Insertion sort on a singly linked list

Am I right in thinking that it is not possible to perform insertion sort on a singly linked list?
My reasoning: insertion sort, by definition, means that as we move to the right in the outer loop, we move to the left in the inner loop, shifting values up (to the right) as required, and insert the current value once the inner loop is done. As a result, an SLL cannot accommodate such an algorithm. Correct?

Well, I'd sound like Captain Obvious, but the answer mostly depends on whether you're OK with keeping all iterations directed the same way the elements are linked while still implementing the proper sorting algorithm as per your definition. I don't really want to mess with your definition of insertion sort, so I'm afraid you'll have to think about it yourself. At least for a while. It's homework anyway... ;)
OK, here's what I got just before closing the page. You may iterate over an SLL in the reversed direction, but this would take n*n/2 traversals to visit all n elements. So you're theoretically OK with any traversal direction for your sorting loops. Guess that pretty much solves your question.
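To make that cost concrete, here is a minimal Python sketch (with a made-up `Node` class) of iterating an SLL back-to-front using forward links only: each position is reached by re-walking from the head, which touches roughly n*n/2 nodes in total.

```python
class Node:
    def __init__(self, data, next=None):
        self.data, self.next = data, next

def reversed_values(head):
    """Yield list values back-to-front using forward links only.
    Each step re-walks from the head, so this is O(n^2) overall."""
    # one forward pass to find the length
    n, node = 0, head
    while node:
        n, node = n + 1, node.next
    for i in range(n - 1, -1, -1):   # positions n-1 .. 0
        node = head
        for _ in range(i):           # re-walk i links from the head
            node = node.next
        yield node.data
```

So a "right-to-left" inner loop is possible on an SLL, just quadratically more expensive than on a doubly linked list.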

It is doable and is an interesting problem to explore.
The core of the insertion sort algorithm is maintaining a sorted sequence that starts with a single element and is extended by inserting each new element in its correct position, keeping the sequence sorted until it contains all the input data.
A singly linked list cannot be traversed backwards, but you can always start from its head to search for the new element's position.
The tricky part is that when inserting node i before node j, you must handle the neighbor links correctly (I mean both node i's and node j's neighbors need to be taken care of).

Here is my code. I hope it is useful to you.
int insertSort(Node **pHead)
{
    Node *current1 = (*pHead)->next;  /* first unsorted node */
    Node *pre1 = *pHead;              /* node before current1 */
    Node *current2 = *pHead;
    Node *pre2 = *pHead;

    while (NULL != current1)
    {
        /* search the sorted part (from the head) for the insert position */
        pre2 = *pHead;
        current2 = *pHead;
        while (current2->data < current1->data)
        {
            pre2 = current2;
            current2 = current2->next;
        }

        if (current2 != current1)
        {
            /* unlink current1 from its old position ... */
            pre1->next = current1->next;
            if (current2 == *pHead)
            {
                /* ... and insert it at the head */
                current1->next = *pHead;
                *pHead = current1;
            }
            else
            {
                /* ... and insert it between pre2 and current2 */
                pre2->next = current1;
                current1->next = current2;
            }
            current1 = pre1->next;
        }
        else
        {
            /* already in position; advance both pointers */
            pre1 = pre1->next;
            current1 = current1->next;
        }
    }
    return 0;
}


Swift flood fill algorithm randomly stopping

Here is the raw code but I will explain it roughly:
func processTile(_ tile: Tile) {
    // Add tile to pos list if valid, otherwise prune the branch
    if isBarrier(tile) || alreadyChecked(tile) {
        return
    } else {
        posList.append(tile.pos)
    }
    depth += 1
    if depth > maxDepth { return }
    for neighborTile in neighbors(of: tile.pos) {
        processTile(neighborTile)
    }
}
This is the main recursive function. Basically, it checks whether a tile is part of a room by being neither a barrier tile nor already checked. If either of those cases is true, that branch of the flood tree is pruned and ends. If the tile was valid and was added to the list, the depth is incremented and checked. Then the tile's neighbors are all processed the same way.
For some reason the flood algorithm just stops randomly. I changed the code, adding a print() in front of the returns, to see why a branch just stops. This worked for all the correct pruning cases where it hits a barrier or already-checked tile, but when it would just randomly stop and not check a tile, it didn't even log anything.
The only thing I can think of is that Swift stops executing the functions after a certain level of recursion. If that is the case, does anyone know what the maximum number of recursions is, and what a possible workaround would be? I'm thinking you would have to keep a list of all the "edge" tiles of the flood and a list of all the "interior" tiles, then run a while loop that keeps expanding the flood until there are no edge tiles left.
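That edge-list workaround is just an iterative flood fill. A minimal Python sketch (the `is_barrier` and `neighbors` callables are hypothetical stand-ins for the Swift helpers above); an explicit frontier queue replaces the recursion, so no call-stack depth limit applies:

```python
from collections import deque

def flood_fill(start, is_barrier, neighbors):
    """Collect all reachable, non-barrier positions starting from `start`,
    using an explicit frontier ("edge") queue instead of recursion."""
    checked = set()
    region = []
    frontier = deque([start])      # the current "edge" of the flood
    while frontier:                # expand until no edge tiles remain
        pos = frontier.popleft()
        if pos in checked or is_barrier(pos):
            continue               # prune this branch
        checked.add(pos)
        region.append(pos)         # pos becomes an "interior" tile
        frontier.extend(neighbors(pos))
    return region
```

The same structure ports directly to Swift with an array used as a queue, and it also sidesteps the `depth` bookkeeping entirely.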

How can I calculate business/SLA hours without iterating over each second?

Before I spend a lot of time writing the only solution I can think of, I was wondering whether I'm doing it in an inefficient way.
Once a support ticket is closed, a script is triggered. The script is passed an array of 'status-change events' that happened from call open to close. So you might have five changes: new, open, active, stalled, resolved. Each of these events has a timestamp associated with it.
What I need to do is calculate how much time the call was with us (new, open, active) and how much time it was with the customer (stalled). I also need to figure out how much of the 'us' time was within core hours, 08:00 - 18:00, and how much was non-core; weekends/bank holidays count towards non-core hours.
My current idea is, for each status change, to iterate over every second that elapsed, check whether it was core or non-core, and log it.
Here's some pseudo code:
time_since_last = ticket->creation_date
foreach events as event {
    time_now = time_since_last
    while (time_now < ticket->event_date) {
        if ticket->status = stalled {
            customer_fault_stalled++
        } else {
            work out if it was our fault or not
            add to the appropriate counter etc
        }
        time_now++
    }
}
Apologies if it's a little unclear; it's a fairly long-winded problem. Also, I'm aware this may be slightly outside SO question guidelines, but I can't think of a better way of wording it, and I need some advice before I spend days writing it this way.
I think you have the right idea, but recalculating the status of every ticket for every second of elapsed time will take a lot of processing, and nothing will have changed for the vast majority of those one-second intervals.
The way event simulations work, and the way I think you should write your application, is to create a list of all events where the status might change. So you will want to include all of the status-change events for every ticket, as well as the start and end of core time on all non-bank-holiday weekdays.
That list of events is sorted by timestamp, after which you can just process each event as if your per-second counter had reached that time. The difference is that you no longer have to count through the many intervening seconds where nothing changes, and you should end up with a much more efficient application.
I hope that's clear. You may find it easier to process each ticket separately, but the maximum gain will be achieved by processing all tickets simultaneously. You will still have a sorted sequence of events to process, but you will avoid having to reprocess the same core-time start and end events over and over again.
One more thing I noticed: you can probably ignore any open status-change events. I would guess that tickets either go from new to open and then active, or straight from new to resolved. So a switch between 'with your company' and 'with the customer' will never be made at an open event, and they can be ignored. Please check this, as I am only speaking from intuition and clearly know nothing about how your ticketing system has been designed.
I would not iterate over the seconds. Depending on the cost of your calculations that could be quite costly. It would be better to calculate the borders between core/outside core.
use strict;
use warnings;
use List::Util qw(min);   # min() is not a Perl builtin

my $customer_time;
my $our_time_outside;
my $our_time_core;

foreach my $event ($ticket->events) {
    my $current_ts = $event->start_ts;
    while ($current_ts < $event->end_ts) {
        if ($event->status eq 'stalled') {
            # stalled time is all the customer's, core or not
            $customer_time += $event->end_ts - $current_ts;
            $current_ts = $event->end_ts;
        }
        elsif (is_core_hours($current_ts)) {
            # jump straight to the end of core hours or of the event
            my $next_ts = min(end_of_core_hours($current_ts), $event->end_ts);
            $our_time_core += $next_ts - $current_ts;
            $current_ts = $next_ts;
        }
        else {
            # jump straight to the next start of core hours or end of event
            my $next_ts = min(start_of_core_hours($current_ts), $event->end_ts);
            $our_time_outside += $next_ts - $current_ts;
            $current_ts = $next_ts;
        }
    }
}
I can't see why you'd want to iterate over every second. That seems very wasteful.
Get a list of all of the events for a given ticket.
Add to the list any boundaries between core and non-core times.
Sort this list into chronological order.
For each consecutive pair of events in the list, subtract the earlier from the later to get a duration.
Add that duration to the appropriate bucket.
And the usual caveats for dealing with dates and times apply here:
Use a library (I recommend DateTime together with DateTime::Duration)
Convert all of your timestamps to UTC as soon as you get them. Only convert back to local time just before displaying them to the user.
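The event-list approach above can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: plain numeric timestamps instead of DateTime objects, and a caller-supplied `is_core()` predicate; the function name and event shape are made up for illustration.

```python
def split_durations(events, boundaries, is_core):
    """events: sorted (timestamp, status) pairs for one ticket, ending with
    the closing event; boundaries: core-hour start/end timestamps inside
    the ticket's lifetime; is_core(t): True when t falls in core hours.
    Returns seconds accumulated per (status, 'core'/'non_core') bucket."""
    # steps 1-3: merge status changes with core/non-core boundaries, sorted
    timeline = sorted(events + [(t, None) for t in boundaries],
                      key=lambda e: e[0])
    buckets = {}
    status = None
    # steps 4-5: subtract consecutive pairs, add each duration to a bucket
    for (t0, new_status), (t1, _) in zip(timeline, timeline[1:]):
        if new_status is not None:
            status = new_status   # boundary events keep the current status
        key = (status, 'core' if is_core(t0) else 'non_core')
        buckets[key] = buckets.get(key, 0) + (t1 - t0)
    return buckets
```

Note that each interval's core/non-core classification is decided by its start time, which is safe precisely because every core-hour boundary is itself an event in the timeline.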

ReactiveX (Rx) - Detecting Long Press Events

I am wondering what is the canonical approach to solve the following problem in Rx: Say I have two observables, mouse_down and mouse_up, whose elements represent mouse button presses. In a very simplistic scenario, if I wanted to detect a long press, I could do it the following way (in this case using RxPy, but conceptually the same in any Rx implementation):
mouse_long_press = mouse_down.delay(1000).take_until(mouse_up).repeat()
However, problems arise when we need to hoist some information from the mouse_down observable to the mouse_up observable. For example, consider if the elements of the observable stored information about which mouse button was pressed. Obviously, we would only want to pair mouse_down with mouse_up of the corresponding button. One solution that I came up with is this:
mouse_long_press = mouse_down.select_many(lambda x:
    rx.Observable.just(x).delay(1000)
        .take_until(mouse_up.where(lambda y: x.button == y.button))
)
If there is a more straightforward solution, I would love to hear it, but as far as I can tell this works. However, things get more complicated if we also want to detect how far the mouse has moved between mouse_down and mouse_up. For this we need to introduce a new observable, mouse_move, which carries information about the mouse position.
mouse_long_press = mouse_down.select_many(lambda x:
    mouse_move.select(lambda z: distance(x, z) > 100).delay(1000)
        .take_until(mouse_up.where(lambda y: x.button == y.button))
)
However, this is pretty much where I get stuck. Whenever a button is held down longer than 1 second, I get a bunch of boolean values. However, I only want to detect a long press when all of them are false, which sounds like the perfect case for the all operator. It feels like there's only a small step missing, but I haven't been able to figure out how to make it work so far. Perhaps I am also doing things in a backwards way. Looking forward to any suggestions.
OK, I guess I found a possible answer. RxPy has a take_with_time operator, which works for this purpose. Not really as straightforward as I was hoping for (I'm not sure whether take_with_time is available in other Rx implementations).
mouse_long_press = mouse_down.select_many(lambda x:
    mouse_moves.take_with_time(1000).all(lambda z: distance(x, z) < 100)
        .take_until(mouse_up.where(lambda y: x.button == y.button))
)
I will leave the question open for now in case somebody has a better suggestion.
I'd approach the problem differently, by creating a stream of mouse presses with length information, and filtering that for presses longer than 1s.
First let's assume that you only have one mouse button. Merge the mouse_up and mouse_down streams and assign time intervals between them with the time_interval() operator. You will get a stream of intervals since previous event, along with the event itself. Assuming your mouse-ups and mouse-downs alternate, this means your events now are:
(down + time since last up), (up + time since last down), (down + time since last up) ...
Now, simply filter for x.value.type == "up" and x.interval > datetime.timedelta(seconds=1)
(You can also validate this with pairwise(), which always gives you the current + previous event, so you can check that the previous one is down and the current one is up.)
Second, add the mouse movement information, using the window() operator.
(This part is untested; I'm going off the docs of how it's supposed to behave, but the docs aren't very clear, so YMMV.)
The idea is that you can collect sequences of events from an observable, separated into groups based on another observable.
The window_openings observable is going to be the merged up/down stream, or the interval stream, whichever is more convenient. Then you can flat_map() (or select_many(), which is the same thing) the result and work out the distance in whichever way you like.
Again, you should end up with a stream of distances between up/down events. Then you can zip() this stream with the interval stream, at which point you can filter for up events and get both time and distance until the previous down.
Third, what if you are getting events for multiple mouse buttons?
Simply use group_by() operator to split into per-button streams and proceed as above.
Full code below:
Event = collections.namedtuple("Event", "event interval distance")

def sum_distance(move_stream):
    # put your distance calculation here; something like:
    return move_stream.pairwise().reduce(
        lambda acc, pair: acc + distance(pair[0], pair[1]), 0)

def mouse_press(updown_stream):
    # shared stream for less duplication
    shared = updown_stream.share()
    intervals = shared.time_interval()  # element is: (interval=timedelta, value=original event)
    distances = mouse_move.window(shared).flat_map(sum_distance)
    zipped = intervals.zip(distances, lambda i, d:
        Event(i.value, i.interval, d))
    return zipped

mouse_long_press = (
    # merge the mouse streams
    rx.Observable.merge(mouse_up, mouse_down)
    # separate into streams for each button
    .group_by(lambda x: x.button)
    # create per-button event streams per above and merge results back
    .flat_map(mouse_press)
    # filter by event type and length
    .filter(lambda ev: ev.event.type == "up"
            and ev.interval >= datetime.timedelta(seconds=1))
)

Detect the last element when using NSFastEnumeration?

Is it possible to detect the last item when using NSFastEnumeration?
for (NSString *str in someArray) {
    // Can I detect if I'm up to the last string?
}
Is it possible to detect the last item when using NSFastEnumeration?
Not with 100% accuracy (or by limiting the array contents to entirely unique pointers so that pointer comparison works, as discussed in another question) without also doing a bunch of work that amounts to just doing it the old way.
Note that if you can target iOS 4.0+, you can use enumerateObjectsUsingBlock:, which gives you both the item and the index. It is as fast as or faster than fast enumeration, even.
I think the only way is the old fashioned way, something like:
NSUInteger count = [someArray count];
for (NSString *str in someArray) {
    if (--count == 0) {
        // this is the last element
    }
}
At the end of the loop, "str" will still be pointing to the last element. What is it you need to do?

How to make the matrix calculation faster?

Hello guys:
I have to project many points before drawing them on a frame.
My code is below:
- (Coordination *)xyWorldToDev:(Coordination *)pt isIphoneYAxis:(BOOL)isIphoneYAxis {
    CGPoint tmpPoint = CGPointApplyAffineTransform(CGPointMake(pt.x, pt.y), worldToDevMatrix);
    Coordination *resultPoint = [[[Coordination alloc] initWithXY:tmpPoint.x
                                                            withY:(isIphoneYAxis) ? (sscy - tmpPoint.y) : tmpPoint.y] autorelease];
    return resultPoint;
}

- (Coordination *)xyDevTo3D:(Coordination *)cPt {
    double x = 0.0, y = 0.0;
    double divide = 1 + m_cView3DPara.v * cPt.y;
    x = (m_cView3DPara.a * cPt.x + m_cView3DPara.b * cPt.y + m_cView3DPara.e) / divide;
    y = (m_cView3DPara.d * cPt.y + m_cView3DPara.f) / divide;
    return [[[Coordination alloc] initWithXY:x withY:sscy - y] autorelease];
}

- (Coordination *)transformWorldTo3D:(Coordination *)pt {
    return [self xyDevTo3D:[self xyWorldToDev:pt isIphoneYAxis:NO]];
}
Therefore, the method -(Coordination*)transformWorldTo3D: is called hundreds of times for the projection.
But I found that calling transformWorldTo3D is very, very SLOW!
Is there another way to accelerate it? Or another framework that could calculate the projected values faster?
Object allocations are expensive (relative to arithmetic operations), and it appears that you're doing two alloc-init-autorelease sequences for every point.
My first suggestion would be to try to do some of this work with CGPoints and avoid the allocations.
(Actually, that's my second suggestion: my first is to profile the code to see where the time is being spent.)