Does quicksort have a max?

My quicksort code can only sort up to 999 numbers. Is there a reason why? I've searched a lot but can't find a good answer, so I'm really hoping someone can explain it properly.

If the quicksort works for small arrays but fails with large arrays, the most likely cause is stack overflow. Stack overflow can be avoided by recursing on the smaller sub-partition, then looping back for the larger sub-partition.
The question should include the actual code so we can determine whether this is really the cause of the problem. Failing at exactly 1000 (or more) numbers would be unusual.

Related

avoiding mod() by exploiting overflow or using if statements

For my project I need to cycle through an array of values. The number of elements and the values are chosen at compile time. Currently I use mod to cycle through these values in various different ways (i.e., not necessarily a simple i++).
However, I looked up the cost of mod() and it seems to be an expensive operation on most architectures, including the ATmega-based Arduinos, and my application is time-sensitive.
I've come up with two potential solutions, both with pitfalls.
1. Overflow the index counter, exploiting the fact that an unsigned integer wraps to zero. This has the advantage of being very fast. Disadvantages: the array must have exactly as many elements as the counter type has values (256 for an 8-bit counter), and the code is hard to read, since most people wouldn't assume the overflow is intentional.
2. An if statement that subtracts size_of_array whenever the index equals or exceeds it. The advantage is that size_of_array can be anything. Disadvantage: the if statement is slower (but how much slower?).
In both cases the edge cases that mod would handle correctly (e.g., taking the modulus of a very large number) would not be encountered.
Are there any pitfalls to either solution that I have not thought of? Is there a better solution?

Matlab, fminunc gets stuck (i.e. doesn't return)

I have quite a large program that essentially solves an optimization problem; I'm using fminunc for that purpose. For some reason, however, when the maximum number of iterations is reached and the function is supposed to return, it literally gets stuck. I've tried to follow some of the suggestions, which were essentially to use nested functions in order to avoid dynamic allocation (I was loading from a file every time the cost function was called).
But that still doesn't seem to solve the problem.
Is there anything a bit more specific I should be aware of? Like some known issue that maybe I currently don't know.
Thank you.
(Let me know what kind of detail I can post).
Some more info:
The output is supposed to be an array of 15876 doubles; the machine has 32 GB of RAM. The actual setup is:
option = optimoptions(@fminunc,...
'Display','iter','GradObj','on','MaxIter',10,...
'ObjectiveLimit',10e-10,'Algorithm','quasi-newton');
I set the number of iterations low just to check whether the iteration limit was the issue, but it doesn't seem to be. The output I'm getting is:
Solver stopped prematurely.
fminunc stopped because it exceeded the iteration limit,
options.MaxIterations = 10 (the selected value)
But it doesn't return.

Is QuickSort really the fastest sorting technique?

Hello all, this is my very first question here. I am new to data structures and algorithms; my teacher asked me to compare the time complexity of different algorithms, including merge sort, heap sort, insertion sort, and quick sort. I searched the internet and found that quick sort is supposed to be the fastest of all, but my version of quick sort is the slowest of all (it sorts 100 random integers in almost 1 second, while my other sorting algorithms take almost 0 seconds). I tweaked my quick sort logic many times (taking the first value as pivot, then trying the middle value as pivot, but in vain). I finally looked up code on the internet, and there was not much difference between my code and the code I found. Now I am really confused: is this behaviour of quick sort natural (I mean, whatever your logic is, you will get the same results), or are there specific situations where you should use quick sort? I know my question is not clear (I don't know how to ask it better; my English is also not very good). I hope someone can help me. I really wanted to attach a picture of the odd results I am getting, but I can't (reputation < 10).
Theoretically, quicksort is supposed to be among the fastest sorting algorithms, with an average runtime of O(n log n). Its worst case is O(n^2), but that only occurs when the chosen pivots repeatedly produce badly unbalanced partitions, for example picking the first element of an already-sorted array, or sorting an array with many values equal to the pivot using a naive two-way partition.
In your situation, I can only assume that your pivot choice is poor for your data, so the sort still works but takes longer. Otherwise, your quicksort implementation is unfortunately incorrect.
Quicksort has O(n^2) worst-case runtime and O(n log n) average-case runtime. A good reason why quicksort is so fast in practice compared to most other O(n log n) algorithms, such as heapsort, is that it is relatively cache-efficient: its running time is actually O((n/B) log(n/B)), where B is the block size. Heapsort, on the other hand, has no such speedup; it does not access memory cache-efficiently at all.
The value you choose as pivot may not be appropriate for your data, hence your sorting may be taking some time. You can avoid quicksort's worst-case runtime of O(n^2) almost entirely with an appropriate choice of pivot, such as picking it at random.
Also, the best and worst cases are extremes that rarely occur in practice. Any average-case analysis assumes some distribution of inputs; for sorting, the typical choice is the random-permutation model (as assumed on Wikipedia).

example where quicksort crashes

I know that quicksort is not a stable method, namely that for equal elements, a member of the array may not be placed at the "correct" position. I need an example of an array (in which elements are repeated several times) on which quicksort does not work correctly (for example, with the three-way partitioning method). I have not been able to find such an example array on the internet; could you help me?
Sure, I could use other sorting methods for this problem (heap sort, merge sort, etc.), but my goal is to know, with a real-world example, what kind of data poses a risk for quicksort, because as far as I know it is one of the most useful methods and is used often.
Quicksort shouldn't crash no matter what array it is given.
When a sorting algorithm is called 'stable' or 'not stable', it does not refer to safety of the algorithm or whether or not it crashes. It is related to maintaining relative order of elements that have the same key.
As a brief example, if you have:
[9, 5, 7, 5, 1]
Then a 'stable' sorting algorithm should guarantee that in the sorted array the first 5 is still placed before the second 5. Even though for this trivial example there is no difference, there are examples in which it makes a difference, such as when sorting a table based on one column (you want the other columns to stay in the same order as before).
See more here: http://en.wikipedia.org/wiki/Stable_sort#Stability

Should I avoid recursion on the iPhone?

Should I avoid recursion with code that runs on the iPhone?
Or put another way, does anyone know the max stack size on the iphone?
Yes, avoiding recursion is a good thing on all embedded platforms.
Not only does it lower or even remove the chance of a stack overflow, it often gives you faster code as well.
You can always rewrite a recursive algorithm to be iterative. That's not always practical, though (think quicksort). A way to get around this is to rewrite the algorithm so that the recursion depth is limited.
Introsort is a perfect example of how this is done in practice: it limits the recursion depth of a quicksort to roughly log2(number of elements), then switches to heapsort. So on a 32-bit machine you will never recurse much deeper than 32 levels.
http://en.wikipedia.org/wiki/Introsort
I've written quite a bit of software for embedded platforms in the past (car entertainment systems, phones, game consoles and the like), and I always made sure I put an upper limit on the recursion depth or avoided recursion in the first place.
As a result, none of my programs ever died with a stack overflow, and most were happy with 32 kB of stack. This pays off big time once you need multiple threads, since each thread gets its own stack; you can save megabytes of memory that way.
I see a couple of answers that boil down to "don't use recursion". I disagree - it's not like the iPhone is some severely-constrained embedded system. If a problem is inherently recursive, feel free to express it that way.
Unless you're recursing to a stack depth of hundreds or thousands of frames, you'll never have an issue.
The max stack size on the iPhone?
The iPhone runs a modified OS X in which every process is given its own memory space, just as in most operating systems.
However, each thread still gets a fixed-size stack (the main thread's stack is on the order of 1 MB by default, and secondary threads get less), so deep recursion can overflow the stack well before your program runs out of memory.
It's best to avoid recursion when you can, for both stack and performance reasons (function calls are expensive relative to simple loops). In any case, you should decide what depth limits you can place on your recursive functions and cut them off if they go too deep.