Recursive Algorithm Is Much Slower in Playgrounds (1 minute) Than Xcode (0.1 seconds) - swift

I have code to solve a sudoku board using a recursive algorithm.
The problem is that when this code is run in Xcode it solves the board in 0.1 seconds, but when it is run in Playgrounds, where I need it, it takes almost one minute.
When run on the iPad it takes about 30 seconds, which is better but still obviously nowhere near the time it takes in Xcode.
Any help or ideas would be appreciated, thank you.

A playground tries to capture the result of every expression and display it (REPL style), so it is slow and laggy by itself.
In Xcode you can compile your code with additional optimizations that speed it up a lot (see, e.g., "Swift Beta performance: sorting arrays").
Source files are compiled as a separate module, so don't forget the public/open access modifiers.
To create source files in Xcode, add them under the playground's Sources folder in the Project navigator.
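For example, a minimal sketch of such a source file (SudokuSolver and solve(_:) are placeholder names, not taken from the question); only public or open declarations are visible from the playground page:

public struct SudokuSolver {
    public init() {}

    // Stand-in for the recursive backtracking solver; replace with the real algorithm.
    public func solve(_ board: [[Int]]) -> [[Int]]? {
        // ... recursion goes here, compiled as a module instead of being
        // instrumented line by line like code on the playground page ...
        return board
    }
}

// On the playground page:
// let solution = SudokuSolver().solve(board)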

Related

swiftc compile time is slower when using -O than when not using it

I have been studying the Swift compiler (swiftc).
I made one Swift file containing sorting algorithms (radix, merge, quick, heap, ...),
then compiled it with and without the optimization flags (-O, -wmo) and checked the time using the -driver-time-compilation flag.
Result 1 (without the optimization flags) took 0.3544 wall time; I believe wall time is the real elapsed time.
Result 2 (with the optimization flags) took 0.9037 wall time.
I thought using the optimization flags should be faster than not using them.
Can you help me understand why this is?
I want to reduce the compile time using only swiftc.
The times you are showing are compilation times, not execution times.
Optimizations take time, and the compiler has to work harder to perform them; it is completely normal for compilation to take longer when the code is being optimized.
This is in general the intended behaviour. One small disadvantage is the larger executable that can be produced, but that is generally not an issue.
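If the goal is to see the benefit of -O, it also helps to time execution separately from compilation. A minimal sketch (the array size and the built-in sort are arbitrary stand-ins for the sorting code in the question) of a program measuring its own run time, so builds with and without -O can be compared:

import Foundation

// Example workload: sort a large random array.
let numbers = (0..<1_000_000).map { _ in Int.random(in: 0..<1_000_000) }

let start = Date()
let sortedNumbers = numbers.sorted()
let elapsed = Date().timeIntervalSince(start)

// Compare this value between a plain build and one compiled with -O / -wmo.
print("Sorted \(sortedNumbers.count) values in \(elapsed) seconds")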

How to improve MATLAB code when the bottleneck is some MEX-file?

I'm not too familiar with MATLAB's profiling tool, but since my MATLAB program takes too long to run I was advised to try profiling it. When I did, I found that most of the running time (self time) is spent in a function that I didn't write, named mupadmex (a MEX-file), which is called 17266 times. Does this mean that I cannot do anything to improve the running time of my code?
Three suggestions, without knowing more about the problem:
Reduce the number of calls
Try to cut down the number of calls to this MEX function; maybe some of them are redundant?
Know what it does
Maybe you know what that function does and could write a more efficient way to do the same thing?
Change that function
If you can get hold of the code for that MEX function, you could try to improve it yourself.

Do the scalac "-deprecation" and "-unchecked" compiler options make compilation slower?

Anecdotally, our builds seem slower after enabling these options. I've searched online a bit and tried to do some comparisons but found nothing conclusive. Wondering if anyone knows offhand.
A great way to answer your own question is to try to measure it. For instance, I compiled with SBT (which gives the build time in seconds). I took a medium-sized project (78 Scala source files) and compiled it with and without the flags. I started by doing 3 clean/compile invocations to warm up the disks (to be sure that everything is cached properly by the controller and the OS). Then I measured the build time 3 times to get an average.
In both cases (with and without the flags), the build time was identical. However, it is interesting to note that the first warm-up build was really slow: almost 7x slower! So it is very difficult to rely on impressions, because the build time is dominated by the way you access your source files.
Unless your desktop is a teletype with particularly slow electromechanical relay switches, you're safe - it does the same work either way, so if there were a difference it'd be in how long it takes to display the deprecation/unchecked warnings.

Optimized Dex Compiling

Version 1.70 is acting strange; the delay is getting longer. A few days back it would take 20-26 seconds, and then roughly every fifth compilation would take 120 seconds. Now it is taking 120 seconds on every compilation. I don't mind the wait. Maybe it has to do with the code I recently added.
I don't think that it is related to v1.70, as there were no changes in this version related to the compilation process. Does it say: Optimized dexer or Standard dexer in the compilation window?

Genetic Algorithm optimization - using the -O3 flag

Working on a problem that requires a GA. I have all of that working and spent quite a bit of time trimming the fat and optimizing the code before resorting to compiler optimization. Because the GA runs as a result of user input, it has to find a solution within a reasonable period of time; otherwise the UI stalls and it just won't play well at all. I got this binary GA solving a 27-variable problem in about 0.1 s on an iPhone 3GS.
In order to achieve this level of performance the entire GA was coded in C, not Objective-C.
In a search for a further reduction of run time I was considering the idea of using the "-O3" optimization switch only for the solver module. I tried it and it cut run time nearly in half.
Should I be concerned about any gotchas from setting optimization to "-O3"? Keep in mind that I am doing this at the file level and not for the entire project.
The -O3 flag will make the code work the same way as before (only faster), as long as you don't do any tricky stuff that is unsafe or otherwise dependent on what the compiler does to it.
Also, as suggested in the comments, it might be a good idea to let the computation run in a separate thread to prevent the UI from locking up. That also gives you the flexibility to make the computation more expensive, or to display a progress bar, or whatever.
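As a rough sketch of that threading suggestion (shown here in Swift with GCD purely for illustration, even though the question's solver is plain C; runSolver and the completion handler are placeholder names): run the GA on a background queue and come back to the main queue before touching the UI:

import Dispatch

// Placeholder standing in for the call into the C solver module.
func runSolver() -> [Int] {
    return []
}

func startSolving(completion: @escaping ([Int]) -> Void) {
    // Do the expensive work off the main thread so the UI stays responsive...
    DispatchQueue.global(qos: .userInitiated).async {
        let solution = runSolver()
        // ...then hop back to the main queue before updating the UI.
        DispatchQueue.main.async {
            completion(solution)
        }
    }
}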
Tricky stuff
Optimisation will produce unexpected results if you try to access stuff on the stack directly, or move the stack pointer somewhere else, or if you do something inherently illegal, like forgetting to initialise a variable (some compilers, such as MinGW, will set it to 0).
E.g.
int main() {
    int array[10];
    array[-2] = 13; // writing at a negative index like this might hit the return address
}
Some other tricky stuff involves the optimiser getting things wrong by itself; there are known examples of -O3 completely breaking otherwise-correct code.