I am trying to understand the behind-the-scenes stages between writing code and tapping my app on an iPhone screen. I enumerated the stages to make the question clear.
Time 1: I am writing code in Xcode (compile time)
Time 2: There is a syntax error, or I forgot the override keyword (the compiler works behind the scenes and says: hey, you have an error, fix this)
Time 3: I fixed the error and there appear to be no more errors (compile time)
Time 4: I pressed Command+B and got no errors (compile time: the Swift compiler converted my source code into machine code for my CPU)
Time 5: Command+R (run time)
Time 6: My app works fine. (run time)
Rather than just saying "stack at compile time, heap at run time" (value types on the stack, reference types on the heap), could you explain what happens in each of the time periods I enumerated?
For example, at Time 3 (compile time): does my machine or program already know exactly where my String values or custom value types will be placed in memory? Yes, they will be allocated on the stack, but does the Swift compiler convert my code to machine code and place it in the stack region of RAM as soon as I write it, or am I missing something?
My second question, related to the first: at run time, is the memory for all value types allocated on the stack?
I have read almost all the resources I could find, so I have some background, but I could not picture what exactly happens when I divide things into these time slots. Thanks
Just to summarize a little:
Time 1 to Time 3 are editor time. What happens there is that the compiler is run only to check syntax, variable initialization, parameter validity, and so on. This is just "precompilation".
Time 4 is compile time, where machine code is generated and linked with all the needed libraries and frameworks.
Time 5 is installation and launch (possibly with the debugger attaching to the running process).
Time 6 is the application running and interacting with the user.
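The key distinction above can be made concrete in C (a sketch, not Swift, but the mechanics are the same): the compiler decides the *layout* at compile time (stack-slot offsets, sizes of fixed arrays), while the memory itself only comes into existence at run time, each time the function is entered or an allocation call runs.

```c
#include <stdlib.h>

/* Layout decided at compile time; the memory exists only while
   the function is actually running. */
int sum_fixed(void) {
    int values[4] = {1, 2, 3, 4};   /* stack: size and offset known to the compiler */
    int total = 0;
    for (int i = 0; i < 4; i++) total += values[i];
    return total;                    /* stack frame vanishes on return */
}

/* Size known only at run time, so the memory must come from the heap. */
long sum_runtime(int n) {
    int *values = malloc(n * sizeof *values);  /* heap: allocated at run time */
    if (!values) return -1;
    long total = 0;
    for (int i = 0; i < n; i++) { values[i] = i + 1; total += values[i]; }
    free(values);                    /* heap memory must be released explicitly */
    return total;
}
```

So nothing is "put into RAM" while you are merely editing: the compiler only records where things *will* go once the program runs.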
When profiling code with cProfile, I noticed that a lot of entries appear referring to the Numba compiler. I know that the computation time is high on the first run of the script due to compiling.
The attached screenshot, however, shows cProfile entries from a script which should already be compiled (the script was run at least twice, and the total computation time decreased).
Because the functions shown in the screenshot still consume a lot of time, I asked myself whether I am doing something wrong with Numba. Besides, it does not make sense to call a compiler function if the code is already compiled. Is there a way to find out whether compiled functions are already cached?
I have been studying the Swift compiler (swiftc).
I made a single Swift file with sorting algorithms (radix, merge, quick, heap, ...),
then compiled it with and without optimization flags (-O, -wmo) and measured the time using the -driver-time-compilation flag.
⬇️ result1 - without the optimization flags
⬇️ result2 - with the optimization flags
result1 took 0.3544 wall time (I believe wall time is the actual elapsed time),
and result2 took 0.9037 wall time.
I thought using the optimization flags should make things faster, not slower.
Can you help me understand why this is?
I want to reduce compile time using only swiftc.
The times you are showing are compilation times, not execution times.
Optimizations take time, and the compiler has to work harder to complete them; it is completely normal that compilation takes longer when optimizing the code.
This is in general the intended behaviour. One small disadvantage is that a larger executable can be produced, but that is generally not an issue.
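The same trade-off exists with any optimizing compiler, and a small C illustration may help (an analogy, not Swift): a file containing a hot loop like the one below compiles almost instantly at `-O0` but runs slowly, while at `-O2` the compiler spends extra time unrolling and vectorizing it so the *program* runs much faster. The asker's numbers measure the first cost, not the second.

```c
#include <stddef.h>

/* A loop an optimizer can unroll/vectorize: cheap to compile unoptimized,
   but much faster at run time when the compiler is allowed to work on it. */
long dot(const int *a, const int *b, size_t n) {
    long acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += (long)a[i] * b[i];
    return acc;
}
```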
I'm using Xcode's build timing summary to see which files take long to compile, and I noticed that most files take about 13 seconds to compile, some longer. An example is a file that contains only this:
import Foundation
enum PrefType: String {
    case move = "move"
    case backStory = "backStory"
    case rightStory = "rightStory"
}
and this one takes 50 seconds... I don't get it.
This happens when I do a clean build. Overall it takes around 7-8 minutes; otherwise, with a small change, it's 3-4 minutes, which to me is still crazy long. Anyone have a clue?
I have already turned on the new Xcode build system and the recommended flags to speed up the process; that didn't help much.
I have code that solves a Sudoku board using a recursive algorithm.
The problem is that when this code is run in Xcode, it solves the puzzle in 0.1 seconds, but when it is run in Swift Playgrounds, where I need it, it takes almost one minute.
When run on an iPad, it takes about 30 seconds, still obviously nowhere near the time it takes in Xcode.
Any help or ideas would be appreciated, thank you.
A playground tries to get the result of each of your operations and print it out (REPL style).
It is slow and laggy by itself.
In Xcode you can compile your code with additional optimizations that speed it up a lot (e.g. Swift Beta performance: sorting arrays).
Source files compile as a separate module, so don't forget about public/open access modifiers.
To create source files:
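The per-result capture mentioned in the answer is the dominant cost. A rough C analogy (a sketch of the effect, not what Playgrounds literally does): the two functions below compute the same sum, but one formats every intermediate value into a buffer, the way a playground records each expression's result, and that bookkeeping dwarfs the arithmetic itself.

```c
#include <stdio.h>

/* Plain computation: nothing is recorded per step. */
long sum_plain(int n) {
    long total = 0;
    for (int i = 1; i <= n; i++) total += i;
    return total;
}

/* Same computation, but every intermediate result is formatted,
   mimicking a playground capturing each expression's value. */
long sum_logged(int n) {
    long total = 0;
    char line[48];
    for (int i = 1; i <= n; i++) {
        total += i;
        snprintf(line, sizeof line, "step %d -> %ld", i, total); /* per-step capture */
    }
    return total;
}
```

Both return the same result; only the recording overhead differs, which is why identical code can be orders of magnitude slower in a playground.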
I'm working on a project based on the STM32F4 Discovery board using IAR Embedded Workbench (though I'm very close to the 32 KB limit of the free version, so I'll have to find something else soon). This is a learning project for me, and so far I've been able to solve most of my issues with a few Google searches and a lot of trial and error. But this is the first time I've encountered a run-time error that doesn't appear to be caused by a problem with my logic, and I'm pretty stuck. Any general debugging-strategy advice is welcome.
So here's what happens. I have an interrupt on a button; each time the button is pressed, the callback function runs my void cal_acc(uint16_t* data) function defined in stm32f4xx_it.c. This function gathers some data, and on the 6th press, it calls my void gn(float32_t* data, float32_t* beta) function. Eventually, two functions are called, gn_resids and gn_jacobian. The functions are very similar in structure. Both take in 3 pointers to 3 arrays of floats and then modify the values of the first array based on the second two. Unfortunately, when the second function gn_jacobian exits, I get the HardFault.
Please look at the link (code structure) for a picture showing how the program runs up to the fault.
Thank you very much! I appreciate any advice or guidance you can give me,
-Ben
Extra info that might be helpful below:
Running in debug mode, I can step into the function and run through all the lines click by click and it's OK. But as soon as I run the last line and it should exit and move on to the next line in the function where it was called, it crashes. I have also tried rearranging the order of the calls around this function and it is always this one that crashes.
I had been getting a similar crash on the first function gn_resids when one of the input pointers pointed to an array that was not defined as "static". But now all the arrays are static and I'm quite confused - especially since I can't tell what is different between the gn_resids function that works and the gn_jacobian function that does not work.
acc1beta is declared as a float array at the beginning of main.c and then also as extern float32_t acc1beta[6] at the top of stm32f4xx_it.c. I want it as a global variable; there is probably a better way to do this, but it's been working so far with many other variables defined in the same way.
Here's a screenshot of what I see when it crashes during debug (after I pause the session) IAR view at crash
EDIT: I changed the code of gn_step to look like this for a test so that it just runs gn_resids twice and it crashes as soon as it gets to the second call - I can't even step into it. gn_jacobian is not the problem.
void gn_step(float32_t* data, float32_t* beta) {
    static float32_t resids[120];
    gn_resids(resids, data, beta);

    arm_matrix_instance_f32 R;
    arm_mat_init_f32(&R, 120, 1, resids);

    // static float32_t J_f32[720];
    // gn_jacobian(J_f32, data, beta);
    static float32_t J_f32[120];
    gn_resids(J_f32, data, beta);

    arm_matrix_instance_f32 J;
    arm_mat_init_f32(&J, 120, 1, J_f32);
}
Hardfaults on Cortex M devices can be generated by various error conditions, for example:
Access of data outside valid memory
Invalid instructions
Division by zero
It is possible to gather information about the source of the hardfault by looking into some processor registers. IAR provides a debugger macro that helps to automate that process; it can be found in the IAR installation directory under arm\config\debugger\ARM\vector_catch.mac. Please refer to the IAR Technical Note on Debugging Hardfaults for details on using this macro.
Depending on the type of hardfault that occurs in your program, you should try to narrow down the root cause within the debugger.
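If the IAR macro is not available, the standard Cortex-M approach is to read the exception frame that the hardware pushes onto the stack on fault entry; the stacked PC points at (or just past) the faulting instruction. Below is a minimal sketch: the frame layout is architecturally defined, the extraction helper is pure logic (so it can be exercised on a host machine), and the handler shown in the comment is an assumption about how you would wire it up on target, not code from the question.

```c
#include <stdint.h>

/* Exception frame the Cortex-M hardware pushes on fault entry
   (architecturally defined order). */
typedef struct {
    uint32_t r0, r1, r2, r3, r12;
    uint32_t lr;    /* return address of the interrupted code */
    uint32_t pc;    /* instruction that faulted (or the one after) */
    uint32_t xpsr;
} exception_frame_t;

/* Pure helper: given a pointer to a stacked frame, report the
   faulting program counter. */
uint32_t faulting_pc(const exception_frame_t *frame) {
    return frame->pc;
}

/* On target (hypothetical wiring): a small assembly shim in the
   HardFault vector would pass MSP or PSP here, e.g.
   void HardFault_Handler_C(exception_frame_t *frame) {
       volatile uint32_t pc = faulting_pc(frame);  // inspect in debugger
       for (;;) { }                                 // park for inspection
   }
*/
```

Given the symptom (crash on return from a function that writes through array pointers), a corrupted return address from an out-of-bounds write or stack overflow is a likely candidate, and the stacked PC is the fastest way to confirm where execution actually died.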