I'm using Xcode's Build With Timing Summary to see which files take a long time to compile, and I noticed that most files take about 13 seconds, some longer. An example is a file that contains only this:
import Foundation
enum PrefType: String {
    case move = "move"
    case backStory = "backStory"
    case rightStory = "rightStory"
}
and this one takes 50 seconds... I don't get it.
This happens when I do a clean and build. Overall it takes around 7-8 minutes; otherwise, with a small change, it's 3-4 minutes, which to me is still crazy long. Does anyone have a clue?
I already turned on the new Xcode build system and the recommended flags to speed up the process. That didn't help much.
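One way to see where the time actually goes is to ask the compiler itself. As a suggestion (the thresholds are in milliseconds, and the values here are arbitrary), adding these to the target's Other Swift Flags makes the compiler warn about every function body or expression that is slow to type-check:

-Xfrontend -warn-long-function-bodies=100
-Xfrontend -warn-long-expression-type-checking=100

Heavy type inference in complex expressions is a common culprit for files that look trivial yet compile slowly.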
I am trying to understand the behind-the-scenes periods, from writing code to tapping the iPhone screen. I have enumerated some time slots to make the question clear.
Time 1: I am writing code in Xcode (compile time).
Time 2: There is a syntax error, or I forgot the override keyword (the compiler works behind the scenes and says: hey, you have an error, fix this).
Time 3: I fixed the error; it looks like there is no error (compile time).
Time 4: I pressed Command+B and there is no error (compile time: the Swift compiler converted my source code into machine code for my CPU and RAM).
Time 5: Command+R (run time).
Time 6: My app works fine (run time).
Rather than just pointing at the stack for compile time and the heap for run time (value types on the stack, reference types on the heap),
could you explain to me what happens in each of the time periods I enumerated?
For example, at Time 3 (compile time): does my machine or program already know exactly where String values or instances of my custom value types will be placed in memory? Yes, they will be allocated on the stack, but is it that as soon as I write code, the Swift compiler converts it to machine code and puts it into the stack side of RAM, or am I missing something?
My second question, which is related to the first one: at run time, is the memory for all value types allocated on the stack?
I have read almost all the resources, so I have some background, but I could not picture what exactly is going on when I divide it into these time slots. Thanks.
Just to summarize a little:
Time 1 to Time 3 are editor time. What happens there is that the compiler runs only to check syntax, variable initialization, parameter validity, and so on. This is just 'precompilation'.
Time 4 is compile time, where machine code is generated and all needed libraries and frameworks are linked in.
Time 5 is installation and launch (possibly with the debugger connecting to the running process).
Time 6 is the application running and interacting with the user.
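To make the stack/heap part of the question concrete, here is a minimal Swift sketch (the type names are invented for illustration). The compiler decides layouts and emits the allocation code at compile time; the allocations themselves happen only at run time:

// Value type: instances are typically stored inline, on the stack,
// in registers, or inside whatever contains them.
struct Point {
    var x: Int
    var y: Int
}

// Reference type: instances live on the heap. The compiler emits the
// allocation call at compile time, but the memory is only allocated
// when this code actually runs.
final class Node {
    var value: Int
    init(value: Int) { self.value = value }
}

let p = Point(x: 1, y: 2)   // no heap allocation needed
let n = Node(value: 3)      // heap allocation happens here, at run time
print(p.x, n.value)

Note that "value types live on the stack" is only a rule of thumb: a value type captured by an escaping closure, or stored as a property of a class instance, ends up on the heap too, and the optimizer is free to move things around.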
I have code to solve a sudoku board using a recursive algorithm.
The problem is that when this code is run in Xcode, it solves the board in 0.1 seconds, but when it is run in Playgrounds, where I need it, it takes almost one minute.
When run on an iPad it takes about 30 seconds, but that is still obviously nowhere near the time it takes in Xcode.
Any help or ideas would be appreciated, thank you.
A playground tries to capture the result of each of your operations and display it (REPL style).
It is just slow and laggy by itself.
In Xcode you can compile your code with additional optimizations that speed it up a lot (e.g. Swift Beta performance: sorting arrays).
Source files in a playground's Sources folder compile as a separate module, so don't forget the public/open access modifiers.
To create source files: show the Project navigator, expand the playground, and add a new Swift file under its Sources folder.
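A minimal sketch of what such a file could look like (the type and method names are invented here, not taken from the question):

// Sources/SudokuSolver.swift
// Files in Sources compile as a separate, optimized module, so anything
// the playground page uses must be marked public (or open).
public struct SudokuSolver {
    public init() {}

    // Stub: the recursive solving logic from the question would go here.
    public func solve(_ board: inout [[Int]]) -> Bool {
        return !board.isEmpty
    }
}

The playground page can then create a SudokuSolver and call solve(_:) on a var board, and the solver runs as optimized, non-instrumented code.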
I am fooling around with a few languages and I want to compare the time it takes to perform some computation. I am having trouble with proper time measurement in Swift. I am trying the solution from this answer, but I get improper results: the execution takes much longer when I run swift code.swift than after compilation, yet the printed results tell me the opposite:
$ swiftc sort.swift -o csort
gonczor ~ Projects Learning Swift timers
$ ./csort
Swift: 27858 ns
gonczor ~ Projects Learning Swift timers
$ swift sort.swift
Swift: 22467 ns
This is the code:
import Dispatch
import CoreFoundation
var data = [some random integers]
func sort(data: inout [Int]) {
    for i in 0..<data.count {
        for j in i..<data.count {
            if data[i] > data[j] {
                let tmp = data[i]
                data[i] = data[j]
                data[j] = tmp
            }
        }
    }
}
// let start = DispatchTime.now()
// sort(data: &data)
// let stop = DispatchTime.now()
// let nanoTime = stop.uptimeNanoseconds - start.uptimeNanoseconds
// let nanoTimeDouble = Double(nanoTime) / 1_000_000_000
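// Note: clock() returns CPU time in ticks of 1/CLOCKS_PER_SEC, not nanoseconds.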
let startTime = clock()
sort(data: &data)
let endTime = clock()
print("Swift:\t \(endTime - startTime) ns")
The same happens when I change the timer to the clock() call or use CFAbsoluteTimeGetCurrent(), and whether I compare 1000- or 5000-element arrays.
EDIT:
To be clearer: I know that pasting a single run does not produce statistically meaningful results, but the problem is that I can see one approach taking significantly longer than the other, while the printed timings tell me something different.
EDIT2:
It seems I am still not expressing my problem clearly enough, so I have created a bash script to show it.
I am using the time utility to check how long a command takes to execute. Once again: I am only fooling around, and I do not need statistically meaningful results. I am just wondering why the Swift timings tell me something different from what I am experiencing.
Script:
#!/bin/bash
echo "swift sort.swift"
time swift sort.swift
echo "./cswift"
time ./csort
Result:
$ ./test.sh
swift sort.swift
Swift: 22651 ns
real 0m0.954s
user 0m0.845s
sys 0m0.098s
./cswift
Swift: 25388 ns
real 0m0.046s
user 0m0.033s
sys 0m0.008s
As you can see, the time results show that one command takes an order of magnitude longer to execute than the other, while the Swift code reports that both take more or less the same.
A couple of observations:
In terms of the best way to measure speed, you can use Date or CFAbsoluteTimeGetCurrent, but you'll see that the documentation for those warns you:
Repeated calls to this function do not guarantee monotonically increasing results.
This is effectively warning you that, in the unlikely event that the system clock is adjusted in the intervening period, the calculated elapsed time may not be entirely accurate.
It is advised to use mach_absolute_time if you need a great deal of accuracy when measuring elapsed time. That involves some annoying CPU-specific adjustments (see Technical Q&A QA1398), but CACurrentMediaTime offers a simple alternative: it uses mach_absolute_time (which does not suffer from this problem) but converts the result to seconds, making it really easy to use.
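For example, a minimal sketch of timing the question's sort(data:) with CACurrentMediaTime (QuartzCore provides it; the input here is arbitrary):

import Foundation
import QuartzCore

var sample = Array((1...5_000).reversed())   // descending input, worst case for this sort

let start = CACurrentMediaTime()             // monotonic, backed by mach_absolute_time
sort(data: &sample)
let elapsed = CACurrentMediaTime() - start
print(String(format: "Swift:\t%.6f s", elapsed))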
The aforementioned notwithstanding, it seems that there is a more fundamental issue at play here: it looks like you're trying to reconcile the difference between two very different ways of running Swift code, namely:
swiftc hello.swift -o hello
time ./hello
and
time swift hello.swift
The former compiles hello.swift into a standalone executable. The latter launches the Swift REPL machinery, which then effectively interprets the Swift code.
This has nothing to do with the "proper" way to measure time. Executing the pre-compiled version should always be faster than invoking swift and passing it a source file. Not only is there more overhead in invoking the latter, but the execution of the pre-compiled version is likely to be faster once it starts, as well.
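Note, too, that swiftc compiles without optimization unless you ask for it; when comparing raw execution speed you would typically build with -O, e.g.:

$ swiftc -O sort.swift -o csort
$ time ./csort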
If you're really benchmarking the performance of these routines, you should not rely on a single sort of 5000 items. I'd suggest sorting far larger arrays, repeating the measurement multiple times, and averaging the statistics. A single iteration of the sort is insufficient to draw any meaningful conclusions.
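A rough sketch of that idea, reusing the sort(data:) function from the question (the run count and array size are arbitrary):

import Foundation
import QuartzCore

let runs = 10
var timings: [Double] = []

for _ in 0..<runs {
    // Fresh random input each run; 5_000 items keeps this O(n^2) sort manageable.
    var sample = (0..<5_000).map { _ in Int.random(in: 0..<1_000_000) }
    let start = CACurrentMediaTime()
    sort(data: &sample)
    timings.append(CACurrentMediaTime() - start)
}

let average = timings.reduce(0, +) / Double(runs)
print(String(format: "average over %d runs: %.6f s", runs, average))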
Bottom line, you need to decide whether you want to benchmark just the execution of the code, or the overhead of starting the REPL as well.
After upgrading specs/specs2 to the newest stable specs2 version, I found a really weird issue in my Scala tests. In my "main" class (all of my test classes extend this one) I have before and after:
abstract class BaseSpec2 extends SpecificationWithJUnit with BeforeExample with AfterExample {
  def before = startSqlTransaction
  def after = rollbackSqlTransaction
  [...]
}
before starts a transaction and after rolls it back (I don't think I need to show the code for how that works; if I am wrong, please let me know).
I'm using JUnit in Eclipse to execute my Scala tests.
When I run them, I get an SQL error in some of them (the results are not stable: sometimes one test succeeds, and other times that test errors while a different one succeeds):
Deadlock found when trying to get lock; try restarting transaction
I think this error shows up because before starts a transaction before every test, but for some reason after does not close it. I debugged rollbackSqlTransaction and startSqlTransaction, and this is what I saw:
- When I run e.g. 5 tests, the transaction opens 5 times but closes only once.
- When I added an empty step between those 5 tests, everything worked fine.
- With more than 5 tests it gets even weirder: e.g. the transaction starts 4 times, then closes, then starts, then closes, and so on. With 40 tests it opened 40 times and closed 29 times (and that is not stable either).
In my opinion, for some reason the tests are running so fast that the executor is not able to close the transactions, but I might be wrong. Is there anything I can put in my code to slow them down? Or am I doing something else wrong?
Also, when I run my tests it looks like a few of them start at the same time (they aren't going one by one), but that might just be an illusion in Eclipse.
Thanks for any answer, and sorry for my bad English.
I think the transactions are not being closed properly because the examples are being executed concurrently. Just add sequential at the beginning of your specification (for example, as the first statement in the body of BaseSpec2) and things should be fine.
Version 1.70 is acting strange. The delay is getting longer. A few days back a compile would take 20-26 seconds, with every fifth compile taking 120 seconds. Now it takes 120 seconds on every compile. I don't mind the wait; maybe it has to do with the code I recently added.
I don't think it is related to v1.70, as there were no changes in this version related to the compilation process. Does it say Optimized dexer or Standard dexer in the compilation window?