C-style for statement deprecated with a twist - Swift

I've been coding for about 2 years, but I am still terrible at it. Any help would be much appreciated. I have been using the following code to set my background image parameters; after updating to Xcode 7.3 I got the warning 'C-style for statement is deprecated and will be removed':
for var totalHeight:CGFloat = 0; totalHeight < 2.0 * Configurations.sharedInstance.heightGame; totalHeight = totalHeight + backgroundImage.size.height {...}
Just to clarify, I have looked at a few other solutions/examples, I have noticed that one workaround is to use the for in loop, however, I just can't seem to wrap my head around this one and everything I have tried does not seem to work. Again, any help would be much appreciated.

A strategy that always works is to convert your for loop into a while loop along the lines of this pattern:
for a; b; c {
    // do stuff
}
// can be written as:
a           // set up
while b {   // condition
    // do stuff
    c       // post-loop action
}
So in this case, your for loop could be written as:
var totalHeight: CGFloat = 0
while totalHeight < 2.0 * Configurations.sharedInstance.heightGame {
    // totalHeight = totalHeight + backgroundImage.size.height can be
    // written slightly more succinctly as:
    totalHeight += backgroundImage.size.height
}
But you're right, the preferred solution when possible is to use for in instead.
for in is a bit different to the C-style for or while. You don't control the loop variable directly yourself. Instead, the language will loop over any values produced by a "sequence". A sequence is any type that conforms to a protocol (SequenceType) that can create a generator that will serve that sequence up one by one. Lots of things are sequences – arrays, dictionaries, index ranges.
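For example, the same for in syntax drives all of these; each type supplies its own generator behind the scenes (a small illustration with made-up values):
for number in [10, 20, 30] {                // array
    print(number)
}
for (name, score) in ["a": 1, "b": 2] {     // dictionary: each element is a (key, value) pair
    print("\(name): \(score)")
}
for i in 0..<3 {                            // index range
    print(i)
}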
There's a kind of sequence called a stride that you could use to solve this particular problem using for in. Strides are a bit like ranges that increment more flexibly. You specify a "by" value that is the amount to vary by each time around:
for totalHeight in 0.stride(to: 2.0 * Configurations.sharedInstance.heightGame,
                            by: backgroundImage.size.height) {
    // use totalHeight just the same as with the C-style for loop
}
Note, there are two ways of striding, to: (up to but not including, like if you'd used <), and through: (up to and including, like <=).
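For instance, using the free-function spelling that Swift 3 and later use (stride(from:to:by:) and stride(from:through:by:)), the difference looks like this:
for i in stride(from: 0, to: 6, by: 2) {
    print(i)    // 0, 2, 4 - stops before 6, like <
}
for i in stride(from: 0, through: 6, by: 2) {
    print(i)    // 0, 2, 4, 6 - includes 6, like <=
}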
One of the benefits you get with a for in loop is that the loop variable doesn't need to be declared with var. Instead, each time around the loop you get a fresh new immutable (i.e. constant) variable, which can help avoid some subtle bugs, especially with closure variable capture.
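As a small sketch of that closure-capture point (not from the original question): with a shared mutable counter, every closure sees the counter's final value, whereas for in hands each closure its own constant:
var sharedClosures: [() -> Int] = []
var i = 0
while i < 3 {
    sharedClosures.append { i }     // all three closures capture the same var i
    i += 1
}
print(sharedClosures.map { $0() })  // [3, 3, 3]

var perIterationClosures: [() -> Int] = []
for x in 0..<3 {
    perIterationClosures.append { x }    // each iteration captures a fresh constant x
}
print(perIterationClosures.map { $0() }) // [0, 1, 2]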
You still need the while form occasionally (for example, there's no built-in sequence that doubles a counter each time around), but for most everyday use there's a neat (and hopefully more readable) way of doing it without.

Might be best to go with a while loop:
var totalHeight: CGFloat = 0
while totalHeight < 2.0 * Configurations.sharedInstance.heightGame {
    // Loop code goes here
    totalHeight += backgroundImage.size.height
}


"Appending" to an ArraySlice?

Say ...
you have about 20 Thing instances
very often, you run a complex calculation over a loop of, say, 1000 items; the end result is a varying number of Things, around 20 each time
you don't know how many there will be until you run through the whole loop
you then want to quickly (and of course elegantly!) access the result set in many places
for performance reasons you don't want to just make a new array each time. Note that, unfortunately, the count differs each time, so you can't trivially reuse the same array.
What about ...
var thingsBacking = [Thing](repeating: Thing(), count: 100) // hard limit!
var things: ArraySlice<Thing> = []
func fatCalculation() {
    var pin: Int = 0
    // happily, no need to clean out thingsBacking
    for c in .. some huge loop {
        ... only some of the items (roughly 20, say) become the result
        x = .. one of the result items
        thingsBacking[pin] = Thing(... x, y, z)
        pin += 1
    }
    // and then, magic of slices ...
    things = thingsBacking[0..<pin]
}
(Then, you can do this anywhere... for t in things { .. })
What I am wondering: is there a call you can make on an ArraySlice<Thing> that does this in one step - to "append to" the slice and avoid having to set its length at the end?
So, something like this ..
things = ... set it to zero length
things.quasiAppend(x)
things.quasiAppend(x2)
things.quasiAppend(x3)
With no further effort, things now has a length of three and indeed the three items are already in the backing array.
I'm particularly interested in performance here (unusually!)
Another approach:
var thingsBacking = [Thing?](repeating: Thing(), count: 100) // hard limit!
and just set the first one after your data to nil as an end-marker. Again, you don't have to waste time zeroing. But the end marker is a nuisance.
Is there a better way to solve this particular type of array-performance problem?
Based on MartinR's comments, it would seem that for the problem
the data points are incoming and
you don't know how many there will be until the last one (always less than a limit) and
you're having to redo the whole thing at high Hz
It would seem to be best to just:
(1) set up the array
var ra = [Thing](repeating: Thing(), count: 100) // hard limit!
(2) at the start of each run, call ra.removeAll(keepingCapacity: true)
(3) just go ahead and .append each one.
(4) you don't have to especially mark the end or set a length once finished.
It seems it will indeed then use the same array backing. And it of course "increases the length" as it were each time you append - and you can iterate happily at any time.
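A minimal sketch of steps (1)-(4), assuming a trivial Thing struct and a made-up filter standing in for the real calculation:
struct Thing {
    var value: Int = 0
}

// (1) set up the array once, which also reserves its capacity
var ra = [Thing](repeating: Thing(), count: 100) // hard limit!

func fatCalculation(input: [Int]) {
    // (2) empty it, but keep the allocated storage
    ra.removeAll(keepingCapacity: true)
    for x in input where x % 7 == 0 {     // stand-in for "only some items make it"
        // (3) just append; no pin index to maintain
        ra.append(Thing(value: x))
    }
    // (4) nothing to mark or trim afterwards - ra.count is already correct
}

fatCalculation(input: Array(0..<140))
for t in ra {
    print(t.value)
}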
Slices - get lost!

Can this be more Swift3-like?

What I want to do is populate an Array (sequence) by appending elements of another Array (availableExercises) to it, one by one. I want to do it one by one because the sequence has to hold a given number of items. The available exercises list is finite, and I want to reuse its elements as many times as needed; the sequence length need not be a multiple of the available list's length.
The current code included does exactly that and works. It is possible to just paste that in a Playground to see it at work.
My question is: Is there a better Swift3 way to achieve the same result? Although the code works, I'd like to not need the variable i. Swift3 allows for structured code like closures and I'm failing to see how I could use them better. It seems to me there would be a better structure for this which is just out of reach at the moment.
Here's the code:
import UIKit
let repTime = 20 //seconds
let restTime = 10 //seconds
let woDuration = 3 //minutes
let totalWOTime = woDuration * 60
let sessionTime = repTime + restTime
let totalSessions = totalWOTime / sessionTime
let availableExercises = ["push up","deep squat","burpee","HHSA plank"]
var sequence = [String]()
var i = 0
while sequence.count < totalSessions {
    if i < availableExercises.count {
        sequence.append(availableExercises[i])
        i += 1
    }
    else { i = 0 }
}
sequence
You can get rid of i by using the remainder sequence.count % availableExercises.count, like this:
var sequence = [String]()
while sequence.count < totalSessions {
    let currentIndex = sequence.count % availableExercises.count
    sequence.append(availableExercises[currentIndex])
}
print(sequence)
//["push up", "deep squat", "burpee", "HHSA plank", "push up", "deep squat"]
You can condense your logic by using map(_:) and the remainder operator %:
let sequence = (0..<totalSessions).map {
    availableExercises[$0 % availableExercises.count]
}
map(_:) will iterate from 0 up to (but not including) totalSessions, and for each index, the corresponding element in availableExercises will be used in the result, with the remainder operator allowing you to 'wrap around' once you reach the end of availableExercises.
This also has the advantage of preallocating the resultant array (which map(_:) will do for you), preventing it from being needlessly re-allocated upon appending.
Personally, I think Nirav's solution is probably the best, but I can't help offering this one as well, particularly because it demonstrates (pseudo-)infinite lazy sequences in Swift:
Array(
    repeatElement(availableExercises, count: .max)
        .joined()
        .prefix(totalSessions))
If you just want to iterate over this, you of course don't need the Array(), you can leave the whole thing lazy. Wrapping it up in Array() just forces it to evaluate immediately ("strictly") and avoids the crazy BidirectionalSlice<FlattenBidirectionalCollection<Repeated<Array<String>>>> type.
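For example, if iteration is all you need, something like this should work (same names as above):
let workout = repeatElement(availableExercises, count: .max)
    .joined()
    .prefix(totalSessions)

for exercise in workout {
    print(exercise)   // cycles through the available exercises, totalSessions times
}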

Convert rnorm output of NumericVector with length of 1 to a double?

In the following code I am trying to generate a NumericVector of values from a normal distribution, where rnorm() is called with a different mean and variance each time.
Here is the code:
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
NumericVector generate_ai(NumericVector log_var) {
    int log_var_length = log_var.size();
    NumericVector temp(log_var_length);
    for(int i = 0; i < log_var_length; i++) {
        temp[i] = rnorm(1, -0.5 * log_var[i], sqrt(log_var[i]));
    }
    return(temp);
}
The line that is giving me trouble is this one:
temp[i] = rnorm(1, -0.5 * log_var[i], sqrt(log_var[i]));
It is causing the error:
assigning to 'typename storage_type<14>::type' (aka 'double') from
incompatible type 'NumericVector' (aka 'Vector<14>')
Since I'm returning one number from rnorm, is there a way to convert this NumericVector return type to a double?
Rcpp provides two methods to access RNG sampling schemes. The first option is a single draw and the second enables n draws using some sweet sweet Rcpp sugar. Under your current setup, you are opting for the latter.
Option 1. Use just the scalar sampling scheme instead of sugar by accessing the RNG function through R::, e.g.
temp[i] = R::rnorm(-0.5 * log_var[i], sqrt(log_var[i]));
Option 2. Use the subset operator on the NumericVector to obtain the only element.
// C++ indices start at 0 instead of 1
temp[i] = Rcpp::rnorm(1, -0.5 * log_var[i], sqrt(log_var[i]))[0];
The former option will be faster and better. Why, you might ask?
Well, Option 2 creates a new NumericVector, fills it with a call to Option 1, then requires a subset operation to retrieve the value before assigning it to the desired scalar.
In any case, RNG can be a bit confusing. Just make sure to always prefix the function call with the correct namespace (e.g. R:: or Rcpp::) so that you and perhaps future programmers avoid any ambiguity as to what kind of sampling scheme you've opted for.
(This is one of the downsides of using namespace Rcpp;)

In swift which loop is faster `for` or `for-in`? Why?

Which loop should I use when I have to be extremely aware of the time it takes to iterate over a large array?
Short answer
Don’t micro-optimize like this – any difference there is could be far outweighed by the speed of the operation you are performing inside the loop. If you truly think this loop is a performance bottleneck, perhaps you would be better served by using something like the accelerate framework – but only if profiling shows you that effort is truly worth it.
And don’t fight the language. Use for…in unless what you want to achieve cannot be expressed with for…in. These cases are rare. The benefit of for…in is that it’s incredibly hard to get it wrong. That is much more important. Prioritize correctness over speed. Clarity is important. You might even want to skip a for loop entirely and use map or reduce.
Longer Answer
For arrays, if you try them without the fastest compiler optimization, they perform identically, because they essentially do the same thing.
Presumably your for ;; loop looks something like this:
var sum = 0
for var i = 0; i < a.count; ++i {
    sum += a[i]
}
and your for…in loop something like this:
for x in a {
    sum += x
}
Let’s rewrite the for…in to show what is really going on under the covers:
var g = a.generate()
while let x = g.next() {
    sum += x
}
And then let’s rewrite that for what a.generate() returns, and something like what the let is doing:
var g = IndexingGenerator<[Int]>(a)
var wrapped_x = g.next()
while wrapped_x != nil {
    let x = wrapped_x!
    sum += x
    wrapped_x = g.next()
}
Here is what the implementation for IndexingGenerator<[Int]> might look like:
struct IndexingGeneratorArrayOfInt {
    private let _seq: [Int]
    var _idx: Int = 0

    init(_ seq: [Int]) {
        _seq = seq
    }

    // the generator's next(), matching the g.next() calls above
    mutating func next() -> Int? {
        if _idx != _seq.endIndex {
            return _seq[_idx++]
        }
        else {
            return nil
        }
    }
}
Wow, that’s a lot of code, surely it performs way slower than the regular for ;; loop!
Nope. Because while that might be what it is logically doing, the compiler has a lot of latitude to optimize. For example, note that IndexingGeneratorArrayOfInt is a struct, not a class. This means it has no overhead over declaring the two member variables directly. It also means the compiler might be able to inline the code in next – there is no indirection going on here, no overridden methods, vtables, or objc_msgSend. Just some simple pointer arithmetic and dereferencing. If you strip away all the syntax for the structs and method calls, you'll find that what the for…in code ends up being is almost exactly the same as what the for ;; loop is doing.
for…in helps avoid performance errors
If, on the other hand, for the code given at the beginning, you switch compiler optimization to the faster setting, for…in appears to blow for ;; away. In some non-scientific tests I ran using XCTestCase.measureBlock, summing a large array of random numbers, it was an order of magnitude faster.
Why? Because of the use of count:
for var i = 0; i < a.count; ++i {
                // ^-- calling a.count every time...
    sum += a[i]
}
Maybe the optimizer could have fixed this for you, but in this case it hasn’t. If you pull the invariant out, it goes back to being the same as for…in in terms of speed:
let count = a.count
for var i = 0; i < count; ++i {
    sum += a[i]
}
“Oh, I would definitely do that every time, so it doesn’t matter”. To which I say, really? Are you sure? Bet you forget sometimes.
But you want the even better news? Doing the same summation with reduce was (in my, again not very scientific, tests) even faster than the for loops:
let sum = a.reduce(0, +)
But it is also so much more expressive and readable (IMO), and allows you to use let to declare your result. Given that this should be your primary goal anyway, the speed is an added bonus. But hopefully the performance will give you an incentive to do it regardless.
This is just for arrays, but what about other collections? Of course this depends on the implementation, but there's good reason to believe for…in would be faster for other collections too, such as dictionaries and custom user-defined collections.
My reason for this would be that the author of the collection can implement an optimized generator, because they know exactly how the collection is being used. Suppose subscript lookup involves some calculation (such as pointer arithmetic in the case of an array - you have to multiply the index by the element size and then add that to the base pointer). In the case of a generator, you know the collection is being walked sequentially, and therefore you can optimize for this (for example, in the case of an array, hold a pointer to the next element and increment it each time next is called). Same goes for specialized member versions of reduce or map.
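As a rough sketch of that pointer idea (modern Swift naming, purely illustrative – the standard library already does this kind of thing for you), a sequential walk can keep one moving pointer instead of redoing the base-plus-offset arithmetic on every subscript:
func pointerSum(_ a: [Int]) -> Int {
    return a.withUnsafeBufferPointer { buf -> Int in
        guard var p = buf.baseAddress else { return 0 }  // empty array
        let end = p + buf.count
        var sum = 0
        while p != end {
            sum += p.pointee   // read the current element
            p += 1             // just bump the pointer to the next element
        }
        return sum
    }
}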
This might even be why reduce is performing so well on arrays – who knows (you could stick a breakpoint on the function passed in if you wanted to try and find out). But it’s just another justification for using the language construct you should probably be using regardless.
Famously stated by Donald Knuth: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." It seems unlikely that you are in the 3%.
Focus on the bigger problem at hand. After it is working, if it needs a performance boost, then worry about for loops. But I guarantee you, in the end, bigger structural inefficiencies or poor algorithm choice will be the performance problem, not a for loop.
Worrying about for loops is oh so 1960s.
FWIW, a rudimentary playground test shows map() is about 10 times faster than for enumeration:
class SomeClass1 {
    let value: UInt32 = arc4random_uniform(100)
}
class SomeClass2 {
    let value: UInt32
    init(value: UInt32) {
        self.value = value
    }
}

var someClass1s = [SomeClass1]()
for _ in 0..<1000 {
    someClass1s.append(SomeClass1())
}

var someClass2s = [SomeClass2]()
let startTimeInterval1 = CFAbsoluteTimeGetCurrent()
someClass1s.map { someClass2s.append(SomeClass2(value: $0.value)) }
println("Time1: \(CFAbsoluteTimeGetCurrent() - startTimeInterval1)") // "Time1: 0.489435970783234"

var someMoreClass2s = [SomeClass2]()
let startTimeInterval2 = CFAbsoluteTimeGetCurrent()
for item in someClass1s { someMoreClass2s.append(SomeClass2(value: item.value)) }
println("Time2: \(CFAbsoluteTimeGetCurrent() - startTimeInterval2)") // "Time2 : 4.81457495689392"
The for (with a counter) just increments a counter, which is very fast. The for-in uses an iterator (it asks the object for the next element), which is slower. But in the end you access the element in both cases, so it makes little difference overall.

In what circumstances can a compiler change the execution order of programme statements?

If this is not a real question then feel free to close ;)
Not only can the compiler reorder execution (mostly for optimization); most modern processors do so, too. Read more about execution reordering and memory barriers.
The compiler can change the execution order of statements when it sees fit for optimization purposes, and when such changes wouldn't alter the observable behavior of the code.
A very simple example -
int func (int value)
{
    int result = value*2;

    if (value > 10)
    {
        return result;
    }
    else
    {
        return 0;
    }
}
A naive compiler can generate code for this in exactly the sequence shown. First calculate "result" and return it only if the original value is larger than 10 (if it isn't, "result" would be ignored - calculated needlessly).
A sane compiler, though, would see that the calculation of "result" is only needed when "value" is larger than 10, so may easily move the calculation "value*2" inside the first braces and only do it if "value" is actually larger than 10 (needless to mention, the compiler doesn't really look at the C code when optimizing - it works in lower levels).
This is only a simple example. Much more complicated examples can be created. It is very possible that a C function would end up looking almost nothing like its C representation in compiled form, with aggressive enough optimizations.
Many compilers use something called "common subexpression elimination" (strictly speaking, the example below is loop-invariant code motion). For example, if you had the following code:
for(int i=0; i<100; i++) {
    x += y * i * 15;
}
the compiler would notice that y * 15 is invariant (its value doesn't change). So it would compute y * 15, stick the result in a register and change the loop statement to "x += r0 * i". This is kind of a contrived example, but you often see expressions like this when working with array indexes or any other base + offset type of situation.