In what circumstances can a compiler change the execution order of program statements? - compiler-optimization

If this is not a real question then feel free to close ;)

Not only can the compiler reorder execution (mostly for optimization); most modern processors do so, too. Read more about execution reordering and memory barriers.

The compiler can change the execution order of statements when it sees fit for optimization purposes, and when such changes wouldn't alter the observable behavior of the code.
A very simple example -
int func (int value)
{
    int result = value * 2;

    if (value > 10)
    {
        return result;
    }
    else
    {
        return 0;
    }
}
A naive compiler can generate code for this in exactly the sequence shown. First calculate "result" and return it only if the original value is larger than 10 (if it isn't, "result" would be ignored - calculated needlessly).
A sane compiler, though, would see that the calculation of "result" is only needed when "value" is larger than 10, so it may easily move the "value*2" calculation inside the first branch and perform it only when "value" actually is larger than 10. (Needless to say, the compiler doesn't really look at the C source when optimizing - it works on lower-level representations.)
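Expressed back at the source level, the transformed logic looks like this (a sketch in Swift, purely for illustration; real optimizers rewrite an intermediate representation, not your source):

// Sketch of the optimizer's transformation: the multiplication has been
// moved inside the branch that actually uses its result.
func transformed(_ value: Int) -> Int {
    if value > 10 {
        return value * 2
    }
    return 0
}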
This is only a simple example; much more complicated ones can be constructed. With aggressive enough optimizations, it is very possible for a C function to end up looking almost nothing like its source representation in compiled form.

Many compilers use something called "common subexpression elimination", together with its close relative, loop-invariant code motion (hoisting). For example, if you had the following code:
for (int i = 0; i < 100; i++) {
    x += y * i * 15;
}
the compiler would notice that y * 15 is loop-invariant (its value doesn't change from one iteration to the next). So it would compute y * 15 once, stick the result in a register r0, and change the loop body to "x += r0 * i". This is a somewhat contrived example, but you often see expressions like this when working with array indexes or any other base-plus-offset situation.
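Written out by hand, the hoisted version of that loop looks like this (a sketch in Swift, purely for illustration; the values of x and y are hypothetical, and the compiler performs this transformation internally):

var x = 0
let y = 7              // hypothetical value, just to make the sketch self-contained
let r0 = y * 15        // loop-invariant part, computed once, outside the loop
for i in 0..<100 {
    x += r0 * i        // same result as x += y * i * 15 on every iteration
}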

Related

Merge Sort algorithm efficiency

I am currently taking an online algorithms course in which the teacher doesn't give code to solve the algorithm, but rather rough pseudocode. So before taking to the internet for the answer, I decided to take a stab at it myself.
In this case, the algorithm that we were looking at is the merge sort algorithm. After being given the pseudocode, we also dove into analyzing the algorithm's running time as a function of the number of items n in an array. After a quick analysis, the teacher arrived at 6n·log₂(n) + 6n as an approximate running time for the algorithm.
The pseudo code given was for the merge portion of the algorithm only and was given as follows:
C = output [length = n]
A = 1st sorted array [n/2]
B = 2nd sorted array [n/2]
i = 1
j = 1

for k = 1 to n
    if A(i) < B(j)
        C(k) = A(i)
        i++
    else [B(j) < A(i)]
        C(k) = B(j)
        j++
    end
end
He basically did a breakdown of the above as 4n + 2 (2 for the declarations of i and j, and 4 for the operations performed on each pass -- the for check, the if comparison, the array-position assignment, and the increment). He simplified this, I believe for the sake of the class, to 6n.
This all makes sense to me. My question arises from the implementation I wrote and how it affects the algorithm, and from some of the tradeoffs/inefficiencies it may add.
Below is my code in swift using a playground:
func mergeSort<T: Comparable>(_ array: [T]) -> [T] {
    guard array.count > 1 else { return array }

    let lowerHalfArray = array[0..<array.count / 2]
    let upperHalfArray = array[array.count / 2..<array.count]

    let lowerSortedArray = mergeSort(Array(lowerHalfArray))
    let upperSortedArray = mergeSort(Array(upperHalfArray))

    return merge(lhs: lowerSortedArray, rhs: upperSortedArray)
}

func merge<T: Comparable>(lhs: [T], rhs: [T]) -> [T] {
    guard lhs.count > 0 else { return rhs }
    guard rhs.count > 0 else { return lhs }

    var i = 0
    var j = 0
    var mergedArray = [T]()

    let loopCount = lhs.count + rhs.count

    for _ in 0..<loopCount {
        if j == rhs.count || (i < lhs.count && lhs[i] < rhs[j]) {
            mergedArray.append(lhs[i])
            i += 1
        } else {
            mergedArray.append(rhs[j])
            j += 1
        }
    }

    return mergedArray
}
let values = [5,4,8,7,6,3,1,2,9]
let sortedValues = mergeSort(values)
My questions for this are as follows:
Do the guard statements at the start of the merge<T:Comparable> function actually make it more inefficient? Considering we are always halving the array, the only time that it will hold true is for the base case and when there is an odd number of items in the array.
To me this seems like it would add more processing for minimal return, since the only time it pays off is when we have halved the array to the point where one side has no items.
Concerning my if statement in the merge: since it checks more than one condition, does this affect the overall efficiency of the algorithm I have written? If so, the effect seems to vary based on where it breaks out of the condition (e.g. at the first check or the second).
Is this something that is considered heavily when analyzing algorithms, and if so, how do you account for the variance in where it breaks out?
Any other analysis/tips you can give me on what I have written would be greatly appreciated.
You will very soon learn about Big-O and Big-Theta where you don't care about exact runtimes (believe me when I say very soon, like in a lecture or two). Until then, this is what you need to know:
Yes, the guards take some time, but it is the same amount of time in every call. So if each call takes X amount of time without the guard and you make n function calls, then it takes X*n time in total. Now add in the guards, which take Y amount of time per call. You now need (X+Y)*n time in total. This is a constant factor, and when n becomes very large the (X+Y) factor becomes negligible compared to the n factor. That is, if you can reduce a running time of X*n to (X+Y)*(log n), then it is worthwhile to add the Y amount of work, because you do fewer iterations in total.
The same reasoning applies to your second question. Yes, checking "if X or Y" takes more time than checking "if X" but it is a constant factor. The extra time does not vary with the size of n.
In some languages you only check the second condition if the first fails. How do we account for that? The simplest solution is to realize that the upper bound of the number of comparisons will be 3, while the number of iterations can be potentially millions with a large n. But 3 is a constant number, so it adds at most a constant amount of work per iteration. You can go into nitty-gritty details and try to reason about the distribution of how often the first, second and third condition will be true or false, but often you don't really want to go down that road. Pretend that you always do all the comparisons.
So yes, adding the guards might be bad for your runtime if you do the same number of iterations as before. But sometimes adding extra work in each iteration can decrease the number of iterations needed.
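For intuition, here is a sketch of the same merge with the guards removed (my own sketch, not the asker's code). The conditions inside the loop already handle an exhausted side, which is why the guards can only ever buy a constant-time shortcut:

// Merge without the early-exit guards; the loop condition and the
// j == rhs.count / i < lhs.count checks cover an empty or exhausted side.
func mergeNoGuards<T: Comparable>(lhs: [T], rhs: [T]) -> [T] {
    var i = 0
    var j = 0
    var merged = [T]()
    merged.reserveCapacity(lhs.count + rhs.count)

    while i < lhs.count || j < rhs.count {
        if j == rhs.count || (i < lhs.count && lhs[i] < rhs[j]) {
            merged.append(lhs[i])
            i += 1
        } else {
            merged.append(rhs[j])
            j += 1
        }
    }

    return merged
}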

C-style for statement deprecated with a twist

I've been coding for about 2 years, but I am still terrible at it. Any help would be much appreciated. I have been using the following code to set my background image parameters; after updating to Xcode 7.3 I got the warning 'C-style for statement is deprecated and will be removed':
for var totalHeight:CGFloat = 0; totalHeight < 2.0 * Configurations.sharedInstance.heightGame; totalHeight = totalHeight + backgroundImage.size.height {...}
Just to clarify, I have looked at a few other solutions/examples, I have noticed that one workaround is to use the for in loop, however, I just can't seem to wrap my head around this one and everything I have tried does not seem to work. Again, any help would be much appreciated.
A strategy that always works is to convert your for loop into a while loop along the lines of this pattern:
for a; b; c {
    // do stuff
}

// can be written as:

a           // set up
while b {   // condition
    // do stuff
    c       // post-loop action
}
So in this case, your for loop could be written as:
var totalHeight: CGFloat = 0
while totalHeight < 2.0 * Configurations.sharedInstance.heightGame {
    // loop body goes here...

    // totalHeight = totalHeight + backgroundImage.size.height can be
    // written slightly more succinctly as:
    totalHeight += backgroundImage.size.height
}
But you're right, the preferred solution when possible is to use for in instead.
for in is a bit different to the C-style for or while. You don't control the loop variable directly yourself. Instead, the language will loop over any values produced by a "sequence". A sequence is any type that conforms to a protocol (SequenceType) that can create a generator that will serve that sequence up one by one. Lots of things are sequences – arrays, dictionaries, index ranges.
There's a kind of sequence called a stride that you could use to solve this particular problem using for in. Strides are a bit like ranges that increment more flexibly. You specify a "by" value that is the amount to vary by each time around:
for totalHeight in 0.stride(to: 2.0 * Configurations.sharedInstance.heightGame,
                            by: backgroundImage.size.height) {
    // use totalHeight just the same as with the C-style for loop
}
Note, there are two ways of striding, to: (up to but not including, like if you'd used <), and through: (up to and including, like <=).
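For example (same Swift 2-era stride syntax, using Double so the snippet stands alone):

// through: includes the end value, like <= in the C-style loop:
for x in 0.0.stride(through: 1.0, by: 0.25) {
    print(x)   // 0.0, 0.25, 0.5, 0.75, 1.0
}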
One of the benefits you get with a for in loop is that the loop variable doesn't need to be declared with var. Instead, each time around the loop you get a fresh new immutable (i.e. constant) variable, which can help avoid some subtle bugs, especially with closure variable capture.
You still need the while form occasionally (for example, there's no built-in type that allows you to double a counter each time around), but for most everyday use there's a neat (and hopefully more readable) way of doing it without.
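For instance, a doubling counter (a minimal sketch) still wants a while loop:

// No built-in sequence doubles a counter each pass, so while is the natural fit:
var n = 1
while n < 1000 {
    print(n)   // 1, 2, 4, 8, ..., 512
    n *= 2
}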
Might be best to go with a while loop:
var totalHeight: CGFloat = 0
while totalHeight < 2.0 * Configurations.sharedInstance.heightGame {
    // Loop code goes here
    totalHeight += backgroundImage.size.height
}

In Swift, which loop is faster, `for` or `for-in`? Why?

Which loop should I use when I have to be extremely aware of the time it takes to iterate over a large array?
Short answer
Don’t micro-optimize like this – any difference there is could be far outweighed by the speed of the operation you are performing inside the loop. If you truly think this loop is a performance bottleneck, perhaps you would be better served by using something like the accelerate framework – but only if profiling shows you that effort is truly worth it.
And don’t fight the language. Use for…in unless what you want to achieve cannot be expressed with for…in. These cases are rare. The benefit of for…in is that it’s incredibly hard to get it wrong. That is much more important. Prioritize correctness over speed. Clarity is important. You might even want to skip a for loop entirely and use map or reduce.
Longer Answer
For arrays, if you try them without the fastest compiler optimization, they perform identically, because they essentially do the same thing.
Presumably your for ;; loop looks something like this:
var sum = 0
for var i = 0; i < a.count; ++i {
    sum += a[i]
}
and your for…in loop something like this:
for x in a {
    sum += x
}
Let’s rewrite the for…in to show what is really going on under the covers:
var g = a.generate()
while let x = g.next() {
    sum += x
}
And then let’s rewrite that in terms of what a.generate() returns, and something like what the while let is doing:
var g = IndexingGenerator<[Int]>(a)
var wrapped_x = g.next()
while wrapped_x != nil {
    let x = wrapped_x!
    sum += x
    wrapped_x = g.next()
}
Here is what the implementation for IndexingGenerator<[Int]> might look like:
struct IndexingGeneratorArrayOfInt {
    private let _seq: [Int]
    var _idx: Int = 0

    init(_ seq: [Int]) {
        _seq = seq
    }

    mutating func next() -> Int? {
        if _idx != _seq.endIndex {
            return _seq[_idx++]
        }
        else {
            return nil
        }
    }
}
Wow, that’s a lot of code, surely it performs way slower than the regular for ;; loop!
Nope. Because while that might be what it is logically doing, the compiler has a lot of latitude to optimize. For example, note that IndexingGeneratorArrayOfInt is a struct, not a class. This means it has no overhead over declaring the two member variables directly. It also means the compiler might be able to inline the code in next – there is no indirection going on here, no overridden methods and vtables or objc_msgSend. Just some simple pointer arithmetic and dereferencing. If you strip away all the syntax for the structs and method calls, you’ll find that what the for…in code ends up being is almost exactly the same as what the for ;; loop is doing.
for…in helps avoid performance errors
If, on the other hand, for the code given at the beginning, you switch compiler optimization to the faster setting, for…in appears to blow for ;; away. In some non-scientific tests I ran using XCTestCase.measureBlock, summing a large array of random numbers, it was an order of magnitude faster.
Why? Because of the use of count:
for var i = 0; i < a.count; ++i {
    //        ^-- calling a.count every time...
    sum += a[i]
}
Maybe the optimizer could have fixed this for you, but in this case it hasn’t. If you pull the invariant out, it goes back to being the same as for…in in terms of speed:
let count = a.count
for var i = 0; i < count; ++i {
    sum += a[i]
}
“Oh, I would definitely do that every time, so it doesn’t matter”. To which I say, really? Are you sure? Bet you forget sometimes.
But you want the even better news? Doing the same summation with reduce was (in my, again not very scientific, tests) even faster than the for loops:
let sum = a.reduce(0, +)
But it is also so much more expressive and readable (IMO), and allows you to use let to declare your result. Given that this should be your primary goal anyway, the speed is an added bonus. But hopefully the performance will give you an incentive to do it regardless.
This is just for arrays, but what about other collections? Of course it depends on the implementation, but there’s good reason to believe for…in would be just as fast or faster for other collections, like dictionaries or custom user-defined collections.
My reason for this is that the author of the collection can implement an optimized version of generate, because they know exactly how the collection is being used. Suppose subscript lookup involves some calculation (such as pointer arithmetic in the case of an array - you have to multiply the index by the element size and then add that to the base pointer). In the case of generate, you know what is being done is a sequential walk over the collection, and you can therefore optimize for this (for example, in the case of an array, hold a pointer to the next element, which you increment each time next is called). The same goes for specialized member versions of reduce or map.
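As a purely hypothetical sketch of that idea (Swift 2-era names; this is not the standard library's actual implementation):

// A generator that walks a buffer by advancing a pointer on each call to
// next(), instead of re-deriving the address from an index every time.
struct PointerGenerator: GeneratorType {
    private var ptr: UnsafePointer<Int>
    private var remaining: Int

    init(_ buffer: UnsafeBufferPointer<Int>) {
        ptr = buffer.baseAddress
        remaining = buffer.count
    }

    mutating func next() -> Int? {
        if remaining == 0 { return nil }
        remaining -= 1
        let value = ptr.memory   // read the current element...
        ptr = ptr.successor()    // ...then step the pointer forward
        return value
    }
}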
This might even be why reduce is performing so well on arrays – who knows (you could stick a breakpoint on the function passed in if you wanted to try and find out). But it’s just another justification for using the language construct you should probably be using regardless.
As Donald Knuth famously stated: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." It seems unlikely that you are in the 3%.
Focus on the bigger problem at hand. After it is working, if it needs a performance boost, then worry about for loops. But I guarantee you, in the end, bigger structural inefficiencies or poor algorithm choice will be the performance problem, not a for loop.
Worrying about for loops is oh so 1960s.
FWIW, a rudimentary playground test shows map() is about 10 times faster than for enumeration:
class SomeClass1 {
    let value: UInt32 = arc4random_uniform(100)
}

class SomeClass2 {
    let value: UInt32
    init(value: UInt32) {
        self.value = value
    }
}

var someClass1s = [SomeClass1]()
for _ in 0..<1000 {
    someClass1s.append(SomeClass1())
}

var someClass2s = [SomeClass2]()
let startTimeInterval1 = CFAbsoluteTimeGetCurrent()
someClass1s.map { someClass2s.append(SomeClass2(value: $0.value)) }
println("Time1: \(CFAbsoluteTimeGetCurrent() - startTimeInterval1)") // "Time1: 0.489435970783234"

var someMoreClass2s = [SomeClass2]()
let startTimeInterval2 = CFAbsoluteTimeGetCurrent()
for item in someClass1s { someMoreClass2s.append(SomeClass2(value: item.value)) }
println("Time2: \(CFAbsoluteTimeGetCurrent() - startTimeInterval2)") // "Time2: 4.81457495689392"
The for (with a counter) is just incrementing a counter, which is very fast. The for-in uses an iterator (calling an object to hand over the next element), which is slower. But in the end you want to access the element in both cases, so it makes little difference overall.

Translating snippet to functional from imperative [closed]

I have the following Scala snippet. In order to solve my given problem, I "cheat" a little and use a var -- essentially a non-final, mutable data type. Its value is updated at each iteration through the loop. I've spent quite a bit of time trying to figure out how to do this using only recursion, and immutable data types and lists.
Original snippet:
def countChange_sort(money: Int, coins: List[Int]): Int =
  if (coins.isEmpty || money < 0)
    0
  else if (coins.tail.isEmpty && money % coins.head != 0) {
    0
  } else if (coins.tail.isEmpty && money % coins.head == 0 || money == 0) {
    1
  } else {
    -- redacted --
  }
Essentially, are there any basic techniques I can use to eliminate the i and especially the accumulating cnt variables?
Thanks!!
There are lots of different ways to solve problems in functional style. Often you start by analysing the problem in a different way than you would when designing an imperative algorithm, so writing an imperative algorithm and then "converting" it to a functional one doesn't produce very natural functional algorithms (and you often miss out on lots of the potential benefits of functional style). But when you're an experienced imperative programmer just starting out with functional programming, that's all you've got, and it is a good way to begin getting your head around the new concepts. So here's how you can approach "converting" such a function as the one you wrote to functional style in a fairly uncreative way (i.e. not coming up with a different algorithm).
Let's just consider the else expression, since the rest is fine.
Functional style has no loops, so if you need to run a block of code a number of times (the body of the imperative loop), that block of code must be a function. Often the function is a simple non-recursive one, and you call a higher-order function such as map or fold to do the actual recursion, but I'm going to presume you need the practice thinking recursively and want to see it explicitly. The loop condition is calculated from the quantities you have at hand in the loop body, so we just have the loop-replacement function recursively invoke itself depending on exactly the same condition:
} else {
  var cnt = 0
  var i = 0
  def loop(????) : ??? = {
    if (money - (i * coins.head) > 0) {
      cnt += countChange_sort(money - (i * coins.head), coins.tail)
      i = i + 1
      loop(????)
    }
  }
  loop(????)
  cnt
}
Information is only communicated to a function through its input arguments or through its definition, and communicated from a function through its return value.
The information that enters a function through its definition is constant when the function is created (either at compile time, or at runtime when the closure is created). Doesn't sound very useful for the information contained in cnt and i, which needs to be different on each call. So they obviously need to be passed in as arguments:
} else {
  var cnt = 0
  var i = 0
  def loop(cnt : Int, i : Int) : ??? = {
    if (money - (i * coins.head) > 0) {
      cnt += countChange_sort(money - (i * coins.head), coins.tail)
      i = i + 1
      loop(cnt, i)
    }
  }
  loop(cnt, i)
  cnt
}
But we want to use the final value of cnt after the function call. If information is only communicated from loop through its return value, then we can only get the last value of cnt by having loop return it. That's pretty easy:
} else {
  var cnt = 0
  var i = 0
  def loop(cnt : Int, i : Int) : Int = {
    if (money - (i * coins.head) > 0) {
      cnt += countChange_sort(money - (i * coins.head), coins.tail)
      i = i + 1
      loop(cnt, i)
    } else {
      cnt
    }
  }
  cnt = loop(cnt, i)
  cnt
}
coins, money, and countChange_sort are examples of information "entering a function through its definition". coins and money are even "variable", but they're constant at the point when loop is defined. If you wanted to move loop out of the body of countChange_sort to become a stand-alone function, you would have to make coins and money additional arguments; they would be passed in from the top-level call in countChange_sort, and then passed down unmodified in each recursive call inside loop. That would still make loop dependent on countChange_sort itself though (as well as the arithmetic operators * and -!), so you never really get away from having the function know about external things that don't come into it through its arguments.
Looking pretty good. But we're still using assignment statements inside loop, which isn't right. However all we do is assign new values to cnt and i and then pass them to a recursive invocation of loop, so those assignments can be easily removed:
} else {
  var cnt = 0
  var i = 0
  def loop(cnt : Int, i : Int) : Int = {
    if (money - (i * coins.head) > 0) {
      loop(cnt + countChange_sort(money - (i * coins.head), coins.tail), i + 1)
    } else {
      cnt
    }
  }
  cnt = loop(cnt, i)
  cnt
}
Now there are some obvious simplifications, because we're not really doing anything at all with the mutable cnt and i other than initialising them, and then passing their initial value, assigning to cnt once and then immediately returning it. So we can (finally) get rid of the mutable cnt and i entirely:
} else {
  def loop(cnt : Int, i : Int) : Int = {
    if (money - (i * coins.head) > 0) {
      loop(cnt + countChange_sort(money - (i * coins.head), coins.tail), i + 1)
    } else {
      cnt
    }
  }
  loop(0, 0)
}
And we're done! No side effects in sight!
Note that I haven't thought much at all about what your algorithm actually does (I have made no attempt to even figure out whether it's actually correct, though I presume it is). All I've done is straightforwardly applied the general principle that information only enters a function through its arguments and leaves through its return values; all mutable state accessible to an expression is really a set of extra hidden inputs and hidden outputs of the expression. Making them immutable, explicit inputs and outputs then allows you to prune away the unneeded ones. For example, i doesn't need to be included in the return value from loop because it's not actually needed by anything, so the conversion to functional style has made it clear that it's purely internal to loop, whereas you had to actually read the code of the imperative version to deduce this.
cnt and i are what is known as accumulators. Accumulators aren't anything special, they're just ordinary arguments; the term only refers to how they are used. Basically, if your algorithm needs to keep track of some data as it goes, you can introduce an accumulator parameter so that each recursive call can "pass forward" the data from what has been done so far. They often fill the role that local temporary mutable variables fill in imperative algorithms.
It's quite a common pattern for the return value of a recursive function to be the value of an accumulator parameter once it is determined that there's no more work left to do, as happens with cnt in this case.
Note that these sort of techniques don't necessarily produce good functional code, but it's very easy to convert functions implemented using "local" mutable state to functional style using this technique. Pervasive non-local use of mutability, such as is typical of most traditional OO programs, is harder to convert like this; you can do it, but you tend to have to modify the entire program at once, and the resulting functions have large numbers of extra arguments (explicitly exposing all the hidden data-flow that was present in original program).
I don't have any basic techniques to change the code you have specifically. However, here is a general tip for solving recursion algorithms:
Can you break the problem into sub-problems? In the money example, for example, if you are trying to get to $10 with a $5, that's similar to the question of getting to $5 with a $5 (having already chosen the $5 once). Try to draw it out and make rules. You'll be surprised at how much more obviously correct your solution is.
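As a sketch of that decomposition (written in Swift rather than Scala, purely for illustration): the count for a given amount splits into "use the first coin at least once" plus "never use the first coin".

// Hypothetical sketch of counting change via sub-problems:
func countChange(_ money: Int, _ coins: [Int]) -> Int {
    if money == 0 { return 1 }                  // one way: take nothing more
    if money < 0 || coins.isEmpty { return 0 }  // dead end
    return countChange(money - coins[0], coins) +          // use the first coin again
           countChange(money, Array(coins.dropFirst()))    // or skip it entirely
}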
Since nobody has answered your question, I will try to give you some hints:
What is a loop?
Traversing each element of a collection, stopping when a condition is met.
What can you do with recursion?
Traversing each element of a collection, stopping when a condition is met.
Start simple: write a method without vars which prints each element of a collection.
Then the rest becomes simple: look at your loop and what you are doing.
Instead of manipulating the variables directly (like i = i + 1), simply pass i + 1 to the recursive call of your method.
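For instance, a minimal sketch of that first exercise (in Swift rather than Scala, purely for illustration):

// Print every element recursively: no vars, stop when the collection is empty.
func printAll<T>(_ items: [T]) {
    guard let first = items.first else { return }   // stop condition
    print(first)
    printAll(Array(items.dropFirst()))              // recurse on the tail
}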
HTH

Which costs more while looping: assignment or an if-statement?

Consider the following 2 scenarios:
boolean b = false;
int i = 0;
while (i++ < 5) {
    b = true;
}
OR
boolean b = false;
int i = 0;
while (i++ < 5) {
    if (!b) {
        b = true;
    }
}
Which is more "costly" to do? If the answer depends on the language/compiler, please say which. My main programming language is Java.
Please do not ask why I would want to do either; these are just bare-bones examples that point out the relevant question: should a variable be set to the same value in a loop over and over again, or should every pass test whether it already holds the needed value?
Please do not forget the rules of Optimization Club.
The first rule of Optimization Club is, you do not Optimize.
The second rule of Optimization Club is, you do not Optimize without measuring.
If your app is running faster than the underlying transport protocol, the optimization is over.
One factor at a time.
No marketroids, no marketroid schedules.
Testing will go on as long as it has to.
If this is your first night at Optimization Club, you have to write a test case.
It seems that you have broken rule 2. You have no measurement. If you really want to know, you'll answer the question yourself by setting up a test that runs scenario A against scenario B and finds the answer. There are so many differences between different environments, we can't answer.
Have you tested this? Working on a Linux system, I put your first example in a file called LoopTestNoIf.java and your second in a file called LoopTestWithIf.java, wrapped a main function and class around each of them, compiled, and then ran with this bash script:
#!/bin/bash

function run_test {
    iter=0
    while [ $iter -lt 100 ]
    do
        java $1
        let iter=iter+1
    done
}

time run_test LoopTestNoIf
time run_test LoopTestWithIf
The results were:
LoopTestNoIf:
real    0m10.358s
user    0m4.349s
sys     0m1.159s

LoopTestWithIf:
real    0m10.339s
user    0m4.299s
sys     0m1.178s
This shows that having the if makes it slightly faster on my system.
Are you trying to find out if doing the assignment each loop is faster in total run time than doing a check each loop and only assigning once on satisfaction of the test condition?
In the above example I would guess that the first is faster: you perform 5 assignments. In the latter you perform 5 tests and one assignment.
But you'll need to up the iteration count and throw in some stopwatch timers to know for sure.
Actually, this is the question I was interested in… (I hoped that I’d find the answer somewhere and avoid my own testing. Well, I didn’t…)
To be sure that your (my) test is valid, you (I) have to do enough iterations to get enough data. Each iteration must be “long” enough (I mean on the time scale) to show the true difference. I found out that even one billion iterations were not enough to produce a time interval long enough to measure… So I wrote this test:
for (int k = 0; k < 1000; ++k)
{
    {
        long stopwatch = System.nanoTime();
        boolean b = false;
        int i = 0, j = 0;
        while (i++ < 1000000)
            while (j++ < 1000000)
            {
                int a = i * j; // to slow down a bit
                b = true;
                a /= 2; // to slow down a bit more
            }
        long time = System.nanoTime() - stopwatch;
        System.out.println("\tasgn\t" + time);
    }
    {
        long stopwatch = System.nanoTime();
        boolean b = false;
        int i = 0, j = 0;
        while (i++ < 1000000)
            while (j++ < 1000000)
            {
                int a = i * j; // the same thing as above
                if (!b)
                {
                    b = true;
                }
                a /= 2;
            }
        long time = System.nanoTime() - stopwatch;
        System.out.println("\tif\t" + time);
    }
}
I ran the test three times, storing the data in Excel, then I swapped the first (‘asgn’) and second (‘if’) cases and ran it three times again… And the result? The ‘if’ case “won” four times and the ‘asgn’ case appeared to be better twice. This shows how sensitive the execution can be. But in general, I hope this has also shown that the ‘if’ case is the better choice.
Thanks, anyway…
Any compiler (except, perhaps, in debug) will optimize both these statements to
boolean b = true;
But generally, the relative speed of assignment versus branching depends on the processor architecture, not on the compiler. A modern, super-scalar processor performs poorly on hard-to-predict branches. A simple micro-controller uses roughly the same number of cycles for any instruction.
Relative to your bare-bones example (and perhaps your real application):
boolean b = false;
// .. other stuff, might change b
int i = 0;
// .. other stuff, might change i
b |= i < 5;
while (i++ < 5) {
    // .. stuff with i, possibly stuff with b, but no assignment to b
}
problem solved?
But really - it's going to be a question of the cost of your test (generally more than just if (boolean)) and the cost of your assignment (generally more than just primitive = x). If the test/assignment is expensive or your loop is long enough or you have high enough performance demands, you might want to break it into two parts - but all of those criteria require that you test how things perform. Of course, if your requirements are more demanding (say, b can flip back and forth), you might require a more complex solution.
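If you do go down the split-the-loop route, the shape is roughly this (a sketch in Swift, since that is the language used elsewhere on this page; someExpensiveCondition is a hypothetical stand-in for your real test):

// Hypothetical stand-in for an expensive test that may set the flag:
func someExpensiveCondition(_ i: Int) -> Bool {
    return i >= 2
}

// Phase 1: loop with the flag test only until the flag flips...
var b = false
var i = 0
while i < 5 && !b {
    b = someExpensiveCondition(i)
    i += 1
}
// Phase 2: ...then continue without testing or assigning the flag at all.
while i < 5 {
    i += 1
}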