Which is more efficient: binary & or logical && - boolean

When all values are boolean, doesn't the binary & operate on more bits than the logical &&?
For example
if ( foo == "Yes" & 2 != 0 & 6 )
or
if ( foo == "Yes" && 2 != 0 && 6 )
(thinking PHP specifically, but any language will do)

How many bits it operates on is irrelevant: the processor can do it in a single instruction anyway (normally). The crucial difference is that && short-circuits, so when the right-hand side is not trivial to evaluate, && is faster, assuming a language where && works this way, like C or Java. (And apparently PHP, though I know too little about it to be sure.)
On the other hand, the branch that this short-circuit requires might also slow things down; however, I'm quite sure today's compilers are smart enough to optimize that away.
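As a small illustration, here is a C sketch (C rather than PHP, since the answer mentions C and Java; PHP's && short-circuits the same way) of the difference when the right-hand side is expensive:

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical stand-in for a right-hand side that is costly to evaluate. */
static bool expensive_check(void) {
    puts("expensive_check ran");
    return true;
}

int main(void) {
    bool cheap = false;

    /* && short-circuits: expensive_check() is never called, because cheap is false. */
    if (cheap && expensive_check())
        puts("both true");

    /* & evaluates both operands: expensive_check() runs even though cheap is false. */
    if (cheap & expensive_check())
        puts("both true");

    return 0;
}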

It depends on the logic of your code.

It's better to use && when you mean logical AND and & when you mean bitwise AND; that's what these operators are for. Don't optimize prematurely, because:
The speed of your application might be fine without optimization
You might optimize a part of your application and gain only an irrelevant speed increase
Your code will be more complicated and it will be difficult to work on later versions
If you do have a speed problem, it will probably not be related to this choice
Programming is difficult enough as it is, and this kind of hack makes your code unreadable; your teammates will not understand it, and the overall efficiency of your team will suffer because of the unreadable code you wrote

Related

Swift micro-optimization: multiple if statements where fall-through will not occur vs if else if

If there is a difference, which of these techniques results in faster execution, and why?
Using if statements allows calculating numberOnePlusOne only once.
func test(numberOne:Int, numberTwo:Int) {
    if numberOne == numberTwo {
        return
    }
    let numberOnePlusOne = numberOne + 1
    if numberOnePlusOne == numberTwo && numberOnePlusOne < numberTwo {
        return
    } else {
        return
    }
}
So if using if else if is beneficial, I would assume the benefit comes from telling the compiler that the blocks can't fall through without an operation.
func test(numberOne:Int, numberTwo:Int) {
    if numberOne == numberTwo {
        return
    } else if numberOne + 1 == numberTwo && numberOne + 1 < numberTwo {
        return
    } else {
        return
    }
}
If there is a difference, which of these techniques results in faster execution, and why?
If you consider that LLVM bytecode is an intermediate language, the translation of which results in either a machine-specific machine language (e.g. compilation to x86 or ARM machine code) or behaviour (interpretation), then the answer to this should be clear:
Some machines are faster at some things than others. One might be faster at branching while another might devote that branch prediction circuitry to extra cache memory, instead.
Basically, the problem with your question is that you're asking for synthetic benchmarks, presumably so you can write code tailored to that premature optimisation. Instead you should be writing your code to be easily maintained and using a profiler to identify the most significant bottlenecks. Optimise based on that, then profile again to ensure you got it right. Find the next bottleneck; lather, rinse, repeat. There are multiple benefits to this approach:
Most importantly, you'll spend less time implementing the best optimisations; this is optimising your time.
You won't be pushing more significant optimisations out of the realm of possibility with micro-optimisation guesswork. Some optimisations can prevent others...
Your code will be much more legible.
I recommend you optimise like this per-system if possible. In this case, I highly doubt this is likely to be a significant bottleneck in any real code, and I have confidence that a quality compiler based on LLVM can optimise both of these to the same machine code... Consider that it can perform dead code analysis, ahead of time.
You are absolutely, absolutely on the wrong path. You are looking at micro optimisations. Micro optimisations sometimes save nanoseconds, but most of the time they don't. Real programmers look at real optimisations that save microseconds :-) Or sometimes make the difference between software being usable or not.
And you really, really need to read a book about Swift. What does your function actually do? At first sight, it seems it does nothing and can just be optimised away. On closer look, unless numberOne == numberTwo, if numberOne + 1 overflows then it will and must crash, because that's how the language is defined. So any decent optimising Swift compiler will turn this into something like
if numberOne != numberTwo {
    check_overflow (numberOne + 1);
}
And the first rule of optimisation is: If you didn't measure it, then don't optimise it. Because not measuring means you have no idea what you are doing, whether optimisation worked or not, and whether there was any need for optimisation.
A second rule for optimisation is this: Compiler engineers build optimisations into their compiler that help in real, useful code. They identify patterns of code that can be optimised. Nonsense code like yours won't fall into any of those patterns, so it doesn't get optimised.

Are there performance reasons against goto? [closed]

Does GOTO affect performance when it is executed on the device?
Is it good practice or bad practice to use GOTO in Objective-C?
And when is it a good choice to use a GOTO statement?
Thanks.
A goto is simply a jump, so its effect on performance is practically zero. It's a bad practice because it harms code readability; you can mostly do without it. Some of the cases where it makes sense to use goto are described in previous questions, just search for goto.
Using a goto statement is usually a bad practice, especially in an object-oriented language (where you can achieve the same purpose in an OO way easily), not from a performance point of view but rather from a code readability point of view...
There is no absolute BAD or GOOD practice here; it depends on your requirements.
If you have the same code that you want to execute repeatedly (a loop, say), you can use goto. Here is a small example that should clear your doubt.
Declare a label name (here hello is the label); then you can jump to it with a goto statement like this:
hello:
    NSLog(@"Print hello!");
    goto hello;
This would print 'Print hello!' again and again.
It does not affect performance; the concern is good structure and readability, which are important features of professional programming. Sometimes, though, goto may help ease complexity when loops are nested too deeply and you want to jump out as soon as a certain condition is triggered. Even so, it can also be avoided in other ways.
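For example, a minimal C sketch of that "jump out of a deep loop" case (the array and values here are made up for illustration):

#include <stdio.h>

int main(void) {
    int grid[3][3] = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
    int target = 5;

    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            if (grid[i][j] == target)
                goto found;              /* leaves both loops in one jump */
        }
    }
    printf("not found\n");
    return 1;

found:
    printf("found %d\n", target);
    return 0;
}

The same thing can be done without goto (a flag variable, or moving the loops into their own function and using return), which is what "it can also be avoided in other ways" refers to.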
In principle goto can affect performance simply by being present in the function.
The performance difference will almost always be unnoticeable, and there are a lot of things other than goto that can slightly perturb the optimizer and affect performance. But if you're interested you could examine the emitted code for differences.
It's a basic requirement in the emitted code that the same registers must be used for the same things at the source and target of the goto[*]. This constrains the register allocation when the compiler optimizes the code. Such constraints may have no effect at all, they may slow things down or cause additional code to be emitted. If they speed things up, it can only be by accident because the compiler's heuristics were in effect incorrect when applied to the unconstrained version.
The effect might be more pronounced for a computed goto (a GNU extension), where you can store a label in a variable and goto the variable. In that case, every possible target has to share a register state with every possible source.
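For reference, a minimal sketch of that extension (GCC/Clang only, not standard C; the operations here are made up):

#include <stdio.h>

static int dispatch(int op) {
    /* && applied to a label name takes the label's address (GNU extension). */
    void *targets[] = { &&add_one, &&sub_one };
    int x = 10;

    goto *targets[op];          /* computed goto: the target is picked at run time */

add_one:
    x += 1;
    goto done;
sub_one:
    x -= 1;
done:
    return x;
}

int main(void) {
    printf("%d %d\n", dispatch(0), dispatch(1));   /* prints 11 9 */
    return 0;
}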
What doesn't (normally) make a difference to performance is goto the start or end of a block vs. the equivalent break or continue or else. It's all the same to the optimizer: the compiler breaks your code down into so-called "basic blocks" with jumps and conditional jumps between them. It doesn't normally care whether the reason for the jump is a goto or not, and it has to get the register states right no matter which. This is why almost any programming construct can be described as "goto in disguise" by someone who's only thinking about the emitted instructions.
[*] to be more precise -- there could be an implicit zap at a goto, meaning that some register is used for one thing at the source and isn't used at all at the target. But you can't have some register that the target expects to contain a particular value (like the current value of a variable) and the source doesn't. So if that was the case before and then you add the goto, either the target needs to stop expecting it, or the source needs to put it there. Typically either one is going to require extra code to shuffle values between registers and stack.

Genetic Algorithm optimization - using the -O3 flag

Working on a problem that requires a GA. I have all of that working and spent quite a bit of time trimming the fat and optimizing the code before resorting to compiler optimization. Because the GA runs as a result of user input it has to find a solution within a reasonable period of time, otherwise the UI stalls and it just won't play well at all. I got this binary GA solving a 27 variable problem in about 0.1s on an iPhone 3GS.
In order to achieve this level of performance the entire GA was coded in C, not Objective-C.
In a search for a further reduction of run time I was considering the idea of using the "-O3" optimization switch only for the solver module. I tried it and it cut run time nearly in half.
Should I be concerned about any gotchas by setting optimization to "-O3"? Keep in mind that I am doing this at the file level and not for the entire project.
The -O3 flag will make the code work in the same way as before (only faster), as long as you don't do any tricky stuff that is unsafe or otherwise dependent on what the compiler does to it.
Also, as suggested in the comments, it might be a good idea to let the computation run in a separate thread to prevent the UI from locking up. That also gives you the flexibility to make the computation more expensive, or to display a progress bar, or whatever.
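A minimal sketch of that idea using plain pthreads (on iOS you would more likely reach for GCD or NSOperation; run_solver and its parameter are hypothetical stand-ins for the GA module):

#include <pthread.h>
#include <stdio.h>

/* Hypothetical entry point standing in for the long-running GA solver. */
static void *run_solver(void *arg) {
    int generations = *(int *)arg;
    printf("solving with %d generations...\n", generations);
    /* ... long-running GA work here ... */
    return NULL;
}

int main(void) {
    pthread_t worker;
    int generations = 500;

    /* Start the solver on a worker thread so the UI (main) thread stays responsive. */
    if (pthread_create(&worker, NULL, run_solver, &generations) != 0) {
        perror("pthread_create");
        return 1;
    }

    /* ... main thread keeps servicing UI events, progress updates, etc. ... */

    pthread_join(worker, NULL);
    return 0;
}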
Tricky stuff
Optimisation will produce unexpected results if you try to access stuff on the stack directly, or move the stack pointer somewhere else, or if you do something inherently illegal, like forget to initialise a variable (some compilers (MinGW) will set them to 0).
E.g.
int main() {
    int array[10];
    array[-2] = 13; // some negative value might get the return address
}
Some other tricky stuff involves the optimiser stuffing up by itself. Here's an example of when -O3 completely breaks the code.
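The referenced example isn't included here, but as a separate illustration of the general kind of failure: code that relies on undefined behaviour can change meaning at -O3. A classic case is a naive signed-overflow check:

#include <stdio.h>
#include <limits.h>

/* Relies on undefined behaviour: when x == INT_MAX, x + 1 overflows a signed int.
   Unoptimised builds often wrap to INT_MIN so the test returns 0; at -O3 the
   compiler may assume signed overflow never happens, fold x + 1 > x to "always
   true", and silently remove the check. */
static int will_not_overflow(int x) {
    return x + 1 > x;
}

int main(void) {
    printf("%d\n", will_not_overflow(INT_MAX));   /* may print 0 at -O0 and 1 at -O3 */
    return 0;
}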

math function "Log ()"

Is there a way to use the math function log() in an equation in an iPhone app? The app I'm working on right now requires about 400+ if and else if statements without the log function. I have already written out these if and else if statements; half of them work and the other half keep returning 0.00. After HOURS of trying to find the problem and comparing it to the part that does work, I still can't figure it out. I know I can do simple math (add, subtract, divide, multiply, etc.), but I haven't found anything searching forums, Google, etc. that tells me how to add a log function. If I could do this I could cut my code down from about 500 lines to about 40 tops. Can anyone help me with this or at least point me in the right direction to a thread or tutorial on this?
I already fixed Shaun's example code posted in his answer, but just to be perfectly clear, there were two issues at work:
First, he wasn't including <math.h> and using log from the system math library.
Second, he omitted the multiplication operator in the expression .19077f(logf(abNeckFactor)); we often write mathematics this way, but it is not syntactically valid C (or Obj-C) code.
Shaun, I would advise you to spend some more time reading up on what's available in C and Objective-C; an enormous amount of functionality is present in the language, and a few hours spent learning about what's already done for you will spare you many, many hours of searching for things in the future.
Note also that the log function is the base-e logarithm. Many people in technical fields outside of mathematics and computing use log to refer instead to the base-10 logarithm; if that's what you need, you will want to call log10( ) -- or log10f( ) -- instead.
#import <math.h>
// ...
double someLog = log(someDouble);
(Note: I used double/log here, but you might consider using float/logf if speed is an issue as they could be faster on the iPhone (IIRC).)
I know you've already got the answer to your question, but here's a way of not having to write tons of if, else if, else if, .... else, for future reference.
Store the answers in an array.
For instance you could define an array of arrays that looks like this:
[(min, max, value),(min, max, value),...]. Then loop through the array.
arr = {{min, max, value}, {min, max, value}, ...}
for i in range(0, arr.length)
    if (arr[i][min_i] < x && x < arr[i][max_i]) { return arr[i][val_i] }
There is probably a way of mapping x directly to an array index, something like x/(max-min), so that log(x) would just be arr[x/(max-min)]. But my point is, it is a lot cleaner to have an array or dictionary and search through it than to have a bunch of if, else if, else if, ... statements.
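A minimal C sketch of that table-driven approach (the ranges and values below are invented placeholders, not the asker's real data):

#include <stdio.h>
#include <stddef.h>

typedef struct {
    double min;
    double max;
    double value;
} Range;

static const Range table[] = {
    { 0.0, 1.0, 0.10 },
    { 1.0, 2.0, 0.25 },
    { 2.0, 5.0, 0.40 },
};

/* Return the value of the first range containing x, or fallback if none match. */
static double lookup(double x, double fallback) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
        if (table[i].min < x && x < table[i].max)
            return table[i].value;
    }
    return fallback;
}

int main(void) {
    printf("%f\n", lookup(1.5, -1.0));   /* prints 0.250000 */
    return 0;
}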

Does MATLAB perform tail call optimization?

I've recently learned Haskell, and am trying to carry the pure functional style over to my other code when possible. An important aspect of this is treating all variables as immutable, i.e. constants. In order to do so, many computations that would be implemented using loops in an imperative style have to be performed using recursion, which typically incurs a memory penalty due to the allocation of a new stack frame for each function call. In the special case of a tail call (where the caller immediately returns the called function's return value to its own caller), however, this penalty can be bypassed by a process called tail call optimization (in one method, this can be done by essentially replacing a call with a jmp after setting up the stack properly). Does MATLAB perform TCO by default, or is there a way to tell it to?
If I define a simple tail-recursive function:
function tailtest(n)
  if n==0; feature memstats; return; end
  tailtest(n-1);
end
and call it so that it will recurse quite deeply:
set(0,'RecursionLimit',10000);
tailtest(1000);
then it doesn't look as if stack frames are eating a lot of memory. However, if I make it recurse much deeper:
set(0,'RecursionLimit',10000);
tailtest(5000);
then (on my machine, today) MATLAB simply crashes: the process unceremoniously dies.
I don't think this is consistent with MATLAB doing any TCO; the case where a function tail-calls itself, only in one place, with no local variables other than a single argument, is just about as simple as anyone could hope for.
So: No, it appears that MATLAB does not do TCO at all, at least by default. I haven't (so far) looked for options that might enable it. I'd be surprised if there were any.
In cases where we don't blow out the stack, how much does recursion cost? See my comment to Bill Cheatham's answer: it looks like the time overhead is nontrivial but not insane.
... Except that Bill Cheatham deleted his answer after I left that comment. OK. So, I took a simple iterative implementation of the Fibonacci function and a simple tail-recursive one, doing essentially the same computation in both, and timed them both on fib(60). The recursive implementation took about 2.5 times longer to run than the iterative one. Of course the relative overhead will be smaller for functions that do more work than one addition and one subtraction per iteration.
(I also agree with delnan's sentiment: highly-recursive code of the sort that feels natural in Haskell is typically likely to be unidiomatic in MATLAB.)
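The comparison above was written in MATLAB; as a sketch only, the C below shows the two shapes being compared, an ordinary loop versus an accumulator-style tail-recursive Fibonacci:

#include <stdio.h>

static unsigned long long fib_iterative(int n) {
    unsigned long long a = 0, b = 1;
    for (int i = 0; i < n; i++) {      /* one addition and one shuffle per iteration */
        unsigned long long next = a + b;
        a = b;
        b = next;
    }
    return a;
}

static unsigned long long fib_tail(int n, unsigned long long a, unsigned long long b) {
    if (n == 0)
        return a;
    return fib_tail(n - 1, b, a + b);  /* tail call: nothing left to do after it returns */
}

int main(void) {
    printf("%llu %llu\n", fib_iterative(60), fib_tail(60, 0, 1));
    return 0;
}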
There is a simple way to check this. Create this function tail_recursion_check:
function r = tail_recursion_check(n)
  if n > 1
    r = tail_recursion_check(n - 1);
  else
    error('error');
  end
end
and run tail_recursion_check(10), for example. You are going to see a very long stack trace with 10 items that says error at line 3. If there were tail call optimization, you would only see one.