Which one is more performant: using the Swift Array.contains function OR checking with if (.. || .. || ..)?

In our Swift code we often write this when we want to check whether foo is a particular value:
if [kConstantOne, kConstant2, kConstant3].contains(foo) {...}
I was wondering how this compares, performance-wise, to a normal if statement where you compare the values with ||.
if kConstantOne == foo || kConstant2 == foo || kConstant3 == foo {...}
Does the compiler optimise this statement, or is an array effectively allocated, instantiated and looped over to check for equality?
It's an easier, slightly fancier way to write these simple if statements, but if there were a significant performance impact, which I seriously doubt, we should avoid it.
EDIT: A single isolated use of this would not have any impact, but I'm more interested in what happens when it is part of a larger algorithm. What happens when your code hits this statement, or other similar ones, a few thousand times, allocating and initialising an array and using the contains function for equality checking?

If you are concerned about the performance impact, the only way to explore this is to benchmark it in the code that you believe would cause the performance impact. While we can do some reasoning about current compiler implementation details, there is no way to translate those into answers to questions of "significant performance impact", and in many cases your intuition will be wrong (map is very slightly faster than a simple for loop in most cases, which is counter-intuitive until you read the implementation of map; but it's a very tiny difference).
Write the code clearly to say what you mean. The first does that.
It is possible that the if statement is very slightly faster than the contains and allows some compiler optimizations (that may or may not actually occur) that contains does not. It definitely does not create a temporary array or anything like that. However, this is going to be nearly unmeasurable over such a tiny array either way. If this is part of an inner loop that is called a few tens of millions of times, I have some approaches I would explore to optimize it (which wouldn't look like either of these; I'd focus first on getting rid of the == if these aren't integers). If this is called fewer than a million times, then you're more likely to accidentally hurt performance than help it by micro-optimizing like this. You're definitely likely to hurt maintainability.
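If you do want to measure it, a minimal benchmark sketch along these lines would do; the constant names, the value of foo and the iteration count are made up for illustration, and you should build with optimizations, since debug builds will exaggerate any difference:

import Foundation

let kConstantOne = 1, kConstant2 = 2, kConstant3 = 3
let iterations = 10_000_000
let foo = Int.random(in: 1...4)   // keep the compiler from constant-folding the comparison away

// Chained == comparisons
var hits = 0
var start = Date()
for _ in 0..<iterations {
    if kConstantOne == foo || kConstant2 == foo || kConstant3 == foo { hits += 1 }
}
print("if/||:    \(Date().timeIntervalSince(start)) s, hits: \(hits)")

// Array literal + contains
hits = 0
start = Date()
for _ in 0..<iterations {
    if [kConstantOne, kConstant2, kConstant3].contains(foo) { hits += 1 }
}
print("contains: \(Date().timeIntervalSince(start)) s, hits: \(hits)")

Run it a few times as a release build; with literals this small, any difference tends to disappear into the noise, which matches the answer above.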

Related

Is GameObject.FindObjectOfType() less expensive in performance than GameObject.Find() in Unity?

I'm putting the finishing touches on my indie game made with Unity, and I've been thinking about improving my game's performance. I was wondering if GameObject.FindObjectOfType() is less expensive than GameObject.Find()? Thanks!
If you really care about performance, try not to use either of them, or at least minimise their use.
These methods loop through the list of GameObjects to find and return the object, and because of that looping they are fairly heavy on performance. So if you use them, never call them in the Update() method; call them in Start(), or in some other method that doesn't get called often, and store the result.
To be honest I don't know which one is faster. If I had to guess, I would say GameObject.Find(), since it only checks the name, whereas FindObjectOfType() checks the components.
Even so, I would consider using FindObjectOfType(), because Find() takes a string and you might want to avoid that because of possible typos (unless you store the name in a single class and just reference that variable).

Drools conditions in the consequence: Benefits or Costs?

I am new to Drools, with a background in Java. I have gained a basic understanding of Drools.
I have inherited a large Drools project which works, but appears to be hacked together. Most of the rules have many, often nested, IF and ELSE statements in the "then" part (the consequence?). I believe this is bad practice. Can anyone confirm? References to material on the internet would be useful.
Also, what are the benefits of correcting this, other than readability?
I'd have to dig for references, but the general consensus among rule programmers is that decisions should be made in the LHS/condition part of a rule. The simple reason is that the engine is dedicated to the "many pattern/many object" pattern-matching problem. Even if there is one final condition where some action is necessary for both true and false, Drools syntax provides a good solution, i.e., extending a rule twice, once with the positive and once with the negative condition.
That said, an occasional conditional statement may be tolerable or even fine, e.g. when it merely distinguishes between details in the work done by the RHS/consequence part of a rule. But "many and nested" sounds rather bad, though maybe this "distinction of details" really does require such logic.
As for the benefits: nobody can tell without inspection, and then it'll need an experienced judge.
Since you've asked for references: http://www.redhat.com/rhecm/rest-rhecm/jcr/repository/collaboration/sites%20content/live/redhat/web-cabinet/home/resourcelibrary/whitepapers/brms-design-patterns/rh:pdfFile.pdf
One non-readability reason for putting as much evaluation as possible in the LHS is performance. For one thing it avoids unnecessary rule activations, but it also yields significant performance gains by caching the results of each of the matches, thereby avoiding re-evaluation.
This caching is not available to conditional logic on the RHS.
This is one reason why you invoke update/modify on a fact when you change it. This effectively instructs the engine that previously cached LHS evaluations relating to that fact can be discarded.
Nested "if/else" statements are generally not bad practice as logical operations are not stressful on the CPU, nor does it require copious amounts of memory.
I'm not a Drools programmer, but I imagine, since you alluded it to Java, it's an obj-oriented high-level prgramming language.
Howver, in the event that multiple if statements can be combined, it's recommended to do so for clear, clean coding, not necessarily performance.

Streams vs. tail recursion for iterative processes

This is a follow-up to my previous question.
I understand that we can use streams to generate an approximation of pi (and other numbers), the n-th Fibonacci number, etc. However, I doubt that streams are the right approach for that.
The main drawback (as I see it) is memory consumption: e.g. a stream retains all Fibonacci numbers for i < n, while I need only the n-th. Of course I can use drop, but that makes the solution a bit more complicated. Tail recursion looks like a more suitable approach to tasks like this.
What do you think?
If you need to go fast, travel light. That means: avoid allocating any unnecessary memory. If you need memory, use the fastest collections available. If you know how much memory you need, preallocate. Allocation is the absolute performance killer... for calculation. Your code may not look nice anymore, but it will go fast.
However, if you're working with IO (disk, network) or any user interaction, then allocation pales in comparison. In that case it's better to shift priority from code performance to maintainability.
Use Iterator. It does not retain intermediate values.
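To make the "no intermediate values" point concrete, here is a rough constant-space sketch. It's written in Swift rather than Scala, so treat it only as an analogue of the Iterator idea, not as the Scala API:

// Only the current pair of values is ever live; earlier results are not retained.
func fib(_ n: Int) -> Int {
    var iterator = sequence(state: (0, 1)) { (state: inout (Int, Int)) -> Int? in
        let current = state.0
        state = (state.1, state.0 + state.1)
        return current
    }.makeIterator()

    var result = 0
    for _ in 0...n {             // advance the iterator n + 1 times
        result = iterator.next() ?? 0
    }
    return result
}

print(fib(10))   // 55

An Iterator-based Scala version has the same shape: only the running state survives between steps, so the space used is constant regardless of n.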
If you want the n-th Fibonacci number and use a stream only as a temporary data structure (i.e. you do not hold references to previously computed elements of the stream), then your algorithm will run in constant space.
Previously computed elements of a Stream (which are not used anymore) are going to be garbage collected. And as they were allocated in the youngest generation and immediately collected, almost all allocations might stay in cache.
Update:
It seems that the current implementation of Stream is not as space-efficient as it could be, mainly because it inherits the implementation of the apply method from the LinearSeqOptimized trait, where it is defined as:
def apply(n: Int): A = {
  val rest = drop(n)
  if (n < 0 || rest.isEmpty) throw new IndexOutOfBoundsException("" + n)
  rest.head
}
A reference to the head of the stream is held here by this, which prevents the stream from being gc'ed. So a combination of the drop and head methods (as in f.drop(100).head) may be better for situations where dropping intermediate results is feasible. (Thanks to Sebastien Bocq for explaining this on scala-user.)

Objective-C sparse array redux

First off, I've seen this, but it doesn't quite seem to suit my needs.
I've got a situation where I need a sparse array. In some situations I could have, say, 3000 potential entries with only 20 allocated; in others I could have most or all of the 3000 allocated. Using an NSMutableDictionary (with NSString representations of the integer index values) would appear to work well for the first case, but would seemingly be inefficient for the second, both in storage and in lookup speed. Using an NSMutableArray with NSNull objects for the empty entries would work fairly well for the second case, but it seems a bit wasteful (and it could produce an annoying delay at the UI) to insert most of 3000 NSNull entries for the first case.
The referenced article mentions using an NSMapTable, since it supposedly allows integer keys, but apparently that class is not available on iPhone (and I'm not sure I like having an object that doesn't retain, either).
So, is there another option?
Added 9/22
I've been looking at a custom class that embeds an NSMutableSet, with set entries consisting of a custom class holding an integer (i.e., the element number) and an element pointer, written to mimic an NSMutableArray in terms of adds/updates/finds (but not inserts/removals). This seems to be the most reasonable approach.
An NSMutableDictionary probably will not be slow; dictionaries generally use hashing and are rather fast. Benchmark it.
Another option is a C array of pointers. Allocating a large array only reserves virtual memory until the real memory is actually touched (use calloc, not malloc plus memset). The downside is that memory is allocated in 4 KB pages, which can be wasteful for small numbers of entries; for large numbers of entries, many will fall in the same page.
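To make the dictionary suggestion concrete, here is a minimal sketch of a sparse array backed by an integer-keyed dictionary. It's written in Swift with made-up type and member names; an Objective-C version using NSMutableDictionary with NSNumber keys would have the same shape:

// A minimal sparse-array sketch: storage grows only with the number of
// occupied slots, and lookup is a hash probe rather than a linear scan.
struct SparseArray<Element> {
    private var storage: [Int: Element] = [:]   // index -> value, only for occupied slots
    let count: Int                              // logical length, e.g. 3000

    init(count: Int) { self.count = count }

    subscript(index: Int) -> Element? {
        get {
            precondition(index >= 0 && index < count, "index out of range")
            return storage[index]               // nil means "empty slot"
        }
        set {
            precondition(index >= 0 && index < count, "index out of range")
            storage[index] = newValue           // assigning nil removes the entry
        }
    }

    var occupiedIndices: [Int] { storage.keys.sorted() }
}

var entries = SparseArray<String>(count: 3000)
entries[17] = "one of only a handful of occupied slots"
print(entries[17] ?? "empty", entries[18] ?? "empty")

Whether the dictionary overhead matters in the mostly-full case is exactly the kind of thing worth benchmarking, as the answer above suggests.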
What about CFDictionary (or actually CFMutableDictionary)? In the documentation, it says that you can use any C data type as a key, so perhaps that would be closer to what you need?
I've got the custom class going and it works pretty well so far. It's 322 lines of code in the h+m files, including the inner class stuff, a lot of blank lines, comments, description formatter (currently giving me more trouble than anything else) and some LRU management code unrelated to the basic concept. Performance-wise it seems to be working faster than another scheme I had that only allowed "sparseness" on the tail end, presumably because I was able to eliminate a lot of special-case logic.
One nice thing about the approach was that I could make much of the API identical to NSMutableArray, so I only needed to change maybe 25% of the lines that somehow reference the class.
I also needed a sparse array and have put mine on GitHub.
If you need a sparse array, feel free to grab https://github.com/LavaSlider/DSSparseArray

Cocoa Touch Programming. KVO/KVC in the inner loop is super slow. How do I speed things up?

I've become a huge fan of KVO/KVC. I love the way it keeps my MVC architecture clean. However, I am not in love with the huge performance hit I incur when I use KVO within the inner rendering loop of the 3D rendering app I'm designing, where messages will fire 60 times per second for each object under observation - potentially hundreds of objects.
What are the tips and tricks for speeding up KVO? Specifically, I am observing a scalar value - not an object - so perhaps the wrapping/unwrapping is killing me. I am also setting up and tearing down observation
[foo addObserver:bar forKeyPath:@"fooKey" options:0 context:NULL];
[foo removeObserver:bar forKeyPath:@"fooKey"];
within the inner loop. Perhaps I'm taking a hit for that.
I really, really, want to keep the huge flexibility KVO provides me. Any speed freaks out there who can lend a hand?
Cheers,
Doug
Objective-C's message dispatch and other features are tuned and pretty fast for what they provide, but they still don't approach the potential of tuned C for computational tasks:
NSNumber *a = [NSNumber numberWithInteger:(b.integerValue + c.integerValue)];
is way slower than:
NSInteger a = b + c;
and nobody actually does math on objects in Objective-C for that reason (well that and the syntax is awful).
The power of Objective-C is that you have a nice expressive message based object system where you can throw away the expensive bits and use pure C when you need to. KVO is one of the expensive bits. I love KVO, I use it all the time. It is computationally expensive, especially when you have lots of observed objects.
An inner loop is that small bit of code you run over and over; anything there will be done over and over. It is the place where you should be eliminating OOP features if need be, where you should not be allocating memory, and where you should be considering replacing method calls with static inline functions. Even if you somehow manage to get acceptable performance in your rendering loop, it will be much lower than if you got all that expensive notification and dispatch logic out of there.
If you really want to try to keep it going with KVO, here are a few things you can try to make things go faster:
Switch from automatic to manual KVO in your objects. This may allow you to reduce spurious notifications (see the sketch after this list).
Aggregate updates: if your intermediate values over some time interval are not relevant and you can defer for some amount of time (like until the next animation frame), don't post the change immediately; mark that a change needs to be posted and wait for the relevant timer to fire. You may be able to avoid a bunch of short-lived intermediary updates. You might also use some sort of proxy to aggregate related changes between multiple objects.
Merge observable properties: if you have a large number of properties in one type of object that might change, you may be better off exposing a single observable "hasChanges" property and having the observer query the individual properties.
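A rough sketch of the first two ideas, in Swift with made-up class and property names; the same pattern applies in Objective-C via +automaticallyNotifiesObserversForKey: and willChangeValueForKey:/didChangeValueForKey::

import Foundation

// Manual KVO plus coalesced notifications: the per-frame hot path just records
// the latest value, and observers are notified at most once per frame.
class RenderNode: NSObject {
    @objc dynamic var rotation: Double = 0

    private var pendingRotation: Double?

    // Opt "rotation" out of automatic KVO so plain writes don't pay the
    // notification cost.
    override class func automaticallyNotifiesObservers(forKey key: String) -> Bool {
        if key == #keyPath(RenderNode.rotation) { return false }
        return super.automaticallyNotifiesObservers(forKey: key)
    }

    // Called from the inner loop, 60 times per second: cheap, no KVO machinery.
    func updateRotation(_ newValue: Double) {
        pendingRotation = newValue
    }

    // Called once per frame (e.g. from a display-link callback): apply the
    // latest value and post a single coalesced notification.
    func flushChanges() {
        guard let newValue = pendingRotation else { return }
        pendingRotation = nil
        willChangeValue(forKey: #keyPath(RenderNode.rotation))
        rotation = newValue
        didChangeValue(forKey: #keyPath(RenderNode.rotation))
    }
}

let node = RenderNode()
let observation = node.observe(\.rotation, options: [.new]) { _, change in
    print("rotation is now \(change.newValue ?? 0)")
}
node.updateRotation(1.57)   // no notification yet
node.flushChanges()         // one KVO notification fires here
_ = observation             // keep the observation alive

The assumption here is that per-frame intermediate values don't matter to your observers; if they do, this kind of coalescing obviously isn't an option.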