Optimizing A* Pathfinding on iPhone - Will NSDictionary do the trick?

I've got a pretty big A* pathfinding function that gets called frequently and has to run on another thread, because otherwise it makes my game stutter. I come from a Java background and recently read a discussion about the speed of HashMaps (essentially the equivalent of NSDictionary) and the different implementations you can use. I'm curious how fast NSDictionary is and whether anyone has found it to be a viable option for dealing with lots of immediate and temporary object allocations, or whether it's too slow for that.
Currently I'm using an NSMutableArray for both the open and closed lists in the A* algorithm. I would replace the closed list with an NSMutableDictionary, since setObject:forKey: and removeObjectForKey: are O(1), and also create an NSMutableDictionary that "mirrors" the open list. The pathing data itself is stored in a big NSMutableArray; I would leave that as-is because indexed access is already fast.
So my question is... would this be a noticeable speed improvement, or should I roll my own lists and/or maps? I'm just not sure how NSDictionary behaves under the hood, and I'd like to know.

If you're wondering how to optimize A*, I'd first ask you if you're using platform-independent extensions, like Iterative Deepening A* (aka IDA*), what kind of a heuristic you're using, and if you're using caching (transposition tables, pattern databases). The questions you're asking are too close to the metal for the moment, because you're optimizing parts of the system which are likely not holding you back.
Have a look at these course slides (especially lecture 10 and lecture 11).
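To make the heuristic point concrete, here is a minimal sketch, assuming a 4-connected grid with unit move costs (which may not match your game); Manhattan distance is admissible and cheap in that setting:
#include <stdlib.h>

// Admissible heuristic for a 4-connected grid with unit step cost:
// the true path can never be shorter than the Manhattan distance.
static inline int ManhattanHeuristic(int fromX, int fromY, int toX, int toY) {
    return abs(toX - fromX) + abs(toY - fromY);
}
A tighter (but still admissible) heuristic, or a weighted one if you can tolerate slightly suboptimal paths, usually buys far more than swapping container classes.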

Absolutely it makes a difference - I recently changed a naive implementation of A* that used NSArray for the lists and adjacency data (is something in the list? iterate to find out...) to one that uses NSDictionary (in the list? objectForKey:!), and performance went from not acceptable to acceptable without too much work.
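To illustrate the kind of change involved, here is a hedged sketch (not the poster's actual code: the coordinate-based key and the helper names are made up for the example):
#import <Foundation/Foundation.h>

// Key a node by its tile position so lookups don't depend on object identity.
static NSNumber *TileKey(NSInteger x, NSInteger y, NSInteger mapWidth) {
    return [NSNumber numberWithInteger:y * mapWidth + x];
}

// Before (NSMutableArray *closedList): [closedList containsObject:node] scans the whole list.
// After (NSMutableDictionary *closedSet): average O(1) lookup and insert.
static BOOL IsClosed(NSMutableDictionary *closedSet, NSInteger x, NSInteger y, NSInteger mapWidth) {
    return [closedSet objectForKey:TileKey(x, y, mapWidth)] != nil;
}

static void CloseNode(NSMutableDictionary *closedSet, id node, NSInteger x, NSInteger y, NSInteger mapWidth) {
    [closedSet setObject:node forKey:TileKey(x, y, mapWidth)];
}
If you only ever need membership (not the node back), an NSMutableSet of keys does the same job with a little less ceremony.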

Related

Chess programming, Scala, JVM: How costly is dynamic dispatch? [closed]

My aim is to write a basic chess playing AI. It doesn't need to be incredible, but I want it to play with some degree of competence against people with some level of familiarity with the game.
I have a trait called Piece which has abstract methods canMakeMove(m: Move, b: Board) and allMovesFrom(p: Position, b: Board). These methods are important to the program logic for obvious reasons, and are implemented by the concrete classes King, Queen, Pawn, Rook, Bishop, and Knight. Thus in other places, for example in the code that determines whether or not a particular board has a King in check, these methods are called on values whose static type is the abstract type Piece (piece canMakeMove (..., ...)), so the actual method called is determined at runtime via dynamic dispatch.
I'm wondering if this is too costly for a chess AI program which is going to have to execute this code many times. After looking around online and reading some more about chess programming, I've found that the most common representation of a chess board is not something like my Vector[Vector[Option[Piece]]] but an integer-based board (a 'bitboard') that likely uses a switch on the values in the board to accomplish the effect I'm currently relying on dynamic dispatch to achieve. Will this prevent my AI from reaching a viable performance level?
I will suggest in this answer that you are subject to a common pitfall in programming.
There is one very important wisdom that I learned from my colleagues, and it has proven its worth to me time and time again:
Only solve problems when they occur, and not any sooner.
Keep in mind that there have been chess programs and chess computers around for a long, long time.
There were even chess programs on the C64, and there were those tiny travel chess computers that were embedded in the chess board, so you could carry them around easily.
Those systems had far less resources than your present day computer by several orders of magnitude.
If the same kind of algorithm that ran on those old systems were implemented in Scala, with dynamic dispatch, generous vectors instead of bit arrays, et cetera, it would still be faster than it was on that old hardware.
Still, writing even a moderate search heuristic for the chess problem is a colossal task for a single developer.
When faced with a colossal task, one intuitively tends to look for the first bit of the problem one feels competent in solving, in the hope that this will naturally lead to the next little problem, and so on, eventually leading to a full solution to the colossal problem. Divide and conquer.
However, my personal experience tells me that this is not a good way to tackle bigger programming problems.
Usually, if you follow this path, you end up with a really, really optimized representation of the chess board, consuming only very little memory, and optimized for runtime as much as possible.
And then you're stuck. You have put all your energy into this, you're really proud of your great chessboard class, and somehow it's really dissatisfying that you feel no step closer to actually having a working chess AI.
That's where the rule that I mentioned in the beginning comes in handy.
In short: Don't optimize. Yet. You can still optimize later, if it's necessary.
Instead, do the following:
Make a class or trait that represents a chessboard.
Leave it empty in the beginning - no methods or members.
Do the same with the types representing pieces, board positions, moves etc.
Then, when you start implementing the chess AI, fill in those types with methods. But only add the methods that you really need, and only when you need them, and not any single method more than that.
Prefer to use traits in the beginning, as you can later on produce more optimized versions of a trait, if necessary.
In that way, try to get as soon as you can to the first version of your chess AI that does something interesting.
It's up to you to define what "interesting" means in this context.
It's okay if the first version of the chess AI is crappy, and loses every game because it makes really bad decisions.
It should just have that one single thing: It should do something that you find at least remotely interesting.
Then, identify the top problem that you have with your chess AI.
Think about how to solve it, come up with a plan that solves only that one problem, in the simplest way that you can possibly imagine, even if your programmer instincts tell you to do something more complex.
Resist the urge to delve into complexity fast, because the most complex code that you write will eventually turn out to be your biggest problem, and also the most inflexible part of your code base.
Short, simple, almost stupid steps towards a solution. Prototype a lot, create simple versions in a short time, and only optimize when needed.
As for the optimization - that's really not a problem.
For example, let's say that in the beginning you thought it's a good idea to define a board position like this:
case class Position(x:Int, y:Int)
Later on, you figure that it would have been a better choice to just use a tuple of ints to represent a board position. Simply substitute your previous definition with:
type Position = (Int, Int)
There you go. Make that change, compile the code, and the compiler errors will show you all the places in the code that you need to adapt. It's not going to take very long to refactor this.
Even later down the line, you decide that since there are only 64 possible positions on the board, you can easily represent the position with a number in the range from 0 to 64 - reserving the number 64 for "off-board".
Again, that's an easy change:
type Position = Byte
Save, compile, fix the compile errors. Should not take longer than 15 minutes.
With this example, I want to illustrate that you shouldn't be afraid of optimizing later, waiting to solve problems only when they actually occur.
This will always require some refactoring, but the effort is usually not really worth mentioning.
I tend to think of this as "massaging the code".
Of course we know that junior programmers sometimes tend to write "spaghetti code" that is very hard to refactor later.
Maybe that's why there is a certain fear of late optimization that practically every developer knows, and the urge to optimize early.
The only real antidote against such "spaghetti code" is to go through this painful process several times. After a while, you will develop a very good intuition on how to write code that can be optimized and refactored easily.
This will match all the common programming wisdoms, like separation of concerns, short methods, encapsulation and so on.
At this point, I would recommend the book "Clean Code" as an excellent guideline.
Some more tips:
See your program as a bridge. You stand on one side of the bridge, starting off with nothing. On the far end, there is your vision - a decent chess AI. Between the ends, there is a gap. Your program will be the bridge. Don't start by putting all your energy into optimizing the very first brick that you put in your bridge. A bridge with just one perfect brick won't hold. Build a prototype bridge first - crappy, but it connects the two ends. When you have that, it's easier to enhance the crappy bridge step by step.
Abstract all the functionality into traits and classes. Be wary of code repetitions - when you find a bug or need to do a refactoring, repetitious code can be a neck breaker.
Don't be afraid to write code that doesn't look smart, as long as the code is simple and solves the problem. Bonus points for code that reads well and looks almost like an intuitive description of the algorithm.
Keep your methods short. One to six lines.
Make sure that your code doesn't nest too deeply - everyone with a bit of Scala experience should be able to understand what your code is doing.
It can be a good thing to spend some time in order to find good names for your concepts. Maybe there are even some standards of naming certain parts of a chess AI. It's good to do a bit of research first.
While you are researching, see if there are some Scala/Java libraries available that you can use, for important parts of your problem. You can still replace the libraries with your own implementations later, but in the beginning, using the knowledge of other people can give you a jump start into creating your first, prototype version "bridge". Everything that will help you close the "gap" quickly is good.
Don't think of your code as something eternal, perfect, unchangeable. Maybe you have the ambition to write the best representation of a chess board in Scala ever. Don't do that. Eternal, perfect code cannot be refactored or changed easily. Code that can be refactored easily is really good code. Refactoring code is 50% of what a programmer does.
Jump into automated testing. For Scala, using the scalatest library can be a good choice. Use a tool like SBT that will run all the automated tests on every build. If you have certain assumptions about the classes you write, hardcode those assumptions into automated tests. It will feel a little stupid and redundant at first. But the first time that one of your automated tests show you the cause of a bug that would otherwise have been very hard to find, all the time to write those automated tests will have paid out.
An advanced topic about automated testing is code coverage. You can use a tool that generates a coverage report showing how much of your code is exercised by automated tests. From my own experience, I would recommend jacoco4sbt.
And finally, the mandatory citation: "Premature optimization is the root of all evil." There is a lot of wisdom in this one.
Good luck, and a whole lot of fun!

Difference between CFArray and NSArray

I'm looking for an easier way to manipulate audio buffers on the iPhone. Mainly I'm trying to avoid pointer issues and array count issues with C, but I don't want to be slowed down by number objects like NSNumber (or boxed NSInteger values) which I would have to use with NSArray.
I've come across CFArray which seems like it might be a nice middle ground. Am I correct in this assumption? Or am I missing something?
Not really. CFArray is basically the same as NSArray; you can even cast between the two, which is called toll-free bridging.
CFArray (and its mutable counterpart) does allow you to specify your own callback functions for retaining and releasing the objects (pointers) in your array, which would allow you to store arbitrary pointers (not just NSObjects) in the array and implement your own memory management scheme, but I doubt that this would result in any real performance gains. For your use case, C arrays are probably the way to go.
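For completeness, a minimal sketch of that CFArray capability, assuming you really did want to store raw, non-retained pointers (passing NULL callbacks means the array does no memory management at all); this is shown for illustration rather than as a recommendation for the audio case:
#include <CoreFoundation/CoreFoundation.h>

static void RawPointerArrayDemo(void) {
    // NULL callbacks: values are stored as plain pointers and are never retained,
    // released, copied, or compared beyond pointer equality.
    CFMutableArrayRef array = CFArrayCreateMutable(kCFAllocatorDefault, 0, NULL);

    static float samples[512];
    CFArrayAppendValue(array, samples);                        // stores the pointer only
    float *retrieved = (float *)CFArrayGetValueAtIndex(array, 0);
    retrieved[0] = 1.0f;

    CFRelease(array);
}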
If performance and memory size are of any concern, which they likely will be for real-time audio processing, stick to plain C arrays, and learn how to code array passing and access correctly.
You need to handle audio in C arrays anyway, as the iOS Audio Queue and Audio Unit APIs pass audio data using C arrays.
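As a concrete (and deliberately simplified) illustration, the processing itself is usually just index arithmetic over a plain float buffer, which is the form the audio callbacks hand you anyway:
// Apply a gain to a buffer of mono float samples in place.
// Plain C, so it compiles unchanged in an Objective-C file and involves
// no objects, messaging, or reference counting on the audio thread.
static void ApplyGain(float *samples, int count, float gain) {
    for (int i = 0; i < count; i++) {
        samples[i] *= gain;
    }
}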
If you don't like C style and want a neat object-oriented, class-based approach, I would really recommend using C++, e.g. std::vector or similar.
Do not use Objective-C for audio; it's not made for signal processing. Obj-C is great for dealing with UIs and connecting objects within a signaling system. Any kind of 'real' computation should always be done in C or plain C++.

Objective-C Data Structures (Building my own DAWG)

After not programming for a long, long time (20+ years) I'm trying to get back into it. My first real attempt is a Scrabble/Words With Friends solver/cheater (pick your definition). I've built a pretty good engine, but it solves the problem through brute force rather than efficiency or elegance. After much research, it's pretty clear that the best answer to this problem is a DAWG or CDAWG. I've found a few C implementations out there and have been able to leverage them (search times have gone from 1.5 s to 0.005 s for the same data sets).
However, I'm trying to figure out how to do this in pure Objective-C. At that, I'm also trying to make it ARC compliant. And efficient enough for an iPhone. I've looked around quite a bit and found several data structure libraries (e.g. CHDataStructures), but they are mostly C/Objective-C hybrids or they are not ARC compliant. They rely very heavily on structs and embed objects inside those structs, which ARC doesn't really care for.
So - my question is (sorry, and I understand if this was tl;dr and seems like a total newbie question - I just can't get my head around this object stuff yet): how do you program classical data structures (trees, etc.) from scratch in Objective-C? I don't want to rely on NS[Mutable]{Array,Set,etc}. Does anyone have a simple/basic implementation of a tree or anything like that which I can crib from while I go create my DAWG?
Why shoot yourself in the foot before you even started walking?
You say you're
trying to figure out how to do this in pure Objective-C
yet you
don't want to rely on NS[Mutable]{Array,Set,etc}
Also, do you want to use ARC, or do you not want to use ARC? If you stick with Objective-C, then go with ARC; if you don't want to use the Foundation collections, then you're probably better off without ARC.
My suggestion: do use NS[Mutable]{Array,Set,etc} and get your basic algorithm working with ARC. That should be your first and only goal, everything else is premature optimization. Especially if your goal is to "get back into programming" rather than writing the fastest possible Scrabble analyzer & solver. If you later find out you need to optimize, you have some working code that you can analyze for bottlenecks, and if need be, you can then still replace the Foundation collections.
As for the other libraries not being ARC compatible: you can pretty easily make them compatible if you follow some rules set by ARC. Whether that's worthwhile depends a lot on the size of the 3rd party codebase.
In particular, casting from void* to id and vice versa requires a bridged cast, so you would write:
void *pointer = (__bridge void *)myObjCObject;
id object = (__bridge id)pointer;
Similarly, if you flag all object pointers in C structs as __unsafe_unretained, you should be able to use the C code as is. Better yet: if the C code can be built as a static library, you can build it with ARC turned off and only need to fix up some header files.
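To make the "classical data structure from scratch" part concrete, here is a hedged sketch of a trie node in plain Objective-C under ARC (a DAWG usually starts life as a trie before equivalent suffixes are merged). The class and function names are invented for the example, and it leans on NSMutableDictionary for the child links, in the spirit of the advice above:
#import <Foundation/Foundation.h>

// One trie node. ARC manages the children automatically because they are
// ordinary strong object references - no structs, no manual malloc/free.
@interface TrieNode : NSObject
@property (nonatomic, strong) NSMutableDictionary *children; // one-letter NSString -> TrieNode
@property (nonatomic, assign) BOOL terminal;                  // YES if a word ends at this node
@end

@implementation TrieNode
- (instancetype)init {
    if ((self = [super init])) {
        _children = [NSMutableDictionary dictionary];
    }
    return self;
}
@end

// Insert a word by walking (and creating) one node per letter.
static void InsertWord(TrieNode *root, NSString *word) {
    TrieNode *node = root;
    for (NSUInteger i = 0; i < [word length]; i++) {
        NSString *letter = [word substringWithRange:NSMakeRange(i, 1)];
        TrieNode *child = [node.children objectForKey:letter];
        if (child == nil) {
            child = [[TrieNode alloc] init];
            [node.children setObject:child forKey:letter];
        }
        node = child;
    }
    node.terminal = YES;
}
Turning the trie into a DAWG is then a matter of merging nodes with identical suffix subtrees, which is independent of whether the nodes are Objective-C objects or C structs.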

Speed Comparison: C++ vs Objective C [duplicate]

When programming a CPU intensive or GPU intensive application on the iPhone or other portable hardware, you have to make wise algorithmic decisions to make your code fast.
But even great algorithm choices can be slow if the language you're using performs more poorly than another.
Is there any hard data comparing Objective-C to C++, specifically on the iPhone but maybe just on the Mac desktop, for performance of various similar language aspects? I am very familiar with this article comparing C and Objective-C, but this is a larger question of comparing two object oriented languages to each other.
For example, is a C++ vtable lookup really faster than an Obj-C message send? How much faster? Threading, polymorphism, sorting, etc. Before I go on a quest to build a project with duplicate object models and various test code, I want to know if anybody has already done this and what the results were. This type of testing and comparison is a project in and of itself and can take a considerable amount of time. Maybe this isn't one project but two, and only the outputs can be compared.
I'm looking for hard data, not evangelism. Like many of you I love and hate both languages for various reasons. Furthermore, if there is someone out there actively pursuing this same thing, I'd be interested in pitching in some code to see the end results, and I'm sure others would help out too. My guess is that they both have strengths and weaknesses; my goal is to find out precisely what they are so that they can be avoided/exploited in real-world scenarios.
Mike Ash has some hard numbers for the performance of various Objective-C method calls versus C and C++ in his post "Performance Comparisons of Common Operations". Also, this post by Savoy Software is an interesting read when it comes to tuning the performance of an iPhone application by using Objective-C++.
I tend to prefer the clean, descriptive syntax of Objective-C over Objective-C++, and have not found the language itself to be the source of my performance bottlenecks. I even tend to do things that I know sacrifice a little bit of performance if they make my code much more maintainable.
Yes, well written C++ is considerably faster. If you're writing performance critical programs and your C++ is not as fast as C (or within a few percent), something's wrong. If your ObjC implementation is as fast as C, then something's usually wrong -- i.e. the program is likely a bad example of ObjC OOD because it probably uses some 'dirty' tricks to step below the abstraction layer it is operating within, such as direct ivar accesses.
The Mike Ash 'comparison' is very misleading -- I would never recommend that approach for comparing execution times of programs you have written, or for comparing C vs C++ vs ObjC. The results presented come from a test with compiler optimizations disabled, and a program compiled with optimizations disabled is rarely relevant when you are measuring execution times. Viewing it as a benchmark which compares C++ against Objective-C is flawed. The test also compares individual features, rather than entire, real-world optimized implementations -- and individual features are combined in very different ways in the two languages. This is far from a realistic performance benchmark for optimized implementations.
Examples: with optimizations enabled, IMP caching is as slow as virtual function calls. Static dispatch (as opposed to dynamic dispatch, e.g. using virtual) and calls to known C++ types (where dynamic dispatch may be bypassed) can be optimized aggressively. This process is called devirtualization, and when it is used, a member function which is declared virtual may even be inlined. In the case of the Mike Ash test, where many calls are made to member functions which have been declared virtual and have empty bodies, these calls are optimized away entirely when the type is known, because the compiler sees the implementation and is able to determine that dynamic dispatch is unnecessary. The compiler can also eliminate calls to malloc in optimized builds (favoring stack storage). So, enabling compiler optimizations in any of C, C++, or Objective-C can produce dramatic differences in execution times.
That's not to say the presented results are entirely useless. You could get some useful information about external APIs if you want to determine if there are measurable differences between the times they spend in pthread_create or +[NSObject alloc] on one platform or architecture versus another. Of course, these two examples will be using optimized implementations in your test (unless you happen to be developing them). But for comparing one language to another in programs you compile… the presented results are useless with optimizations disabled.
Object Creation
Consider also object creation in ObjC - every object is allocated dynamically (e.g. on the heap). With C++, objects may be created on the stack (e.g. approximately as fast as creating a C struct and calling a simple function in many cases), on the heap, or as elements of abstract data types. Each time you allocate and free (e.g. via malloc/free), you may introduce a lock. When you create a C struct or C++ object on the stack, no lock is required (although interior members may use heap allocations) and it often costs just a few instructions or a few instructions plus a function call.
As well, ObjC objects are reference counted instances. The actual need for an object to be a std::shared_ptr in performance critical C++ is very rare. It's not necessary or desirable in C++ to make every instance a shared, reference counted instance. You have much more control over ownership and lifetime with C++.
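A hedged sketch of the allocation difference described above (Vector3 is a hypothetical wrapper class invented for the example, not a real API):
#import <Foundation/Foundation.h>

// Hypothetical Objective-C wrapper used only for this comparison.
@interface Vector3 : NSObject
@property (nonatomic, assign) float x, y, z;
@end
@implementation Vector3
@end

// Plain C struct equivalent.
typedef struct { float x, y, z; } Vec3;

static void CreateVectors(void) {
    // Stack allocation: a few stores, no heap, no locks, no reference counting.
    Vec3 a = { 1.0f, 2.0f, 3.0f };

    // Heap allocation: +alloc always hits the heap and the object is reference
    // counted, which adds up quickly if done thousands of times per frame.
    Vector3 *b = [[Vector3 alloc] init];
    b.x = a.x; b.y = a.y; b.z = a.z;
}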
Arrays and Collections
Arrays and many collections in C and C++ also use strongly typed containers and contiguous memory. Since the address of the next element's members is often known, the optimizer can do much more, and you get great cache and memory locality. With ObjC, that's far from reality for standard objects (e.g. NSObject).
Dispatch
Regarding methods, many C++ implementations use few virtual/dynamic calls, particularly in highly optimized programs. These are static method calls and fodder for the optimizers.
With ObjC methods, each method call (objc message send) is dynamic, and is consequently a firewall for the optimizer. Ultimately, that results in many restrictions or inconveniences regarding what you can and cannot do if you want to keep overhead to a minimum when writing performance critical ObjC. This may result in larger methods, IMP caching, and frequent use of C.
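For reference, IMP caching means looking up the C function pointer behind a method once and calling it directly in a hot loop, bypassing objc_msgSend on every iteration. A hedged sketch (the -update selector and the enemies array are hypothetical, and this only works if every object shares the same implementation):
#import <Foundation/Foundation.h>

static void UpdateAllEnemies(NSArray *enemies) {
    if ([enemies count] == 0) return;

    // Look the implementation up once...
    SEL updateSel = @selector(update);
    IMP updateImp = [[enemies objectAtIndex:0] methodForSelector:updateSel];
    void (*updateFunc)(id, SEL) = (void (*)(id, SEL))updateImp;

    // ...then call the function pointer directly, skipping objc_msgSend per call.
    for (id enemy in enemies) {
        updateFunc(enemy, updateSel);
    }
}
As noted above, the payoff is often smaller than people expect once optimizations are enabled, so it is worth measuring before committing to it.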
Some realtime applications cannot use any ObjC messaging in their render paths. None -- audio rendering is a good example of this. ObjC dispatch is simply not designed for realtime purposes; allocations and locks may happen behind the scenes when messaging objects, making the complexity/time of objc messaging unpredictable enough that the audio rendering may miss its deadline.
Other Features
C++ also provides generics/template implementations for many of its libraries. These optimize very well. They are typesafe, and a lot of inlining and optimizations may be made with templates (consider it polymorphism, optimization, and specialization which takes place at compilation). C++ adds several features which just are not available or comparable in strict ObjC. Trying to directly compare langs, objects, and libraries which are very different is not so useful -- it's a very small subset of actual realizations. It's better to expand the question to a library/framework or real program, considering many aspects of design and implementation.
Other Points
C and C++ symbols can be more easily removed and optimized away in various stages of the build (stripping, dead code elimination, inlining and early inlining, as well as link-time optimization). The benefits of this include reduced binary sizes, reduced launch/load times, reduced memory consumption, etc. For a single app, that may not be such a big deal; but if you reuse a lot of code, and you should, then your shared libraries could add a lot of unnecessary weight to the program if implemented in ObjC -- unless you are prepared to jump through some flaming hoops. So scalability and reuse are also factors in medium/large projects, and in groups where reuse is high.
Included Libraries
ObjC library implementors also optimize for the environment, so they can make use of some language and environment features to offer optimized implementations. Although there are some pretty significant restrictions when writing an optimized program in pure ObjC, some highly optimized implementations exist in Cocoa. This is one of Cocoa's strong points, although the C++ standard library (what some people call the STL) is no slouch either. Cocoa operates at a much higher level of abstraction than C++ -- if you don't know well what you're doing (or should be doing), operating closer to the metal can really cost you. Falling back on a good library implementation if you are not an expert in some domain is a good thing, unless you are really prepared to learn. As well, Cocoa's environments are limited; you can find implementations/optimizations which make better use of the OS.
If you're writing optimized programs and have experience doing so in both C++ and ObjC, clean C++ implementations will often be twice as fast or faster than clean ObjC (yes, you can compare against Cocoa). If you know how to optimize, you can often do better than higher level, general purpose abstractions. Although, some optimized C++ implementations will be as fast as or slower than Cocoa's (e.g. my initial attempt at file I/O was slower than Cocoa's -- primarily because the C++ implementation initializes its memory).
A lot of it comes down to the language features you are familiar with. I use both languages; they have different strengths and models/patterns. They complement each other quite well, and there are great libraries for both. If you're implementing a complex, performance critical program, correct use of C++'s features and libraries will give you much more control and provide significant advantages for optimization, such that in the right hands, "several times faster" is a good default expectation (don't expect to win every time, or without some work, however). Remember, it takes years to understand C++ well enough to really reach that point.
I keep the majority of my performance critical paths as C++, but also recognize that ObjC is also a very good solution for some problems, and that there are some very good libraries available.
It's very hard to collect "hard data" for this that's not misguiding.
The biggest problem with doing a feature-to-feature comparison like you suggest is that the two languages encourage very different coding styles. Objective-C is a dynamic language with duck typing, where typical C++ usage is static. The same object-oriented architecture problem would likely have very different ideal solutions using C++ or Objective-C.
My feeling (as I have programmed much in both languages, mostly on huge projects): To maximize Objective-C performance, it has to be written very close to C. Whereas with C++, it's possible to make much more use of the language without any performance penalty compared to C.
Which one is better? I don't know. For pure performance, C++ will always have the edge. But the OOP style of Objective-C definitely has its merits. I definitely think it is easier to keep a sane architecture with it.
This really isn't something that can be answered in general as it really depends on how you use the language features. Both languages will have things that they are fast at, things that they are slow at, and things that are sometimes fast and sometimes slow. It really depends on what you use and how you use it. The only way to be certain is to profile your code.
In Objective-C you can also write C++ code (Objective-C++), so it might be easier to code in Objective-C for the most part, and if you find something that doesn't perform well in it, then you can have a go at writing a C++ version of it and seeing if that helps (C++ tends to optimize better at compile time). Objective-C will be easier to use if the APIs you are interfacing with are also written in it, plus you might find its style of OOP easier or more flexible.
In the end, you should go with what you know you can write safe, robust code in, and if you find an area that needs special attention from the other language, then you can swap to that. Xcode does allow you to compile both in the same project.
I have a couple of tests I did on an iPhone 3G almost 2 years ago; there was no documentation or hard numbers around in those days. Not sure how valid they still are, but the source code is posted and attached.
This isn't a very extensive test; I was mainly interested in NSArray vs C array for iterating over a large number of objects.
http://memo.tv/nsarray_vs_c_array_performance_comparison
http://memo.tv/nsarray_vs_c_array_performance_comparison_part_ii_makeobjectsperformselector
You can see the C array is much faster at high iteration counts. Since then I've realized that the bottleneck is probably not the iteration of the NSArray but the sending of the message. I wanted to try methodForSelector: and calling the methods directly to see how big the difference would be, but never got round to it. According to Mike Ash's benchmarks it's just over 5x faster.
I don't have hard data for Objective C, but I do have a good place to look for C++.
C++ started as C with Classes according to Bjarne Stroustrup in his reflection on the early years of C++ (http://www2.research.att.com/~bs/hopl2.pdf), so C++ can be thought of (like Objective-C) as pushing C to its limits for object orientation.
What are those limits? In the 1994-1997 time frame, a lot of researchers figured out that object orientation came at a cost due to dynamic binding, e.g. when C++ functions are marked virtual and there may or may not be child classes that override these functions. (In Java, all methods except constructors are inherently virtual, and there isn't much you can do about it.) In "A Study of Devirtualization Techniques for a Java Just-In-Time Compiler" from researchers at IBM Research Tokyo, they contrast the techniques used to deal with this, including one from Urs Hölzle and Gerald Aigner. Urs Hölzle, in a separate paper with Karel Driesen, had shown that on average 5.7% of time in C++ programs (and up to ~50%) was spent calling virtual functions (e.g. vtables + thunks). He later worked with some Smalltalk researchers on what ended up as the Java HotSpot VM to solve these problems in OO. Some of these features are being backported to C++ (e.g. 'protected' and exception handling).
As I mentioned, C++ is statically typed where Objective-C is duck typed. The performance difference in execution (though not in lines of code) is probably a result of this difference.
This study says that to really get the performance in a CPU intensive game, you have to use C. The linked article is complete with an Xcode project that you can run.
I believe the bottom line is: Use Objective-C where you must interact with the iPhone's functions (after all, putting trampolines everywhere can't be good for anyone), but when it comes to loops, things like vector object classes, or intensive array access, stick with C++ STL or C arrays to get good performance.
I mean, it would be totally silly to see position = [[Vector3 alloc] init];. You're just asking for a performance hit if you use reference counting on basic objects like a position vector.
Yes. C++ reigns supreme in the performance/expressiveness/resource tradeoff.
"I'm looking for hard data, not evangelism" - Google is your best friend.
Obj-C's NSString gets swapped out for a C++ implementation by Apple engineers for performance. On resource-constrained devices, only C++ cuts it as a mainstream OOP language.
NSString stringWithFormat is slow
The Obj-C OOP abstraction gets deconstructed into procedural C structs for performance; otherwise it's an order of magnitude slower than Java. The author is also aware of message caching - yet it's still a no-go. So modeling lots of small player/enemy objects is done either in OOP with C++, or with lots of procedural structs wrapped in a thin OOP layer in Obj-C. You could say Obj-C is the one paradigm that equates procedural + object-oriented programming.
http://ejourneyman.wordpress.com/2008/04/23/writing-a-ray-tracer-for-cocoa-objective-c/

Basic Profanity Filter in Objective C for iPhone

How have you like-minded individuals tackled the basic challenge of filtering profanity? Obviously one can't possibly handle every scenario, but it would be nice to have something at the most basic level as a first line of defense.
In Obj-C I've got:
NSArray *tokens = [text componentsSeparatedByString:@" "];
And then I loop through each token to see if any of the keywords (I've got about 400 in a list) are found within each token.
Realising that false positives are also a problem: if a word is a perfect match, it's flagged as profanity; otherwise, if more than 3 words contain profanity without being perfect matches, the text is also flagged as profanity.
Later on I will use a web service that tackles the problem more precisely, but I really just need something basic. So if you wrote the word penis it would go "yup, naughty naughty, bad word written".
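Cleaned up, the exact-match part of that approach is just a split plus set membership; a minimal sketch (function and variable names are illustrative, and bannedWords is assumed to hold your ~400 lowercased entries):
#import <Foundation/Foundation.h>

// Returns YES if any space-separated token exactly matches an entry in the word list.
static BOOL ContainsProfanity(NSString *text, NSSet *bannedWords) {
    NSArray *tokens = [[text lowercaseString] componentsSeparatedByString:@" "];
    for (NSString *token in tokens) {
        if ([token length] > 0 && [bannedWords containsObject:token]) {
            return YES;
        }
    }
    return NO;
}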
Obscenity Filters: Bad Idea, or Incredibly Intercoursing Bad Idea?
Jeff has an interesting article to consider before embarking on such a piece of code:
http://www.codinghorror.com/blog/2008/10/obscenity-filters-bad-idea-or-incredibly-intercoursing-bad-idea.html
I just have a suggestion for tokenizing the string. Your way works well if the words are all separated by spaces, but that is rarely the case in most usage scenarios, as you normally have to deal with newlines, punctuation, etc. Try this if you are interested:
// Split on punctuation as well as whitespace (including newlines), not just single spaces.
NSMutableCharacterSet *separators = [NSMutableCharacterSet punctuationCharacterSet];
[separators formUnionWithCharacterSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
NSArray *words = [bigString componentsSeparatedByCharactersInSet:separators];
Source: http://www.tech-recipes.com/rx/3418/cocoa-explode-break-nsstring-into-individual-words/
Well, searching in that manner is certainly not the most efficient way to search for profanity... a more efficient approach would be to construct a finite state automaton to detect the words, and run the text once through that FSA. You don't really need to split strings to find profanity, and all that splitting adds extra allocation and copying overhead that you don't need. Also, there may be common patterns in some of the blacklisted words, which you are not exploiting by searching each word individually.
That said, I think 400 words is quite a lot. Who, exactly, is your audience? What if a user has a medical question? Should such questions actually be disallowed? I can only think of a handful of words that would be considered profane in any context, so you might want to rethink the filtering.
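To sketch what a single pass could look like without pulling in a full Aho-Corasick implementation, here is a hedged example that builds a trie of the blacklisted words out of nested dictionaries and restarts the match at each character position; the function names and the sentinel key are invented for the example:
#import <Foundation/Foundation.h>

// Multi-character sentinel so it can never collide with a single-letter child key.
static NSString * const kWordEnd = @"__end__";

// Build a trie from the word list as nested NSMutableDictionaries.
static NSMutableDictionary *BuildTrie(NSArray *words) {
    NSMutableDictionary *root = [NSMutableDictionary dictionary];
    for (NSString *word in words) {
        NSMutableDictionary *node = root;
        NSString *lower = [word lowercaseString];
        for (NSUInteger i = 0; i < [lower length]; i++) {
            NSString *letter = [lower substringWithRange:NSMakeRange(i, 1)];
            NSMutableDictionary *child = [node objectForKey:letter];
            if (child == nil) {
                child = [NSMutableDictionary dictionary];
                [node setObject:child forKey:letter];
            }
            node = child;
        }
        [node setObject:@YES forKey:kWordEnd];
    }
    return root;
}

// Scan the text once; from each start position, walk the trie as far as it matches.
static BOOL ContainsBlacklistedWord(NSString *text, NSDictionary *trie) {
    NSString *lower = [text lowercaseString];
    NSUInteger length = [lower length];
    for (NSUInteger start = 0; start < length; start++) {
        NSDictionary *node = trie;
        for (NSUInteger i = start; i < length; i++) {
            NSString *letter = [lower substringWithRange:NSMakeRange(i, 1)];
            node = [node objectForKey:letter];
            if (node == nil) break;
            if ([node objectForKey:kWordEnd] != nil) return YES;
        }
    }
    return NO;
}
Note that this flags embedded matches too (the "ASSume" problem mentioned below), so it still needs the kind of whole-word or context checks the other answers discuss.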
A couple of things:
An FSA won't necessarily work, depending on how intelligent you want the filter to be
Regexes are generally extremely slow, depending on how many you want to run
400 words is somewhat low, depending on your needs and languages
There are a number of extremely tricky cases to be careful of when filtering, particularly embedding of words such as "ASSume"
My company, Inversoft, builds a commercial filtering solution and it is quite intelligent. It doesn't use regex or FSA, but has a custom built fast-linear processing technology that makes it extremely fast and accurate (4,000+ messages per second). It also has over 600 English words in a number of categories including Slang, Racial Slurs, Drug, Gang, Religious, etc.
If you are looking for an intelligent context-aware solution with support, you should check out Clean Speak from Inversoft. Hooking it up to Obj-C should be simple using the XML WebService.