I'm experimenting with ARM assembly language, and I often use inline assembly blocks in Objective-C code for my first small iOS 6 pet project. While doing this, I've wondered:
Is it reasonable at all to use ARM assembly in large commercial iOS projects?
Can I optimise some bottlenecks with it and beat the compiler?
What are some common guidelines or best practices for using assembly language on iOS: vectorization, NEON, SIMD, media optimisation (image scaling, etc.)?
The iPhone is just a small computer so the trade-offs of using assembler are exactly the same as with any other machine.
The general answer has to be no, you don't want to be using assembler.
Having said that, there are always exceptions where hand-tuned, very low level code might be faster. Make sure you use Instruments to correctly identify your bottlenecks and that you've tuned your code using higher-level techniques before you start.
Use special hardware features where it makes sense, but bear in mind that you don't need to use assembler to get the benefits of most of them.
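For example, the NEON unit can be reached through compiler intrinsics, so you get SIMD without writing assembler. A minimal sketch, assuming an ARM device build with NEON available (the function and its contract are just illustrative):

    // Sketch only: NEON via intrinsics, no hand-written assembly needed.
    // Assumes an ARM device target with NEON (not the simulator); n is a
    // multiple of 4 to keep the example short.
    #include <arm_neon.h>

    void add_arrays(const float* a, const float* b, float* out, int n) {
        for (int i = 0; i < n; i += 4) {
            float32x4_t va = vld1q_f32(a + i);          // load 4 floats
            float32x4_t vb = vld1q_f32(b + i);
            vst1q_f32(out + i, vaddq_f32(va, vb));      // add and store 4 floats
        }
    }

The compiler schedules the registers for you, and Instruments will tell you whether the loop was worth vectorizing in the first place.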
Simple question really, however there doesn't seem to be a straight answer in the current developer documentation.
Does Swift compile to machine language (i.e. assembly), or does it compile to some intermediary form that then runs on a virtual machine?
(I suspect it does, but being unfamiliar with development in Apple's world it is not clear to me like it may be to someone who is.)
Yes, it compiles to machine language by way of LLVM Bitcode and, as @connor said, runs on top of the Objective-C runtime.
Swift not only compiles to native machine code but has also been designed specifically for it, unlike e.g. Java, which was designed specifically as a JITed language. By that I mean Swift achieves its best performance with ahead-of-time compilation, while Java benefits most from JITing.
There are many reasons for these design choices, but among them is that Swift has a much bigger scope than managed languages like Java. It is supposed to work both on desktop computers and on phones with more restricted hardware. You can use Swift as a systems programming language, unlike say C#, Java or Python, because it has minimal runtime requirements and allows fairly detailed control of memory. So in theory one should be able to build an OS kernel with Swift, which would be difficult with say Java.
Swift, just like Objective-C, compiles to native code using LLVM.
A good explanation can be found in the article "Apple's top secret Swift language grew from work to sustain Objective C, which it now aims to replace".
From that article, talking about Swift:
"The compiler is optimized for performance, and the language is optimized for development, without compromising on either."
Swift, like Objective-C, is compiled to machine code that runs on the Objective-C runtime.
I have integrated Lua with my Objective-C code (an iPhone game). The setup was pretty easy, but now I have a little problem with the bridging. I have googled for results, etc., and it seems there isn't anything that works without modifications. I mean, I have checked the luaobjc bridge (it seems pretty old and discontinued), I heard about LuaCocoa but it seems not to work on the iPhone, and Wax is too thick.
My needs are pretty modest: I just need to be able to call Objective-C methods from Lua, and I don't mind having to do extra work to make it work (I don't need a totally automatic bridging system).
So, I have decided to build a little bridge myself based on this page http://anti-alias.me/?p=36. It has key information about how to accomplish what I need, but the tutorial is not complete and I have some doubts about how to deal with method overloading when called from Lua, etc.
Does anybody know whether there is any working bridge between Objective-C and Lua on the iPhone, or how hard it would be to complete the bridge the above site offers?
Any information will be welcomed.
Don't reinvent the wheel!
First, you are correct that luaobjc and some other variants are outdated. A good overview can be found on the LuaCocoa page. LuaCocoa is fine but apparently doesn't support iPhone development, so the only other choice is Wax. Both LuaCocoa and Wax are runtime bridges, which means that you can (in theory) access every Objective-C class and method in Lua at the expense of runtime performance.
For games and from my experience the runtime performance overhead is so significant that it doesn't warrant the use of any runtime binding library. From a perspective of why one would use a scripting language, both libraries defy the purpose of favoring a scripting language over a lower-level language: they don't provide a DSL solution - which means you're still going to write what is essentially Objective-C code but with a slightly different syntax, no runtime debugging support, and no code editing support in Xcode. In other words: runtime Lua binding is a questionable solution at best, and has lots of cons going against it. Runtime Lua bindings are particularly unsuited for fast-paced action games aiming at a constantly high framerate.
What you want is a static binding. Static bindings at a minimum require you to declare what kind of methods will be available in Lua code. Some binding libraries scan your header files, others require you to provide a special declaration file similar to a header file. Most binding libraries can use both approaches. The benefit is optimal runtime performance, and being able to actually design what classes, methods and variables Lua scripts have access to.
There are but 3 candidates to bind Lua code to an iPhone app. To be fair, there are a lot more, but most have one or more crucial flaws, are not stable, serve only special purposes, or simply don't work for iPhone apps. The candidates are:
tolua and tolua++
luabind
SWIG
Big disadvantage shared by all Lua static binding libraries: none of them can bind directly to Objective-C code. All require an additional C or C++ layer that ultimately interfaces with your Objective-C code. This has to do with how Objective-C works as a language and how small a role it has played (so far) when it comes to embedding Lua in Objective-C apps.
I recently evaluated all three binding libraries and came to enjoy SWIG. It is very well documented but has a bit of a learning curve. I believe that learning curve is warranted, because SWIG can be used to combine nearly any programming and scripting language, so it can be advantageous to know how to use it for future projects. Plus, once you understand its definition-file format, it turns out to be very easy to use (especially when compared to luabind) and considerably more flexible than tolua.
OK, a bit late to the party, but in case others also come to this post late, here's another approach to add to the choices available: hand-code your Lua APIs.
I gave a lecture on this topic where I live-coded some basic Lua bindings in an hour. It's not hard. From the lecture I made a set of video tutorials that show how to get started.
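To give a rough idea of what hand-coding a binding looks like, here is a minimal sketch using the plain Lua C API (assuming Lua 5.x is compiled into your target; playSound and the engine call behind it are made-up names, not from any real library):

    // Minimal hand-coded Lua binding sketch (Lua 5.x assumed).
    #include <cstdio>
    #include <string>
    #include <lua.hpp>   // lua.h, lauxlib.h, lualib.h with C linkage

    // Hypothetical engine function we want scripts to be able to call.
    static void enginePlaySound(const std::string& name, float volume) {
        std::printf("playing %s at %.2f\n", name.c_str(), volume);
    }

    // Lua-facing wrapper: pulls arguments off the Lua stack, calls the engine.
    static int l_playSound(lua_State* L) {
        const char* name = luaL_checkstring(L, 1);
        float volume = static_cast<float>(luaL_optnumber(L, 2, 1.0));
        enginePlaySound(name, volume);
        return 0;   // number of values returned to Lua
    }

    int main() {
        lua_State* L = luaL_newstate();
        luaL_openlibs(L);
        lua_register(L, "playSound", l_playSound);            // expose to scripts
        luaL_dostring(L, "playSound('explosion.wav', 0.8)");  // script would normally be a file
        lua_close(L);
        return 0;
    }

Each exposed function is one small wrapper like l_playSound, which is exactly why the list of pros below talks about exposing only the things you want.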
The approach of using a bindings-generation tool like SWIG is a good one if you already have exactly the APIs that you need to call written in Objective-C and it makes sense to bring all those same APIs over into Lua.
The pros of the hand-coding approach:
your project just compiles with one standard Xcode target
your project is all C & Obj-C
the Lua is just data shipped along with your images
no fiddling with "do I check in generated code" to Git
you create Lua functions for just the things you want
you can easily have hosted scripts that live inside an object
the API is under your control and is well known
don't expose engine APIs to the level-building team/tools
The last point is just that if you have detail functions that only make sense at the engine level and you don't want to see them when coding the gameplay, you'd need to tell SWIG not to bind them.
Steffen's answer is perfect, and this approach is just another option that may suit some folks better, depending on the project.
I'm moving away from strict Android development and wanting to create iPhone applications. My understanding is that I can code the backend of iOS applications in C/C++ and also that I can use the NDK to include C/C++ code in Android apps. My question however is how? I've googled quite a bit and I can't find any clear and concise answers.
When looking at sample code for the NDK, it seems that all the function names etc. are Android (or at least Java) specific and so I would not be able to use this C/C++ backend to develop an iPhone frontend?
I'd appreciate some clarification on this issue and if at all available some code to help me out? (even just a simple Hello World that reads a string from a C/C++ file and displays it in an iOS and Android app).
Thanks guys
Chris
Note that I almost exclusively work on "business/utility/productivity" applications; things that rely heavily on fairly standard UI elements and expect to integrate well with their platform. This answer reflects that. See Mitch Lindgren's comment to Shaggy Frog's answer for good comments for game developers, who have a completely different situation.
I believe @Shaggy Frog is incorrect here. If you have effective, tested code in C++, there is no reason not to share it between Android and iPhone, and I've worked on projects that do just that and it can be very successful. There are dangers that should be avoided, however.
Most critically, be careful of "lowest common denominator." Self-contained, algorithmic code, shares very well. Complex frameworks that manage threads, talk on the network, or otherwise interact with the OS are more challenging to do in a way that doesn't force you to break the paradigms of the platform and shoot for the LCD that works equally badly on all platforms. In particular, I recommend writing your networking code using the platform's frameworks. This often requires a "sandwich" approach where the top layer is platform-specific and the very bottom layer is platform-specific, and the middle is portable. This is a very good thing if designed carefully.
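To make the sandwich concrete, here is a minimal sketch of what the portable middle can look like. The names (HttpClient, SyncEngine) are invented for illustration; the real platform layers would be an Objective-C++ subclass wrapping NSURLSession on iOS and a JNI-backed subclass on Android, so only a portable stub is shown here to keep the sketch self-contained:

    // Portable middle layer: no platform headers appear here.
    #include <functional>
    #include <string>

    class HttpClient {
    public:
        virtual ~HttpClient() = default;
        virtual void get(const std::string& url,
                         std::function<void(int status, std::string body)> done) = 0;
    };

    class SyncEngine {
    public:
        explicit SyncEngine(HttpClient& http) : http_(http) {}
        void refresh() {
            http_.get("https://example.com/feed", [](int status, std::string body) {
                // portable parsing / business logic goes here
            });
        }
    private:
        HttpClient& http_;
    };

    // Stand-in for the platform-specific bottom layer, so the sketch runs as-is.
    class FakeHttpClient : public HttpClient {
    public:
        void get(const std::string&, std::function<void(int, std::string)> done) override {
            done(200, "{\"items\":[]}");   // pretend response
        }
    };

    int main() {
        FakeHttpClient http;
        SyncEngine engine(http);
        engine.refresh();
    }

The portable code compiles unchanged in both projects; each one links in its own HttpClient built on the platform's networking framework.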
Thread management and timers should also be done using the platform's frameworks. In particular, Java uses threads heavily, while iOS typically relies on its runloop to avoid threads. When iOS does use threads, GCD is strongly preferred. Again, the solution here is to isolate the truly portable algorithms, and let platform-specific code manage how it gets called.
If you have a complex, existing framework that is heavily threaded and has a lot of network or UI code spread throughout it, then sharing it may be difficult, but my recommendation still would be to look for ways to refactor it rather than rewrite it.
As an iOS and Mac developer who works extensively with cross-platform code shared on Linux, Windows and Android, I can say that Android is by far the most annoying of the platforms to share with (Windows used to hold this distinction, but Android blew it away). Android has had the most cases where it is not wise to share code. But there are still many opportunities for code reuse and they should be pursued.
While the sentiment is sound (you are following the policy of Don't Repeat Yourself), it's only pragmatic if you can share that code in an efficient manner. In this case, it's not really possible to have a "write once" approach to cross-platform development where the code for two platforms needs to be written in different languages (C/C++/Obj-C on iPhone, Java for Android).
You'll be better off writing two different codebases in this case (in two different languages). Word of advice: don't write your Java code like it's C++, or your C++ code like it's Java. I worked at a company a number of years ago that had a product they "ported" from Java to C++; they didn't write the C++ code like it was C++, and it caused all sorts of problems, not to mention being hard to read.
Writing a shared code base is really practical in this situation. There is some overhead to setting it up and keeping it organized, but the major benefits are these: 1) reducing the amount of code by sharing common functionality, and 2) sharing bug fixes to the common code base. I'm currently aware of two routes that I'm considering for a project: use native C/C++ (gains in speed at the expense of losing garbage collection and setting targets per processor), or use MonoDroid/MonoTouch, which provide C# bindings for each OS's platform functionality (I'm uncertain of how mature this is).
If I was writing a game using 3d I'd definitely use approach #1.
I posted this same answer to a similar question but I think it's relevant so...
I use BatteryTech for my platform-abstraction stuff and my project structure looks like this:
On my PC:
gamename - contains just the common code
gamename-android - holds mostly BatteryTech's android-specific code and Android config, builders point to gamename project for common code
gamename-win32 - Just for building out to Windows, uses code from gamename project
On my Mac:
gamename - contains just the common code
gamename-ios - The iPhone/iPad build, imports common code
gamename-osx - The OS X native build, imports common code.
And I use SVN to share between my PC and Mac. My only real problems are when I add classes to the common codebase in Windows and then update on the Mac to pull them down from SVN. Xcode doesn't have a way to automatically add them to the project without scripts, so I have to pull them in manually each time, which is a pain but isn't the end of the world.
All of this stuff comes with BatteryTech so it's easy to figure out once you get it.
Besides using C/C++ to share a native library:
If you are developing cross-platform apps like games, I suggest using a Mono-based framework like Unity3D.
If instead you are developing business apps that require a native UI and you want to share business-logic code across mobile platforms, I suggest using an embedded Lua engine as the client's business-logic core.
The client UI is still native and gets the best UI performance, i.e. Java on Android and Objective-C on iOS, etc.
The logic is shared via the same Lua scripts on all platforms.
So the Lua layer is similar to client-side services (compare to server-side services).
-- Anderson Mao, 2013-03-28
Though I don't use these myself, as most of the stuff I write won't port well, I would recommend using something like Appcelerator or Red Foundry to build basic applications that can then be created natively on either platform. In these cases, you're not writing Objective-C or Java; you use some kind of intermediary. Note that if you move outside the box they've confined you to, you'll need to write your own code closer to the metal.
I'm doing some research on multicore processors; specifically I'm looking at writing code for multicore processors and also compiling code for multicore processors.
I'm curious about the major problems in this field that would currently prevent a widespread adoption of programming techniques and practices to fully leverage the power of multicore architectures.
I am aware of the following efforts (some of these don't seem directly related to multicore architectures, but seem to have more to do with parallel-programming models, multi-threading, and concurrency):
Erlang (I know that Erlang includes constructs to facilitate concurrency, but I am not sure how exactly it is being leveraged for multicore architectures)
OpenMP (seems mostly related to multiprocessing and leveraging the power of clusters)
Unified Parallel C
Cilk
Intel Threading Building Blocks (this seems to be directly related to multicore systems; makes sense as it comes from Intel. In addition to defining certain programming constructs, it also seems to have features that tell the compiler to optimize the code for multicore architectures)
In general, from what little experience I have with multithreaded programming, I know that programming with concurrency and parallelism in mind is definitely a difficult concept. I am also aware that multithreaded programming and multicore programming are two different things. In multithreaded programming you are ensuring that the CPU does not remain idle on a single-CPU system (as James pointed out, the OS can schedule different threads to run on different cores -- but I'm more interested in describing the parallel operations from the language itself, or via the compiler). As far as I know, you cannot truly do parallel operations that way. On multicore systems, you should be able to perform truly parallel operations.
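To make the distinction concrete, here is a minimal sketch (plain C++11 threads, nothing vendor-specific) of the truly parallel case: each core gets its own slice of the data, and on a multicore machine the slices really do run at the same time:

    // Splits a sum across the hardware threads the machine reports.
    #include <algorithm>
    #include <cstdio>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<int> data(1000000, 1);
        unsigned cores = std::max(1u, std::thread::hardware_concurrency());
        std::vector<long long> partial(cores, 0);
        std::vector<std::thread> workers;

        std::size_t chunk = data.size() / cores;
        for (unsigned c = 0; c < cores; ++c) {
            std::size_t begin = c * chunk;
            std::size_t end = (c + 1 == cores) ? data.size() : begin + chunk;
            workers.emplace_back([&, c, begin, end] {
                // each thread touches only its own slice and its own result slot
                partial[c] = std::accumulate(data.begin() + begin, data.begin() + end, 0LL);
            });
        }
        for (auto& w : workers) w.join();

        long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
        std::printf("sum = %lld over %u cores\n", total, cores);
        return 0;
    }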
So it seems to me that currently the problems facing multicore programming are:
Multicore programming is a difficult concept that requires significant skill
There are no native constructs in today's programming languages that provide a good abstraction to program for a multicore environment
Other than Intel's TBB library I haven't found efforts in other programming-languages to leverage the power of multicore architectures for compilation (for example, I don't know if the Java or C# compiler optimizes the bytecode for multicore systems or even if the JIT compiler does that)
I'm interested in knowing what other problems there might be, and if there are any solutions in the works to address these problems. Links to research papers (and things of that nature) would be helpful. Thanks!
EDIT
If I had to condense my question down to one sentence, it would be this: What are the problems that face multicore programming today and what research is going on in the field to solve these problems?
UPDATE
It also seems to me that there are three levels at which multicore needs to be addressed:
Language level: Constructs/concepts/frameworks that abstract parallelization and concurrency and make it easy for programmers to express the same
Compiler level: If the compiler is aware of what architecture it is compiling for, it can optimize the compiled code for that architecture.
OS level: The OS optimizes the running process and perhaps schedules different threads/processes to run on different cores.
I've searched on ACM and IEEE and have found a few papers. Most of them talk about how difficult it is to think concurrently and also how current languages don't have a proper way to express concurrency. Some have gone so far as to claim that the current model of concurrency that we have (threads) is not a good way to handle concurrency (even on multiple cores). I'm interested in hearing other views.
I'm curious about the major problems in this field that would currently prevent a widespread adoption of programming techniques and practices to fully leverage the power of multicore architectures.
Inertia. (BTW: that's pretty much the answer to all "what does prevent the widespread adoption" questions, whether that be models of parallel programming, garbage collection, type safety or fuel-efficient automobiles.)
We have known since the 1960s that the threads+locks model is fundamentally broken. By ~1980, we had about a dozen better models. And yet, the vast majority of languages that are in use today (including languages that were newly created from scratch long after 1980), offer only threads+locks.
The major problems with multicore programming are the same as with writing any other concurrent application, but whereas it used to be uncommon to have multiple CPUs in a computer, it is now hard to find any modern computer with only one core, so taking advantage of multicore, multi-CPU architectures brings new challenges.
But this problem is an old one: whenever computer architectures go beyond compilers, it seems the fallback solution is to move back toward functional programming, since that programming paradigm, if strictly followed, can produce very parallelizable programs, as you don't have any global mutable variables, for example.
But not all problems can be solved easily using FP, so the goal then is how to make other programming paradigms easy to use on multicores.
The first thing is that many programmers have avoided writing good multithreaded applications, so there isn't a large pool of well-prepared developers, as they have learned habits that make this kind of coding harder to do.
But, as with most changes to the CPU, you can look at how to change the compiler, and for that you can look at Scala, Haskell, Erlang and F#.
For libraries you can look at the Parallel Extensions framework from Microsoft as a way to make it easier to do concurrent programming.
The issue is at work, but recently either IEEE Spectrum or IEEE Computer had articles on multicore programming issues, so look at what IEEE and ACM articles have been written on these issues to get more ideas as to what is being looked at.
I think the biggest impediment will be the difficulty of getting programmers to change their language, as FP is very different from OOP.
One place for research, besides developing languages that will work well this way, is how to handle multiple threads accessing memory; as with much in this area, Haskell seems to be at the forefront in testing ideas for this, so you can look at what is going on with Haskell.
Ultimately there will be new languages, and it may be that we have DSLs to help abstract the developer more, but how to educate programmers on this will be a challenge.
UPDATE:
You may find Chapter 24, "Concurrent and multicore programming", of Real World Haskell of interest: http://book.realworldhaskell.org/read/concurrent-and-multicore-programming.html
One of the answers mentioned the Parallel Extensions for the .NET Framework, and since you mentioned C#, it's definitely something I would investigate. Microsoft has done some interesting things there, though I have to think many of their efforts seem better suited as language enhancements in C# than as a separate and distinct library for concurrent programming. But I think their efforts are worth applauding, and I respect that we're early here. (Disclaimer: I used to be the marketing director for Visual Studio about 3 years ago)
The Intel Threading Building Blocks are also quite interesting (Intel recently released a new version, and I'm excited to head down to the Intel Developer Forum next week to learn more about how to use it properly).
Lastly, I work for Corensic, a software quality startup in Seattle. We've got a tool called Jinx that is designed to detect concurrency errors in your code. A 30-day trial edition is available for Windows and Linux, so you might want to check it out. (www.corensic.com)
In a nutshell, Jinx is a very thin hypervisor that, when activated, slips in between the processor and operating system. Jinx then intelligently takes slices of execution and runs simulations of various thread timings to look for bugs. When we find a particular thread timing that will cause a bug to happen, we make that timing "reality" on your machine (e.g., if you're using Visual Studio, the debugger will stop at that point). We then point out the area in your code where the bug was caused. There are no false positives with Jinx. When it detects a bug, it's definitely a bug.
Jinx works on Linux and Windows, and in both native and managed code. It is language and application platform agnostic and can work with all your existing tools.
If you check it out, please send us feedback on what works and doesn't work. We've been running Jinx on some big open source projects and already are seeing situations where Jinx can find bugs 50-100 times faster than simply stress testing code.
The bottleneck of any high-performance application (written in C or C++) designed to make efficient use of more than one processor/core is the memory system (caches and RAM). A single core usually saturates the memory system with its reads and writes, so it is easy to see why adding extra cores and threads can cause an application to run slower. If a queue of people can pass through a door one at a time, adding extra queues will not only clog the door but also make the passage of any one individual through the door less efficient.
The key to any multi-core application is optimization of and economizing on memory accesses. This means structuring data and code to work as much as possible inside their own caches, where they don't disturb the other cores with accesses to the common cache (L3) or RAM. Once in a while a core needs to venture there, but the trick is to reduce those situations as much as possible. In particular, data needs to be structured around and adapted to cache lines and their sizes (currently 64 bytes), and code needs to be compact and not call and jump all over the place, which also disrupts pipelines.
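One concrete instance of the cache-line point is false sharing. A minimal sketch (the 64-byte line size and the iteration count are just illustrative):

    // Two counters, each padded onto its own 64-byte cache line. Without the
    // alignas(64) both counters share a line and the two cores keep stealing
    // it from each other, even though they never touch each other's data.
    #include <atomic>
    #include <cstdio>
    #include <thread>

    struct Padded {
        alignas(64) std::atomic<long> value{0};
    };

    int main() {
        Padded counters[2];
        auto work = [&](int i) {
            for (long n = 0; n < 10000000; ++n)
                counters[i].value.fetch_add(1, std::memory_order_relaxed);
        };
        std::thread t0(work, 0), t1(work, 1);
        t0.join();
        t1.join();
        std::printf("%ld %ld\n", counters[0].value.load(), counters[1].value.load());
        return 0;
    }

With the padding each counter gets its own line; remove it and the two threads invalidate each other's cache line on every increment, even though they never share data.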
My experience is that efficient solutions are unique to the application in question. The generic guidelines above are a basis on which to construct code, but the tweaks that result from profiling will not be obvious to those who were not themselves involved in the optimization work.
Look up fork/join frameworks and work-stealing runtimes. They are two names for the same, or at least related, approach: recursively subdivide large tasks into lightweight units, so that all available parallelism is exploited, without having to know in advance how much parallelism there is. The idea is that it should run at serial speed on a uniprocessor but get a linear speedup with multiple cores.
Sort of a horizontal analogue of cache-oblivious algorithms if you look at it right.
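For illustration, here is a minimal divide-and-conquer sketch in that spirit using nothing more than std::async. A real work-stealing runtime (TBB, Cilk) schedules the subtasks far more cheaply, so treat this as the shape of the idea rather than its performance:

    // Recursively split a range; the halves run in parallel (fork) and the
    // results are combined (join). Small ranges fall back to serial code.
    #include <cstddef>
    #include <future>
    #include <numeric>
    #include <vector>

    long long parallel_sum(const int* first, const int* last) {
        const std::ptrdiff_t n = last - first;
        if (n < 100000)                                   // serial cutoff
            return std::accumulate(first, last, 0LL);
        const int* mid = first + n / 2;
        auto right = std::async(std::launch::async, parallel_sum, mid, last);  // fork
        long long left = parallel_sum(first, mid);
        return left + right.get();                        // join
    }

    int main() {
        std::vector<int> data(2000000, 1);
        return parallel_sum(data.data(), data.data() + data.size()) == 2000000 ? 0 : 1;
    }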
But I'd say the main problem facing multicore programming is that the great majority of computations remain stubbornly serial. There's just no way to throw multiple cores at those computations and make them stick.
I have a set of functionality (classes) that I would like to share with an application I'm building for the iPhone and for the Blackberry (Java). Does anyone have any best practices on doing this?
This is not going to be possible as far as I understand your question - the binary formats for the iPhone and Java are not compatible, nor even with a native library on a BlackBerry device.
This is not like building for OS X, where you can use Java; unfortunately the iPhone doesn't support Java.
The best idea is probably to build your library in Objective-C and then port it to Java, which is an easier transition than going the other way. If you program in Objective-C and make sure your code has no memory leaks, then the changes are not so complex.
If you keep the structure of your classes the same then you should find maintenance much simpler - fix a bug in the Java and you should find it easy to check for the same bug in the ObjC methods etc.
Hope this helps - sorry that it is not all good news.
As Grouchal mentioned - you are not going to be able to share any physical components of your application between the two platforms. However you should be able to share the logical design of your application if you carefully separate it into highly decoupled layers. This is still a big win because the logical application design probably accounts for a large part of your development effort.
You could aim to wrap the sections of the platform specific APIs (iPhone SDK etc.) that you use with your own interfaces. In doing so you are effectively hiding the platform specific libraries and making your design and code easier to manage when dealing with differences in the platforms.
With this in place you can write your core application code so that it appears very similar on either platform - even though they are written in different languages. I find Java and Objective-C to be very similar conceptually (at least at the level at which I use it) and would expect to be able to achieve parity with at least the following:
An almost identical set of Java and Objective-C classes with the same names and responsibilities
Java/Objective-C classes with similarly named methods
Java/Objective-C methods with the same responsibilities and logical implementations
This alone will make the application easier to understand across platforms. Of course the code will always look very different at the edges - i.e when you start dealing with the view, threading, networking etc. However, these concerns will be handled by your API wrappers which once developed should have fairly static interfaces.
You might also stand to benefit if you later develop further applications that need to be delivered to both platforms, as you might find that you can reuse or extend your API wrappers.
If you are writing a client-server type application you should also try and keep as much logic on your server as possible. Keep the amount of extra business logic on the device to a minimum. The more you can just treat the device as a view layer the less porting you'll have to do over all.
Aside from that, following the same naming conventions and package structure across all the projects helps greatly, especially for your framework code.
The UI API's and usability paradigms for BlackBerry and iPhone are so different that it won't be possible in most cases to directly port this kind of logic between apps. The biggest mistake one could make (in my opinion) is to try and transplant a user experience designed for one mobile platform on to another. The way people interact with BlackBerrys vs iPhones is very different so be prepared to revamp your user experience for each mobile platform you want to deploy on.
Hope this is helpful.
It is possible to write C++ code that works in both a BB10 Native app and an iOS app.
Xcode would need to see the C++ files as Objective-C++ code.
I am currently working on such a task in my spare time. I have not yet completed it enough to either show or know if it is truly possible, but I haven't run in to any road-blocks yet.
You will need to be disciplined to write good cross-platform code designed w/ abstractions for platform-specific features.
My general pattern is that I have "class Foo" to do cross platform stuff, and a "class FooPlatform" to do platform specific stuff.
Class "Foo" can call class "FooPlatform" which abstracts out anything platform specific.
The raw cross-platform code is not compilable on its own (see the sketch below).
Separate BB10 and XCode projects are created in their respective IDEs.
Each project implements a thin (few [dozen] line) "class FooPlatform" and references the raw cross-platform code.
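A minimal sketch of that split, with the platform side stubbed out so it compiles anywhere (the logging example itself is hypothetical):

    // Shared header: the only view the cross-platform code has of the platform.
    #include <cstdio>
    #include <string>

    class FooPlatform {
    public:
        static void log(const std::string& msg);   // defined once per project
    };

    // Shared, cross-platform logic: compiles unchanged in the BB10 and iOS projects.
    class Foo {
    public:
        void saveScore(int score) {
            FooPlatform::log("saving score " + std::to_string(score));
            // ... portable game/persistence logic here ...
        }
    };

    // Thin platform file (one per project). On iOS this would live in an .mm
    // file and call NSLog; on BB10 it would call the native logger. Stubbed here.
    void FooPlatform::log(const std::string& msg) {
        std::fprintf(stderr, "%s\n", msg.c_str());
    }

    int main() {
        Foo foo;
        foo.saveScore(42);
        return 0;
    }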
When I get something working that I can show I will post again here...