Application entry point and call sequence when an .ipa starts up - iPhone

I recently migrated from C programming to iPhone development with Xcode. It seems the IDE hides a lot of what is really going on, and I'm curious about what happens under the hood.
I found a file named main.m in my project. Inside its main() function, UIApplicationMain(argc, argv, nil, nil); is invoked. My question is: what tasks does UIApplicationMain complete? Can I step into this function to trace its execution?
Any hints will be appreciated.
Thanks and best regards.

iOS Cocoa Touch apps use an event-driven paradigm, instead of being strictly sequential procedural code.
UIApplicationMain() tells the OS and the Objective-C runtime to set things up (plist'ed defaults, the main nib, the run loop, etc.) and then has the main run loop start dispatching to methods within your app. Put breakpoints at the beginning of all your (init/load/event-handling) methods to see what the OS starts calling first.
The OS does a whole bunch of things you can't step into. You have to wait until it's good and ready to call methods within your app.
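For example, here is a minimal sketch of where those first callbacks land (MyAppDelegate is just a placeholder class name); a breakpoint or NSLog in each method shows the order in which the OS calls into your code:

    // MyAppDelegate.m -- placeholder delegate class; breakpoint/log each callback
    #import "MyAppDelegate.h"

    @implementation MyAppDelegate

    - (BOOL)application:(UIApplication *)application
        didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
        // Typically the first of "your" methods the run loop dispatches to.
        NSLog(@"application:didFinishLaunchingWithOptions:");
        return YES;
    }

    - (void)applicationDidBecomeActive:(UIApplication *)application {
        NSLog(@"applicationDidBecomeActive:");
    }

    - (void)applicationWillResignActive:(UIApplication *)application {
        NSLog(@"applicationWillResignActive:");
    }

    @end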

UIApplicationMain is the entry point for a Cocoa Touch app. It sets up the app's principal application class and its delegate, and starts running the event loop. It doesn't return.
From the documentation:
This function instantiates the application object from the principal class and instantiates the delegate (if any) from the given class and sets the delegate for the application. It also sets up the main event loop, including the application’s run loop, and begins processing events. If the application’s Info.plist file specifies a main nib file to be loaded, by including the NSMainNibFile key and a valid nib file name for the value, this function loads that nib file.
You don't really want to step into the guts of that specific function, because there's just a whole stew of binary instructions to look at that won't be enlightening. But the way to think of it is that it's the "container" function for your whole app. Once it does its setup work, it starts event processing in a run loop, and calls out into your code when appropriate. It will be at the bottom of the call stack for all your app's code on the main thread.
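As a concrete illustration (the class name and file layout are assumptions, not taken from the question's project), the two nil arguments simply mean "use the default UIApplication class" and "let the main nib supply the delegate". Spelled out explicitly, a typical main.m looks something like this:

    // main.m -- the 3rd argument names the principal application class
    // (nil means UIApplication) and the 4th names the delegate class
    // (nil means the main nib provides it). AppDelegate is a placeholder.
    #import <UIKit/UIKit.h>
    #import "AppDelegate.h"

    int main(int argc, char *argv[]) {
        @autoreleasepool {
            // Does not return under normal operation: it runs the main
            // event loop until the process exits.
            return UIApplicationMain(argc, argv, nil,
                                     NSStringFromClass([AppDelegate class]));
        }
    }

(Older templates wrap the call in an NSAutoreleasePool instead of @autoreleasepool, but the call itself is the same.)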
Check out the great diagrams on this page for more about an app's lifecycle, which should give you some hints about where to put breakpoints in your own code, and how they will be called:
http://www.codeproject.com/KB/iPhone/ApplicationLifeCycle.aspx

Related

How to write unit tests for the AppDelegate?

I am doing a lot of setup inside my app delegate (mainly for Core Data), inside application:didFinishLaunchingWithOptions:. I was curious how I would go about testing the code inside the app delegate? Thanks
Step one: Stop using your regular application delegate during testing. This avoids the "it will be called at launch" problem, and will likely also speed up your tests. See https://qualitycoding.org/ios-app-delegate-testing/
Step two: Now that your regular application delegate isn't invoked when tests are launched, directly call its methods from tests.
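A minimal sketch of the idea behind that article (TestingAppDelegate is an assumed class name that would live only in the test bundle):

    // main.m -- if the test bundle is loaded, a class named TestingAppDelegate
    // (placeholder) exists, so use it and skip the real delegate's launch work.
    #import <UIKit/UIKit.h>
    #import "AppDelegate.h"

    int main(int argc, char *argv[]) {
        @autoreleasepool {
            Class testDelegate = NSClassFromString(@"TestingAppDelegate");
            NSString *delegateClass = testDelegate
                ? @"TestingAppDelegate"
                : NSStringFromClass([AppDelegate class]);
            return UIApplicationMain(argc, argv, nil, delegateClass);
        }
    }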
Move the functionality into smaller functions or other classes that you can test.
If you keep things in the App Delegate class, you can access them the normal way, since the unit tests are linked against the app and the app actually runs. But you cannot call application:didFinishLaunchingWithOptions: yourself and expect it to work; it will already have been called by iOS at launch, like normal.
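For example, a test along these lines can inspect what launch has already set up (the delegate class, its managedObjectContext property, and XCTest as the framework are all assumptions):

    // Runs inside the host app after it has launched normally.
    #import <XCTest/XCTest.h>
    #import <UIKit/UIKit.h>
    #import "MyAppDelegate.h"   // placeholder delegate exposing a Core Data context

    @interface AppDelegateAccessTests : XCTestCase
    @end

    @implementation AppDelegateAccessTests

    - (void)testLaunchBuiltTheCoreDataStack {
        MyAppDelegate *delegate =
            (MyAppDelegate *)[[UIApplication sharedApplication] delegate];
        XCTAssertNotNil(delegate.managedObjectContext,
                        @"launch should have created the managed object context");
    }

    @end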

Why must UIKit operations be performed on the main thread?

I am trying to understand why UI operations can't be performed on multiple threads. Is this also a requirement in other frameworks like OpenGL or cocos2d?
How about other languages like C# and JavaScript? I tried searching Google, but people mention something about POSIX threads which I don't understand.
In Cocoa Touch, the UIApplication, i.e. the instance of your application, is attached to the main thread because this thread is created by UIApplicationMain(), the entry point function of Cocoa Touch. It sets up the main event loop, including the application’s run loop, and begins processing events. The application's main event loop receives all the UI events, i.e. touches, gestures, etc.
From the docs for UIApplicationMain():
This function instantiates the application object from the principal class and instantiates the delegate (if any) from the given class and sets the delegate for the application. It also sets up the main event loop, including the application’s run loop, and begins processing events. If the application’s Info.plist file specifies a main nib file to be loaded, by including the NSMainNibFile key and a valid nib file name for the value, this function loads that nib file.
These application UI events are then forwarded to UIResponders, following the responder chain, usually UIApplication -> UIWindow -> UIViewController -> UIView -> subviews (UIButton, etc.).
Responders handle events like button presses, taps, pinch zooms, swipes, etc., which get translated into changes in the UI. Hence, as you can see, this chain of events occurs on the main thread, which is why UIKit, the framework that contains the responders, should operate on the main thread.
From the UIKit docs again:
For the most part, UIKit classes should be used only from an application’s main thread. This is particularly true for classes derived from UIResponder or that involve manipulating your application’s user interface in any way.
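In practice that means doing the slow work off the main thread and hopping back to the main queue for anything that touches UIKit. A rough sketch with GCD (the method name, the avatarView property, and the URL are placeholders):

    - (void)reloadAvatar {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            // Background queue: networking and decoding are fine here.
            NSURL *url = [NSURL URLWithString:@"https://example.com/avatar.png"];
            NSData *data = [NSData dataWithContentsOfURL:url];
            UIImage *image = [UIImage imageWithData:data];

            dispatch_async(dispatch_get_main_queue(), ^{
                // Main thread: the only place UIKit objects should be touched.
                self.avatarView.image = image;
            });
        });
    }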
EDIT
Why does drawRect: need to be on the main thread?
drawRect: is called by UIKit as part of UIView's lifecycle, so drawRect: is bound to the main thread. Drawing this way is expensive, because it is done by the CPU on the main thread. Hardware-accelerated graphics are provided via CALayer (Core Animation).
CALayer, on the other hand, acts as a backing store for the view. The view then just displays a cached bitmap of its current state. Any change to the view's properties results in changes to the backing store, which are performed by the GPU on the backing copy. However, the view still needs to provide the initial content and periodically update it. I have not really worked with OpenGL, but I think it also uses layers (I could be wrong).
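To make that concrete, here is a sketch of a custom view (the class and property are placeholders): you never call drawRect: yourself; you call setNeedsDisplay on the main thread and UIKit invokes drawRect: for you during the next display pass.

    #import <UIKit/UIKit.h>

    @interface ProgressRingView : UIView              // placeholder class
    @property (nonatomic, assign) CGFloat progress;   // 0.0 .. 1.0
    @end

    @implementation ProgressRingView

    - (void)setProgress:(CGFloat)progress {
        _progress = progress;
        [self setNeedsDisplay];   // must be called on the main thread
    }

    - (void)drawRect:(CGRect)rect {
        // Called by UIKit on the main thread; draws the current progress arc.
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetLineWidth(ctx, 4.0);
        [[UIColor blueColor] setStroke];
        CGContextAddArc(ctx, CGRectGetMidX(rect), CGRectGetMidY(rect),
                        CGRectGetWidth(rect) / 2 - 4,
                        0, self.progress * 2 * M_PI, 0);
        CGContextStrokePath(ctx);
    }

    @end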
I have tried to answer this to the best of my knowledge. Hope that helps!
From https://www.objc.io/issues/2-concurrency/thread-safe-class-design/:
It’s a conscious design decision from Apple’s side to not have UIKit be thread-safe. Making it thread-safe wouldn’t buy you much in terms of performance; it would in fact make many things slower. And the fact that UIKit is tied to the main thread makes it very easy to write concurrent programs and use UIKit. All you have to do is make sure that calls into UIKit are always made on the main thread.
So, according to this, the fact that UIKit objects must be accessed on the main thread is a design decision by Apple in favor of performance.
C# behaves the same (see e.g. here: Keep the UI thread responsive). UI updates have to be done on the UI thread - most other things should be done in the background when possible.
If that weren't the case, there would probably be synchronization hell between all the updates that have to be made to the UI...
Every system, every library, needs to be concerned about thread safety and must do things to ensure thread safety, while at the same time looking after correctness and performance as well.
In the case of the iOS and MacOS X user interface, the decision was made to make the UI thread safe by only allowing UI methods to be called and executed on the main thread. And that's it.
Since there are lots of complicated things going on that would need at least serialisation to prevent total chaos from happening, I don't see very much gained from allowing UI on a background thread.
Because you want the user to be able to see UI changes as they happen. If you were able to perform UI changes on a background thread and display them when complete, it would seem as though the app wasn't behaving right.
All non-UI operations (or at least the ones that are very costly, like downloading things or making database queries) should take place on a background thread, whereas all UI changes must always happen on the main thread to provide as smooth a user experience as possible.
I don't know what it's like in C# for Windows Phone apps, but I would expect it to be the same. On Android the system won't even let you do things like downloading on the main thread, making you create a background thread directly.
As a rule of thumb - when you think main thread, think "what the user sees".

ZXing Library for iPhone

Can I use the ZXing library to scan a QR code in the background of my iPhone app? I do not want the camera overlay with the square that looks for the QR code and the cancel button (as shown in the ScanTest example). What I need is for a press of the scan button to activate reading of the QR code; and when the QR code is read, how do I return the text to my application so I can display it in a UILabel on the screen?
Can anyone show some example code in Objective-C for this? Thanks.
I did something similar, and can provide you with some guidance, but can't share source code.
Take a look at the ZXingWidgetController .mm/.h files. This is a fully functioning QR code scanning app that you can compile, so it can be reverse engineered into containing just the backend code. Edit the .h so the class extends NSObject instead of UIViewController, then delete any class properties and instance variables that are GUI objects.
That will cause Xcode to mark all the methods and variables that you no longer need with warnings/errors in the .mm file (willAppear, etc.). Most of this code can be deleted, but be mindful to move allocations/deallocations into the initializer and dealloc.
In your view controller you can create an instance of this class and call it to start scanning. You need to modify didDecodeImage in the ZXingWidgetController.mm file to do what you want when it successfully gets a result from the QR code. One possibility is to modify the initializer to take your parent view controller as a parameter, store it in an instance variable as a delegate (__weak), then use that to call one of its methods from didDecodeImage. Other people might pass the data back to your main code using notifications.
Hope this helps!
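Roughly, the stripped-down wrapper and its callback might take a shape like this (all names here are illustrative placeholders, not the actual ZXing API):

    // QRCodeScanner.h -- illustrative wrapper around the stripped-down widget code.
    #import <Foundation/Foundation.h>

    @class QRCodeScanner;

    @protocol QRCodeScannerDelegate <NSObject>
    - (void)scanner:(QRCodeScanner *)scanner didDecodeText:(NSString *)text;
    @end

    @interface QRCodeScanner : NSObject
    @property (nonatomic, weak) id<QRCodeScannerDelegate> delegate;
    - (void)startScanning;   // kicks off the capture/decode loop kept from the widget
    - (void)stopScanning;
    @end

And in the view controller that owns the UILabel (scanner and resultLabel are placeholder properties):

    - (IBAction)scanButtonTapped:(id)sender {
        self.scanner = [[QRCodeScanner alloc] init];
        self.scanner.delegate = self;
        [self.scanner startScanning];
    }

    - (void)scanner:(QRCodeScanner *)scanner didDecodeText:(NSString *)text {
        // Make sure this callback arrives on the main thread before touching UIKit.
        self.resultLabel.text = text;
        [scanner stopScanning];
    }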
There is a set of classes in the zxing objc directory that operate at the CA (Core Animation) level rather than the UIView level, which might be easier to modify than the widget, which operates at the UIViewController level.
This would still take a little tweaking, though, because the core capture code tracks whether the view is on screen or not to automatically start and stop the capturing of frames.

UITextView delegate problem

I am trying to use the UITextView delegate methods and have a problem.
I have a UIViewController with the UITextViewDelegate protocol and a Nib containing the textView.
If I set the delegate inside viewDidLoad like "textView.delegate = self" and I touch the textView, the app crashes without logging errors.
If I start editing the textView programmatically with something like "[textView becomeFirstResponder]", all the delegate methods get called.
When I set the delegate in the nib instead, creating a connection between the textView and File's Owner and deleting "textView.delegate = self", no delegate methods get called at all.
What am I doing wrong here?
Regards,
Elias
It's not easy to help you without more description, posted code or a xib file.
You say the application crashes without logging any errors - do you mean that there's no output in the console window? That is normal for an app that has crashed.
Anyway, you should be able to get the stack trace to figure out approximately where the application crashed. Open the debugger (⇧⌘Y) and look at where execution stopped. That should give you an idea of what went wrong.
Here you can see an example of such a debugger session (after an EXC_BAD_ACCESS crash):
The first two lines don't give us much information, but further down we can see that the application crashed while loading the user interface from a nib file. Usually the only code of yours that executes during such a load is awakeFromNib methods - it's up to you to find the problem along those lines.
Often the top of the stack trace doesn't make any sense - for example you might see one of your view controller's methods somewhere, but the top few function calls (where the code actually crashed) are in methods/classes that you never call in your code. In most cases that is a sign of wrong memory de-/allocation. What might have happened is that you forgot to retain one of your objects: it has already been released, but you are still keeping a reference (a pointer) to its memory. Because that memory has in fact been freed, another object took its place later on, usually some internal Apple object you've never heard of. Later on your code tries to message your poor object, but actually sends a message to something completely different. BUMMER! That's how you get those crashes and strange stack traces.
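In pre-ARC code, the pattern being described boils down to something like this (purely illustrative):

    NSMutableArray *items = [[NSMutableArray alloc] init];
    [items release];          // retain count hits zero -> the object is deallocated
    [items addObject:@"x"];   // dangling pointer; with Zombies enabled you get a
                              // "message sent to deallocated instance" report
                              // instead of a random crash somewhere else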
To fix the kind of problem I've just described, you can use Instruments and its Zombies instrument. Unfortunately you can't start Zombies from within Xcode: you need to start Instruments standalone, choose Zombies under iPhone Simulator/Memory, then Choose Target from the toolbar - you should see your application in there, or be able to navigate to it on the filesystem.
What the Zombies instrument does is never really free memory after objects are deallocated. Instead, it mutates those objects into the NSZombie class. That class intercepts all messages sent to it and informs you when some code tries to message it.
This is how such an Instruments session looks (this is the same crash as seen in the debugger above):
In the table you can see that we're trying to message a UIScrollView that has already been deallocated. You can also see the whole history of retain/release calls on this particular object. That way you can find a missing retain or a wrong release/autorelease.
Remember - the Zombies instrument can only be used with the Simulator, because there isn't enough memory on a real device to keep all those memory blocks around.
Hopefully this helps you with further analysis of your problem.

Slow startup on iPhone

I'm debugging slow startup of an iPhone application (Xcode, Objective-C++). It's a tab-bar-based app with three tabs. All three tabs are loaded from a single NIB - about 20 objects total.
The first round of significant initialization takes place in the viewDidLoad handler of the first tab's view controller. However, about 1 second passes between main() and the start of that method - about 2/3 of the total loading time. Question: what is going on during that time, and how do I investigate it (short of stepping through the disassembly)? To the best of my knowledge, none of my code runs between those two moments - the delay happens entirely in system code.
Maybe some kind of Instrument that can give me a per-function time profile?
The bundle is ~4 MB total, but I'm loading the biggest file (~3.5 MB) later than that, in the applicationDidFinishLaunching handler. Removing that file from the bundle and commenting out the relevant code does nothing for that 1-second delay.
UPDATE: there was debug interference after all. If I run it on the device while watching the console, the startup time is considerably shorter, and the proportion of the delay - system code vs. my code - is skewed, too. But still, there's a noticeable delay between main() and viewDidLoad, and it's about 50% of the total loading time.
By the way, of all ways of loading a largish file from the bundle completely into memory, the fastest one was direct memory-mapping (using POSIX mmap()).
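For reference, a rough sketch of that direct-mmap() approach (the file name is a placeholder and error handling is minimal):

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #import <Foundation/Foundation.h>

    // Map "bigfile.dat" (placeholder name) from the bundle; returns NULL on failure.
    static const void *MapBundledFile(size_t *outLength) {
        NSString *path = [[NSBundle mainBundle] pathForResource:@"bigfile" ofType:@"dat"];
        int fd = open([path fileSystemRepresentation], O_RDONLY);
        if (fd < 0) return NULL;

        struct stat st;
        fstat(fd, &st);

        // Pages are faulted in lazily as they are touched, instead of the whole
        // file being read up front during launch.
        const void *bytes = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);   // the mapping stays valid after the descriptor is closed

        if (bytes == MAP_FAILED) return NULL;
        *outLength = (size_t)st.st_size;
        return bytes;   // call munmap(bytes, length) when finished with it
    }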
If you really are curious about what's executing during startup, and the relative times each method takes to run, you can create a custom DTrace script to track this. I describe how to do that towards the end of this article. This script will show you every method executed by your application in order from startup to the end of -applicationDidFinishLaunching:, along with the time spent in that method. You can run this script as a custom instrument in Instruments or as a standalone script (the standalone script tends to be more accurate on a system under load).
The major caveat of this approach is that it will only run in the Simulator, given the current lack of support for DTrace in the iPhone OS itself. However, you should be able to extract the order in which things are executed in your application's startup, as well as relative times that your application spends in each method. This will even show behind-the-scenes private API calls being made as your application starts, which might provide some additional clues as to what's going on.
For additional startup tuning suggestions, I'd recommend reading James Thomson's article "How To Make Your iPhone App Launch Faster".
There are two things that could be going on here. If you're debugging your app from within Xcode, there's a good chance that the application is waiting at startup to attach to the GDB debugger. In my experience, that takes about a second. Try running your app without using "Build and Go" in Xcode and see if the results are any different. (Just launch it from the home screen instead.)
Your NIB file might also be the issue. 20 objects isn't too many, but you might consider breaking each tab into a separate NIB file if all else fails. The contents of NIB files referenced from your primary NIB file are lazily loaded, so the app will not load views for the two tabs that are invisible until they are selected. That might give you a performance boost at startup, though I don't think it could account for a full second.
Apple's got some great performance analysis tools in the iPhone SDK, but they're a bit hard to find. In the Run menu, select "Run with Performance Tool" -> "CPU Sampler." That will launch a separate application called Instruments, which allows you to do all sorts of great runtime analysis. When you have the CPU instrument selected, the bottom half of the Instruments window provides a breakdown of what was taking CPU time in your app. You can double-click functions to dive into them and get a line-by-line breakdown of the percentage of cycles used. It should give you more information about what specifically is causing your problem.
I'd recommend splitting up your app into three NIBs: the tab bar and the tab bar controller displayed at launch in the first, then lazily loading the other two the first time the user switches to them.
I believe you can use the File >> Decompose Interface function in Interface Builder to accomplish this.
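Programmatically, the wiring might look something like this (class names and nib names are placeholders, ARC assumed; each tab's view is loaded from its nib only the first time that tab is shown):

    // In the app delegate's launch method (sketch).
    - (BOOL)application:(UIApplication *)application
        didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {

        FirstViewController *first =
            [[FirstViewController alloc] initWithNibName:@"FirstTab" bundle:nil];
        SecondViewController *second =
            [[SecondViewController alloc] initWithNibName:@"SecondTab" bundle:nil];
        ThirdViewController *third =
            [[ThirdViewController alloc] initWithNibName:@"ThirdTab" bundle:nil];

        UITabBarController *tabs = [[UITabBarController alloc] init];
        tabs.viewControllers = [NSArray arrayWithObjects:first, second, third, nil];

        self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
        self.window.rootViewController = tabs;   // only the first tab's view loads now
        [self.window makeKeyAndVisible];
        return YES;
    }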
If you find your xib files are too large, I'd advise building your UI in pure code.
Large xib files will surely slow your startup time, and also slow your app when you first use an object from the xib.
I don't use xibs in my projects, because when somebody changes a xib in SVN, you can hardly tell what changed. That is to say, xibs don't play well with SVN.