I'm looking for a way to differentiate at runtime between devices equipped with the new ARM processor (such as the iPhone 3GS and some third-generation iPod touches) and devices equipped with the older ARM processors.
I know I can use uname() to determine the device model, but since only some of the third-generation iPod touches received the processor boost, this isn't enough.
Therefore, I'm looking for one of these:
A way of detecting the processor model - I suppose there's none.
A way of determining whether ARM NEON instructions are supported - from this I could derive an answer.
A way of determining the device's total storage size - combining this with the already known device model could hackishly lead me to the answer.
< ENTER RANDOM IDEA >
Thanks in advance :)
Not exactly what you're asking, but one easy solution is to build your application fat, so that it contains executable code for both ARMv6 and ARMv7. If you do this, the appropriate code will run on the processor automatically, and you don't need to do any runtime checking. Effectively, you're letting the loader do the runtime detection for you.
To do this, change the Architectures setting in your Xcode project from "Standard (armv6)" to "Optimized (armv6 armv7)".
Then, in your implementation, you do this:
#if defined __ARM_NEON__
// Code that uses NEON goes here
#else // defined __ARM_NEON__
// Fallback code without NEON goes here
#endif // defined __ARM_NEON__
There is a similar macro that you can use to check for (non-NEON) ARMv7 features, which I can't remember off the top of my head.
If you really want to do runtime dispatch, take a look at the sysctlbyname function in libc. Specifically, I think that looking up the HW_MACHINE_ARCH parameter may prove useful to you.
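For example, a related parameter, hw.cpusubtype, can be queried like this (a sketch, untested; the CPU_SUBTYPE_ARM_V7 constant comes from <mach/machine.h>):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <mach/machine.h>

// Sketch: ask the kernel for the CPU subtype at runtime.
// Note the caveat raised elsewhere on this page: an unknown
// subtype on future hardware needs a sensible default.
static int cpuIsARMv7(void) {
    cpu_subtype_t subtype = 0;
    size_t size = sizeof(subtype);
    if (sysctlbyname("hw.cpusubtype", &subtype, &size, NULL, 0) != 0)
        return 0; // on failure, assume the older architecture
    return subtype == CPU_SUBTYPE_ARM_V7;
}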
One workaround I can think of is detecting whether OpenGL ES 2.0 is available, since only the newer processors support it.
There's an article at mobileorchard on how to do it.
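The gist of it is small enough to sketch here (untested; pre-ARC, matching the era of the question): trying to create an ES 2.0 context simply fails on hardware that can't do it.

#import <OpenGLES/EAGL.h>

// Sketch: an ES 2.0 context can only be created on the newer GPUs,
// which in practice ship alongside the ARMv7 CPUs.
- (BOOL)supportsOpenGLES2 {
    EAGLContext *context =
        [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    BOOL supported = (context != nil);
    [context release];
    return supported;
}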
EDIT: I have withdrawn this answer, as it had a glaring hole I only realized later: what to do when we get an unknown subtype on some future hardware? THIS WAS NOT FUTURE-PROOF. Also, the uncertain documentation status of that API doesn't help, given Apple's zero tolerance for usage of undocumented APIs.
You should use Stephen Canon's answer and build your application fat. Reliable, future-proof runtime detection is not feasible at this time (to my dismay, I assure you).
I know this is crummy, but the best that comes to mind is to detect whether the device supports video recording. Currently only the ARMv7-based iPhone and iPod devices support it, hence it's a legitimate way, I guess.
To do so, use UIImagePickerController's availableMediaTypesForSourceType: in conjunction with isSourceTypeAvailable:, checking for kUTTypeMovie.
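Roughly like this (a sketch; kUTTypeMovie lives in MobileCoreServices, so remember to link that framework):

#import <UIKit/UIKit.h>
#import <MobileCoreServices/MobileCoreServices.h>

// Sketch of the check described above: a camera must be present
// and its available media types must include movies.
static BOOL deviceSupportsVideoRecording(void) {
    if (![UIImagePickerController
            isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        return NO;
    }
    NSArray *types = [UIImagePickerController
        availableMediaTypesForSourceType:UIImagePickerControllerSourceTypeCamera];
    return [types containsObject:(NSString *)kUTTypeMovie];
}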
Google has not helped me at all with the question of how to program this "Neural Engine" on the latest iOS devices, and especially what happens if an NE app gets downloaded onto older devices without the NE. I had to read between the lines to conclude that you have to use Core ML 2 and let iOS execute your code on the best hardware, in the best way it knows how. That leaves you exposed to not particularly optimized code on the NE, and to perhaps overstretching the devices without the NE. I was also left with the impression that there is no middle ground by using the Metal engine, but really I don't even know right now what gave me that impression. Has anyone figured out the definitive way to exploit the NE, and whether it is wiser to perhaps disallow the older devices, or will the NE "emulation" always work?
In general, it is best practice to find (buy, save, beg, borrow, TestFlight-enroll) the oldest, slowest iOS device(s) you choose to allow under your OS Deployment Target Xcode setting and the Required Device Capabilities specified in the app's plist. Then you can benchmark your code and make a judgement call as to whether it meets your product's performance goals. Your potential customers' exposure to varying performance levels then becomes a business decision.
This is true for any ML, GPU/Metal, or numerically CPU intensive app (not just ML 2 or NE related).
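If you are after a middle ground at the API level, my understanding is that Core ML 2's MLModelConfiguration gives you coarse control over where a model may run. A minimal sketch, assuming iOS 12 and a hypothetical compiled model named MyModel.mlmodelc in the bundle:

#import <CoreML/CoreML.h>

// MLComputeUnitsAll lets Core ML use the Neural Engine where present
// and fall back to GPU/CPU elsewhere; MLComputeUnitsCPUAndGPU excludes
// the Neural Engine, which makes performance more uniform across devices.
MLModelConfiguration *config = [[MLModelConfiguration alloc] init];
config.computeUnits = MLComputeUnitsAll;

NSError *error = nil;
NSURL *modelURL = [[NSBundle mainBundle] URLForResource:@"MyModel"
                                          withExtension:@"mlmodelc"];
MLModel *model = [MLModel modelWithContentsOfURL:modelURL
                                   configuration:config
                                           error:&error];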
I am wondering how good of an idea it is to start creating iPhone applications without actually having an iPhone.
I found that there are simulators for the iPhone...
Are they good enough or is it likely that I might encounter some problems down the road when creating an app?
- I don't have an idea for an app yet.
- I don't have a real iPhone. There ARE some handsets at my job's office, but I don't want to use them too much.
Thanks! And if you think that it is an okay idea to create apps and test them on simulators, which ones would you recommend?
Bad idea.
You can get a lot of work done but you really, really need an actual device to do your final testing.
Remember that it's a simulator and not an emulator. There are significant differences in performance. Lots of things work fine on your Mac but poorly on a real device. There are, perhaps surprisingly, a number of situations where the reverse is true (i.e., faster on the iPhone). You get better at figuring out these differences after a while, but without working on a real device you'll never know.
If cost is the main factor, you don't actually need an iPhone; you could work with an iPod touch instead.
Xcode has a simulator built in, so why not? All you need is a Mac.
Check http://developer.apple.com/library/ios/#documentation/Xcode/Conceptual/ios_development_workflow/25-Using_iOS_Simulator/ios_simulator_application.html#//apple_ref/doc/uid/TP40007959-CH9-SW1
Do you have a Mac?
The iOS SDK has a really good simulator where you can try your apps, and I think if it works there, it should work on a real iPhone.
Consider that the simulator is good enough to create your app, but it all depends on what you use. For example, if you use the accelerometer or Bluetooth, the simulator is not suitable for your case. Overall it is quite limited in many respects, but to begin with it can work just fine.
To use the devices you speak of, you must have a valid developer license and associate the devices with it; otherwise you will not be able to use them (short of jailbreaking).
The iPhone simulator does not support everything that is supported by a real device. Some things are impossible to test on a simulator, including but not limited to features/APIs associated with: calendars, camera, gyroscope, accelerometer, music, ...
The iPhone simulator that comes with Xcode is fine for the development of most applications. In my experience the area where it falls down is when you are dealing with stuff that runs in real time, such as audio and graphics. The processor on the computer is obviously faster, and if you are only testing on the simulator you might not realise that what you are trying to achieve is outside the capabilities of the device.
This can have an impact on your frame rate (as you will get a higher frame rate on the computer), and audio with lots of DSP faces the same issue.
If you are developing Line Of Business apps the simulator is probably all you need. Of course you need to test on a device before release, but most of the bug testing and debugging is possible in the simulated environment.
MonoTouch has automatic garbage collection on the iPhone. Couldn't someone implement garbage collection for Objective-C on iOS ahead of Apple? I'm not the guy to do it, but I'm curious as to why or whether this is impossible.
I know that projects like this exist: what does it take to use them on iOS? They are written in C/C++, and Objective-C is a superset of C, but such collectors actually have to be aware of the system architecture... I'm out of my depth here...
While we're here, if anyone knows of any attempts to implement a GC on iOS, links would be helpful...
I don't think it's possible. The problem is that Objective-C is used inside the system libraries too. On OS X, where garbage-collected Objective-C is supported, there are in fact three modes when you compile code:
the compiled code can only be called from a non-GC environment.
the compiled code can only be called from a GC environment.
the compiled code can be called from both GC and non-GC environments.
See the discussion here, for example. The point is that the system libraries need to be in the third mode in order for the OS to support both non-GC and GC apps. On OS X, the libraries come in this hybrid mode. On iOS, I guess they come in mode 1. (I don't know for sure, because I haven't jailbroken my phone.)
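If it helps make the three modes concrete, this is how they map onto Apple's compiler flags on OS X, as far as I know (GC was never offered in the iOS SDK at all):

// (no GC flag)   -> mode 1: non-GC only (classic retain/release)
// -fobjc-gc-only -> mode 2: GC only
// -fobjc-gc      -> mode 3: hybrid; callable from both environments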
If you have complete control over both the system library and your app, it's possible to make them all garbage collected, but unfortunately we're not in that stage yet.
I'm sure we'll have GC in iOS in two years.
Nothing stops you from building your own garbage collector for your app. Or importing another project that will handle it. Again, for your app.
More discussion:
Is garbage collection supported for iPhone applications?
I would like to check whether the user is using an iPhone 4 or not. How can I do that?
Thanks!
Sebastian
Apple specifically recommends against this, instead preferring that you check for individual features and act accordingly. This makes your life a lot easier when Apple releases new hardware. If, for instance, Apple releases an iPod touch with a camera, and you need a camera for your app, your users won't be upset that it tells them "No camera found" when it does have one, all because it reports as not being an iPhone. Here is one way to require all the differentiating hardware features. Do not use these checks for enabling/disabling features that are supported but not required: that can be determined at runtime through the APIs used to interact with each feature.
UIDevice (see here, also the docs) can help you determine if it is an iPhone, but again, don't do this.
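For completeness, the call itself is a one-liner; treat it as informational only:

// Returns e.g. @"iPhone" or @"iPod touch" - fine for logging,
// a poor basis for enabling or disabling features.
NSString *model = [[UIDevice currentDevice] model];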
To detect the difference between the iPod Touch and the iPhone, we use
if(![UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera])
There might be something similar to check for the front camera.
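For instance (an untested sketch; isCameraDeviceAvailable: appeared with iOS 4, so it should cover the iPhone 4's front-facing camera):

// iOS 4 and later: ask for a specific camera device.
BOOL hasFrontCamera = [UIImagePickerController
    isCameraDeviceAvailable:UIImagePickerControllerCameraDeviceFront];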
If I create an app for the iPhone (OS 3), will it run without modification on an iPod touch, or will I need to create a separate binary? If it is the same runtime, does it just have stubs for the iPhone-only features, or do you have to check feature by feature using UIDevice to ensure the particular class/method is supported on the device, to avoid a crash?
Sorry for the elementary questions, can't find a simple explanation of this anywhere.
Cheers
Dave
EDITED: Based on discussions below:
How can you check if a device supports making calls? At the moment I am assuming if it is an iPod Touch it can't. Is there a way of finding out what shared applications/URL schemes are supported by a device?
You shouldn't really try to guess what the device is. You're far more future-proof if you test for the specific functionality you're trying to use. After all, in the future there might be iPods with cameras. Or compasses (which are on some iPhones but not others).
Since it sounds like all you want to do is see if you can open a URL, why not use -[UIApplication canOpenURL:]? (This would presumably work on iPod touches that had applications that could handle VoIP - I don't know if any such exist, but I think it's an example of why you need to test for functionality and not make assumptions based on hardware or OS version.)
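For the phone-call case from the edited question, that test might look like this (a sketch; the number is a placeholder):

// Probe for telephony support by asking whether anything
// on the device can handle a tel: URL.
NSURL *telURL = [NSURL URLWithString:@"tel:5551234567"];
BOOL canMakeCalls = [[UIApplication sharedApplication] canOpenURL:telURL];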
The app will run on an iPod touch, no need to compile a separate version. Features that require an iPhone (e.g. camera) will not work, obviously.
What such features do you intend to use? You may provide alternatives for iPod users or alert them that e.g. no camera is available.
This question addresses how to check if a microphone is present: Detecting iPhone iPod touch accessories