iPhone Platform Constraints

I'm analyzing the iPhone platform (for a paper). I've made a list of issues that developers and architects have to consider before working with the iPhone SDK.
The question is aimed at people who want to release iPhone software: what constraints restrict them in comparison to other mobile platforms, such as Android, Windows Mobile, Symbian, etc.?
Feel free to add hurdles I may have forgotten to list.
Thanks.
iPhone platform constraints/hurdles:
No physical keyboard
No replaceable battery
One Application at a Time
Sandbox File System
Restricted Deployment Cycle (Dev program...)
App Store Approval Process

The non-replaceable battery is no concern for software developers whatsoever, as there are no APIs for battery manipulation or replacement. It is no more of a practical concern for iPhone developers than "access to electricity" is when developing for other platforms.
Others I would add:
Requires a Mac. Fairly obvious one, not a terrible barrier to entry compared to other closed systems like game consoles, but still higher than some other phone/mobile platforms like Windows Mobile, J2ME or Brew.
Costs money to debug on real hardware. You can only run and debug in the simulator unless you buy a $99 developer program subscription, which lets you pair iPhone and iPod touch hardware with your Xcode install and run apps on it.
Objective-C as the programming language. It really shouldn't deter anyone but a lot of developers get really grumpy about learning anything new or different.
Must accommodate interruptions (i.e., the user may get a call at any time, and the app must be prepared to save any necessary state and quit within a fixed time limit; see the sketch after this list).
Not specific to iPhone but like any platform, you are constrained by the CPU/GPU/RAM the device has, and in the iPhone's case this is obviously quite a bit less hardware than people with a desktop background are accustomed to.
Restrictive wording in EULA regarding embedded scripting languages. It is apparently forbidden to execute any scripts via an iPhone application, which is quite a bummer as embedded scripting languages are quite common these days and very useful.
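To make the interruption point concrete, here is a minimal sketch of saving state from the app delegate. It uses Swift and modern UIKit method names rather than the Objective-C of the era, and saveDraft() is a made-up placeholder for whatever state your app actually needs to persist:

```swift
import UIKit

// Minimal sketch: persist lightweight state when the app is interrupted
// (incoming call, Home button) so it can be restored on next launch.
class AppDelegate: UIResponder, UIApplicationDelegate {

    func applicationWillResignActive(_ application: UIApplication) {
        // Called when a phone call or SMS banner interrupts the app.
        saveDraft()
    }

    func applicationDidEnterBackground(_ application: UIApplication) {
        // Only a few seconds are available here before the OS may suspend
        // or terminate the process, so keep the work small and fast.
        saveDraft()
    }

    private func saveDraft() {
        // Placeholder: write whatever minimal state the app needs
        // (here, just a timestamp) to UserDefaults.
        UserDefaults.standard.set(Date(), forKey: "lastSuspendedAt")
    }
}
```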

Limited CPU speed
Limited RAM
Objective-C is effectively the main dev language
Power management concerns (I'm not sure if lack of a replaceable battery is a concern of mine). High CPU utilization can be a drain on the battery (and cause extra heat). In other words there are CPU intensive things I choose not to do, in order not to drain the battery too fast.
Only one IDE
Inability to access other apps' data easily

Related

Is Apple's hardware not as customizable as other machines that run Windows because their OS is built more specifically?

Preface: I'm a student about to take a course in Operating Systems. I thought I'd do some prep by watching a series on YouTube first.
Throughout the course of watching about 10 of the videos in this series, I have learned that, roughly, the operating system's purpose is to serve as an interface through which system programs, applications, I/O devices, etc. communicate with the system's hardware.
This got me thinking about how Apple's hardware is not modularly customizable, and how Apple users can't swap out hardware components as easily as users on a system running Windows. I began to think this is most likely because the OS Apple implements is built very specifically for the original hardware their products ship with, so that it runs as efficiently as possible.
Is there any truth to this logic? I'm basically just trying to apply what I've been learning to a "real-life" example.
This got me thinking about how Apple's hardware is not modularly customizable.
It is quite customizable, just not with Apple's own parts. The "hardware" that Apple is shipping is mostly x86-64 CPUs with a recent chipset: an xHCI USB controller, an AHCI storage controller, a modern PCI network card, and so on. The exception is an M1 computer, their most recent product, which is based on an ARM architecture; they licensed the architecture from ARM Ltd. and manufacture their own CPU. I think this is a very good and open decision from Apple, unlike several bad ideas they had about their phones, such as removing the 3.5mm jack or using a Lightning plug instead of USB-C.
If you have an x86-64 CPU, the OS Apple built, macOS, can run on your computer; it is simply forbidden by Apple's license terms. The fact that their hardware is less customizable has more to do with screws and the way the case is made than with the OS itself.

For what programmatic reason do IoT-programmed devices always require cloud/server access?

I live in an area where net access is mobile or nothing. While I can occasionally get access by tethering a mobile to that network, it isn't often connected, and when it isn't connected, no local device will function on its own, no matter which protocol it uses.
Why isn't there any kind of server/cloud resiliency built in, where devices can communicate in a peer fashion like Apple's Bonjour (Rendezvous? I can't remember)? If I have an Echo device, I should be able to switch it on through an Alexa interface. I'm OK without speech processing, which requires interpretation of commands through an AWS or Google or Apple or whatever cloud, but for locally controlling a switch it seems as though the interface could be smart enough to route locally.
I guess I may have just answered my own question. It seems as though routes could be stored internally so as not to strictly require a server. Can you imagine shipping a colony to Mars and all the IoT devices stop working? If you ask me, they should not require a branch variation or special programming in order to function.
From the experience of having sat down and built a few, there are some key reasons why viable IoT gadget products for the general market typically end up having to have a cloud-mediated mode, no matter what was envisioned when the design effort originally commenced:
General consumers want (or at least think they want) the option to control things when outside the home.
Often even at home, a mobile phone may be on the mobile network rather than wifi, meaning that even if the user is physically inside their home, in network terms they are not.
Firmware updates, dynamic content, etc. are easier when they don't have to be relayed through a mobile phone or PC, especially a mobile that might sometimes have to jump networks partway through the process.
Ironically, having once set out to build an IoT product that could work entirely offline, I found that the further the project progressed, the more difficulties that approach presented for general users, and the more the cloud path, originally added as an option, started to look like how things should work all the time, so that it could become the exclusive focus of development effort.
My conclusion is that it's very hard to build an offline IoT gadget. Not only the developer, but also the users and marketing people need to understand and accept what sorts of difficulties and limitations that entails.
So where does it happen? In situations where the "users" are the "developers" - e.g. open source. If you look around a bit, you'll find plenty of gadgets either built from scratch or, more commonly, reverse engineered so they can run a custom firmware. Want a local RESTful API? Done! Want cloud relay via MQTT over SSL to your own broker? Done! (A local-control sketch follows below.)
When you control the code, you control the mode.
But with products for the general market, most customers want things to work, not a lengthy technical explanation of why the details of their network setup mean they cannot.
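To make the local-control point concrete, here is a minimal Swift sketch of flipping a switch through a device's local RESTful API, with no cloud round-trip. The device address and endpoint path are made up for illustration, and the pattern assumes the gadget actually exposes such an API (as many custom firmwares do):

```swift
import Foundation

// Hypothetical local endpoint: a relay on the LAN with a REST-style API.
// Plain HTTP on a private network; an iOS app would also need an
// App Transport Security exception for non-TLS traffic.
let url = URL(string: "http://192.168.1.50/api/relay/0?turn=on")!

let semaphore = DispatchSemaphore(value: 0)
let task = URLSession.shared.dataTask(with: url) { _, response, error in
    if let error = error {
        print("Local request failed: \(error.localizedDescription)") // device off or unreachable
    } else if let http = response as? HTTPURLResponse {
        print("Device answered with status \(http.statusCode)")      // no internet required
    }
    semaphore.signal()
}
task.resume()
semaphore.wait()   // keep this command-line sketch alive until the reply arrives
```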

Testing iOS apps on real devices vs. the Simulator

I am new to iPhone/iPad development and I am close to finishing up my first app and I am looking for some general advice.
I know it is important to test on actual devices and not just the simulator. What are the types of things people generally encounter when testing on a real device that they don't see in the simulator?
The app itself is mainly a way to track online deals and that type of thing. It doesn't need anything special in terms of using things like the camera or GPS.
It's just general usage testing. The device performs in an entirely different environment than your computer, and it's the best way to make sure that if you push your app out to devices, nothing unexpected will happen. For example, the phone/pad may have limited data coverage, low-memory situations, incoming calls, etc. These situations are a lot more common on devices than when people emulate them through the simulator.
From a hardware point of view, the device uses a different processor architecture than your Mac, which also needs to be accounted for (not as much as in other cases, but you need to cover your bases). The Mac also cannot reliably emulate RAM, disk space, processor speed, etc., hence testing on the device is useful here too.
Obviously there are some features you can only test on devices, such as Camera, GPS (and not so obviously iPod library usage), and if your app uses them it'd be careless not to test on a device.
Overall if you're intending to release your application to the App Store, or to devices at least, it's worth testing on the device itself. Only then can you be sure that it will act and perform as expected on the platform you intend to target. The simulator is only a simulator after all, not the real thing!
First of all: the user experience is very different.
Mouse-based interaction is very different from touch interaction, and focusing on a monitor feels very different from looking at a device in the palm of your hand.
Also, the experience of animations running on the simulator and on the real device can be very different.
And usage in the simulator won't tell you anything about the battery consumption you will see on the real device.
My opinion: every app that will be shipped to the App Store or to a customer for testing should be tested on several different real devices. No excuses.
Simulator runs a lot slower than the real device.
Real device could run out of memory when Simulator doesn't or vice versa.
In-app purchases, if you have included them
Orientations (not that they are unavailable in the simulator, but it is easy to forget them there!)
App life cycle testing - bringing your app to the foreground and background
Network access - this matters when the device reaches the network over wireless or cellular rather than the LAN/wifi on your Mac. There is a lot of testing to be done under the umbrella of Reachability if your app uses any resources across the net; per App Store requirements, you are expected to show an alert if the network is unreachable before using any such resources (a minimal sketch follows this list).
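For the Reachability item above, here is a minimal sketch of the check using Apple's modern Network framework (at the time, the usual approach was Apple's Reachability sample class; the queue label and messages are illustrative):

```swift
import Foundation
import Network

// Watch connectivity and decide whether to warn the user before
// fetching remote deals. In a real app the monitor lives for the
// app's lifetime (e.g. owned by the app delegate).
let monitor = NWPathMonitor()
monitor.pathUpdateHandler = { path in
    if path.status == .satisfied {
        print("Network reachable", path.isExpensive ? "(cellular/hotspot)" : "(wifi/wired)")
    } else {
        print("Network unreachable - show an alert before loading remote resources")
    }
}
monitor.start(queue: DispatchQueue(label: "reachability.monitor"))
```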

Phonegap app performance vs native app performance

We are looking at getting a barcode scanning application built. We are considering using PhoneGap, but our only worry is speed.
All the application will do is just scan a barcode and check a server to see if it's valid or not. The application uses the camera very intensely to scan the barcode via an image.
My main question is, will scanning via phonegap be just as fast as a native app? Speed is really important as the user will have to scan multiple barcodes very quickly.
PhoneGap uses the same native APIs; it just abstracts them so that you can write your application in HTML and JavaScript. The time to take a picture or run any other native process matters less than the time the user perceives, which is the portion of native execution time you need to expose to the user + abstraction API time + UI responsiveness.
There is always overhead from an abstraction, but I think it's negligible in an app like this (on phones newer than BB OS5). The current issues originate from the hardware rendering the HTML and the browser software installed on the device.
A lot of BlackBerry phones don't use WebKit (OS5 and below), and the browsers they do use can seem very sluggish while rendering webapps. BB OS versions below 5 don't have a production-worthy way of communicating between the native and JavaScript layers; the hack that's often seen is to set cookies and poll for changes in them. Android has always had a good design for JavaScript-to-native interaction, afaik.
BlackBerry phones and many lower-end Android phones don't have GPUs, and some Android phones that do have GPUs don't compile WebKit for the GPU! Without this, your UI may have that sluggish feel: pages and buttons take that bit longer to respond, which is very noticeable when you're trying to whiz through menus.
This has improved a lot since PhoneGap was released. UI lag should continue to decrease to a point where even new low-end phones are production-ready for webapps, but from my experience we've not yet reached that point in 2011.
The phone's built-in software is what does the scanning and camera action. PhoneGap will only trigger the event and help transfer the data but the phone does all the work.
As others noted, the HTML5-based UI may feel sluggish. Maybe it's not an issue; you just have to try it and see. For scanning a barcode and uploading to a server, the PhoneGap overhead might not be significant.
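For reference, this is roughly the kind of native code a barcode plugin wraps on modern iOS (AVFoundation's metadata output; plugins of the era typically bundled a library such as ZXing instead). In a PhoneGap app you never write this yourself, the plugin does; the class name and the symbology list are illustrative:

```swift
import AVFoundation
import UIKit

// Sketch of a native barcode capture pipeline (requires a camera
// usage description in Info.plist and must run on a real device).
class ScannerViewController: UIViewController, AVCaptureMetadataOutputObjectsDelegate {
    let session = AVCaptureSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let output = AVCaptureMetadataOutput()
        guard session.canAddOutput(output) else { return }
        session.addOutput(output)
        output.setMetadataObjectsDelegate(self, queue: .main)
        output.metadataObjectTypes = [.ean13, .qr]   // symbologies to detect

        session.startRunning()
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        guard let code = metadataObjects.first as? AVMetadataMachineReadableCodeObject,
              let value = code.stringValue else { return }
        print("Scanned:", value)
        // Next step: the round-trip to the server that validates the code.
    }
}
```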
I have developed a smartphone app where barcode scanning is an alternative to the primary function of scanning an image that is recognized by picture-matching technology. I use PhoneGap. I have not compared this to native app performance, but I can say that for my basic UI (it is a web app for the smartphone), my web pages are rendered fast enough not to be an issue. This performance has been observed on a 600 MHz smartphone CPU (LG Optimus One running Android 2.2.1).
The picture matching as well as the barcode scanning is done on a server backend, not on the smartphone itself. The issue becomes one of networking speed from the smartphone over WiFi or the service provider network, across the Internet and on to the server - then there is the response from the server back to the smartphone. The processing time for picture matching or barcode scanning has to be less than a second (ideally half a second), so that by the time networking delay is added, it is still a 1-2 second response time for the user.
The image files that I am transferring from the smartphone to the server are targeted to be around 40 KB. On a typical 54 Mbps WiFi network, or at the going rate of around 40 Mbps on HSPA+ service provider networks, I find the performance of my app to be suitable. Even with a fair-signal WiFi speed of 15 Mbps, the end-user response time is an acceptable 1-2 seconds.
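As a back-of-the-envelope check on those numbers (ignoring protocol overhead and latency), the raw transfer time of a 40 KB upload is a small fraction of the 1-2 second budget; the link speeds below are the ones quoted above:

```swift
import Foundation

// Rough transfer time for a 40 KB payload at the quoted link speeds.
let payloadBits = 40.0 * 1024 * 8   // ≈ 328 kilobits

for (label, mbps) in [("WiFi 54 Mbps", 54.0), ("HSPA+ 40 Mbps", 40.0), ("weak WiFi 15 Mbps", 15.0)] {
    let milliseconds = payloadBits / (mbps * 1_000_000) * 1000
    print(label, String(format: "≈ %.0f ms", milliseconds))
}
// Even at 15 Mbps the raw transfer is ~20 ms, so the 1-2 s the user sees
// is dominated by latency and server-side processing, not payload size.
```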
The pace of smartphone development (dual core processors) and service provider networks (4G HSPA+) will only take the industry higher. It is a tremendous opportunity for apps development moving forward.
Side topic:
I am using ZBar code on the server for barcode scanning and I am hunting for better alternatives. The challenge with ISBN barcode scanning from smartphones with non-zoom, non-macro lenses is that the typical barcode size is too small for "simple" barcode scanning algorithms to work properly. I'd like to hear about alternatives and people's experience with barcode scanning. I am looking for code that I can deploy in my server backend, as opposed to running smartphone-resident barcode scanning.

Is it possible to create an OS that can run all applications?

Just a thought: if we have to make our applications cross-platform, then is it possible to create a cross-application OS?
No.
Let's say you do go and invest a monumental amount of effort in building your Uber-OS (one that will run Mac apps, Linux apps, Unix apps, Android apps, iPhone apps, Nokia apps, Symbian apps, SAP apps, Windows apps, etc.).
Then there's nothing stopping someone writing a new OS that you don't support.
P.S. And there are hundreds (if not thousands) of different hand-held devices out there for scanning products, weights and measures, etc., many of which have their own flavour of OS.
Technically yes, as long as you limit the scope of "all" to applications that run on the major OSes.
It is theoretically possible to create an OS that could handle applications written for the 4-5 most common OSes, but the amount of work involved would be monumental.
Every time a new feature was added to any of those OSes, you'd need to add it to your OS too - so as well as being almost impossible to build, you'd need a large enough dev team to stay ahead of 4-5 of the largest dev teams/groups in the world.
No, but with virtualization you could have a single computer that can run any application.
First there is the practical impossibility of successfully following the evolution of an indefinite number of operating systems. Do we take embedded OSes into account? How about one-shot OSes for specific applications? How about proprietary OSes with no access to documentation?
Then there is also the - very difficult, if not impossible - problem of merging the various paradigms used in the wild. Ideally you would want OS services like the clipboard, or networking or ... or ... to work in a uniform way and allow applications to cooperate as if targeted to the same OS.
(Let's not even think about the various hardware-dependent applications.)
After all this, you should also consider what the application development for your own OS would be like...
I wonder if this is a good case for Gödel's incompleteness theorems :-)
PS: That said, there are quite a few projects attempting to bridge the various OS gaps:
http://en.wikipedia.org/wiki/List_of_computer_system_emulators
http://en.wikipedia.org/wiki/List_of_emulators#Operating_System_emulators
What you can do is use virtual machines, such as VMware's software, and emulate several operating systems on the same physical machine.
What do you mean by an operating system that can run all applications?
Applications are mostly written in a higher-level language and then translated into binary code that differs between machine architectures (like Intel and PowerPC) and operating systems (like Windows or Unix-based systems).
Java, for example, is cross-platform not because the language itself is cross-platform (any high-level language is), but because Java virtual machines exist for different architectures and operating systems and abstract away the heterogeneity of the underlying system.
It is definitely not theoretically impossible (nothing is, except for some mathematical problems), but can you imagine what one would have to do in order to make such a thing work? You can basically run Linux programs on Windows with Cygwin, and you can also run Windows programs on Linux with Wine. Both of those essentially recreate a small operating system (e.g. the Windows core) inside your other OS (e.g. Linux). This is probably not what you want.
To summarize, I can't imagine anyone really trying to do that. With all the money in the world, seriously. Better invest in writing native apps for the operating systems you want to support.