PhoneGap app performance vs. native app performance - iPhone

We are looking at getting a barcode-scanning application built. We are considering using PhoneGap, but our only worry is speed.
All the application will do is scan a barcode and check a server to see whether it's valid or not. The application uses the camera intensively, scanning the barcode from an image.
My main question is: will scanning via PhoneGap be just as fast as a native app? Speed is really important, as the user will have to scan multiple barcodes very quickly.

PhoneGap uses the same native APIs; it just abstracts them so that you can write your application in HTML and JavaScript. The time to take a picture, or to run any other native process, matters less than the time the user perceives: the portion of native execution time you need to expose to the user, plus the abstraction API time, plus UI responsiveness.
There is always some overhead from an abstraction, but I think it's negligible in an app like this (on phones newer than BB OS 5). The current issues originate from the hardware rendering the HTML and the browser software installed on the device.
A lot of BlackBerry phones don't use WebKit (OS 5 and below), and the browsers they do use can feel very sluggish while rendering web apps. BB OS versions below 5 don't have a production-worthy way of communicating between the native and JavaScript layers; the hack that's often seen is to set and poll for changes in cookies (sketched below). Android has always had a good design for JavaScript-to-native interaction, AFAIK.
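For context, here is a rough illustration of that cookie-polling hack; it is only a sketch, and the cookie name and polling interval are made up rather than part of any real bridge.

```typescript
// Sketch of the cookie-polling bridge hack (pre-WebKit BlackBerry browsers):
// the native layer writes a result into a cookie, and the web layer polls
// document.cookie until the value shows up. Names here are hypothetical.
function waitForNativeResult(onResult: (value: string) => void): number {
  return window.setInterval(() => {
    const match = document.cookie.match(/nativeResult=([^;]+)/);
    if (match) {
      // Clear the cookie so the next result isn't mistaken for this one.
      document.cookie = "nativeResult=; expires=Thu, 01 Jan 1970 00:00:00 GMT";
      onResult(decodeURIComponent(match[1]));
    }
  }, 100); // polling every 100 ms adds yet more perceived latency
}
```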
BlackBerry phones and many lower-end Android phones don't have GPUs, and some Android phones that do have GPUs don't compile WebKit for the GPU! Without this, your app's UI may have that sluggish feel; pages and buttons take that bit longer to respond, which is very noticeable when you're trying to whiz through menus.
This has improved a lot since PhoneGap was released. UI lag should continue to decrease to the point where even new low-end phones are production-ready for web apps, but in my experience we have not yet reached that point in 2011.

The phone's built-in software is what does the scanning and camera work. PhoneGap only triggers the event and helps transfer the data; the phone does all the work.
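As a rough sketch of that flow, assuming the community BarcodeScanner plugin (whose exact namespace has varied across plugin versions) and a hypothetical /validate endpoint on your server:

```typescript
// Sketch only: the plugin namespace and the validation endpoint are
// assumptions, not part of PhoneGap itself.
declare const cordova: {
  plugins: {
    barcodeScanner: {
      scan(
        onSuccess: (result: { text: string; format: string; cancelled: boolean }) => void,
        onError: (message: string) => void
      ): void;
    };
  };
};

function scanAndValidate(): void {
  cordova.plugins.barcodeScanner.scan(
    (result) => {
      if (result.cancelled) return;
      // The native layer has already done the camera and decode work; we only
      // ship the decoded string to the server and read back the verdict.
      fetch(`https://example.com/validate?code=${encodeURIComponent(result.text)}`)
        .then((response) => response.json())
        .then((body) => console.log("valid?", body));
    },
    (message) => console.error("scan failed: " + message)
  );
}
```

The PhoneGap-specific cost here is essentially just the plugin call and the callback back into JavaScript; the camera work and decoding run natively, as noted above.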

As others noted, the HTML5-based UI may feel sluggish. Maybe it's not an issue; you just have to try it and see. For scanning a barcode and uploading to a server, the PhoneGap overhead might not be significant.

I have developed a smartphone app in PhoneGap where barcode scanning is an alternative to the primary function of scanning an image, which is recognized by picture-matching technology. I have not compared this to native app performance, but I can say that for my basic UI (it is a web app for the smartphone), my web pages render fast enough not to be an issue. I have observed this performance on a 600 MHz smartphone CPU (LG Optimus One running Android 2.2.1).
The picture matching as well as the barcode scanning is done on a server backend, not on the smartphone itself. The issue then becomes one of networking speed: from the smartphone over Wi-Fi or the service provider's network, across the Internet to the server, and then the response from the server back to the smartphone. The processing for picture matching or barcode scanning has to take less than a second (ideally half a second) so that, by the time networking delay is added, the user still sees a 1-2 second response time.
The image files I am transferring from smartphone to server are targeted to be around 40 KB. On a typical 54 Mbps Wi-Fi network, or at the going rate of around 40 Mbps on HSPA+ service provider networks, I find the performance of my app to be suitable. Even with a fair-signal Wi-Fi speed of 15 Mbps, end-user response is acceptable, between 1 and 2 seconds.
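A quick back-of-the-envelope check of those numbers (using the figures quoted above and ignoring TCP/TLS handshakes and radio wake-up time) shows the raw transfer is only a small slice of the 1-2 second budget:

```typescript
// Rough transfer-time estimate for a ~40 KB upload at the link speeds
// mentioned above; in practice, response time is dominated by round trips
// and server-side processing, not by the bytes on the wire.
const payloadBytes = 40 * 1024;
for (const linkMbps of [54, 40, 15]) {
  const seconds = (payloadBytes * 8) / (linkMbps * 1_000_000);
  console.log(`${linkMbps} Mbps -> ~${Math.round(seconds * 1000)} ms on the wire`);
}
// 54 Mbps -> ~6 ms, 40 Mbps -> ~8 ms, 15 Mbps -> ~22 ms
```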
The pace of smartphone development (dual-core processors) and service provider networks (4G, HSPA+) will only take the industry higher. It is a tremendous opportunity for app development moving forward.
Side Topic:
I am using ZBar on the server for barcode scanning, and I am hunting for better alternatives. The challenge with ISBN barcode scanning from smartphones with non-zoom, non-macro lenses is that the typical barcode is too small for "simple" barcode-scanning algorithms to work properly. I'd like to hear about alternatives and people's experience with barcode scanning. I am looking for code I can deploy on my server backend, as opposed to running smartphone-resident barcode scanning.
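For reference, the server-side step in this kind of setup can be little more than a wrapper around the ZBar command-line tool. This is only a sketch; it assumes zbarimg is installed on the backend host and that the upload handler has already written the image to disk.

```typescript
// Sketch: shell out to the ZBar CLI and return the decoded string.
import { execFile } from "node:child_process";

function scanBarcode(imagePath: string): Promise<string> {
  return new Promise((resolve, reject) => {
    // -q suppresses the summary line; --raw prints only the decoded data.
    execFile("zbarimg", ["-q", "--raw", imagePath], (error, stdout) => {
      // zbarimg exits non-zero when it cannot find a symbol, which execFile
      // reports as an error.
      if (error) return reject(error);
      resolve(stdout.trim());
    });
  });
}
```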

Related

How can I get real-time heart rate data in a progressive web app for phones?

I'm building a progressive web application (the target is smartphones for now). The app needs to be able to access heart rate and heart rate variability, ideally in real time. While it seems totally asinine, I'm open to using REST calls to some remote server if that is the only way. I'm also fine with restricting the app to only work with certain hardware if necessary. In this case, the ideal hardware would be some sort of earbud that uses optics to scan for heart rate, but at this point, I'm open...
The best that I have thought up is to find a heart rate monitor that converts the direct signal into audio and use the microphone web API. That seems like a lot more work than ideal, so I'm hoping someone has a better idea. Any ideas are welcome. Please, no one downvote anyone if it doesn't solve all my constraints. I've been working on this for a bit and I'm not sure that there is a clean and perfect solution yet. Thanks in advance!
If the sensor can speak Bluetooth, the Web Bluetooth API can perhaps help: https://developer.mozilla.org/en-US/docs/Web/API/Web_Bluetooth_API
https://developers.google.com/web/updates/2015/07/interact-with-ble-devices-on-the-web
How about using Web Bluetooth, which lets you control any Bluetooth Low Energy device, such as a heart rate monitor? You can read the Body Sensor Location characteristic (which tells you where the sensor is placed, i.e. which body part) and subscribe to notifications from the Heart Rate Measurement characteristic, meaning you will get an event whenever the device performs a new measurement. Then use a service worker to define the behavior of the app and mimic native app capabilities like offline support and notifications.
It also pairs with the Physical Web: you can send a link to your website from a Bluetooth beacon to a user's device, and with a PWA that link can point to a web app that looks, feels, and functions like a native app. Then, with Web Bluetooth, you can speak to the device. Visit this blog post for more details.
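Here is a minimal Web Bluetooth sketch of that flow using the standard GATT Heart Rate service; it assumes a browser with Web Bluetooth support and typings for navigator.bluetooth (e.g. @types/web-bluetooth).

```typescript
// Connect to a heart rate monitor and log each new measurement.
// Note: requestDevice must be called from a user gesture (e.g. a click).
async function startHeartRate(): Promise<void> {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: ["heart_rate"] }],
  });
  const server = await device.gatt!.connect();
  const service = await server.getPrimaryService("heart_rate");

  // Where the sensor sits on the body (chest, wrist, ear lobe, ...).
  const location = await service.getCharacteristic("body_sensor_location");
  console.log("sensor location code:", (await location.readValue()).getUint8(0));

  // One notification per new measurement.
  const hrm = await service.getCharacteristic("heart_rate_measurement");
  await hrm.startNotifications();
  hrm.addEventListener("characteristicvaluechanged", (event) => {
    const value = (event.target as BluetoothRemoteGATTCharacteristic).value!;
    const is16Bit = (value.getUint8(0) & 0x01) === 1; // flags byte, bit 0
    const bpm = is16Bit ? value.getUint16(1, true) : value.getUint8(1);
    console.log("heart rate:", bpm, "bpm");
  });
}
```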

Google Cardboard controlling PC

When I saw Google Cardboard for Unity, I assumed this meant that you would be able to make a Unity PC game and use your phone as a screen/controller. All I can see is it wanting me to make an Android app, which is all well and good, but it doesn't allow for input from the keyboard.
Is there a way to stream the Unity PC project to the device and retrieve input from it (i.e. head tracking, the NFC/magnet trigger)?
The problem with such a solution is latency. In VR, latency is a big deal: the overall latency from input to photons reaching your eyes should be 20 ms or lower. Regular games have 30-60 ms of latency by themselves. Add to that the gyro latency and the phone display latency, and if you then add another 25 ms or more of network ping to your VR experience, it's going to be painful and may even make you sick. If you want to read more on why latency is such a big deal in VR, Michael Abrash wrote an excellent blog post about it: post on latency
If you really must use a keyboard for navigation, consider a Bluetooth keyboard that can be used with Android devices. Also keep in mind that with current technology, especially without a dedicated headset, really dynamic VR experiences probably won't work very well and can make some people uncomfortable or sick. For a good read on designing virtual reality experiences, refer to this guide from Oculus: http://static.oculus.com/sdk-downloads/documents/Oculus_Best_Practices_Guide.pdf
There's nothing in the Cardboard SDK for talking with a PC-hosted Unity game. You could adapt the code from the Unity Remote 4 project:
https://www.assetstore.unity3d.com/en/#!/content/18106
We are developing the kind of app you want, except it uses Gear VR instead of Cardboard. Please check the link below.
http://challengepost.com/software/airvr
Streaming from your PC to your phone's Cardboard is possible using third-party apps, such as Trinus VR (the client app on your phone) and Vireio (the streaming app on your computer). The two apps will then communicate via your home network (Wi-Fi or other) to stream the images.

Testing iOS apps on real devices vs. the Simulator

I am new to iPhone/iPad development and I am close to finishing up my first app and I am looking for some general advice.
I know it is important to test on actual devices and not just the simulator. What are the types of things people generally encounter when testing on a real device that they don't see in the simulator?
The app itself is mainly a way to track online deals and that type of thing. It doesn't need anything special in terms of using things like the camera or GPS.
It's just general usage testing. The device runs in an entirely different environment than your computer, and testing on it is the best way to make sure that nothing unexpected will happen when you push your app out to devices. For example, the phone/pad may have limited data coverage, low-memory situations, incoming calls, etc. These situations are a lot more common on devices than when people emulate them through the simulator.
From a hardware point of view, the device uses a different processor architecture than your Mac, which also needs to be accounted for (not as much as in other cases, but you need to cover your bases). The Mac also cannot reliably emulate RAM, disk space, processor speed, etc., hence testing on the device is useful here too.
Obviously there are some features you can only test on devices, such as the camera and GPS (and, not so obviously, iPod library usage), and if your app uses them it would be careless not to test on a device.
Overall if you're intending to release your application to the App Store, or to devices at least, it's worth testing on the device itself. Only then can you be sure that it will act and perform as expected on the platform you intend to target. The simulator is only a simulator after all, not the real thing!
First of all: the user experience is very different.
Mouse-based interaction is very different from touch interaction, and focusing on a monitor feels very different from looking at a device in the palm of your hand.
Also, the experience of animations running on the simulator versus the real device can be very different.
And usage in the simulator won't tell you anything about the battery consumption you will see on the real device.
My opinion: every app that will be shipped to the App Store or to customers for testing should be tested on several different real devices. No excuses.
Simulator runs a lot slower than the real device.
The real device could run out of memory when the Simulator doesn't, or vice versa.
In-app purchases, if you have included them
Orientations (not that they are unavailable on the simulator, but it is easy to forget them there!)
App life cycle testing - bringing your app to the foreground and background.
Network access - it matters when the device accesses the network over a wireless or cellular network versus the LAN/Wi-Fi on your Mac. There is a lot of testing to be done under the umbrella called Reachability if your app uses any resources across the net. Per App Store requirements, you are bound to provide an alert if the network is unreachable before using any such resources (a conceptual sketch of such a check follows).
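On iOS itself the Reachability/SystemConfiguration APIs do this job; purely as a language-agnostic illustration of the idea (shown in TypeScript, with a hypothetical probe URL), a reachability check is just a short, time-limited request followed by an alert on failure.

```typescript
// Conceptual sketch only: probe a known endpoint before using networked
// resources, and surface an alert if it cannot be reached. On iOS you would
// use the Reachability/SystemConfiguration APIs instead.
async function isReachable(probeUrl: string, timeoutMs = 3000): Promise<boolean> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    await fetch(probeUrl, { method: "HEAD", signal: controller.signal });
    return true;
  } catch {
    return false; // network down, host unreachable, or timed out
  } finally {
    clearTimeout(timer);
  }
}
```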

Ocropus Engine on iPhone and/or Android

What is the best way to get Ocropus running on iOS and/or Android?
I'm interested in using Ocropus to digitize some content on mobile devices. I'm largely interested in using a trained 'language' model to make predictions on the device. Training will occur offline and off-device. I know a few people have got Tesseract running on mobile devices, but I'm unable to find much information on doing the same with Ocropus. I'd greatly appreciate a slice of your collective wisdom in an effort to avoid wasting days taking the wrong path.
Would it be easier to just prototype the algorithm using the scripts, then grab the specific C++ code of interest and include it directly in my application? Or is it best to compile it as a static/dynamic library?
It would be better to set up a simple web service that uses Ocropus, or any OCR library for that matter, and have your smartphone application make requests to that web service. OCR is a CPU-intensive process, so it's appropriate to move it off the phone.
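The phone-side half of that approach is just an image upload. Here is a minimal sketch; the endpoint URL and the JSON response shape are assumptions rather than anything Ocropus provides.

```typescript
// Thin client: upload the photo and let the server run Ocropus (or any OCR
// engine). Endpoint and response format are hypothetical.
async function recognize(photo: Blob): Promise<string> {
  const form = new FormData();
  form.append("image", photo, "page.jpg");

  const response = await fetch("https://example.com/ocr", {
    method: "POST",
    body: form,
  });
  if (!response.ok) {
    throw new Error(`OCR service returned ${response.status}`);
  }

  const { text } = await response.json(); // assumed shape: { "text": "..." }
  return text;
}
```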

iPhone Platform Constraints

I'm analyzing the iPhone platform (for a paper). I've made a list of issues that developers/architects have to consider before working with the iPhone SDK.
The question is aimed at people who want to release iPhone software: what constraints restrict them in comparison to other mobile platforms, such as Android, Windows Mobile, Symbian, etc.?
Feel free to add hurdles I may have forgotten to list.
Thanks.
iPhone platform constraints/hurdles:
No physical keyboard
No replaceable battery
One application at a time
Sandboxed file system
Restricted deployment cycle (Dev program...)
App Store approval process
No replaceable battery is no concern for software developers whatsoever, as there are no APIs for battery manipulation or replacement. This is no more of a concern for iPhone developers than "access to electricity" is a practical concern for developing for other platforms.
Others I would add:
Requires a Mac. Fairly obvious one, not a terrible barrier to entry compared to other closed systems like game consoles, but still higher than some other phone/mobile platforms like Windows Mobile, J2ME or Brew.
Costs money to debug on real hardware. You can only run and debug in the simulator unless you buy a $99 developer program subscription, which lets you pair iPhone and iPod touch hardware with your Xcode install and run apps on it.
Objective-C as the programming language. It really shouldn't deter anyone but a lot of developers get really grumpy about learning anything new or different.
Must accommodate interruptions (i.e., the user may get a call at any time and the app must be prepared to save any state necessary and quit within a fixed time limit).
Not specific to iPhone but like any platform, you are constrained by the CPU/GPU/RAM the device has, and in the iPhone's case this is obviously quite a bit less hardware than people with a desktop background are accustomed to.
Restrictive wording in EULA regarding embedded scripting languages. It is apparently forbidden to execute any scripts via an iPhone application, which is quite a bummer as embedded scripting languages are quite common these days and very useful.
Limited CPU speed
Limited RAM
Objective-C is effectively the main dev language
Power management concerns (I'm not sure if the lack of a replaceable battery is a concern of mine). High CPU utilization can be a drain on the battery (and cause extra heat). In other words, there are CPU-intensive things I choose not to do in order not to drain the battery too fast.
Only one IDE
Inability to access other apps' data easily