Ocropus Engine on iPhone and/or Android

What is the best way to get ocropus running on iOS and/or android?
I'm interested in using Ocropus to digitize some content on mobile devices. I'm largely interested in using a trained 'language' model to make predictions on the device; training will occur offline and off device. I know a few people have got Tesseract running on mobile devices, but I'm unable to find much information on doing the same with Ocropus. I'd greatly appreciate a slice of your collective wisdom in an effort to avoid wasting days taking the wrong path.
Would it be easier to just prototype the algorithm using the scripts, then grab the specific C++ code of interest and include it directly in my application? Or would it be better to compile it as a static/dynamic library?

It would be better to set up a simple web service that uses Ocropus, or any OCR library for that matter, and have your smartphone application make requests to it. OCR is a CPU-intensive process, so it makes sense to move it off the phone.
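For illustration, a minimal sketch of such a service in Node.js/TypeScript follows. The `ocr-cli` command, route, and port are hypothetical placeholders for whatever wraps your trained Ocropus model on the server; the point is only the shape of the solution: the phone POSTs the image, the server runs OCR and returns plain text.

```typescript
// Minimal OCR web service sketch (Express). "ocr-cli" is a hypothetical
// stand-in for the command that runs your trained Ocropus model.
import express from "express";
import { execFile } from "child_process";
import { writeFile } from "fs/promises";
import { tmpdir } from "os";
import { join } from "path";

const app = express();
app.use(express.raw({ type: "image/*", limit: "5mb" }));

app.post("/ocr", async (req, res) => {
  // Persist the uploaded image so the CLI tool can read it.
  const imagePath = join(tmpdir(), `scan-${Date.now()}.png`);
  await writeFile(imagePath, req.body);

  // Hypothetical CLI invocation; substitute your real Ocropus pipeline.
  execFile("ocr-cli", [imagePath], (err, stdout) => {
    if (err) return res.status(500).json({ error: String(err) });
    res.json({ text: stdout.trim() });
  });
});

app.listen(3000, () => console.log("OCR service listening on :3000"));
```

The mobile app then only needs an HTTP client and an image upload, which every platform provides natively.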

Related

Movesense with Unity BLE plugin

I am trying to get the Movesense to work with a Unity BLE asset, as originally I thought the Movesense would be simple enough. I have managed to connect to it and subscribed to the service starting with "61353090-" and the characteristic starting with "34802252-". I think I even got some notifications. Now the problem is that I am not receiving, or am not able to decode, any data from there.
I also ended up reading the example code and found out about the complex system the Movesense uses, the "whiteboard", which I am unfamiliar with. I cannot find anything sensible by googling, as a whiteboard is a whiteboard :)
Now my questions are:
What should I do to progress? Do I need to write something to the "17816557"?
What is the "whiteboard" actually?
Would it actually be smarter to just make a Unity plugin for the Movesense?
Thank you
You are quite right that the answer is in the "Whiteboard" component. Whiteboard is the embedded REST framework (note: it is not over HTTP!) that Movesense uses to implement REST services within a device as well as between devices (e.g. over UART or BLE). As you can imagine it is not a simple component, so decoding the traffic without Amersports'/Suunto's help is quite a big challenge. The actual BLE layer is simple: one characteristic for each direction (write & notify); the complexity lies in what goes inside that data pipe.
However, if you are trying to use Unity to make a mobile app the situation is not so bad. There has been a prototype of Movesense mobile library integration for Unity (Android) that uses the existing Movesense mobile library. If you ask Movesense team (info (at) movesense.com) they might be able to help you further. For Windows (Unity or plain) there is nothing done (at least not yet) mainly because until Windows 10 there was no official BLE API for Windows.
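To make the BLE layer concrete, here is a sketch of that write-and-notify pattern using the Web Bluetooth API (a Unity BLE asset exposes the same operations under different names). The full UUIDs are placeholders, since only their prefixes appear above, and the bytes you write must be valid Whiteboard frames, which is exactly the undocumented part.

```typescript
// Generic BLE "one characteristic per direction" pattern via Web Bluetooth.
// All three UUIDs are placeholders: the question only shows their prefixes.
const SERVICE_UUID = "61353090-0000-0000-0000-000000000000"; // placeholder
const WRITE_UUID = "17816557-0000-0000-0000-000000000000"; // placeholder
const NOTIFY_UUID = "34802252-0000-0000-0000-000000000000"; // placeholder

async function connect(): Promise<void> {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: [SERVICE_UUID] }],
  });
  const server = await device.gatt!.connect();
  const service = await server.getPrimaryService(SERVICE_UUID);

  // Notifications carry Whiteboard responses and subscription data.
  const notifyChar = await service.getCharacteristic(NOTIFY_UUID);
  await notifyChar.startNotifications();
  notifyChar.addEventListener("characteristicvaluechanged", (ev) => {
    const value = (ev.target as BluetoothRemoteGATTCharacteristic).value!;
    console.log("notify:", new Uint8Array(value.buffer));
  });

  // Requests are Whiteboard-encoded frames written to the other
  // characteristic; the frame encoding itself is Movesense-specific
  // and not shown here.
  const writeChar = await service.getCharacteristic(WRITE_UUID);
  await writeChar.writeValue(new Uint8Array([/* Whiteboard request frame */]));
}
```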
Full disclosure: I work for the Movesense team

Are real time cross-platform applications between html and android possible

I'm currently investigating the scope of my project and have come across an issue with regard to the platforms on which it can operate. The initial goal is to create a cross-platform game across HTML, Android and iOS.
Is this type of application possible? It is important to note that it would require real-time (low-latency and consistent) interaction between the three platforms.
If so, what are some tools I should take advantage of while developing?
We are doing this exact sort of thing using a 3rd-party asset within Unity:
https://www.assetstore.unity3d.com/en/#/content/10872
and a custom Socket.IO (http://socket.io/) server implementation. Works like a champ and is totally agnostic whether the client is Unity3D or just a browser.
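As a sketch of what the server side of that setup can look like (the event name and port are made up for illustration), a minimal Socket.IO relay treats browser clients and Unity clients identically:

```typescript
// Minimal Socket.IO relay: browsers and Unity clients (via a Socket.IO
// client asset) join the same event stream and are indistinguishable here.
import { Server } from "socket.io";

const io = new Server(3000, { cors: { origin: "*" } });

io.on("connection", (socket) => {
  // "playerMove" is a hypothetical event name for this sketch.
  socket.on("playerMove", (state) => {
    // Rebroadcast to all other clients with minimal server-side work,
    // keeping the round trip short for real-time play.
    socket.broadcast.emit("playerMove", state);
  });
});
```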

Enabling Kinect In a Browser using NaCl

While working on a project with the Kinect, I had the idea of integrating it into a web browser directly from the device. I was wondering if someone has done this before, or if there exists some information that can shed light on it.
In More Detail:
I've been dissecting the Kinect Fusion application that is provided with the Kinect, and I was wondering what it would take to have a browser do direct-to-device 3D scanning. I've discovered NaCl, which claims that it can run native code, but I don't know how well it would run Microsoft native code (from the Kinect SDK version 2, which is what I'm using). Also, just looking at NaCl with no prior experience, I currently cannot imagine what steps to take to actually activate the Kinect and have it start feeding the image render to the browser.
I know there exist some libraries that allow the Kinect to work on other operating systems, and I was wondering whether those libraries would let me get a general bitmap to send to the pp::Graphics2D stuff in NaCl (for the image display). I would then need to figure out how to actually present that in the browser itself, have it run the native code in the background to create the 3D image, and then save it to the local computer.
I figured "let me tap the power of the stack." I'm afraid of an overflow, but you can't break eggs without making a few omelettes. Any information would be appreciated! If more information is needed, ask and I shall try my best to answer.
This is unlikely to work, as Native Client doesn't allow you to access OS-specific libraries.
Here's a library which uses NPAPI to allow a web page to communicate with the native kinect library: https://github.com/doug/depthjs. NPAPI will be deprecated soon, so this is not a long-term solution.
It looks like there is an open-source library for communicating with the Kinect: https://github.com/OpenKinect/libfreenect. It would be a decent amount of work, but it looks like it should be possible to reverse-engineer the protocol from this library and perform the communication in JavaScript, via the chrome.usb APIs.
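As a starting point, device discovery with chrome.usb looks like the sketch below (a Chrome App with the "usb" permission is required). The product ID is the Kinect v1 camera ID as listed in libfreenect and should be treated as an assumption; everything past opening the device, i.e. the init sequence and frame transfers, would have to be ported from libfreenect.

```typescript
// chrome.usb device discovery sketch (Chrome Apps only; needs the "usb"
// permission in the manifest). Vendor 0x045e is Microsoft; product 0x02ae
// is the Kinect v1 camera as listed in libfreenect (assumption: verify).
const KINECT = { vendorId: 0x045e, productId: 0x02ae };

chrome.usb.getDevices(KINECT, (devices) => {
  if (!devices || devices.length === 0) {
    console.log("No Kinect found");
    return;
  }
  chrome.usb.openDevice(devices[0], (handle) => {
    // From here you would replay the init sequence libfreenect performs,
    // then pull frames with chrome.usb.bulkTransfer / isochronousTransfer.
    console.log("Kinect opened, handle:", handle.handle);
  });
});
```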
Try EuphoriaNI. The library and some samples are available at http://kinectoncloud.com/. Currently, only the version for AS3 is posted on the site though. The version for the Web, of course, requires you to install a service on your computer (it's either that or a browser plug-in... and nobody likes those :)

Phonegap app performance vs native app performance

We are looking at getting a barcode-scanning application built. We are considering using PhoneGap, but our only worry is speed.
All the application will do is scan a barcode and check a server to see if it's valid or not. The application uses the camera intensively to scan the barcode via an image.
My main question is, will scanning via phonegap be just as fast as a native app? Speed is really important as the user will have to scan multiple barcodes very quickly.
PhoneGap uses the same native APIs; it just abstracts them so that you can write your application in HTML and JavaScript. The time to take a picture or run any other native process matters less than the time the user perceives: the portion of native execution time exposed to the user, plus the abstraction API time, plus UI responsiveness.
There is always overhead from an abstraction, but I think it's negligible in an app like this (on phones newer than BB OS5). The current issues originate from the hardware rendering the HTML and the browser software installed on the device.
A lot of BlackBerry phones don't use WebKit (OS5 and below), and the browsers they do use can seem very sluggish while rendering webapps. BB OS versions below 5 don't have a production-worthy way of communicating between the native and JavaScript layers; the hack that's often seen is to set and poll for changes in cookies. Android has always had a good design for JavaScript-to-native interaction, afaik.
BlackBerry phones and many lower-end Android phones don't have GPUs, and some Android phones that do have GPUs don't compile WebKit for the GPU! Without this, your UI may have that sluggish feel: pages and buttons take that bit longer to respond, which is very noticeable when you're trying to whiz through menus.
This has improved a lot since PhoneGap was released. UI lag should continue to decrease to a point where even new low-end phones are production-ready for webapps, but from my experience we've not yet reached that point in 2011.
The phone's built-in software is what does the scanning and camera action. PhoneGap will only trigger the event and help transfer the data, but the phone does all the work.
As others noted, the HTML5-based UI may feel sluggish. Maybe it's not an issue; you just have to try it and see. For scanning a barcode and uploading the result to a server, the PhoneGap overhead might not be significant.
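To illustrate how thin the JavaScript layer is, here is a sketch using the commonly used BarcodeScanner plugin for PhoneGap/Cordova; the validation URL and response shape are made up for this example. The camera and decoding run in native code, and JavaScript only receives the result:

```typescript
// Sketch with the common PhoneGap/Cordova BarcodeScanner plugin: the scan
// runs natively; JavaScript just triggers it and handles the result.
declare const cordova: any; // provided by the PhoneGap/Cordova runtime

function scanAndValidate(): void {
  cordova.plugins.barcodeScanner.scan(
    (result: { text: string; format: string; cancelled: boolean }) => {
      if (result.cancelled) return;
      // Hypothetical validation endpoint; only this round trip and the
      // UI update happen in the webview.
      fetch("https://example.com/validate?code=" + encodeURIComponent(result.text))
        .then((res) => res.json())
        .then((body) => console.log("valid:", body.valid));
    },
    (error: string) => console.error("Scan failed:", error)
  );
}
```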
I have developed a smartphone app where barcode scanning is an alternative to the primary function of scanning an image that is recognized by picture-matching technology. I use PhoneGap. I have not compared this to native app performance, but I am able to say that, for my basic UI (it is a web app for the smartphone), my web pages are rendered fast enough not to be an issue. This performance has been observed on a 600MHz smartphone CPU (LG Optimus One running Android 2.2.1).
The picture matching as well as the barcode scanning is done on a server backend, not on the smartphone itself. The issue becomes one of networking speed: from the smartphone over WiFi or the service provider network, over the Internet and onto the server, and then the response from the server back to the smartphone. The processing speed of picture matching or barcode scanning has to be under a second (ideally half a second), so that by the time networking delay is added, it is still a 1-2 second response time for the user.
The image files that I am transferring from smartphone to server are targeted to be around 40KB. On a typical 54Mbps WiFi network, or at the going rate of around 40Mbps on HSPA+ service provider networks, I find the performance of my app to be suitable. Even with a fair-signal WiFi speed of 15Mbps, end-user response is acceptable at 1-2 seconds.
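For a rough sanity check on those numbers: 40KB is about 320 kilobits, which takes roughly 320/15,000 ≈ 0.02 seconds even on the 15Mbps connection. So the raw transfer is a small slice of the 1-2 second budget; round-trip latency and the server-side recognition dominate.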
The pace of smartphone development (dual core processors) and service provider networks (4G HSPA+) will only take the industry higher. It is a tremendous opportunity for apps development moving forward.
Side Topic:
I am using ZBar code on the server for barcode scanning, and I am hunting for better alternatives. The challenge with ISBN barcode scanning from smartphones with non-zoom, non-macro lenses is that the typical barcode size is too small for "simple" barcode scanning algorithms to work properly. I'd like to hear about alternatives and people's experience with barcode scanning. I would be looking for code that I can deploy on my server backend, as opposed to running smartphone-resident barcode scanning.
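For reference, a minimal server-side ZBar wrapper can be as small as shelling out to the zbarimg tool from zbar-tools (the surrounding Node.js service is assumed, not part of ZBar):

```typescript
// Server-side ZBar sketch: shells out to the zbarimg CLI from zbar-tools.
// --raw prints only the decoded data; -q suppresses per-image statistics.
import { execFile } from "child_process";

function decodeBarcode(imagePath: string): Promise<string> {
  return new Promise((resolve, reject) => {
    execFile("zbarimg", ["--raw", "-q", imagePath], (err, stdout) => {
      if (err) return reject(err); // zbarimg exits non-zero if nothing decodes
      resolve(stdout.trim());
    });
  });
}
```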

Semantic stuff (RDF, OWL) on mobile phones - is it possible?

I'm thinking about using semantic (web) technologies like RDF and OWL in an application on mobile devices. Currently I'm targeting android, but I'd also be interested in the possibilities on the iPhone and on J2ME.
I would like to use a library instead of implementing everything from scratch.
I know that there are some libraries/frameworks like Jena, Redland, Protégé but they don't state on which platforms they are known to work.
Having a dynamic object model and parsing from and to XML are must-haves for me.
I'd also like to use reasoning, but I've been told it was rather computing-intensive, so that's only a nice-to-have.
For all platforms mentioned, the question can be interpreted as:
Is it possible in theory? (especially for J2ME I'm not sure)
Are there libraries that are known to work on those platforms?
Is the performance on a mobile platform good enough for real world usage?
You wrote you want J2ME, but other readers might be interested in C#.
Mono makes C# available on iPhone and Android. Once that is done, you can use ROWLEX to deal with RDF and OWL. You might consider reading this Stackoverflow question.
Maybe look into IYOUIT. It is a mobile application developed in Python, and running on Nokia Series 60 phones.
It uses OWL and reasoning. You can read the details in this paper.
There's a Jena port to the Android platform here:
http://code.google.com/p/androjena/
If this is a client-server type application and you have some control over the server, I would do the semantic web stuff server-side, and hand the relevant information to your view client on the mobile device.
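A sketch of that split: the device never parses RDF or runs a reasoner; it just sends a SPARQL query to the server over the standard SPARQL HTTP protocol and renders the JSON results. The endpoint URL and query below are placeholders.

```typescript
// Client-side sketch of the "semantics on the server" approach: query a
// SPARQL endpoint (placeholder URL) and consume the standard JSON results.
async function queryLabels(endpoint: string): Promise<string[]> {
  const query = `
    SELECT ?label WHERE {
      ?s <http://www.w3.org/2000/01/rdf-schema#label> ?label .
    } LIMIT 10`;

  const res = await fetch(endpoint + "?query=" + encodeURIComponent(query), {
    headers: { Accept: "application/sparql-results+json" },
  });
  const data = await res.json();
  // SPARQL JSON results: bindings is an array of variable-to-value maps.
  return data.results.bindings.map((b: any) => b.label.value);
}

// Example: queryLabels("https://example.org/sparql").then(console.log);
```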
A more general answer to your question title is Mosembro, a browser for Android that utilizes Microformats for semantic data. It doesn't do any non-trivial computations with the data, however.
If you are not limited to a certain framework, you can use a REST API to handle server-client interactions.
More information here.