Building a language translation tool for smartphones - iPhone

I am implementing a language conversion tool to convert Spanish to English on a variety of smartphones: Android, BlackBerry, iPhone, Windows Phone 7.
How do I implement the language conversion? Searching, I am not finding any tutorial about this, and I don't have any experience with it.

Language translation is hard to get right, so you may want to lean on an existing translation service. Services such as Google Translate offer APIs, and you could try calling into them from your mobile application if that meets your requirements. Machine translation is a sub-field of computational linguistics, and it isn't a solved problem, though there are techniques you can read about.
Blog entry about Google's translation API: http://googlesystem.blogspot.com/2008/03/google-launched-another-ajax-api-this.html
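If you do call a web service, the client side is just an HTTP request plus JSON parsing. The sketch below (shown for the iPhone target) is only an illustration under assumptions: the endpoint URL, query parameters and "translatedText" response key are placeholders for whichever translation service you pick, not any provider's actual API.
#import <Foundation/Foundation.h>

// Sketch only: the endpoint, parameters and response key are placeholders for
// whichever translation service you choose, not a real provider's API.
void translateSpanishToEnglish(NSString *spanishText,
                               void (^completion)(NSString *english, NSError *error)) {
    NSString *escaped = [spanishText stringByAddingPercentEncodingWithAllowedCharacters:
                         [NSCharacterSet URLQueryAllowedCharacterSet]];
    NSString *urlString = [NSString stringWithFormat:
        @"https://translation.example.com/v1/translate?source=es&target=en&q=%@", escaped];

    [[[NSURLSession sharedSession] dataTaskWithURL:[NSURL URLWithString:urlString]
            completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (error != nil || data == nil) { completion(nil, error); return; }
        // Assumed response shape: {"translatedText": "..."}
        NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL];
        completion(json[@"translatedText"], nil);
    }] resume];
}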

Related

Are real-time cross-platform applications between HTML and Android possible?

I'm currently investigating the scope of my project and have come across an issue regarding the platforms on which it can operate. The initial goal is to create a cross-platform game across HTML, Android and iOS.
Is this type of application possible? It is important to note that it would require real-time (low-latency and consistent) interaction between the three platforms.
If so, what are some tools I should take advantage of while developing?
We are doing exactly this sort of thing using a third-party asset from the Unity Asset Store:
https://www.assetstore.unity3d.com/en/#/content/10872
and a custom Socket.IO (http://socket.io/) server implementation. It works like a champ and is totally agnostic as to whether the client is Unity3D or just a browser.
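For a rough feel of the same pattern from a plain native client (outside Unity), here is a minimal WebSocket sketch using NSURLSessionWebSocketTask (iOS 13+). It is an illustration only: the URL and message payloads are placeholders, and a real Socket.IO server would need a Socket.IO client library on the device, since Socket.IO adds its own handshake and framing on top of WebSockets.
#import <Foundation/Foundation.h>

// Sketch only: a bare WebSocket client illustrating the low-latency,
// bidirectional channel the game clients would share. The URL is a placeholder.
NSURLSessionWebSocketTask *connectToGameServer(void) {
    NSURL *url = [NSURL URLWithString:@"wss://game.example.com/play"];
    NSURLSessionWebSocketTask *task = [[NSURLSession sharedSession] webSocketTaskWithURL:url];
    [task resume];

    // Push a move to the server.
    NSURLSessionWebSocketMessage *move = [[NSURLSessionWebSocketMessage alloc]
        initWithString:@"{\"action\":\"move\",\"x\":3,\"y\":5}"];
    [task sendMessage:move completionHandler:^(NSError *error) {
        if (error != nil) NSLog(@"send failed: %@", error);
    }];

    // Wait for the next state update pushed by the server.
    [task receiveMessageWithCompletionHandler:^(NSURLSessionWebSocketMessage *message,
                                                NSError *error) {
        if (error == nil) NSLog(@"server says: %@", message.string);
    }];
    return task;
}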

Where can I find the Miracast specification?

I want to develop a Miracast application for Mac OS X (i.e. something to display imagery on a Miracast-enabled device). The only problem I'm having right now is that I can't find the official specification for this.
Is it possible that you need to be a member of the Wi-Fi Alliance to get this specification? Is this even an open standard?
Or better: is there an (open-source) Miracast library I can use?
Thanks!
Have you seen this? http://www.freedesktop.org/wiki/Software/openwfd/
As for the Wi-Fi Alliance, you don't need to be a member, but it will cost you $199: https://www.wi-fi.org/wi-fi-display-technical-specification-v11
The Wi-Fi Display spec is currently free (as in $0.00). The download still requires agreeing to a license agreement, and the spec does not appear to be free to redistribute.
Also, WDS is a new but fairly complete implementation for Linux and should be easy to port to other platforms, as it tries very hard to stay agnostic with respect to the stacks used to handle media playback and Wi-Fi Direct. That said, the most difficult bit in Miracast seems to be Wi-Fi Direct, so if your platform does not support that well, you're pretty much out of luck...
Disclaimer: I used to work on the WDS project.
As @Constantinos said, you will have to pay $200 to get the specification from the Wi-Fi Alliance.
Or, as you ask, you can look at the following implementations available on the internet:
Java
or C
I think there are enough examples here to do what you want.

Large-vocabulary speech recognition on iPhone without internet?

I used OpenEars, which needs a dictionary. It works well when the spoken word is in the dictionary, but I want to convert every word we speak. So I used Nuance's Dragon speech-recognition SDK, but it communicates with a web server, and I want to avoid server communication because of security concerns. Is it possible to convert speech to text for all the words we speak, as on Windows Mobile, entirely offline without communicating with a server?
Speech recognition with an unlimited vocabulary requires very large computational and memory resources (gigabytes of memory), and thus it's very hard to do on an iPhone or other embedded device. The iPhone is about 9 times slower than a desktop; the iPad is easier since it has a more powerful CPU.
Google has put a great deal of effort into making its engine work offline for dictation, and it still prefers to send data to the server because that is significantly more accurate.
Because of that, most solutions running on small devices use a limited vocabulary, though this vocabulary can be large enough that you will not notice. Usually 500-1000 words is enough to cover most practical situations. You can use OpenEars to recognize such a vocabulary.
To train a language model you need texts from your domain (words and expressions). Language-model training is described in the CMUSphinx tutorial. To use the language model, you can use the following OpenEars API call:
- (void)changeLanguageModelToFile:(NSString *)languageModelPathAsString
                   withDictionary:(NSString *)dictionaryPathAsString
See the API reference for more details.
You can use OpenEars with such a vocabulary and a corresponding language model to support free-form text entry on your device.
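As a rough sketch of wiring that call up (assuming the older OpenEars API in which this method lives on a PocketsphinxController you are already holding; the property name, header path and file names below are illustrative, and the model/dictionary files would be generated beforehand with the CMUSphinx tools or OpenEars' language-model generator):
#import <OpenEars/PocketsphinxController.h>  // header name may vary by OpenEars version

// Sketch only: self.pocketsphinxController is assumed to be a running OpenEars
// PocketsphinxController, and the file names are placeholders for a 500-1000 word
// domain-specific language model and phonetic dictionary bundled with the app.
NSString *lmPath  = [[NSBundle mainBundle] pathForResource:@"MyDomainVocabulary"
                                                    ofType:@"languagemodel"];
NSString *dicPath = [[NSBundle mainBundle] pathForResource:@"MyDomainVocabulary"
                                                    ofType:@"dic"];

[self.pocketsphinxController changeLanguageModelToFile:lmPath
                                        withDictionary:dicPath];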
It could be done, but if you are looking for an unlimited-vocabulary speech-to-text converter, then it is best if the computations are done on a server. The requirements for such a system are probably too great for a device such as a smartphone. The main areas where the requirements are huge are:
A dictionary to map input speech to text.
The computation needed to run the speech-recognition algorithms.
I believe this is why companies like Google run their speech-recognition services on a server and not on the phone.
But if the application only needs limited-vocabulary speech-to-text, then it might be worth giving it a try.
All the best!
Doesn't Pocketsphinx work on iPhone without network connectivity? Aren't there some demo apps floating around, like VocalKit?
http://www.rajeevan.co.uk/pocketsphinx_in_iphone/ may be helpful.

How to implement a language-translation facility in an iPhone application?

How do I implement a language-translation facility in an iPhone application?
I have found that for online mode this works, using the Google API (for example):
http://ajax.googleapis.com/ajax/services/language/translate?q=nature&v=1.0&langpair=en%7Cja
But how do I perform language translation in offline mode? Is any open-source API available?
Is it allowed to use the Google Translate API in an iPhone application?
(Languages: all languages supported by the iPhone.)
Natural language translation is an incredibly hard problem. Given the limited storage and computational resources available to a mobile device, I'd say that your best bet would be to leverage an existing online service, like Google's translation engine.
Otherwise, you could check out the libraries mentioned in this question: "Your favorite natural language parser?" to parse the language. Translation after that point will be another challenge. The answers to this question point out one or two resources for Python, but that's not really compatible with the iPhone.
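If you do go with the online route, the request is just an HTTP GET against the endpoint shown in the question plus JSON parsing, roughly as sketched below. This is only an illustration: that free v1 AJAX endpoint has since been deprecated by Google, and the response shape shown is assumed, so check the current Translate API and its terms of use before relying on it.
#import <Foundation/Foundation.h>

// Sketch only, against the AJAX endpoint shown in the question. That free v1
// endpoint has since been deprecated, so verify Google's current Translate API
// and terms of use before shipping anything built on it.
void translateOnline(NSString *text, NSString *langPair,   // e.g. @"en|ja"
                     void (^completion)(NSString *translated)) {
    NSCharacterSet *allowed = [NSCharacterSet URLQueryAllowedCharacterSet];
    NSString *urlString = [NSString stringWithFormat:
        @"http://ajax.googleapis.com/ajax/services/language/translate?v=1.0&q=%@&langpair=%@",
        [text stringByAddingPercentEncodingWithAllowedCharacters:allowed],
        [langPair stringByAddingPercentEncodingWithAllowedCharacters:allowed]];

    [[[NSURLSession sharedSession] dataTaskWithURL:[NSURL URLWithString:urlString]
            completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (error != nil || data == nil) { completion(nil); return; }
        // Assumed v1 response shape: {"responseData":{"translatedText":"..."},"responseStatus":200}
        NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL];
        NSDictionary *responseData = json[@"responseData"];
        completion(responseData[@"translatedText"]);
    }] resume];
}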

Semantic stuff (RDF, OWL) on mobile phones - is it possible?

I'm thinking about using semantic (web) technologies like RDF and OWL in an application on mobile devices. Currently I'm targeting Android, but I'd also be interested in the possibilities on the iPhone and on J2ME.
I would like to use a library instead of implementing everything from scratch.
I know that there are some libraries/frameworks like Jena, Redland, Protégé but they don't state on which platforms they are known to work.
Having a dynamic object model and parsing from and to XML are must-haves for me.
I'd also like to use reasoning, but I've been told it was rather computing-intensive, so that's only a nice-to-have.
For all platforms mentioned, the question can be interpreted as:
Is it possible in theory? (Especially for J2ME I'm not sure.)
Are there libraries that are known to work on those platforms?
Is the performance on a mobile platform good enough for real-world usage?
You wrote that you want J2ME, but other readers might be interested in C#.
Mono makes C# available on iPhone and Android. Once that is done, you can use ROWLEX to deal with RDF and OWL. You might consider reading this Stack Overflow question.
Maybe look into IYOUIT. It is a mobile application developed in Python, running on Nokia Series 60 phones.
It uses OWL and reasoning. You can read the details in this paper.
There's a Jena port to the Android platform here:
http://code.google.com/p/androjena/
If this is a client-server type application and you have some control over the server, I would do the semantic web stuff server-side, and hand the relevant information to your view client on the mobile device.
A more general answer to your question title is Mosembro, a browser for Android that utilizes Microformats for semantic data. It doesn't do any non-trivial computations with the data, however.
If you are not limited to a particular framework, you can use a REST API to handle server-client interactions.
More information here.