Does the IBM Bluemix watson_developer_cloud API support Japanese to English translation?
I am trying to translate a Japanese sentence to English using the Python sample code provided in the watson_developer_cloud examples, but I am getting "?????????" as output. There are three domains: conversational, news, and patent. Even the Language Translation demo for IBM Bluemix only shows support for news.
text = ('は自動注入ですので').encode('utf-8')
print(json.dumps(language_translator.translate(text, source='ja', target='en'), indent=2, ensure_ascii=False))
If instead I use:
print(json.dumps(language_translator.translate(text, model_id='ja-en-conversational'), indent=2))
I get a "model_id not found" error. Please help!
Based on the documentation it seems that only News is supported at this time:
The Release Notes mention the addition of English to/from Japanese for the News category.
Japanese is not yet listed in the fully supported languages.
I suggest asking in the Watson Developer Slack, where the development team is active.
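Before picking a model_id, it can also help to ask the service which models actually exist for your instance. A minimal sketch, assuming the watson_developer_cloud Python SDK with placeholder credentials (the listing call is named get_models() in older LanguageTranslatorV2 releases and list_models() in newer ones):

import json
from watson_developer_cloud import LanguageTranslatorV2

# Placeholder credentials; use the ones from your Bluemix service instance.
language_translator = LanguageTranslatorV2(
    username='YOUR_USERNAME',
    password='YOUR_PASSWORD')

# List every model the instance knows about and keep the Japanese ones.
# Older SDK releases name this get_models(); newer ones use list_models().
models = language_translator.get_models()
for model in models['models']:
    if model['source'] == 'ja' or model['target'] == 'ja':
        print(model['model_id'], model['domain'])

If ja-en-conversational does not appear in that list, the "model_id not found" error is the expected behavior rather than a bug in your code.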
Related
An answer provided by @Simon O'Doherty in the post How to capture the multiple values of one entity in IBM watson assistant after asking slot?
said that we can tag an entity in an intent; however, in my Watson Assistant I do not see this function. Has it been deleted, or is it a version-limited function?
If not, does anyone know how to enable it?
My Watson Assistant does not show any tag option when I try to edit an intent:
I ran into this too, for German. Support for tagging contextual entities is coming slowly to non-English languages. There is an IBM Watson Assistant doc page showing features and supported languages; Japanese and German are not (yet?) supported for the tagging you asked about.
Does Watson NLU support Hebrew, including entities and sentiment? I couldn't find it in the documentation.
Thanks very much,
Lior
Lior, according to the documentation, no. Not yet.
But I can suggest a workaround: use the Language Translator service to translate your Hebrew text to English, then run each NLU feature on the English text (English has the most features available); a sketch of this approach follows after this answer.
The official documentation has a list of the supported languages:
The following table shows the supported languages for each feature. Natural Language Understanding automatically detects the language of your source text by default. You can override automatic language detection if you want to specify the language manually.
For the supported-languages table for each feature, see the official reference here.
You can also see this article about other APIs and their supported languages. [Last updated: 24 April 2017]
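A minimal sketch of that translate-then-analyze workaround, assuming the watson_developer_cloud Python SDK, placeholder credentials, and that a he-en model is available to your instance (list the models first to check):

from watson_developer_cloud import LanguageTranslatorV2, NaturalLanguageUnderstandingV1
from watson_developer_cloud.natural_language_understanding_v1 import Features, EntitiesOptions, SentimentOptions

# Placeholder credentials; substitute your own from Bluemix.
translator = LanguageTranslatorV2(username='...', password='...')
nlu = NaturalLanguageUnderstandingV1(version='2017-02-27',
                                     username='...', password='...')

hebrew_text = u'שלום עולם'  # "Hello world"

# Step 1: translate the Hebrew source into English.
translation = translator.translate(hebrew_text, source='he', target='en')
english_text = translation['translations'][0]['translation']

# Step 2: analyze the English text, where NLU has the widest feature coverage.
analysis = nlu.analyze(
    text=english_text,
    features=Features(entities=EntitiesOptions(), sentiment=SentimentOptions()))
print(analysis)

Keep in mind that entities and sentiment extracted from a machine translation will be approximate; translation errors propagate into the analysis.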
No, it doesn't. It is very clear in the documentation. You can find it here:
https://console.bluemix.net/docs/services/natural-language-understanding/language-support.html#language-support
At the time of this post, the compatible languages are:
Arabic
Dutch
English
French
German
Italian
Japanese
Korean
Portuguese
Russian
Spanish
Swedish
Currently, English is the only language that supports all functions provided by Watson NLU. The rest have limited support.
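When you work in one of the partially supported languages, it is safer to pass the language explicitly than to rely on auto-detection, as the quoted documentation notes. A minimal sketch, assuming the watson_developer_cloud Python SDK, placeholder credentials, and that sentiment is available for German (check the support table for your feature/language pair):

from watson_developer_cloud import NaturalLanguageUnderstandingV1
from watson_developer_cloud.natural_language_understanding_v1 import Features, SentimentOptions

nlu = NaturalLanguageUnderstandingV1(version='2017-02-27',
                                     username='...', password='...')

# The language parameter overrides automatic language detection.
result = nlu.analyze(
    text=u'Das ist ein großartiges Produkt.',
    features=Features(sentiment=SentimentOptions()),
    language='de')
print(result['sentiment']['document']['label'])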
I would like to use the Bluemix Conversation sample application and add speech input and output to it. There are other sample applications available for Bluemix TTS and STT.
What are the options for integrating these three services, and which of them are recommended for beginners?
There are no immediate plans to provide a 'simple' sample app which demonstrates combining Watson STT (Speech to Text), Conversation, and TTS (Text to Speech). Longer term it is definitely on the radar.
In the immediate term, to get an idea as to how to do this, please take a look at the car-dashboard app code:
https://github.com/watson-developer-cloud/car-dashboard/blob/master/ui/index.html#L85
https://github.com/watson-developer-cloud/car-dashboard/tree/master/ui/ibm
https://github.com/watson-developer-cloud/car-dashboard/tree/master/speech
https://github.com/watson-developer-cloud/car-dashboard/blob/master/ui/ibm/stream_speech_to_text.js#L34
The car dashboard app uses the IBM Watson Speech JS SDK:
https://github.com/watson-developer-cloud/speech-javascript-sdk
Hopefully this helps.
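If it helps to see the shape of the whole flow outside the browser, here is a minimal server-side sketch in Python, assuming the watson_developer_cloud SDK with placeholder credentials, a placeholder workspace ID, and a WAV recording on disk (the Speech JS SDK in the car-dashboard app makes the same three hops client-side):

from watson_developer_cloud import SpeechToTextV1, ConversationV1, TextToSpeechV1

# All credentials and the workspace ID below are placeholders.
stt = SpeechToTextV1(username='...', password='...')
conversation = ConversationV1(username='...', password='...', version='2017-05-26')
tts = TextToSpeechV1(username='...', password='...')

# 1. Speech to Text: transcribe the user's recorded question.
with open('question.wav', 'rb') as audio:
    stt_result = stt.recognize(audio, content_type='audio/wav')
user_text = stt_result['results'][0]['alternatives'][0]['transcript']

# 2. Conversation: send the transcript to the workspace.
#    Depending on SDK version the keyword is input= or message_input=.
response = conversation.message(workspace_id='YOUR_WORKSPACE_ID',
                                message_input={'text': user_text})
reply_text = response['output']['text'][0]

# 3. Text to Speech: synthesize the reply and save it as audio.
with open('reply.wav', 'wb') as out:
    out.write(tts.synthesize(reply_text, accept='audio/wav',
                             voice='en-US_AllisonVoice'))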
This is an old question, but IBM Watson is still evolving so this may be a more up-to-date answer.
You have two options.
You can simply have your app submit an HTTP REST request (either GET or POST) by following this tutorial.
Or you can leverage a language-specific SDK.
If you're using nodejs, then check out this example.
For java, see this example.
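For the first option, here is a minimal sketch of the raw REST call using Python's requests library, with placeholder credentials; the endpoint and voice name follow the Text to Speech API docs of that era:

import requests

# Placeholder credentials from your Bluemix Text to Speech instance.
USERNAME = 'YOUR_TTS_USERNAME'
PASSWORD = 'YOUR_TTS_PASSWORD'

# POST /v1/synthesize with a JSON body; the Accept header selects the audio format.
response = requests.post(
    'https://stream.watsonplatform.net/text-to-speech/api/v1/synthesize',
    auth=(USERNAME, PASSWORD),
    headers={'Content-Type': 'application/json', 'Accept': 'audio/wav'},
    params={'voice': 'en-US_AllisonVoice'},
    json={'text': 'Hello from Watson'})

with open('hello.wav', 'wb') as f:
    f.write(response.content)

The same pattern works for Speech to Text's /v1/recognize endpoint, just with the audio file as the request body.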
Edit
Here's an example git project I created to integrate text-to-speech to the conversation-simple sample app: conversation-simple-with-text-to-speech
Here's the specific commit where the integration was added: commit 3564aeb
I did something along these lines with the Dialog service demo app and the Speech JS SDK a few months ago:
http://speech-dialog.mybluemix.net/
The full code is on GitHub, but almost all of the changes were in this commit.
Note that it was built on an older beta of the SDK. You can get the latest release from GitHub releases or npm (for use with webpack/browserify/etc.), and there are lots of examples.
I'm using AlchemyAPI's text extraction API via Bluemix to get the text of webpages for an app I built, specifically the URLGetText call. Customers are complaining that various webpages are not supported due to their text language and are getting the error unsupported-text-language. However, the possible error codes listed are:
invalid-api-key
cannot-retrieve
page-is-not-html
Is there any way on my end to fix this and allow for more languages? Thanks!
I've recently started studying OpenEars speech recognition, and it's great! But I also need to support speech recognition and dictation in other languages such as Russian, French, and German. I've found that various acoustic and language models are available here.
But I cannot really understand: is that all I need to integrate extra language support into an application?
The question is: what steps should I take to successfully integrate, for example, Russian into OpenEars?
As far as I understand, all the acoustic and language model files for English in the OpenEars demo are located in the folder hub4wsj_sc_8k. The same files can be found in the Voxforge language archives, so I just replaced them in the demo. One difference: the demo's English model also includes a 2 MB file called sendump, which is not present in the Voxforge archives. There are two other files used in the OpenEars demo:
OpenEars1.languagemodel
OpenEars1.dic
These I replaced with:
msu_ru_nsh.lm.dmp
msu_ru_nsh.dic
since .dmp is similar to .languagemodel. But the application crashes without any error.
What am I doing wrong? Thank you.
From my comments, reposted as an answer:
[....] Step 1 for issues like this is to turn on OpenEarsLogging and verbosePocketSphinx, which will give you very fine-grained info on what is going wrong (search your console output for the words error and warning to save time). Instructions on doing this can be found in the docs. Feel free to bring questions to the OpenEars forums [....]: http://politepix.com/forums/openears You might also want to check out this thread: http://politepix.com/forums/topic/other-languages
The solution:
To follow up for later readers, after turning on logging we got this working by using the mixture_weights file as a substitute for sendump and by making sure that the phonetic dictionary used the phonemes that were present in the acoustic model rather than the English-language phonemes.
The full discussion in which we accomplished this troubleshooting can be read here: http://www.politepix.com/forums/topic/using-russian-acoustic-model/
UPDATE: Since OpenEars 1.5 was released this week, it is possible to pass the path to any acoustic model as an argument to the main listening method, and there is a much more standardized method for packaging and referencing any acoustic model so you can have many acoustic models in the same app. The info in this forum post supersedes the info in the discussion I linked to in this answer: http://www.politepix.com/forums/topic/creating-an-acoustic-model-bundle-for-openears-1-5-and-up/ I left the rest of the answer for historical reasons and because there may be details in that discussion that are still useful, but it can be skipped in favor of the new link.