IBM Watson Alchemy alternative? - ibm-cloud

I've been playing around with the IBM Watson Alchemy demo and was wondering how to get it to extract a full phrase from a string. For example, if a user were to type in "iPhone 7", Alchemy would only pick up "iPhone" from the string. Is it possible to get it to pick up "iPhone 7"? Or is there an alternative to Alchemy that would help to do this?
The Demo: https://alchemy-language-demo.mybluemix.net/

Watson Knowledge Studio allows for custom entity and relationship extraction in AlchemyLanguage if it's in your price range.
Targeted Emotion and Targeted Sentiment also allow you to search for specific targets in the text, so you could search for phrases like "iPhone 7" to get sentiment/emotion information. If all you are doing is checking if "iPhone 7" is in the text, these should do the trick.
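For illustration, here is a minimal sketch of the targeted-sentiment idea in Python, written against Natural Language Understanding (AlchemyLanguage's successor) rather than AlchemyLanguage itself; the API key, service URL, version date, and sample text are placeholders, not taken from the question:

# Sketch: targeted sentiment/emotion for the phrase "iPhone 7" using the
# ibm-watson Python SDK's Natural Language Understanding service.
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, SentimentOptions, EmotionOptions
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_APIKEY')  # placeholder credential
nlu = NaturalLanguageUnderstandingV1(version='2021-08-01', authenticator=authenticator)
nlu.set_service_url('YOUR_SERVICE_URL')  # placeholder URL

response = nlu.analyze(
    text='I just bought an iPhone 7 and I love it.',
    features=Features(
        sentiment=SentimentOptions(targets=['iPhone 7']),
        emotion=EmotionOptions(targets=['iPhone 7'])
    )
).get_result()

# Sentiment scored specifically for the target phrase "iPhone 7"
print(response['sentiment']['targets'])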

A "quick fix" solution would be to build a very simple function that runs every time you detect the "iPhone" entity and looks to see if there is a digit following it.

Related

MRTK TextToSpeech.SpeakSsml doesn't work when using <voice /> element. Device: HoloLens2

I am using Unity + MRTK to develop an application for HoloLens 2. I am trying to use "speech styles" with the MRTK TextToSpeech.SpeakSsml method (MRTK API Reference). Text to speech works; however, I am unable to employ speech styles.
Example SSML:
<speak version=""1.0"" xmlns=""http://www.w3.org/2001/10/synthesis"" xmlns:mstts=""https://www.w3.org/2001/mstts"" xml:lang=""en-US"">
<mstts:express-as style=""cheerful"">
Cheerful hello!
</mstts:express-as>
<break time=""1s"" />
<mstts:express-as style=""angry"">
Angry goodbye!
</mstts:express-as>
</speak>
My guess is that the default voice does not support speech styles. But if I add a voice element to use another voice (there are four available voices listed in the documentation), TextToSpeech won't work at all. So I am facing two problems:
When using the SpeakSsml method instead of StartSpeaking, the selected voice (TextToSpeech.Voice) is disregarded and I am unable to change it using the voice element.
I couldn't find documentation on the supported SSML elements for the available voices in the MRTK TextToSpeech class.
Any ideas or useful links?
Thank you!
The TextToSpeech provided by MRTK depends on the Windows 10 SpeechSynthesizer class, so it works offline and does not support adjusting speaking styles. The mstts:express-as element is only available in the Azure Speech Service; for more information, please refer to this documentation: Improve synthesis with Speech Synthesis Markup Language (SSML)
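If the speaking styles are the whole point, one option is to call the online Azure Speech service directly instead of the offline MRTK synthesizer. A rough sketch with the Azure Speech SDK for Python (not MRTK); the subscription key, region, and voice name are placeholders/assumptions:

# Sketch only: synthesize SSML through the online Azure Speech service,
# which does understand mstts:express-as. Key and region are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription='YOUR_KEY', region='YOUR_REGION')
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

ssml = """<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    <mstts:express-as style="cheerful">Cheerful hello!</mstts:express-as>
  </voice>
</speak>"""

result = synthesizer.speak_ssml_async(ssml).get()
print(result.reason)  # ResultReason.SynthesizingAudioCompleted on success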

How to tag sub-entity in an intent using IBM Watson?

An answer provided by Simon O'Doherty in the post "How to capture the multiple values of one entity in IBM Watson Assistant after asking slot?"
said that we can tag entities in an intent; however, in my Watson I do not see this function. Has it been deleted, or is it a version-limited function?
If not, does anyone know how to enable the function?
My Watson does not show any tag option when I try to edit an intent:
I ran into this, too, for my German language. The support for tagging contextual entities is coming slowly to non-English languages. There is an IBM Watson Assistant doc page showing features and supported languages. Japanese and German are not supported (yet?) for the tagging you asked for.

IBM Watson Assistant: How to assign the correct one of two #sys-date values

In Watson Assistant, I am using a #sys-date to fill a slot, in Spanish. When I try the sentence "El próximo Sábado", that is "next Saturday", it recognizes two dates (#sys-date:2019-10-12 and #sys-date:2019-10-19) and it saves #sys-date:2019-10-12 in the slot, which is a mistake.
How can I manage it?
Thanks

How to show a chart or visualization on Alexa Echo Show?

There are several Alexa skills that include charts - CNBC's Alexa skill is even highlighting the fact that their integration with the Alexa Presentation Language (APL) allows users to view charts:
Now with APL integration, the CNBC skill can do more on your favorite Alexa devices. Visualize market movements with charts, see a market snapshot, watch the latest videos from CNBC, and more!
Yet I can find no documentation or code on GitHub on how to create such visualizations using the APL. Is CNBC using a beta feature of the APL that is not publicly available at this time?
Yes, as of now APL is still in beta; and yes, skills with charts are using the beta feature. However, the beta is publicly available.
If you wish to build a skill with APL, you need to turn on the Alexa Presentation Language and Display Interface options in Interfaces in the Custom section for your skill in the Amazon Developer Console.
Secondly, APL supports only a limited set of components, at least for now. One of them is Image, which is the equivalent of HTML's img tag. Any visualisation item on the screen (graphs, charts, etc.) can therefore only be, and inherently is, an Image. If you observe, such charts are not interactive, or if they are, they are wrapped in a TouchWrapper (onClick) leading to another intent. So they presumably have a routine that batch-converts charts into images.
As for building skills with APL, you have two options: one, you may use Alexa Developer Console's APL builder tool, which is also in beta. To access it, click on Display in the Custom pane. Once built, you can copy the UI's JSON into your source code; two, you can write the UI components directly in your source code pursuant to APL requirements. You may also build your own parser, if you're feeling adventurous.
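To make the Image-based approach concrete, here is a rough sketch of a minimal APL document showing a pre-rendered chart image, plus the RenderDocument directive that would carry it, expressed as Python dictionaries; the image URL and token are placeholders:

# Sketch: a minimal APL document that displays a pre-rendered chart image,
# returned as an Alexa.Presentation.APL.RenderDocument directive.
chart_document = {
    "type": "APL",
    "version": "1.0",
    "mainTemplate": {
        "items": [
            {
                "type": "Image",
                "source": "https://example.com/charts/market-snapshot.png",  # placeholder
                "width": "100vw",
                "height": "100vh",
                "scale": "best-fit"
            }
        ]
    }
}

render_directive = {
    "type": "Alexa.Presentation.APL.RenderDocument",
    "token": "chartToken",  # placeholder token
    "document": chart_document
}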

Do keywords affect Bluemix Watson speech recognition?

Watson's speech recognizer supports a list of keywords as a parameter, but I'm trying to figure out whether these keywords actually affect recognition. For example, if you were handing Watson an audio clip you knew to contain proper names that might not be properly recognized, would submitting these names as keywords increase the likelihood that Watson would properly recognize them? Do the keywords interact with the recognition itself?
Unfortunately the answer is no. The words won't get added to the vocabulary just because you added them as keywords, so they won't be found.
Dani
Looks like the answer to this is no (though I'd love to be proven wrong here): I ran a test clip that contained proper names that Watson mis-recognized, then submitted the same clip with those names as keywords, and it didn't change the transcript at all.
Not entirely conclusive, but it seems like the answer is no.
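For reference, this is roughly how keywords are submitted to the recognizer with the ibm-watson Python SDK; the credentials, audio file, and names below are placeholders. Per the answers above, the keywords are only spotted in the resulting transcript and do not bias the recognition itself:

# Sketch: keyword spotting with Watson Speech to Text (ibm-watson Python SDK).
# The keywords are matched against the transcript; they do not change the
# vocabulary used for recognition.
from ibm_watson import SpeechToTextV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_APIKEY')  # placeholder
stt = SpeechToTextV1(authenticator=authenticator)
stt.set_service_url('YOUR_SERVICE_URL')  # placeholder

with open('clip.wav', 'rb') as audio:  # placeholder audio file
    result = stt.recognize(
        audio=audio,
        content_type='audio/wav',
        keywords=['Siobhan', 'Nguyen'],  # example proper names
        keywords_threshold=0.5
    ).get_result()

# Keyword matches (if any) are reported alongside the transcript, not merged into it.
print(result['results'][0].get('keywords_result', {}))
print(result['results'][0]['alternatives'][0]['transcript'])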