Create a Watson Assistant chatbot for websites and Facebook Messenger - ibm-cloud

Designing a Watson Assistant chatbot for both a website and Facebook Messenger raises some issues because the two channels format the Watson response differently. I am trying to understand how to deal with this.
Currently, a Watson Assistant dialog node allows responses that include text, image, pause, and option. So far so good. The problem is that in a text response I need to:
1. Add an empty line. In HTML I can use <br/>, which works fine on the website but not on Facebook.
2. Add a link. In HTML I use an anchor tag, while Facebook Messenger only renders a bare URL directly.
3. Output an ordered or unordered list. In HTML I can use ol/li or ul/li tags; Facebook Messenger does not support them.
4. Add a carriage return. In HTML I can use <br/>; for Facebook, see point 1.
How do I deal with these incompatibilities?
I expected to find documented best practices for writing a multi-channel chatbot somewhere, but I haven't found them.

When building a chatbot with IBM Watson Assistant that has to serve different output channels (in your case a website and Facebook Messenger), I see two options:
Limit the responses to the common output features. Watson Assistant supports rich responses with multi-line support; use that instead of <br/>. Check the integration-specific docs, here the Facebook Messenger integration, for what is supported.
Use two bots, one for the website and one for Facebook Messenger. In that case each bot could use the native response format its channel supports. The downside is that you have to maintain two bots.
(Not an option from your description.) Add a wrapper around Watson Assistant and translate your generic responses into formatting optimized for each output channel; see the sketch below. It requires more effort, but gives the best output.
As a common format, consider basic Markdown, which is supported by Watson Assistant and some output channels.
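To illustrate the wrapper idea, here is a minimal sketch in Node.js, assuming a middle layer that receives the plain text from the Assistant response and reformats it per channel (the function name and the formatting rules are illustrative, not part of any Watson API):

```javascript
// Hypothetical formatter: turns one generic, plain-text Watson Assistant reply
// into channel-specific output.
function formatForChannel(genericText, channel) {
  if (channel === 'web') {
    // Website: bare URLs become anchor tags, then newlines become <br/>.
    return genericText
      .replace(/(https?:\/\/\S+)/g, '<a href="$1">$1</a>')
      .replace(/\n/g, '<br/>');
  }
  // Facebook Messenger: plain text, newlines and bare URLs render natively.
  return genericText;
}

// One generic response, two renderings.
const reply = 'Here are your options:\n1. Status\n2. Billing\nMore at https://example.com/help';
console.log(formatForChannel(reply, 'web'));
console.log(formatForChannel(reply, 'facebook'));
```

The dialog itself then stays channel-neutral (plain text with newlines), and all channel quirks live in one place.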

Related

Facebook Channel support (Beta) doesn't allow quick replies, buttons, media, etc

Does anybody know if there's any way of using standard Facebook Messenger features like quick replies, buttons or templates from Facebook Channel (Beta) via Programmable API?
As it is right now it seems too limited to be of any use beyond simple text conversations; no prefilled answers, no links to actions or products...
Are there any (short-term) plans to support it? (Just being able to send a JSON payload like in Facebook's own API would be more than enough: https://developers.facebook.com/docs/messenger-platform/reference/buttons/quick-replies)
Right now, you are right that the integration is relatively simple.
What you want to look out for is the Twilio Messaging Content API which is currently in pilot. The Content API is intended to make rich messaging across any of the channels that Twilio offers easier. The Content API will wrap each of the channels, making it straightforward to add buttons, actions or prefills to messages over channels that support it, with fallbacks for simpler channels (like our old friend SMS). The API is in pilot right now, but you can register your interest and request access here.
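For reference, the kind of native Messenger Send API payload the question refers to looks roughly like this (a sketch based on Facebook's quick replies documentation; the recipient ID, titles, and payload strings are placeholders):

```javascript
// Shape of a native Facebook Messenger quick-replies message (Send API).
// A richer channel integration would need to emit something like this on your behalf.
const quickReplyMessage = {
  recipient: { id: '<PSID>' },        // page-scoped user ID (placeholder)
  messaging_type: 'RESPONSE',
  message: {
    text: 'Pick a color:',
    quick_replies: [
      { content_type: 'text', title: 'Red', payload: 'COLOR_RED' },
      { content_type: 'text', title: 'Green', payload: 'COLOR_GREEN' },
    ],
  },
};
```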

Watson Assistant to catch the elements in the webpage

I have a Watson Assistant chatbot embedded in a webpage using the Web Chat integration's embed JavaScript.
I need the embedded chatbot to pick up the login information from the webpage it is embedded in. For example, making the chatbot start with "Hi Adam" because Adam is already logged into the webpage. Or, more generally, how to make the chatbot read other elements of the page it is embedded in.
Any advice is appreciated.
Thanks.
You will need to extend the IBM Web Chat client to capture the login info and then pass it back in the Watson Assistant payload. I would suggest adding the info to the context so that it can easily be used in your answers, as you describe; see the sketch below. This will mean extra JS coding. There is more info about extending the web chat in the Watson docs and on the IBM GitHub.
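A minimal sketch of that extension, assuming the standard web chat embed snippet and its pre:send event (check the exact event name and context path against the current web chat documentation; getLoggedInUser, the skill key, and the variable name are placeholders):

```javascript
// Web chat embed options, extended to inject page data into the Assistant context.
window.watsonAssistantChatOptions = {
  integrationID: 'YOUR_INTEGRATION_ID',        // values from your embed code (placeholders)
  region: 'YOUR_REGION',
  serviceInstanceID: 'YOUR_SERVICE_INSTANCE_ID',
  onLoad: function (instance) {
    // Before each message is sent, attach info read from the host page.
    instance.on({
      type: 'pre:send',
      handler: function (event) {
        const user = getLoggedInUser();        // hypothetical helper on your page
        const context = (event.data.context = event.data.context || {});
        const skills = (context.skills = context.skills || {});
        const main = (skills['main skill'] = skills['main skill'] || {});
        main.user_defined = Object.assign({}, main.user_defined, { username: user.name });
      },
    });
    instance.render();
  },
};
```

In a dialog node you could then greet the user with the $username context variable.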

IBM Watson Assistant: Multi-workspace for Facebook page?

We have integrated a Watson Assistant skill/workspace with a Facebook page using the Watson-provided integration from the Virtual Assistants tab.
We are able to get responses from the single skill/workspace. Now we want to add another skill/workspace to the integration, but we are not able to add it.
Please let us know how we can enable a multi-workspace approach for the Facebook integration using the Watson-provided integrations.
At this time, you can only have one skill per assistant. You can swap existing skills using the tool.
If you are using the Watson Assistant API (V1 only) from an application, then you have access to multiple workspaces / skills. See the Botkit Middleware for Watson Assistant for an example of dynamically switching workspaces. It is based on the Watson SDKs.
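If you go the API route, a minimal sketch with the Node ibm-watson SDK and the V1 message call might look like this (the API key, service URL, workspace IDs, and the topic-based routing rule are placeholders):

```javascript
const AssistantV1 = require('ibm-watson/assistant/v1');
const { IamAuthenticator } = require('ibm-watson/auth');

const assistant = new AssistantV1({
  version: '2020-04-01',
  authenticator: new IamAuthenticator({ apikey: 'YOUR_APIKEY' }),      // placeholder
  serviceUrl: 'https://api.us-south.assistant.watson.cloud.ibm.com',   // placeholder
});

// Hypothetical routing: pick a workspace per topic, then call the V1 message API.
const WORKSPACES = { billing: 'WORKSPACE_ID_1', support: 'WORKSPACE_ID_2' };

async function askWatson(topic, text, context) {
  const response = await assistant.message({
    workspaceId: WORKSPACES[topic] || WORKSPACES.support,
    input: { text },
    context, // pass the previous context back to keep the conversation state
  });
  return response.result;
}
```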

Developing a "Flash Briefing" on Google Home

I publish a flash briefing skill on Amazon's Alexa. It is a brief news update on a specific topic. I provide the information to Alexa via a JSON file that is updated every 10 minutes.
I'd like to publish something similar on Google Home devices. However, when I look at Dialogflow, that API appears to be conversation-based. Is that the right API for this type of app? Is there a template for flash-briefing-like apps (i.e., easy-to-launch apps that don't require any additional user input after launching)?
No, you don't need a conversational Action for what you're doing.
Depending on the specifics of how you're providing the content, you may wish to look at either Podcast Actions or News Actions. These methods document what Google is looking for to make structured content available to the Google Assistant.

What is the simplest way to create a custom Google Assistant action that returns a piece of data from a web page

I want to develop a custom Google Assistant action that will get a web page, extract a piece of information, and read it aloud.
I'm looking for pointers to relevant sources.
If it's possible through services such as IFTTT, even better (though from what I saw, the Google Assistant integration in IFTTT doesn't appear to support this scenario).
You're looking to create an Action with the Actions-on-Google API. The easiest way would probably be to create a Firebase Function with node.js that handles the AoG request from Google, makes the call to the web page, extracts the information you need in a form that can be read aloud, and sends that back.
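A minimal sketch of such a fulfillment, assuming a Dialogflow agent with a welcome intent and the actions-on-google Node library (the page URL, exported function name, and the crude title extraction are placeholders):

```javascript
const functions = require('firebase-functions');
const { dialogflow } = require('actions-on-google');
const fetch = require('node-fetch');

const app = dialogflow();

// On launch, fetch the page, pull out one piece of text, and read it aloud.
app.intent('Default Welcome Intent', async (conv) => {
  const res = await fetch('https://example.com/status-page');   // placeholder URL
  const html = await res.text();
  const match = html.match(/<title>([^<]*)<\/title>/i);          // naive extraction, for illustration only
  conv.close(match ? match[1] : 'Sorry, I could not read the page right now.');
});

exports.assistantFulfillment = functions.https.onRequest(app);
```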