Watson: Dialogs. Are they required?

We are working on a micro-service that interacts with Watson.
I was presented with the following argument: "There is no need to use dialogs in a Conversation project on Watson. Declaring the intents and entities is just enough to get the work done"
Based on the documentation, I have the impression that using dialogs is a requirement in order to train Watson correctly on how to interpret the combination of intents and entities. Plus, the Dialog section has a chat panel that lets you make corrections.
Is there a way that I can confirm that Dialogs are or are not a requirement?

If you plan to just use intents and entities programmatically, then you don't need dialog.
You will need to create one blank dialog node with a condition of true. This is to avoid SpEL errors relating to no node being found.
From a programming point of view (and ignoring Conversation for a minute): if you need to take action on intents or entities, or change context variables, the recommendation is to do that in dialog. That way your code is not split across two systems, which makes it easier to maintain.
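For illustration, a minimal sketch of that programmatic approach, assuming Python with the requests library and the Conversation v1 /message REST endpoint; the workspace ID, credentials, and version date below are placeholders:

    # Hypothetical sketch: send user input to the /message endpoint and
    # branch on the classified intents/entities in your own code, leaving
    # the dialog tree as the single blank node with condition "true".
    import requests

    WORKSPACE_ID = "your-workspace-id"  # placeholder
    URL = ("https://gateway.watsonplatform.net/conversation/api/v1/"
           f"workspaces/{WORKSPACE_ID}/message?version=2017-05-26")

    resp = requests.post(
        URL,
        auth=("service-username", "service-password"),  # placeholder credentials
        json={"input": {"text": "I want to pay my bill"}},
    )
    resp.raise_for_status()
    result = resp.json()

    # The service still classifies the input; your application does the acting.
    top_intent = result["intents"][0]["intent"] if result["intents"] else None
    entities = [(e["entity"], e["value"]) for e in result["entities"]]
    print(top_intent, entities)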

In the phrase above, the author probably means that you only need to create #intents and #entities for your Conversation and define the purpose of your bot. This is true depending on what you want your bot to do, because after that you can just create your dialog flow!
The Dialog section is where you create your dialog flow; it is absolutely needed when you want to build a conversation flow, e.g., a chatbot.
A workspace contains the following types of artifacts:
Intents: An intent represents the purpose of a user's input, such as a question about business locations or a bill payment. You define an intent for each type of user request you want your application to support. In the tool, the name of an intent is always prefixed with the # character. To train the workspace to recognize your intents, you supply lots of examples of user input and indicate which intents they map to.
Entities: An entity represents a term or object that is relevant to your intents and that provides a specific context for an intent. For example, an entity might represent a city where the user wants to find a business location, or the amount of a bill payment. In the tool, the name of an entity is always prefixed with the @ character. To train the workspace to recognize your entities, you list the possible values for each entity and synonyms that users might enter.
Dialog: A dialog is a branching conversation flow that defines how your application responds when it recognizes the defined intents and entities. You use the dialog builder in the tool to create conversations with users, providing responses based on the intents and entities that you recognize in their input.
EDIT:
Like @Simon O'Doherty said, if your purpose is just to use the Intents and Entities programmatically, then you don't need the Dialog. His answer is complete.
See the documentation for Building a Dialog.

Yes, Intents and Entities may seem to be enough for you, and you can generate the answer programmatically based on them. But you should keep in mind that Dialog does not mean Response. I know it's hard to find this stated clearly in the Watson documentation.
If you need access to context variables in the next node or dialog, you should define them in slots. Without defining a dialog for each intent, context variables will not be accessible or carried over to the next dialog.
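To make the context point concrete, here is a hedged sketch of the round trip, using the same placeholder endpoint and credentials as the sketch above: the client must send the context object from the previous response back with the next message, or state such as filled slot values is lost between turns.

    # Hypothetical sketch: round-tripping the context object between turns.
    import requests

    WORKSPACE_ID = "your-workspace-id"  # placeholder
    URL = ("https://gateway.watsonplatform.net/conversation/api/v1/"
           f"workspaces/{WORKSPACE_ID}/message?version=2017-05-26")

    def send(text, context=None):
        payload = {"input": {"text": text}}
        if context is not None:
            payload["context"] = context  # carries dialog state forward
        resp = requests.post(URL, auth=("service-username", "service-password"),
                             json=payload)
        resp.raise_for_status()
        return resp.json()

    first = send("I want to book a table")
    # Pass the returned context into the next turn so slot values and any
    # variables set by dialog nodes remain available.
    second = send("For four people", context=first["context"])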

You need the Dialog portion of the Watson Conversation service if you want to respond to user queries.
The Intents and entities are the understanding piece, and the Dialog portion is the response side of the conversation.

Related

In general, would it be redundant to have two GET routes for users (one for ID and one for username)?

I'm building a CRUD for users in my rest API, and currently my GET route looks like this:
get("/api/users/:id")
But this just occurred to me: what if a user tries to search for other users via their username?
So I thought about implementing another route, like so:
get("api/users/username/:id")
But this just looks a bit redundant to me. Even more so if my app should ever allow searching by actual names as well. Would I then need 3 routes?
So in this wonderful community, are there any experienced web developers that could tell me how they would handle having to search for a user via their username?
Note: if you need more details, just comment about it and I'll promptly update my question 🙃
how they would handle having to search for a user via their username?
How would you support this on a web site?
You would probably have a form; that form would have an input control that allows the user to provide a user name. When the user submits the form, the browser would copy the form's input controls into an application/x-www-form-urlencoded document (as described by the HTML standard), substitute that document as the query part of the form action, and submit the query.
So the resulting request would perhaps look like
GET /api/users?username=GuiMendel HTTP/x.y
You could, of course, have as many different forms as you like, with different combinations of input controls. Some of those forms might share actions, but not necessarily.
so I could just have my controller for GET "/api/users" redirect to an action based on the inputs?
REST doesn't care about "controllers" -- that's an implementation detail; the whole point is that the client doesn't need to know how the server produces a representation of the resource, we just need to know how to ask for it (via the "uniform interface").
Your routing framework might care a great deal, but again that's just another implementation detail hiding behind the facade.
For example, if there were no inputs, it would return all users (index), but with the input you suggested, it would filter to only users whose usernames matched the input? Did I get it right?
Yup, that's fine.
From the point of view of a REST client
/api/users
/api/users?username=GuiMendel
These identify different resources; the two resources don't have to have any meaningful relationship with each other at all. The machines don't care (human beings do care, so we normally design our identifiers in such a way that at least some human beings have an easy time of it -- for example, we might optimize our identifiers to make things easy when operators are reading the access logs).
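To tie the comment thread together, a hypothetical sketch, assuming Flask 2.x and an in-memory user list, of one controller that serves both the index and the filtered lookup:

    # Hypothetical sketch: a single GET /api/users controller that returns
    # the index when no query parameters are given and filters on
    # ?username= when it is present; lookup by ID stays a separate route.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    USERS = [  # stand-in for a real data store
        {"id": 1, "username": "GuiMendel", "name": "Gui Mendel"},
        {"id": 2, "username": "jdoe", "name": "John Doe"},
    ]

    @app.get("/api/users")
    def list_users():
        username = request.args.get("username")
        if username is None:
            return jsonify(USERS)  # index: all users
        return jsonify([u for u in USERS if u["username"] == username])

    @app.get("/api/users/<int:user_id>")
    def get_user(user_id):
        user = next((u for u in USERS if u["id"] == user_id), None)
        return (jsonify(user), 200) if user else ("not found", 404)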

@sys-person is deprecated, how can I create a name entity for slots?

IBM Watson Assistant is deprecating the @sys-person entity for some reason. I use it a lot in my slots to capture the name.
How can I create an entity that would do the same thing to replace @sys-person?
If you are using a supported language, use annotation-based entities.
The tutorial on how to create a database-driven Slackbot uses that method for locations (@sys-location is deprecated, too). Load the provided skill and see how it is done.
Basically, you create an entity and then go to the intent examples and tag the parts that identify a person. Watson Assistant then learns that you expect a person entity in specific sentences and sentence positions. You can fine-tune it by running some dialogs and correcting the falsely identified or missing person entity values.
I use the same technique as a replacement for @sys-location and it works for me in slots. Even "San Francisco" is recognized as one location; I added it as a sample. You can tag entities across intents.
If you don't want to go that route, the only solution I am aware of is to define an entity, e.g., my-person, with many examples and use that.
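If you go the custom-entity route, the values can also be loaded programmatically rather than typed into the tool. A hedged sketch, assuming the Watson Assistant v1 REST API; the workspace ID, API key, version date, and the my-person values are all placeholders:

    # Hypothetical sketch: creating a my-person entity with example values
    # via the v1 /entities endpoint instead of entering them in the tool.
    import requests

    WORKSPACE_ID = "your-workspace-id"  # placeholder
    URL = ("https://gateway.watsonplatform.net/assistant/api/v1/"
           f"workspaces/{WORKSPACE_ID}/entities?version=2019-02-28")

    body = {
        "entity": "my-person",
        "values": [
            {"value": "Anna", "synonyms": ["Ann", "Annie"]},
            {"value": "Robert", "synonyms": ["Bob", "Rob"]},
        ],
    }

    resp = requests.post(URL, auth=("apikey", "your-api-key"),  # placeholder key
                         json=body)
    resp.raise_for_status()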

Using "speechBiasingHints" with Dialogflow Webhook

First time posting, so feel free to give me feedback if I could improve something about this post... Now on to my question.
I am currently developing a Google Action, the Action will allow the user to define important events, such as Bob's Birthday or Fred's Graduation, and save data about said events. Later, the user will be able to ask for info about the event and get it returned back to them.
I am using the Dialogflow API with "Inline Editor" fulfillment to keep it as simple as possible for now. The problem I am running into is this: the event has an entity type of @sys.any, so anything the user says is accepted as valid input. However, I would like some way to bias toward events I already have stored for the user, so that they are more likely to find the event they are looking for.
I found another answer on here discussing speech biasing (What is meant by speech bias and how to use speechBiasHints in google-actions appResponse), which defined speech biasing as the ability to "influence the speech to text recognition," which is exactly what I believe I want. While that answer provided sample code, it was for the Actions SDK, not the Dialogflow SDK, which I am using.
Can anyone provide an example of how to fill the "speechBiasingHints" section of the ExpectedInput response of the Conversation Webhook using the Dialogflow Webhook?
Note: This is for a student project, and I'm new to developing Google Actions and still very much learning about everything that is capable with Google Actions. Any feedback or suggestions are very welcome.
The question you link to does quite a few things differently than the approach you're taking. The Actions SDK provides more low-level control, but doesn't have much Natural Language Processing (NLP) capability, which Dialogflow provides.
Dialogflow handles biasing a little differently, through the use of Entities, so you don't need to control the speech biasing directly; Dialogflow can handle that for you, to some extent.
Since each user may have different event names, you'll probably want to use a User Entity, which is an entity you define and then populate on a user-by-user basis through Dialogflow's API. In your sample phrases, you can then use this entity name instead of @sys.any, or create another set of phrases that use this entity in addition.
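As an illustration, here is a hedged sketch of populating such an entity per user through the Dialogflow V2 REST session entity endpoint; the project ID, session ID, access token, and the "event" entity name are placeholders, and the entity type itself must already exist in the agent:

    # Hypothetical sketch: override the "event" entity for one session so
    # the user's stored event names are preferred during recognition.
    import requests

    PROJECT = "my-project"       # placeholder
    SESSION = "my-session-id"    # placeholder
    TOKEN = "service-account-access-token"  # placeholder OAuth2 token

    url = (f"https://dialogflow.googleapis.com/v2/projects/{PROJECT}"
           f"/agent/sessions/{SESSION}/entityTypes")

    body = {
        "name": f"projects/{PROJECT}/agent/sessions/{SESSION}/entityTypes/event",
        "entityOverrideMode": "ENTITY_OVERRIDE_MODE_OVERRIDE",
        "entities": [
            {"value": "Bob's Birthday", "synonyms": ["Bob's Birthday"]},
            {"value": "Fred's Graduation", "synonyms": ["Fred's Graduation"]},
        ],
    }

    resp = requests.post(url, json=body,
                         headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()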

Ready to use intents & dialogs for Chatbot

I'm using IBM Watson Conversation to build a bot. I'm looking for ready to use dialogs & intents regarding the most commonly used conversation statements.
Like: Welcome, Good Morning, Aha, Good Evening, How are you, Who are you, etc...
Actually, when I used api.ai from Google, there was a default WELCOME intent, and it's AMAZING. I'm looking for something similar.
In the Conversation service, you get the default "welcome" message node and another node for "anything_else", which is executed when no other intent matches the user query. These two nodes get created for you the moment you go to the Dialog tab of the service for the first time.
This gives you a skeleton for how to add new nodes as you need them. Currently there aren't any other intents that the Conversation service provides by default, which also makes sense in a way, as everyone's needs are different.
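For reference, a rough sketch of what those two generated nodes look like, written here as a Python dict; the field values are illustrative, and an export of your own workspace will show the exact shape:

    # Illustrative sketch of the two auto-generated dialog nodes; the real
    # exported workspace JSON has more fields.
    default_dialog_nodes = {
        "dialog_nodes": [
            {
                "dialog_node": "Welcome",
                "conditions": "welcome",        # fires on the first turn
                "output": {"text": "Hello. How can I help you?"},
            },
            {
                "dialog_node": "Anything else",
                "conditions": "anything_else",  # fires when no intent matched
                "output": {"text": "I didn't understand. Can you rephrase?"},
            },
        ]
    }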
But the service provides some default entities, called "system entities". These are common entities, like a person's name, location, or currency, that might be used in almost all sorts of chatbot scenarios. By default these are disabled, but you can turn them on from the "Entities" tab by clicking on system entities. For better design, I recommend you check this documentation.

Associating open graph action with multiple objects & caption template

Currently I am building a site with a Facebook Open Graph integration.
One complication: since users can perform seemingly similar actions on different objects on our site, it would be easy for us to define a different action for each similar action. However, it seems that Facebook does not allow (or at least does not like) one site to have multiple similar-looking actions.
For instance, let's assume that user can both 'buy' a car, and 'buy' an insurance in our site.
Although on the surface these two actions look similar, because their context is different we want to show different content, more specifically a different caption, for each action that is posted.
The simple way to implement this would be to define two actions,
'BuyCar' <---> associated with Car
'BuyInsurance' <---> associated with Insurance
and give each a distinctive caption template.
However, as I mentioned earlier, since Facebook does not allow multiple similar actions to be defined within a site, I should instead be defining:
'Buy' <----> associated with [Car, Insurance]
where this action always has only one property defined (either Car or Insurance).
The downside of this type of action is that, due to a limitation in the current caption template language (it lacks conditional statements), I am not able to produce a different caption effectively without knowing which property is set.
How should I be handling this issue?
Your help will be greatly appreciated.
Thanks
I think the captions do need to be something generic that will work for all connected object types. But you could use filters to define separate aggregations for each object type.
Just add an additional parameter to all of your objects and set the value of that parameter as an aggregation filter?