Setting a context variable in Watson Assistant from an entity that uses a pattern

I am trying to build a chatbot that implements a simple calculator, using Watson Assistant. It needs to recognize simple math expressions, e.g. "1+2". I set up an intent called #simple_math that recognizes phrases like "what is 1+1", which works fine.
I then created an entity called #mathexp that uses a pattern (regexp) to identify the expression part:
This also seems to fire correctly when a simple expression is detected. But when I get to the node where I try to assign the value identified via #mathexp.literal, it never gets set. In the node called "math", the assistant recognizes #simple_math and #mathexp, but the context variable $Mexp (which I am setting to "<? #mathexp.literal ?>") is always blank.
What am I missing?
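For reference, a minimal sketch of what the context assignment in the "math" node could look like in the JSON editor is shown below. Note that in the Watson Assistant expression language, # refers to intents while entities are referenced with @, so the expression may need the @ prefix; the @ here is a suggested correction, not what the question actually used:

"context": {
    "Mexp": "<? @mathexp.literal ?>"
}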


Actions Builder checked list value for intent parameter returns no original value for match

I'm running into an issue with the Actions Builder that I think is a bug on Google's side of the Actions Builder platform, but I'm not a hundred percent sure.
I'm building some sort of a shopping list app (that integrates with backend services). I have an intent to add one or more "generic names" (= product names) to a list.
When I configure the Intent with an InputType like this (notice the List being false),
The input I send to the intent gets parsed correctly, like this:
As you can see, "kokosolie" is the resolved value when my query was "add 'kokosnootolie' to the list". In this case, I get both the resolved and the original value of the InputType.
But in my use case, I want to receive multiple values for a "Generic name", so I turn on the "is list" checkbox.
The same query/voice command now results in this:
Only the resolved input type is returned here. While I need this one later in the app's logic, I also need the original values, but Google doesn't return them to me.
Is this a bug or a missing feature at Google's side of things or rather me, missing some configuration?
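For reference, a rough sketch of the two payload shapes described above, assuming a parameter named genericName (the actual name in the project may differ). With "is list" off, the webhook request contains both values:

"intent": {
    "params": {
        "genericName": {
            "original": "kokosnootolie",
            "resolved": "kokosolie"
        }
    }
}

With "is list" on, only the resolved values reportedly come back, i.e. roughly:

"intent": {
    "params": {
        "genericName": {
            "resolved": ["kokosolie"]
        }
    }
}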

Using Regex to identify Entity on a Google Action Intent

I have this Intent on Google Actions with a couple of utterances:
and I'm using one of the default system types:
The bank account should always be 8 digits, so I was wondering whether I could use a regex on Google Actions to identify this exact entity when typed by the user.
If yes, how exactly?
Can I just create an utterance with a regex like this: \d{8}
Should I also "highlight" it as a parameter, just like I did with the two given examples?
Thanks,
While this is not visible in the Actions Console, it is something that can be done if you download the project to a local environment using gactions.
You can create a new type under custom/types that uses regular-expression entities.
regularExpression:
  entities:
    # `bankNumber` is your parameter name. It can be custom.
    bankNumber:
      regularExpressions:
        - \d{8} # In the `re2` syntax
Then you'll need to re-upload your project to the Actions Console with gactions push and gactions deploy preview.

IBM Watson Assistant: How to set a 'jump to' target node dynamically (i.e. using context variables)

I want to jump from a dialog node to a node the ID of which is stored in a context variable.
I'm trying to solve a problem that has to do with a digression and which has been described here:
Conditionally return from digression in watson assistant
This chart in particular visualizes the problem:
In my opinion, A.H. posed a very reasonable and relevant question that has received no viable answer.
As far as I can see, the problem cannot be solved by digression settings. Either the root-level node (triggered by matching the intent #Want_to_speak_to_someone) is set to 'return after digression' or it is not.
If the digression setting of this digression node is set to 'return', it will always return, no matter what happens further down in the dialog flow of this digression. Even if the user confirms that he wants to speak to a person (i.e. he does not want to return), the dialog will return to the node where the digression started.
This even happens when I jump from the yes-node (user confirms that he wants to speak to a person) out to any other node. As soon as that branch (or the branch the user jumped to) ends, the dialog returns to the node where the digression started.
If, however, the digression setting of this digression node is set to 'does not return', a return is not possible, even if the user decides against speaking to a person and opts for returning to where he was.
What A.H. and I want is that the user can digress from a dialog flow and can still decide whether he wants to return or not. I think this is a pretty natural and important feature of a dialog. People like to reverse their decision or maybe they even digressed unintentionally from the given dialog flow.
Akaykay proposed to have two different nodes: a 'yes-node' which allows returning and a 'no-node' that does not. But this doesn't work, because before that I must have another node that asks the user for confirmation, and this 'confirmation node' has to be set either to 'return' or 'does not return' (yielding the problems described above).
For this reason, I tried to figure out a workaround: I store the dialog node ID from which the dialog digresses in a context variable.
It is a context variable
"context": {
"last_node": "<? output.nodes_visited [0]?>",
...
},
which gets updated in every node of a dialog flow that allows digression.
In the example I could then jump back to $last_node if the user wants to return, and I could jump to another (fixed) node if the user wants to speak to a person; the digression settings of the 'digression node' would not interfere and could be set to 'does not return'.
Then I tried to edit the respective node (from which to return to $last_node) in the JSON file of my skill:
"next_step": {
"behavior": "jump_to",
"selector": "user_input",
"dialog_node": "$last_node"
},
But when I reimport the skill JSON file, I get this error message:
I would be fine with either solution: one that uses digression settings or one that allows setting the 'jump to' target node dynamically. I deeply appreciate any help. Thanks!
If you want to prevent Watson Assistant from returning from a digression, just call the <? clearDialogStack() ?> function in the node where you don't want Watson Assistant to return from the digression, and that's it.
In your chart, you would write "Ok, click here. <? clearDialogStack() ?>" in the output text of the node "Ok, click here.", and that should do the trick.
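In the skill's JSON editor, that node's output would then look roughly like this (a sketch based on the text above, not the exact node from the question):

"output": {
    "text": {
        "values": [
            "Ok, click here. <? clearDialogStack() ?>"
        ],
        "selection_policy": "sequential"
    }
}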
Here it is in the doc in this section: https://cloud.ibm.com/docs/services/assistant?topic=assistant-dialog-runtime#dialog-runtime-digressions
Also note that it is currently not possible to create dynamic jump-tos with Watson Assistant. The only thing you can do is to create a dialog node with all the needed jump-tos underneath it, each conditioned on something, and then jump to this node (see the sketch at the end of this answer). It is hard to create this manually, but it can be generated automatically. For more magic with WA, check out this project:
https://github.com/IBM/watson-assistant-workbench
It is possible to develop chatbots with WA completely without UI.
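As a rough illustration of the 'router node' workaround mentioned above: underneath a router node you would place one child per possible return target, each conditioned on the stored $last_node value and each with a fixed jump_to; you would then jump into this group (e.g. to its first child with the 'condition' selector) instead of to a dynamic target. A fragment of the dialog_nodes array in the skill JSON could look something like this, with all node IDs and conditions being hypothetical:

{
    "dialog_node": "route_to_node_a",
    "parent": "return_router",
    "conditions": "$last_node == 'node_a'",
    "next_step": {
        "behavior": "jump_to",
        "selector": "user_input",
        "dialog_node": "node_a"
    }
},
{
    "dialog_node": "route_to_node_b",
    "parent": "return_router",
    "conditions": "$last_node == 'node_b'",
    "next_step": {
        "behavior": "jump_to",
        "selector": "user_input",
        "dialog_node": "node_b"
    }
}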

IBM Chatbot Assistant - Array with same values

I have this piece of code in JSON editor of Watson:
"context": {
"array": "<? entities['spare_part'].![literal] ?>",
"array_size": "<?$array.size() ?>"
When the input of the user, for example, is "Hello, I need a valve, and the part number of the valve is 1234", the size of the array ends up being 2, since the user mentions the word "valve" twice in his input. Some nodes are executed depending on the size of the array. For instance, if the size of the array is 1, some nodes will be ignored because they are only executed if the size of the array is 2.
I want the array to store only distinct values; basically, I don't want the array to store duplicates of the same value, in my case two 'valve' entries. If this is somehow possible, please show me a way.
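To illustrate, for the example input above the two context variables end up holding something like this (values shown for illustration only):

"context": {
    "array": ["valve", "valve"],
    "array_size": 2
}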
All that can be done, but the best approach depends on the overall system architecture. Remember that Watson Assistant is a conversation service, not a data processing platform...
You can process the JSON directly in Watson Assistant using the built-in methods and SpEL; see these links to get started:
- https://console.bluemix.net/docs/services/conversation/expression-language.html#expressions-for-accessing-objects
- http://docs.spring.io/spring/docs/current/spring-framework-reference/html/expressions.html
- https://console.bluemix.net/docs/services/conversation/dialog-methods.html#expression-language-methods
That would require some coding within the dialog nodes, which could be OK. What I would recommend is to either process the data in the app that drives the dialog (you need that app anyway) or to code up small server actions to transform the data.
If it is the word you are looking for, you can use contextual entities to train for this.
As an example, I created the following intent (along with the general intents from the catalog).
For each example I highlighted the word "valve", which is the one I am interested in, and added it to the entity.
Now when I test it I get the following.
All this was done in a couple of minutes. For production level you may want to add more examples, or think about how you want to annotate.

Undefined parameter in Google Action

I have a Dialogflow agent I am trying to test on Google Assistant. I've created a relatively simple intent called "Set name" with the following training phrases:
My name is Ryan.
Bill
I'm Steve
The name's Bond. James Bond.
It has two parameters:
Required: given-name with the entity @sys.given-name and the value stored as $given-name
last-name with the entity @sys.last-name and the value stored as $last-name
I'm able to test it just fine in the Dialogflow test console. But when I try "See how it works in Google Assistant", I get the following error:
Request contains an invalid argument. The query pattern 'The name's
Bond. $SchemaOrg_Person:given-name $SchemaOrg_Person:last-name.'
contains an undefined parameter 'last-name.'
If I delete the "James Bond" training phrase, it works okay. But I would like to include that. What am I doing wrong?
Here is a screenshot of the intent that is causing the problem:
Here is the link I'm clicking to try in Google Assistant:
And finally, here is the error message that appears in the bottom-right corner of the screen when I click that link:
I suddenly got a few of these error messages when both clicking the "See how it works in Google Assistant" link and submitting the app for production.
It seems that characters like apostrophes and hyphens in the training phrases create trouble and can give that error message.
In addition, it complained about a variant of the training phrases that I could not find anywhere, no matter how much I looked through all languages, all pages of phrases and all intents. I finally found the phrase in question by exporting the project and searching through the JSON files. I could then delete the phrase locally, delete the intent in Dialogflow and import the project back into Dialogflow. (From my understanding, it had messed up a follow-up intent to which, in the JSON only and nowhere in the UI, parts of some training phrases had been attached.)
Try removing the final dot from the sentence, so it becomes:
"The name's Bond. James Bond"
I ran into the same error, and it finally turned out to be an issue with an additional language I had added.
There was the default "en" language and "en-IN", which I had added. The issue was with the training phrases in the "en-IN" language. I didn't need it, so I removed it and everything worked fine.
So, check how many languages are enabled in your agent and whether the training phrases are set properly for each of them.