Slot recognition issue with deep linking in Google Assistant Action Builder - actions-on-google

We created a Google Assistant action using Action Builder. This action has a required slot. The slot is not recognized when the intent is invoked via deep linking, but it works fine when it is invoked in two steps.
Example:
User says: Ask [action name] what is the pressure in [slot value]. -> Google Assistant asks for the slot value even though it is already in the command.
On the other hand, the following two-step exchange recognizes the slot value:
User: Ask [action name]
GA: Welcome to [action name]. What can I do for you?
User: What is the pressure in [slot value]?
Here the action recognizes the slot value and responds with
"The pressure in [slot value] is ..."
How can we make the invocation via deep linking recognize the slot value?
Thank you!

Related

Thingsboard: Create a "rest api call" button in a dashboard

I am trying to create the following in Thingsboard:
In a dashboard, create a button; when it is clicked, a REST API call to an external server is made.
So far I have found that it is possible to define a rule chain with a "rest api call" node, but I am unable to find a good rule that will lead to its execution (sending an API call each time an entity is created is obviously a bad option).
In the "control widgets" I was not able to create a working solution, but it looks like the correct way.
I have figured out a way to do it; not the best, but it works:
Create a dashboard.
Create an "Update Device Attribute" control widget.
Go to "Edit" in the "Update Device Attribute" control widget, pick an unused device (it might be possible to pick a used device, but I am not sure whether the operation might alter it), go to "Advanced" and set "Device attribute parameters" to any valid JSON, for example: {"rest":1}.
Go to "Rule Chains" and create the following rule:
Input -> Message Type Switch -(Attribute Updated)-> Rest Api Call (choose from nodes-external).
In the "Rest Api Call" node set the required endpoint URL and method, then apply the change.
If you have configured everything properly, a REST API call will be made every time the button is clicked.
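To verify the chain end to end, here is a minimal sketch of the receiving side, assuming a Python/Flask server on the external host; the route name and port are placeholders, not anything ThingsBoard prescribes:

from flask import Flask, request

app = Flask(__name__)

@app.route("/thingsboard-hook", methods=["POST"])
def thingsboard_hook():
    # The Rest Api Call node forwards the rule-engine message body,
    # e.g. the updated attribute JSON such as {"rest": 1}
    print("Received from ThingsBoard:", request.get_json(silent=True))
    return "", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

Clicking the widget button should then print the attribute JSON on the server each time.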

How to trigger a specific node in IBM Watson Assistant from URL

I have a website which links to a chatbot built on IBM Watson Assistant. There are some hyperlinks on the website that I want to trigger specific nodes/intents in the Watson dialog.
Example: User clicks on "Provide feedback" link, the watson chatbot launches and based on the link the "provide_feedback" intent is recognised (thus preventing the user from needing to specify the intent after clicking the link).
Has anyone tried this before?
I also came across this requirement and want to mention another alternative here:
Instead of sending an input text that matches the intent of your desired node, you can also pass intents to use when evaluating the user input (see the message API documentation) and tell the assistant to match it with a confidence of 1.0.
I think this is a clean method, because you don't need to deal with disambiguation of your input text.
Then you don't need to send input text at all and the intent actually does not even need example phrases :-)
For example, if you want to trigger a node that has the intent #provide_feedback,
you can call this Python example code:

# Assumes `assistant` (an ibm_watson.AssistantV2 client), ASSISTANT_ID
# and SESSION_ID have already been set up elsewhere.
from ibm_watson.assistant_v2 import MessageInput, RuntimeIntent

def send_message_to_chatbot(text="", intent=""):
    message = assistant.message(
        assistant_id=ASSISTANT_ID,
        session_id=SESSION_ID,
        input=MessageInput(
            text=text,
            # force the intent with full confidence instead of matching text
            intents=[RuntimeIntent(intent=intent, confidence=1.0)]
        )
    ).get_result()
    return message

send_message_to_chatbot(text="", intent="provide_feedback")
Prerequisite is of course that the node is in the root branch of your dialog so it can be triggered.
The Watson Assistant service is basically used via a REST API. That API is invoked from the "Try it" pane in the workspace editor, from your dedicated application, or from widgets embedded into a website. The message call is used to send user input to Watson Assistant and to receive a chatbot response.
What you can do is to call the message API from your app and pass a specific term as input message. That term would match an intent and hence trigger a specific dialog node. As an example, if you have an intent "provide_feedback" defined for the phrase "user pressed feedback button" and you pass in exactly that phrase as input message, then the intent "provide_feedback" will match.
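A minimal sketch of that approach with the Python SDK, assuming an AssistantV2 client and the IDs are already set up as in the answer above:

# Send the exact phrase the intent was trained on so it matches
response = assistant.message(
    assistant_id=ASSISTANT_ID,
    session_id=SESSION_ID,
    input={"text": "user pressed feedback button"}
).get_result()
print(response["output"]["intents"])   # should show provide_feedback matched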

Conversation flow works in 'Try it out' but skips child node in app

I am making an app that predicts what clothes the user is wearing. It also uses the Visual Recognition tool, and for Conversation and Visual Recognition to communicate, I attach the intent 'suggestClothing' or 'clothResult' to the cloth items it finds. I use an entity for Conversation to recognize the cloth items and respond accordingly.
The flow should be as follows:
User: how do I look?
-classifies clothes-
App to conversation: clothSuggest blackJacket
Conversation to user: "You picked the black jacket! Try out the green shirt with this outfit and show me how you look."
-classifies clothes-
App to conversation: clothResult blackJacket greenShirt
Conversation to user: "You look great in that outfit!"
All nodes have multiple responses, as all clothes come in pairs: the user is wearing either one or the other, and Conversation will then always suggest its match.
Conversation flow looks like this
I also attempted this. Here sq123 is suggestClothing (first intent) and cq123 is clothResult:
This works fine in 'Try it out', too, but in the app, it immediately exits the branch on 'clothResult item1 item2' and matches with other conditions in the app.
What's the best way to optimize my flow to make it work in the app?
A typical reason why it works in "Try it out" and not in the app is that the context object is not returned properly.
When the app invokes the message method of the Watson Assistant API, a context object is passed. The calls are stateless and everything needed for Watson Assistant to continue a dialog is included in the context object. Thus, when your app retrieves the results from the message API, it needs to save the context and pass it back to Watson Assistant the next time the message method is invoked again (for that session and user).
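A minimal sketch of that round trip with the V1 (workspace) API in Python; the client object `conversation` and WORKSPACE_ID are assumed to be set up elsewhere:

context = {}
while True:
    user_text = input("You: ")
    response = conversation.message(
        workspace_id=WORKSPACE_ID,
        input={"text": user_text},
        context=context                # pass back the previous turn's context
    ).get_result()
    context = response["context"]      # save it for the next message call
    print("Bot:", " ".join(response["output"]["text"]))

If the context is dropped between turns, the dialog restarts from the root on every call, which is exactly the "skips the child node" symptom described above.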

Why doesn't my Action on Google recognize touch selection?

I'm writing a simple Action on Google without any webhooks. Every response is generated by Dialogflow.
I have an intent that works flawlessly if I invoke it by speech or typing, but if I invoke it by selecting it from a list it doesn't work and the Default Fallback intent is called.
In the simulator it shows the right "text" when I click on the list.
Have I done something wrong, or do I need to specify something in the list?
This is how the list is generated
This is my intent
This is what happens in the simulator if I click on the "easyTravel" item in the list (it triggers the Default Fallback intent)
This is what happens in the simulator if I type "easyTravel" (the right intent is executed)
To catch a click on a list, I need an intent configured to be triggered by the event actions_intent_OPTION.
Only once I read this question - and your answer - could I fix my own problem. Just to share: if you are using a webhook and are waiting for webhook actions in your script, you can create a new intent which has the event, as #Edo states: actions_intent_OPTION. If you define an action in that intent, this is what will be triggered by your webhook. You can then get the parameter with (Node.js):
const param = app.getSelectedOption();
Without the 'empty' intent, with the event and action, I was not receiving any input.
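If you are using a raw webhook rather than the actions-on-google Node.js library, here is a hedged Python/Flask sketch of reading the tapped option key; the JSON paths follow the payload Dialogflow v2 forwards from the Assistant, and the route and port are placeholders:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json()
    option = None
    # Dialogflow forwards the Assistant request under originalDetectIntentRequest;
    # the key of the tapped list item arrives as the OPTION argument.
    payload = req.get("originalDetectIntentRequest", {}).get("payload", {})
    for inp in payload.get("inputs", []):
        for arg in inp.get("arguments", []):
            if arg.get("name") == "OPTION":
                option = arg.get("textValue")
    return jsonify({"fulfillmentText": "You selected: {}".format(option)})

if __name__ == "__main__":
    app.run(port=8080)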

Roku: Launching a Premium Application Directly into a Specific Video

The Roku media player provides a RESTful API with the following commands:
query/apps This ‘query/apps’ returns a map of all the channels installed on the Roku box paired with their app id. This command is accessed via an http GET.
keydown takes an argument describing the key pressed. Keydown is equivalent to pressing down the remote key whose value is the argument passed. This command is sent via a POST with no body.
keyup takes an argument describing the key to release. Keyup is equivalent to releasing the remote key whose value is the argument passed. This command is sent via a POST with no body.
keypress takes an argument describing the key that is pressed. Keypress is equivalent to pressing down and releasing the remote key whose value is the argument passed. This command is sent via a POST with no body.
launch takes an app id as an argument and a list of url parameters that are sent to the app id as an roAssociativeArray passed to the RunUserInterface() or Main() entry point. This command is sent via a POST with no body.
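For illustration, these commands map onto plain HTTP calls; a quick Python sketch using the requests library (the IP address is a placeholder, and ECP listens on port 8060):

import requests

ROKU = "http://192.168.1.20:8060"    # placeholder Roku IP on the local network

apps_xml = requests.get(ROKU + "/query/apps").text   # XML map of installed channels
requests.post(ROKU + "/keypress/Home")               # press and release Home
requests.post(ROKU + "/launch/12?foo=bar&someVar=someValue")  # launch app id 12 with url parameters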
After I query the list of installed applications like so:
<apps>
<app id="5127" version="1.0.28">Roku Spotlight</app>
<app id="11" version="2.2.2002">Roku Channel Store</app>
<app id="28" version="2.0.20">Pandora</app>
<app id="12" version="2.4.6">Netflix</app>
<app id="13" version="3.2.7">Amazon Instant Video</app>
<app id="2285" version="2.1.1">Hulu Plus</app>
</apps>
I want to launch the Netflix (ID 12) application into a specific TV program or Movie:
POST /launch/12?foo=bar&someVar=someValue HTTP/1.1
where foo and someVar are variables that I would send to Netflix that correspond to that particular piece of content. However, I don't know which variables or which values I need to send to the premium applications.
Is there any list of params that are accepted by Netflix/Amazon/Hulu/etc?
Currently these content providers do not provide an interface for launching content externally. The best you can do is use, for example, the Netflix API to add content to the user's Queue. There are several third-party Roku channels that do this already, specifically Instant Watch Browser and MultiQ's; both are in the Roku Channel Store.
I recently wrote a small Python script which enables me to control my Roku and launch and play TV shows and movies directly within Netflix/Hulu/Amazon/etc.
It makes use of the search functionality of the External Control API, and then follows up with a scripted series of keypresses to blindly play the first search result.
It seems to be working pretty well for me, so far! I have even wired it up to my Amazon Alexa, so I can launch just about anything I want, entirely by voice!
Here's the URL to the github project, if you are interested:
https://github.com/tomchapin/roku-search-launcher
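For reference, a rough sketch of that search-and-keypress approach in Python; the /search/browse endpoint and its parameters are assumptions based on Roku's External Control documentation, and the timings are blind guesses, so treat this as an outline rather than a drop-in script:

import time
import requests

ROKU = "http://192.168.1.20:8060"     # placeholder Roku IP

def search_and_play(keyword, provider_id):
    # Open the search results for this keyword inside one provider's channel
    requests.post(ROKU + "/search/browse",
                  params={"keyword": keyword,
                          "launch": "true",
                          "provider-id": str(provider_id)})
    time.sleep(6)                      # blindly wait for the channel to load
    requests.post(ROKU + "/keypress/Select")   # pick the first result
    time.sleep(2)
    requests.post(ROKU + "/keypress/Play")     # start playback

search_and_play("The Office", 12)      # 12 = Netflix app id from the list above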