If I say
ask AutoVoice to expense $25 for test category test
or any command with 7 words or more, Google Assistant will not call my service but will instead do a normal search.
If I say something shorter like
ask AutoVoice to say hello there
It'll call my Action on Google.
I expected the assistant to call my action when either a short or long command is used.
I'm developing a Google Action through Dialogflow and a webhook (it will run on a Nest Hub) that I want to behave like this:
the user invokes the action: "Hey Google, talk to ACTIONNAME"
through the Default Welcome Intent ("hooked" to my web service), the Action replies to the user and opens a website
app.intent('Default Welcome Intent', conv => {
  conv.ask("Hi! I'm opening your site")
  conv.ask(new HtmlResponse({
    url: 'https://MY_IOT_SITE'
  }))
})
Now, the user could be "silent" for minutes or hours, but I'd like to prevent Google Actions from closing ACTIONNAME and returning to the clock; as it stands, the action closes after a couple of minutes.
Is this somehow possible?
Thank you.
This is not possible. The platform intentionally places an upper bound on how long an action can run without any user input. This is done so that an action cannot occupy the device longer than expected and prevent future inputs from unintentionally getting routed to your action rather than the Google Assistant.
You can take a look at the additional guidelines for developing your web app.
Since your question refers to an IoT-related website, you may want to take a look at the Smart Home reference, which provides an alternative way to let users control smart home devices with their voice or built-in graphical widgets.
Certain keywords are exiting my action.
First of all, I'm aware that there are built-in/system intents (docs). I know that there are conversation exit keywords that trigger actions.intent.CANCEL and exit the action/conversation (e.g. "exit", "cancel", "stop"; see docs).
However, I cannot find any documentation listing the keywords that exit my action (and any new one I create).
Keywords like "thank you", "help", and "call center" consistently exit my action and submit the user query globally to Google Assistant. For example, inputting "thank you" during a conversation in my action will exit it (i.e. " has left the conversation") and Google Assistant will answer with something like "That's what I was built for!". Testing in the simulator shows this exit as an invocation error (see screenshot_1); testing with Google Assistant on mobile shows the exit from the action and the answer from GA (see screenshot_2).
Why is this happening instead of a fallback intent?
This seems to be a case of no-match yielding, where queries that do not match your Action are handled by the Assistant instead.
If you do want to handle these queries yourself, you can add an intent that matches freeform text, as in the sketch below. This may come at a detriment to the user experience, though, if the user actually intended the Assistant to answer.
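As a hedged sketch of that approach, assuming a Dialogflow agent with Python fulfillment (Flask, with a hypothetical /webhook route and reply text): routing the Default Fallback Intent, which matches any otherwise-unmatched phrase, to your own webhook keeps those queries inside your Action.

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    body = request.get_json()
    intent = body['queryResult']['intent']['displayName']
    query = body['queryResult']['queryText']  # the raw user utterance
    if intent == 'Default Fallback Intent':
        # handle the freeform text ourselves instead of yielding to the Assistant
        return jsonify({'fulfillmentText': 'You said "%s". How can I help?' % query})
    return jsonify({'fulfillmentText': 'Sorry, I did not get that.'})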
I'm building a chatbot on Telegram with Watson Assistant and Node-RED. I need to get a date and a time from the user to book an appointment, so I used a slot that requires both pieces of information. Using the trial chatbot offered by Watson, I have no problem with the slot; but using Node-RED, I can't get beyond entering the date. In the debug pane I saw that after entering the date, i.e. after the first slot runs, the error "msg.payload.content is empty" is returned. Moreover, looking at the body of the output message returned by the assistant, the msg.payload.output.generic field is empty; it should instead contain the assistant's response asking, after the date has been entered, for the time as well. It seems that the bot is stuck on entering the date, but I don't think it really is, because the trial chatbot works perfectly.
What could be the problem?
Neither the Assistant V1 node nor the V2 node sets or looks at msg.payload.content. On input they look at msg.payload, and on exit they assign the response from Watson Assistant to msg.payload.
If you are getting a "msg.payload.content is empty" error, then that is happening somewhere in your flow, most likely at the end where you are trying to process the response. If msg.payload.content is empty then the assistant dialog is not returning any output, which is strange, as it should be returning the prompt for the currently empty slot.
What does msg.payload look like?
Are you using the V1 or the V2 node, and which version of the node-red-node-watson nodes are you using? You can tell by checking the palette.
Both the V1 and V2 nodes, however, have been tested with slots, and the response does end up on msg.payload. The current released version of node-red-node-watson is 0.9.0.
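To isolate the problem, here is a hedged diagnostic sketch that bypasses Node-RED entirely: call the same skill with the ibm-watson Python SDK and inspect the raw response that the node would copy onto msg.payload (API_KEY, SERVICE_URL and ASSISTANT_ID are placeholders for your own credentials).

from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import AssistantV2

assistant = AssistantV2(version='2021-06-14',
                        authenticator=IAMAuthenticator(API_KEY))
assistant.set_service_url(SERVICE_URL)
session = assistant.create_session(assistant_id=ASSISTANT_ID).get_result()

response = assistant.message(
    assistant_id=ASSISTANT_ID,
    session_id=session['session_id'],
    input={'message_type': 'text', 'text': 'tomorrow'}  # the date turn of the slot
).get_result()

# after the date is accepted, output.generic should hold the prompt asking for
# the time; if it is empty here as well, the problem is in the dialog, not the flow
print(response['output']['generic'])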
I have a website which links to a chatbot built on IBM Watson Assistant. There are some hyperlinks on the website that I want to trigger specific nodes/intents in the Watson dialog.
Example: the user clicks on a "Provide feedback" link, the Watson chatbot launches, and based on the link the "provide_feedback" intent is recognised (thus preventing the user from needing to specify the intent after clicking the link).
Has anyone tried this before?
I also came across this requirement and want to mention another alternative here:
Instead of sending an input text that matches the intent of your desired node, you can also pass "intents to use when evaluating the user input" (see the docs) and tell the assistant to match it with a confidence of 1.0.
I think this is a clean method, because you don't need to deal with disambiguation of your input text. You don't need to send input text at all, and the intent does not even need example phrases :-)
For example, if you want to trigger a node that has the intent #provide_feedback, you can call this Python example code:
from ibm_watson.assistant_v2 import MessageInput, RuntimeIntent

# assistant, ASSISTANT_ID and SESSION_ID are assumed to be set up already
def send_message_to_chatbot(text="", intent=""):
    # passing the intent with confidence 1.0 makes the assistant skip
    # intent detection and go straight to the matching dialog node
    message = assistant.message(
        assistant_id=ASSISTANT_ID,
        session_id=SESSION_ID,
        input=MessageInput(
            text=text,
            intents=[RuntimeIntent(intent=intent, confidence=1.0)]
        )
    ).get_result()
    return message

send_message_to_chatbot(text="", intent="provide_feedback")
A prerequisite, of course, is that the node is in the root branch of your dialog so that it can be triggered.
The Watson Assistant service is essentially used via a REST API. That API is invoked from the "Try it" pane in the workspace editor, from your dedicated application, or maybe from widgets embedded into a website. The message call is used to send user input to Watson Assistant and to receive a chatbot response.
What you can do is call the message API from your app and pass a specific term as the input message. That term would match an intent and hence trigger a specific dialog node. As an example, if you have an intent "provide_feedback" defined for the phrase "user pressed feedback button" and you pass exactly that phrase as the input message, then the intent "provide_feedback" will match.
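A minimal sketch of that call with the V2 Python SDK, assuming assistant, ASSISTANT_ID and SESSION_ID are set up as in the previous answer:

response = assistant.message(
    assistant_id=ASSISTANT_ID,
    session_id=SESSION_ID,
    input={'message_type': 'text', 'text': 'user pressed feedback button'}
).get_result()
# the response now comes from the dialog node behind #provide_feedback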
I am making an app that predicts the clothes the user is wearing. It also uses the Visual Recognition tool, and for Conversation and VR to communicate, I attach the intent 'suggestClothing' or 'clothResult' to the clothing items it found. I use an entity for Conversation to recognize the clothing items and respond accordingly.
The flow should be as follows:
User: how do I look?
-classifies clothes-
App to conversation: clothSuggest blackJacket
Conversation to user: "You picked the black jacket! Try out the green shirt with this outfit and show me how you look."
-classifies clothes-
App to conversation: clothResult blackJacket greenShirt
Conversation to user: "You look great in that outfit!"
All nodes have multiple responses, as all clothes come in pairs: the user is wearing either one or the other, and Conversation will then always suggest its match.
Conversation flow looks like this
I also attempted this. Here sq123 is suggestClothing (the first intent) and cq123 is clothResult:
This works fine in 'Try it out', too, but in the app it immediately exits the branch on 'clothResult item1 item2' and matches other conditions in the app.
What's the best way to optimize my flow to make it work in the app?
A typical reason why something works in "Try it out" but not in the app is that the context object is not returned properly.
When the app invokes the message method of the Watson Assistant API, a context object is passed. The calls are stateless, and everything Watson Assistant needs to continue a dialog is included in the context object. Thus, when your app retrieves the results from the message API, it needs to save the context and pass it back to Watson Assistant the next time the message method is invoked (for that session and user).
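A minimal sketch of that round trip, assuming the V1 (workspace-based) API that this Conversation service uses, with WORKSPACE_ID as a placeholder (the newer V2 API keeps state server-side in sessions instead):

context = {}  # empty on the first turn

def send_turn(assistant, text):
    # assistant is an ibm_watson.AssistantV1 client set up elsewhere
    global context
    response = assistant.message(
        workspace_id=WORKSPACE_ID,
        input={'text': text},
        context=context  # hand back exactly what Watson returned last time
    ).get_result()
    context = response['context']  # save it for the next turn
    return response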