Tapping on a Google Assistant list always fires the fallback intent the first time - actions-on-google

My Dialogflow agent uses an 'Actions on Google Rich Message' List response object to display options on the Google Assistant platform.
The list options work perfectly when testing in the Dialogflow console. However, when testing through the Google Assistant Simulator or the Google Assistant app on a mobile device, selecting a list option does not work on the first try; it works only when selecting an option the second time. Below is my intent code, which generates the list.
const { dialogflow, List } = require('actions-on-google');
const app = dialogflow();

app.intent('Default Welcome Intent', conv => {
  conv.ask('Hi welcome to micro strategy. I am Emily, your virtual assistant. Please tell me how can I help you');
  conv.ask(new List({
    title: 'Please choose',
    items: {
      ['SELECTION_KEY_GET_CALENDAR_EVENTS']: {
        synonyms: [
          'Get calendar events',
        ],
        title: 'Get calendar events',
        description: 'Lets you retrieve calendar events',
      },
      ['SELECTION_KEY_MODIFY_EVENTS']: {
        synonyms: [
          'Modify calendar events',
        ],
        title: 'Modify calendar events',
        description: 'Lets you modify calendar events'
      },
    },
  }));
});
Any guidance would be appreciated.

This is because you must have an intent that handles the actions_intent_OPTION event, which is fired the moment you tap an element in the list.
Lists and carousels always fire that event. If no intent can handle the actions_intent_OPTION event, the conversation falls through to the Fallback intent.
Refer to the documentation, section List > Requirements > Interactions > Voice/Text: "Must have an intent for touch input that handles the actions_intent_OPTION event."
Let me know if it helps, Marco
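For reference, the Dialogflow-side handler could look roughly like this. This is a minimal sketch, assuming the same `app` and selection keys as the question's code; the intent name 'List Option Selected' and the reply strings are hypothetical, and in the Dialogflow console that intent must list actions_intent_OPTION in its Events field:

```javascript
// Pure helper so the routing logic is easy to follow and test in isolation;
// the reply strings are placeholders, not from the original question.
function responseForOption(option) {
  switch (option) {
    case 'SELECTION_KEY_GET_CALENDAR_EVENTS':
      return 'Fetching your calendar events.';
    case 'SELECTION_KEY_MODIFY_EVENTS':
      return 'Which event would you like to modify?';
    default:
      return 'Sorry, I did not recognize that option.';
  }
}

// Wiring it up (uses the same `app` as the question's code; for the
// actions_intent_OPTION event the selected item's key arrives as the
// handler's third argument):
//
// app.intent('List Option Selected', (conv, params, option) => {
//   conv.ask(responseForOption(option));
// });
```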

Related

How to add suggestion chips in webhook with actions sdk

I am building an action with the Google Actions SDK, and for one scene I am using a webhook for onEnter. My question is: how do I add suggestion chips using the webhook function?
This is my webhook
app.handle('startSceneOnEnter', conv => {
  conv.add('its the on enter of the scene');
});
I could not find how to add suggestion chips with the conversation object; any help would be great.
Suggestion chips can be added with the Suggestion class.
const { Suggestion } = require('@assistant/conversation');

app.handle('startSceneOnEnter', conv => {
  conv.add('its the on enter of the scene');
  conv.add(new Suggestion({ title: 'My Suggestion' }));
});

Google Actions - Invoking Same Intent Twice Freezes

I am developing an action with Dialogflow and the actions-on-google library for Node.js. First I open the action, and then I say, for example, "Bring Pizza", which invokes the 'Pizza' intent. If I say "Bring Pizza" again (on a screen device), the action freezes. This happens with any intent I have: the text scrolls up and the action just stops. Note: this only happens with real screen devices, not on my phone or the device simulator.
My welcome intent just asks a question
// Welcome Intent
agent.intent('Open', async (conv) => {
  conv.ask("To test, say: Bring Pizza");
});
The Pizza intent is invoked by saying "Bring pizza":
// Pizza Intent
agent.intent('Pizza', async (conv) => {
  conv.ask("Say bring pizza again"); // Just for test purposes
});
Intents in Dialogflow are fulfilled with a webhook call to a Firebase function (screenshots: Pizza intent in Dialogflow, Open intent in Dialogflow).

How to trigger a follow-up intent when a user clicks on a Dialogflow button link

I have used the Dialogflow one-click integration for Telegram. How can I trigger a follow-up intent when a user clicks on a button link that redirects to an external site?
agent.add(
  new Card({
    title: `Please click the link below and input your Password.`,
    buttonText: "Click to input your password",
    buttonUrl: "https://password.com/",
    meta: "run intent"
  })
);
I added a meta key, but this did not work.
Thanks

How to wrap an existing chatbot for Google Assistant (Google Home)

We have a chatbot for our website today that is not built using Google technology. The bot has a JSON REST API to which you can send the question and which replies with the corresponding answers. So all the intents and entities are resolved by the existing chatbot.
What is the best way to wrap this functionality for Google Assistant / Google Home?
To me it seems I need to extract the "original" question from the JSON that is sent to our webservice (when I enable fulfilment).
But since context is used to exchange "state", I have to find a way to exchange the context between Dialogflow and our own chatbot (see above).
But maybe there are other ways? Can our chatbot be invoked directly, without Dialogflow as a man in the middle?
This is one of those responses that may not be enough for someone who doesn't know what I am talking about and too much for someone who does. Here goes:
It sounds to me as if you need to build an Action with the Actions SDK rather than with Dialogflow. Then you implement a text "intent" in your Action - i.e. one that runs every time the user speaks something. In that text intent you ask the AoG platform for the text - see getRawInput(). Now you do two things. One, you take that raw input and pass it to your bot. Two, you return a promise to tell AoG that you are working on a reply but don't have it yet. Once the promise is fulfilled - i.e. when your bot replies - you respond with the text you got from your bot.
I have a sample Action called the French Parrot here: https://github.com/unclewill/french_parrot. As far as speech goes, it simply speaks back whatever it hears, as a parrot would. It also goes to a translation service to translate the text and return the (loose) French equivalent.
Your mission, should you choose to accept it, is to take the sample, rip out the code that goes to the translation service and insert the code that goes to your bot. :-)
Two things I should mention. One, it is not "idiomatic" Node or JavaScript you'll find in my sample. What can I say - I think the rest of the world is confused. Really. Two, I have a minimal sample of about 50 lines that eschews the translation here: https://github.com/unclewill/parrot. Another option is to use that as a base and add the code to call your bot, plus the Promise-y code to wait on it.
If you go the latter route remove the trigger phrases from the action package (action.json).
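Putting the steps above together, the text-intent round trip might be sketched like this. The endpoint URL and the `question`/`answer` field names are assumptions about your bot's REST API, not part of the sample:

```javascript
// Build the payload your existing bot's REST API expects; the field name
// `question` is an assumption about that API.
function buildBotRequest(rawInput) {
  return { question: rawInput };
}

// Pull a usable reply out of the bot's JSON response, with a fallback if
// the (assumed) `answer` field is missing.
function extractReply(botResponse) {
  return (botResponse && botResponse.answer) || 'Sorry, I did not catch that.';
}

// In the text intent you would then do roughly this (hedged sketch):
// returning the promise is what tells the platform to wait for the reply.
//
// function textIntent(app) {
//   return fetch('https://your-bot.example.com/ask', {
//     method: 'POST',
//     headers: { 'Content-Type': 'application/json' },
//     body: JSON.stringify(buildBotRequest(app.getRawInput())),
//   })
//     .then(res => res.json())
//     .then(body => app.ask(extractReply(body)));
// }
```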
So you already have a backend that processes user input and sends responses back, and you want to use it to process a new input flow (coming from Google Assistant)?
That's actually my case: I have a service running as a Facebook Messenger chatbot and recently started developing a Google Home Action for it.
It's quite simple. You just need to:
Create an action here https://console.actions.google.com
Download GActions-Cli from here https://developers.google.com/actions/tools/gactions-cli
Create a JSON file action.[fr/en/de/it].json (choose a language). The file is where you define your intents and the URL of your webhook (a middleware between your backend and Google Assistant). It may look like this:
{
  "locale": "en",
  "actions": [
    {
      "name": "MAIN",
      "description": "Default Welcome Intent",
      "fulfillment": {
        "conversationName": "app name"
      },
      "intent": {
        "name": "actions.intent.MAIN",
        "trigger": {
          "queryPatterns": [
            "Talk to app name"
          ]
        }
      }
    }
  ],
  "conversations": {
    "app name": {
      "name": "app name",
      "url": "https://your_nodejs_middleware.com/"
    }
  }
}
Upload the JSON file using gactions update --action_package action.en.json --project PROJECT_ID
AFAIK, there is only a Node.js client library for Actions on Google (https://github.com/actions-on-google/actions-on-google-nodejs), which is why you need a Node.js middleware in front of your backend.
Now, user inputs will be sent to your Node.js middleware (app.js) hosted at https://your_nodejs_middleware.com/ which may look like:
// Require Express and everything else needed to build a Node.js server;
// look up how to build a simple web server in Node.js
// if you are new to this domain.
const { ActionsSdkApp } = require('actions-on-google');

app.post('/', (req, res) => {
  req.body = JSON.parse(req.body);
  const app = new ActionsSdkApp({
    request: req,
    response: res
  });

  // Create functions to handle requests here
  function mainIntent(app) {
    let inputPrompt = app.buildInputPrompt(false,
      'Hey! Welcome to app name!');
    app.ask(inputPrompt);
  }

  function respond(app) {
    let userInput = app.getRawInput();
    // HERE you get what the user typed/said to Google Assistant.
    // NOW you can send the input to your BACKEND, process it,
    // get the response_from_your_backend and send it back.
    app.ask(response_from_your_backend);
  }

  let actionMap = new Map();
  actionMap.set('actions.intent.MAIN', mainIntent);
  actionMap.set('actions.intent.TEXT', respond);
  app.handleRequest(actionMap);
});
Hope that helped!
Thanks for all the help. The main parts of the solution are already given, but I summarize them here:
an action.json that passes everything on to the fulfilment service
a man in the middle (in my case an IBM Cloud Function) to map the JSON between the services
sharing context/state through the conversationToken property
You can find the demo here: Hey Google talk to Watson
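For the last point, the conversationToken round trip could be sketched as below. This is a hedged sketch against the v1-era ActionsSdkApp API, where the second argument of ask() is serialized into conversationToken and read back with getDialogState() on the next turn; the shape of `botReply` is an assumption about your bot's response:

```javascript
// Pure helper: fold the bot's returned context into the state object we
// round-trip through conversationToken on each turn.
function mergeState(previousState, botContext) {
  return Object.assign({}, previousState, botContext);
}

// In the TEXT intent handler (names hedged, v1-era API):
//
// function respond(app) {
//   const state = app.getDialogState() || {};  // last turn's token payload
//   // ...send app.getRawInput() plus `state` to your bot, get `botReply`...
//   // Echo the updated state back so it returns on the next request:
//   app.ask(botReply.text, mergeState(state, botReply.context));
// }
```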

display map in facebook messenger bot

I need to show a map in a Facebook Messenger bot. According to the documentation the following code should work, but I only see the message and a place icon. Has anyone faced a similar issue?
var messageData = {
  recipient: {
    id: userId
  },
  message: {
    text: msg,
    metadata: "DEVELOPER_DEFINED_METADATA",
    quick_replies: [
      {
        "content_type": "location"
      }
    ]
  }
};
callSendAPI(messageData);
I was able to use this functionality without a problem in the iOS Messenger app. The 'location' feature allows the user to send their location to the bot. By default, it showed a map with 'Your Location' and 'Tap to view on map'.
I tried from the desktop and it gave an error indicating that locations are only usable in the app.
The bot is running at DMS Software Bot. Type 'quick reply' and hit location.
The source is in the FB-Robot repository on GitHub.