Trying to implement NEW_SURFACE in a webhook response (Google Assistant voice interaction calling Dialogflow, which calls a webhook).
When I have the web browser capability I display my cards, but I want to redirect the user to their phone when they are talking to a Google Home.
This is my error in Actions on Google:
{
"responseMetadata": {
"status": {
"code": 10,
"message": "Failed to parse Dialogflow response into AppResponse because of empty speech response",
"details": [
{
"#type": "type.googleapis.com/google.protobuf.Value",
"value": "{\"id\":\"28ef98e1-caec-4e1f-9a14-8fda597e8a06\",\"timestamp\":\"2018-08-17T12:31:10.735Z\",\"lang\":\"fr-ca\",\"result\":{},\"alternateResult\":{},\"status\":{\"code\":200,\"errorType\":\"success\"},\"sessionId\":\"1534509012113\"}"
}
]
}
}
}
And this is my webhook response:
{
"fulfillmentMessages": [],
"payload": {
"google": {
"expectUserResponse": true,
"expectedInputs": [
{
"inputPrompt": {
"richInitialPrompt": {
"items": [
{
"simpleResponse": {
"textToSpeech": "TEST CHANGE SURFACE"
}
}
]
}
},
"possibleIntents": [
{
"intent": "actions.intent.NEW_SURFACE",
"inputValueData": {
"#type": "type.googleapis.com/google.actions.v2.NewSurfaceValueSpec",
"context": "To show you an image",
"notificationTitle": "Check out this image",
"capabilities": [
"actions.capability.SCREEN_OUTPUT"
]
}
}
]
}
]
}
},
"source": "google"
}
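For reference, the same surface transfer built with the actions-on-google Node.js client library (v2) looks roughly like this. This is a sketch only; the intent names are illustrative, and the 'Get New Surface' intent would need the actions_intent_NEW_SURFACE event in Dialogflow:
// Sketch: requesting and handling a surface transfer with the v2 client library.
const { dialogflow, NewSurface } = require('actions-on-google');
const app = dialogflow();

app.intent('Show Image', (conv) => {
  const screen = 'actions.capability.SCREEN_OUTPUT';
  if (conv.available.surfaces.capabilities.has(screen)) {
    // A screen device is linked to this user: offer to move the conversation there.
    conv.ask(new NewSurface({
      context: 'To show you an image',
      notification: 'Check out this image',
      capabilities: screen,
    }));
  } else {
    conv.ask('Sorry, I have no screen to send this to.');
  }
});

// Fires when the user accepts or declines the transfer (actions_intent_NEW_SURFACE event).
app.intent('Get New Surface', (conv, input, newSurface) => {
  if (newSurface.status === 'OK') {
    conv.ask('Here is the image.'); // plus a basic card with the image
  } else {
    conv.ask("Ok, I'll stay here.");
  }
});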
Need help please ;-)
Is anyone able to get "NO_INPUT" requests from AoG once they've configured their action.json to route those requests to their server?
I'm receiving "actions.intent.CANCEL" intent requests, but not "actions.intent.NO_INPUT".
My action.json file...
{
"actions": [
{
"description": "Default Welcome Intent",
"name": "MAIN",
"fulfillment": {
"conversationName": "thebigbadtest"
},
"intent": {
"name": "actions.intent.MAIN",
"trigger": {
"queryPatterns": [
"talk to the Sophmora"
]
}
}
}
],
"conversations": {
"thebigbadtest": {
"name": "thebigbadtest",
"url": "https://api.oogum.io/rest/aog/agents/GYuM1j2At8eVLOqh8m1q/apprequests",
"inDialogIntents": [{
"name": "actions.intent.CANCEL"
}, {
"name": "actions.intent.NO_INPUT"
}]
}
},
"locale": "en"
}
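For reference, the webhook side of this with the actions-on-google library would be roughly the following (a sketch; the prompt texts are placeholders, and the handler is served at the conversation URL declared above):
// Sketch: Actions SDK fulfillment registering the in-dialog intents from action.json.
const { actionssdk } = require('actions-on-google');
const app = actionssdk();

app.intent('actions.intent.MAIN', (conv) => {
  conv.ask('Welcome to the big bad test. Say something.');
});

// Expected when AoG reprompts after silence, since actions.intent.NO_INPUT
// is listed under inDialogIntents in action.json.
app.intent('actions.intent.NO_INPUT', (conv) => {
  if (conv.arguments.get('IS_FINAL_REPROMPT')) {
    conv.close('Okay, talk to you later.');
  } else {
    conv.ask('Are you still there?');
  }
});

app.intent('actions.intent.CANCEL', (conv) => {
  conv.close('Goodbye!');
});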
I would like to use the V2 API in my Dialogflow fulfillment.
But when I self-host with Express, only V1 works. Why?
With Firebase Functions, both V1 and V2 work with the same code.
I applied this: https://developers.google.com/actions/reference/nodejs/lib-v1-migration
Error for the welcome intent:
Request from the simulator:
{
"user": {
"userId": "ABwppHHtohp6Z0QsGp9X_TSwUK3gCxdRwCZ5w3kXR-iI-aXiUSNZR3Vuo59vocUIgP80gE2qWs2SFKk-fI6g83YJjA",
"locale": "fr-CA",
"lastSeen": "2019-02-25T15:29:15Z"
},
"conversation": {
"conversationId": "ABwppHHfpVWINKIQFvk-bzrVSvu4s-8JexXgZXP7FQ-NQ5HmPAneHtGY0u86_llCV--tj3TZpMtCMIMCZakyxc7mYQ",
"type": "NEW"
},
"inputs": [
{
"intent": "actions.intent.MAIN",
"rawInputs": [
{
"inputType": "KEYBOARD",
"query": "Parler avec le diable"
}
]
}
],
"surface": {
"capabilities": [
{
"name": "actions.capability.SCREEN_OUTPUT"
},
{
"name": "actions.capability.MEDIA_RESPONSE_AUDIO"
},
{
"name": "actions.capability.WEB_BROWSER"
},
{
"name": "actions.capability.AUDIO_OUTPUT"
}
]
},
"isInSandbox": true,
"availableSurfaces": [
{
"capabilities": [
{
"name": "actions.capability.SCREEN_OUTPUT"
},
{
"name": "actions.capability.WEB_BROWSER"
},
{
"name": "actions.capability.AUDIO_OUTPUT"
}
]
}
],
"requestType": "SIMULATOR"
}
Response from the simulator:
{
"responseMetadata": {
"status": {
"code": 10,
"message": "Failed to parse Dialogflow response into AppResponse because of empty speech response",
"details": [
{
"#type": "type.googleapis.com/google.protobuf.Value",
"value": "{\"id\":\"50104e9c-79ec-4545-a510-88ffd1944af7\",\"timestamp\":\"2019-02-25T15:32:35.568Z\",\"lang\":\"fr-ca\",\"result\":{},\"status\":{\"code\":206,\"errorType\":\"partial_content\",\"errorDetails\":\"Webhook call failed. Error: 400 Bad Request\"},\"sessionId\":\"ABwppHHfpVWINKIQFvk-bzrVSvu4s-8JexXgZXP7FQ-NQ5HmPAneHtGY0u86_llCV--tj3TZpMtCMIMCZakyxc7mYQ\"}"
}
]
}
}
}
Any idea?
I found the issue. It was a wrong Express configuration with body-parser.
This is my code:
import express from 'express';
import { SERVER_NAME, ENTRY_POINT, PORT } from './constants';
import { app } from './app/app';

export const server = express();
server.set('port', PORT);
server.set('trust proxy', 'loopback');
// Built-in JSON body parsing (Express 4.16+); this replaces body-parser.
server.use(express.json({}));
// Mount the fulfillment handler (from ./app/app) on the webhook route.
server.post(ENTRY_POINT, app);
server.get('/', (req, res) => {
  res.send(SERVER_NAME);
});
I just used the built-in express.json() middleware from Express 4.16 instead of body-parser, and that resolved the problem. The Dialogflow V2 API now works.
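For context, the dialogflow() service from actions-on-google is itself an Express-compatible request handler once the JSON body has been parsed, which is why server.post(ENTRY_POINT, app) is all that's needed. A minimal sketch of such an ./app/app module follows (this assumes that is roughly what it exports; the intent name is illustrative):
// Sketch of ./app/app with the V2 client library (assumed setup, not the exact file).
import { dialogflow } from 'actions-on-google';

export const app = dialogflow({ debug: true });

app.intent('Default Welcome Intent', (conv) => {
  conv.ask('Hello from a self-hosted Express server.');
});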
I want to enable account linking on a Google Assistant application using the Actions SDK.
I have already provided the information in the Account Linking section of the Actions on Google console:
The grant type is Authorization code.
I use Auth0 as the OAuth server and I checked that the endpoints are functional.
When the application is invoked from the simulator, the server application sends the following JSON response:
{
"expectUserResponse": true,
"finalResponse": null,
"expectedInputs": [{
"possibleIntents": [{
"intent": "actions.intent.SIGN_IN",
"inputValueData": null
}],
"inputPrompt": {
"richInitialPrompt": {
"items": [{
"simpleResponse": {
"textToSpeech": "Merci de vous authentifier",
"ssml": null,
"displayText": "Merci de vous authentifier"
}
}]
}
}
}],
"conversationToken": null,
"isInSandbox": false
}
I then expected to see a message like "It looks like your account … is not linked".
Instead, the Assistant immediately sends the following request to the server:
{
"user": {
"userId": "ABwppHGK6fClByrbLlS8WDM4xfY0qEck5i_kOGMhlJtuj64SjC-8qDqlH3xZ3BN7f9Yz1JDza-sc",
"locale": "fr-CA",
"lastSeen": "2018-04-23T14:12:02Z"
},
"conversation": {"conversationId": "1524493058716", "type": "NEW"},
"inputs": [{
"intent": "actions.intent.SIGN_IN",
"rawInputs": [{"inputType": "KEYBOARD"}],
"arguments": [{
"name": "SIGN_IN",
"extension": {
"#type": "type.googleapis.com/google.actions.v2.SignInValue",
"status": "ERROR"
}
}]
}],
"surface": {
"capabilities": [
{"name": "actions.capability.WEB_BROWSER"},
{"name": "actions.capability.MEDIA_RESPONSE_AUDIO"},
{"name": "actions.capability.SCREEN_OUTPUT"},
{"name": "actions.capability.AUDIO_OUTPUT"}
]
},
"isInSandbox": true,
"availableSurfaces": [
{
"capabilities": [
{"name": "actions.capability.SCREEN_OUTPUT"},
{"name": "actions.capability.AUDIO_OUTPUT"}
]
}
]
}
Has anyone had the same problem? Thanks.
I have a working example, but it might be different from what you are doing because I request the sign-in through Dialogflow's Google Assistant integration tab, not explicitly with the SDK. The code is for Dialogflow V2; I also have one for V1, but that version is now legacy.
This is my BONJOUR intent that is triggered when my app is launched:
// request-promise gives us Promises, which Dialogflow V2 fulfillment expects.
const rp = require('request-promise');
const { dialogflow, SignIn } = require('actions-on-google');

const app = dialogflow();

app.intent('BONJOUR', (conv) => {
  console.log('Debug: SAY_HELLO');
  const accessToken = conv.user.access.token;
  console.log('Access Token = ' + accessToken);
  // ======== Auth with the OAuth website ========
  if (!accessToken) {
    // No token yet: ask the Assistant to run the sign-in flow.
    conv.ask(new SignIn());
  } else {
    const options = {
      method: 'GET',
      url: '[...]', // OAuth URL
      headers: {
        authorization: 'Bearer ' + accessToken,
      },
    };
    // Return the Promise so the library waits for the request to finish.
    return rp(options).then((body) => {
      const data = JSON.parse(body);
      console.log('auth data = ' + JSON.stringify(data));
      // You can access the auth data easily here.
      // For example, the name is in data.given_name, etc.
      // Use conv.ask() to say something here.
    }).catch((error) => {
      console.log('Error in auth request: ' + error);
    });
  }
});
Update:
This is my JSON when SignIn() is called. I just tested it and it doesn't work (I created an intent that receives the actions_intent_SIGN_IN event, as in an example, and it never asks for sign-in; I always end up in the else branch).
Response:
{
"payload": {
"google": {
"expectUserResponse": true,
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "PLACEHOLDER"
}
}
]
},
"userStorage": "{\"data\":{}}",
"systemIntent": {
"intent": "actions.intent.SIGN_IN",
"data": {
"#type": "type.googleapis.com/google.actions.v2.SignInValueSpec"
}
}
}
},
"outputContexts": [
{
"name": [...],
"lifespanCount": 99,
"parameters": {
"data": "{}"
}
}
]
}
Then the request after that is:
Request:
{
"responseId": "5a711f0e-be66-4311-b776-2085e81e9bde",
"queryResult": {
"queryText": "actions_intent_SIGN_IN",
"parameters": {},
"allRequiredParamsPresent": true,
"fulfillmentMessages": [
{
"text": {
"text": [
""
]
}
}
],
"outputContexts": [
{
"name": "..."
},
{
"name": ".../actions_intent_sign_in",
"parameters": {
"SIGN_IN": {
"#type": "type.googleapis.com/google.actions.v2.SignInValue",
"status": "ERROR"
}
}
},
{
"name": "...",
"lifespanCount": 98,
"parameters": {
"data": "{}"
}
},
{
"name": "..."
},
{
"name": "..."
},
{
"name": "..."
},
{
"name": "..."
}
],
"intent": {
"name": "...",
"displayName": "Get Signin"
},
"intentDetectionConfidence": 1,
"diagnosticInfo": {},
"languageCode": "fr-fr"
},
"originalDetectIntentRequest": {
"source": "google",
"version": "2",
"payload": {
"isInSandbox": true,
"surface": {
"capabilities": [
{
"name": "actions.capability.WEB_BROWSER"
},
{
"name": "actions.capability.MEDIA_RESPONSE_AUDIO"
},
{
"name": "actions.capability.SCREEN_OUTPUT"
},
{
"name": "actions.capability.AUDIO_OUTPUT"
}
]
},
"inputs": [
{
"rawInputs": [
{
"inputType": "KEYBOARD"
}
],
"arguments": [
{
"extension": {
"#type": "type.googleapis.com/google.actions.v2.SignInValue",
"status": "ERROR"
},
"name": "SIGN_IN"
}
],
"intent": "actions.intent.SIGN_IN"
}
],
"user": {
"userStorage": "{\"data\":{}}",
"lastSeen": "2018-04-24T12:21:19Z",
"locale": "fr-FR",
"userId": "ABwppHHXrOc7N24RC5YS1dMvt7C-MbpzTb5TtzmufeIpGTCINVlReIMb8RKo4SGQMgBY7BUvO1qhn0B-"
},
"conversation": {
"conversationId": "1524572717282",
"type": "ACTIVE",
"conversationToken": "[\"_actions_on_google\"]"
},
"availableSurfaces": [
{
"capabilities": [
{
"name": "actions.capability.SCREEN_OUTPUT"
},
{
"name": "actions.capability.AUDIO_OUTPUT"
}
]
}
]
}
},
"session": "..."
}
I hope it can help you somehow.
I have webhooks configured through Dialogflow for a template chatbot UI starter project I'm making on GitHub. I have a bot integrated with Facebook Messenger and Google Assistant. Everything on Facebook works fine because the actions send back strings and they're easy to handle. But when Google Assistant has to handle items of "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec", the actions_intent_OPTION event is needed on an intent in Dialogflow to handle the response. If I have just one in my app, it works fine, but when I add a second list / carousel of type OptionValueSpec, the flow chokes. I have details in the attached images. My guess is that actions_intent_OPTION is needed to handle the list, but when I put that event on multiple intents, the flow doesn't know which one to use.
Comparison of Facebook Messenger (working) to Google Assistant (with bug)
Detailed full flow view of Google Assistant
Responses sent to Dialogflow that subsequently get sent to Google Actions...
Exact responses related to the UI pics above.
// working as expected
{
"richResponse": {
"items": [{
"simpleResponse": {
"textToSpeech": "Hey there! This is a guided tour of common components between Facebook Messenger and Google Assistant."
}
},
{
"simpleResponse": {
"textToSpeech": "You can start coding the sample project at github.com/ianrichard."
}
}
],
"suggestions": [{
"title": "Show me demos!"
},
{
"title": "Show code & docs"
}
]
}
}
// working as expected
{
"richResponse": {
"items": [{
"simpleResponse": {
"textToSpeech": "Animated GIFs are always fun to add to the mix!"
}
},
{
"basicCard": {
"image": {
"url": "https://somewebsite.com/colbert.gif",
"accessibilityText": "Stephen Colbert at the beginning of the show being happy."
}
}
}
],
"suggestions": [{
"title": "What about a card?"
}]
}
}
// working as expected
{
"richResponse": {
"items": [{
"simpleResponse": {
"textToSpeech": "Absolutely!"
}
},
{
"simpleResponse": {
"textToSpeech": "Named for a winding stretch of Hill Country highway, Devil’s Backbone is a Belgian-style tripel. Featuring a beautiful pale-golden color, this ale’s spicy hops and Belgian yeast work together to create a distinctive flavor and aroma. Don’t let the light color fool you, this one has a dark side too. Traditional Belgian brewing techniques add strength without increasing heaviness."
}
},
{
"basicCard": {
"image": {
"url": "https://somewebsite.com/devils-backbone.jpg",
"accessibilityText": "Devil’s Backbone"
},
"title": "Devil’s Backbone",
"subtitle": "Belgian-Style Tripel",
"buttons": [{
"title": "Read More",
"openUrlAction": {
"url": "https://realalebrewing.com/beers/devils-backbone/"
}
}]
}
}
],
"suggestions": [{
"title": "How about a list?"
}]
}
}
// working as expected
{
"richResponse": {
"items": [{
"simpleResponse": {
"textToSpeech": "Absolutely!"
}
},
{
"simpleResponse": {
"textToSpeech": "Who’s your favorite GOT character!?"
}
}
]
},
"systemIntent": {
"intent": "actions.intent.OPTION",
"data": {
"#type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
"listSelect": {
"items": [{
"optionInfo": {
"key": "tyrion"
},
"title": "Tyrion Lannister",
"description": "Peter Dinklage",
"image": {
"url": "https://somewebsite.com/got-tyrion.jpg",
"accessibilityText": "Tyrion Lannister"
}
},
{
"optionInfo": {
"key": "daene"
},
"title": "Daenerys Targaryen",
"description": "Emilia Clarke",
"image": {
"url": "https://somewebsite.com/got-daenerys.jpg",
"accessibilityText": "Daenerys Targaryen"
}
},
{
"optionInfo": {
"key": "jon"
},
"title": "Jon Snow",
"description": "Kit Harington",
"image": {
"url": "https://somewebsite.com/got-jon.jpg",
"accessibilityText": "Jon Snow"
}
}
]
}
}
}
}
// if two intents with the same actions_intent_OPTION event are defined, it goes straight to the end and the list option handler is never invoked
{
"richResponse": {
"items": [{
"simpleResponse": {
"textToSpeech": "The end"
}
},
{
"simpleResponse": {
"textToSpeech": "Well, that’s the end of the demo. Hope you enjoyed!"
}
}
],
"suggestions": [{
"title": "Start over"
}]
}
}
// otherwise, it will show the last carousel
{
"richResponse": {
"items": [{
"simpleResponse": {
"textToSpeech": "I drink and I know things!"
}
},
{
"simpleResponse": {
"textToSpeech": "What are you going to buy your wife from Tiffany?"
}
}
]
},
"systemIntent": {
"intent": "actions.intent.OPTION",
"data": {
"#type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
"carouselSelect": {
"items": [{
"optionInfo": {
"key": "sunglasses"
},
"title": "Aviator Sunglasses",
"description": "$360",
"image": {
"url": "https://somewebsite.com/tiffany-glasses.jpg",
"accessibilityText": "Aviator Sunglasses"
}
},
{
"optionInfo": {
"key": "ring"
},
"title": "Infinity Ring",
"description": "$200",
"image": {
"url": "https://somewebsite.com/tiffany-ring.jpg",
"accessibilityText": "Infinity Ring"
}
},
{
"optionInfo": {
"key": "earrings"
},
"title": "Soleste Earrings",
"description": "$5,600",
"image": {
"url": "https://somewebsite.com/tiffany-earrings.jpg",
"accessibilityText": "Soleste Earrings"
}
},
{
"optionInfo": {
"key": "pendant"
},
"title": "Infinity Pendant",
"description": "$250",
"image": {
"url": "https://somewebsite.com/tiffany-necklace.jpg",
"accessibilityText": "Infinity Pendant"
}
},
{
"optionInfo": {
"key": "watch"
},
"title": "East West Mini",
"description": "$7,500",
"image": {
"url": "https://somewebsite.com/tiffany-watch.jpg",
"accessibilityText": "East West Mini"
}
}
]
}
}
}
}
// but the carousel option handler isn't processed correctly :( - keeps repeating this same thing.
{
"richResponse": {
"items": [{
"simpleResponse": {
"textToSpeech": "What!? None of them?"
}
},
{
"simpleResponse": {
"textToSpeech": "What are you going to buy your wife from Tiffany?"
}
}
]
},
"systemIntent": {
"intent": "actions.intent.OPTION",
"data": {
"#type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
"carouselSelect": {
"items": [{
"optionInfo": {
"key": "sunglasses"
},
"title": "Aviator Sunglasses",
"description": "$360",
"image": {
"url": "https://somewebsite.com/tiffany-glasses.jpg",
"accessibilityText": "Aviator Sunglasses"
}
},
{
"optionInfo": {
"key": "ring"
},
"title": "Infinity Ring",
"description": "$200",
"image": {
"url": "https://somewebsite.com/tiffany-ring.jpg",
"accessibilityText": "Infinity Ring"
}
},
{
"optionInfo": {
"key": "earrings"
},
"title": "Soleste Earrings",
"description": "$5,600",
"image": {
"url": "https://somewebsite.com/tiffany-earrings.jpg",
"accessibilityText": "Soleste Earrings"
}
},
{
"optionInfo": {
"key": "pendant"
},
"title": "Infinity Pendant",
"description": "$250",
"image": {
"url": "https://somewebsite.com/tiffany-necklace.jpg",
"accessibilityText": "Infinity Pendant"
}
},
{
"optionInfo": {
"key": "watch"
},
"title": "East West Mini",
"description": "$7,500",
"image": {
"url": "https://somewebsite.com/tiffany-watch.jpg",
"accessibilityText": "East West Mini"
}
}
]
}
}
}
}
Intent References
It FINALLY works! Thanks @Prisoner!
In Dialogflow...
For the first list, define the output context.
For the second list, define the input context and the output context.
For the question after the second list, define the input context.
In your webhook...
(The missing piece of the puzzle that made it work)
Set the output context for the list options
{
"speech": "",
"displayText": "",
"data": { "google": { ... } },
"contextOut": [
{
"name": "carouselExample",
"lifespan": 0,
"parameters": null
}
]
}
Broadly speaking, the problem is that Dialogflow doesn't know where in the conversation you are when it gets the actions_intent_OPTION event. For that event it doesn't try to do entity matching, but the issue of conversational context is a problem in general (what happens, for example, if you have two different option carousels whose answers overlap?).
The solution is twofold:
When you send back the response that includes the option information, you should also set an outgoing Context. You can include other information in this Context, but in your case it sounds mostly like you just want to keep track of where you are in the conversation.
You can then differentiate the two Intents with the option event by specifying which context each should be triggered for. Dialogflow will match both the event and the context to determine the best Intent to use.
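If you're on the actions-on-google v2 client library, the same idea looks roughly like this (a sketch; the context, intent, and option names are just examples):
// Sketch: tag the list with an output context, then require that context on the option intent.
const { dialogflow, List } = require('actions-on-google');
const app = dialogflow();

// When sending the list, also set a context that marks where we are in the conversation.
app.intent('Show GOT List', (conv) => {
  conv.contexts.set('got_list', 2); // a lifespan of 2 turns is enough to catch the answer
  conv.ask('Who is your favorite GOT character?');
  conv.ask(new List({
    title: 'GOT characters',
    items: {
      tyrion: { title: 'Tyrion Lannister' },
      jon: { title: 'Jon Snow' },
    },
  }));
});

// In Dialogflow, this intent has BOTH the actions_intent_OPTION event and the
// got_list input context, so it only fires for answers to that particular list.
app.intent('Handle GOT Option', (conv, params, option) => {
  conv.ask(`You picked ${option}.`);
});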
I am trying to use the REST API to create an instance on Google Compute Engine.
I use the exact same request body that I get from the REST link on the standard console page.
POST <post url>
{
"name": "instance-1",
"zone": "projects/service-now-16699/zones/us-east1-c",
"machineType": "projects/service-now-16699/zones/us-east1-c/machineTypes/n1-standard-1",
"metadata": {
"items": []
},
"tags": {
"items": [
"http-server",
"https-server"
]
},
"disks": [
{
"type": "PERSISTENT",
"boot": true,
"mode": "READ_WRITE",
"autoDelete": true,
"deviceName": "instance-1",
"initializeParams": {
"sourceImage": "<image-url>",
"diskType": "projects/service-now-16699/zones/us-east1-c/diskTypes/pd-ssd",
"diskSizeGb": "10"
}
}
],
"canIpForward": false,
"networkInterfaces": [
{
"network": "projects/service-now-16699/global/networks/default",
"accessConfigs": [
{
"name": "External NAT",
"type": "ONE_TO_ONE_NAT"
}
]
}
],
"description": "",
"scheduling": {
"preemptible": false,
"onHostMaintenance": "MIGRATE",
"automaticRestart": true
},
"serviceAccounts": [
{
"email": "service-now-16699#appspot.gserviceaccount.com",
"scopes": [
"https://www.googleapis.com/auth/cloud-platform"
]
}
]
}
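For reference, a minimal Node sketch of how such a body can be POSTed (request-promise here is just an example client; <post url> and the access token are placeholders):
// Sketch only: POSTing the instance body from Node with the request-promise package.
// <post url> is the instances.insert endpoint and <access token> an OAuth 2.0 token.
const rp = require('request-promise');

const instanceBody = {
  name: 'instance-1',
  // ... rest of the JSON body shown above ...
};

rp({
  method: 'POST',
  url: '<post url>',
  headers: { authorization: 'Bearer <access token>' },
  body: instanceBody,
  json: true, // serializes the body and sets Content-Type: application/json
}).then((operation) => {
  console.log('Insert operation started:', operation.name);
}).catch((err) => {
  console.error('Insert failed:', err.message);
});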
However, when I execute it, I get the following error:
{
"error": {
"errors": [
{
"domain": "global",
"reason": "required",
"message": "Required field 'resource.name' not specified"
}
],
"code": 400,
"message": "Required field 'resource.name' not specified"
}
}
I can't find any reference to 'resource.name'.
Any suggestions?
Thanks,
Saurabh