How to identify health check request? - actions-on-google

My app is getting about 10 health check requests per hour, and that makes my conversation log messy.
Because the health check request does not have screen capability, our backend server responds to it as if a Google Home device were making the request.
Is there any way to detect whether a request is a health check request or not?

For starters, you should be responding as if it were a Google Home. You have to respond with valid output, or it will reject you. So don't try to be too fancy in your response - just use this check to avoid cluttering your analytics and logs.
The health check will look like a normal welcome request. The ping will contain an argument named is_health_check with a boolValue of true and a textValue of "1". If you're using Dialogflow, this will be one of the arguments at originalRequest.data.inputs[0]; for the Actions SDK, it will be at data.inputs[0]. A detection sketch follows the sample below.
Here is a partial sample from Dialogflow:
{
  "originalRequest": {
    "source": "google",
    "version": "2",
    "data": {
      "surface": {
        "capabilities": [
          {
            "name": "actions.capability.AUDIO_OUTPUT"
          }
        ]
      },
      "inputs": [
        {
          "rawInputs": [
            {
              "query": "Sample",
              "inputType": "VOICE"
            }
          ],
          "arguments": [
            {
              "textValue": "1",
              "name": "is_health_check",
              "boolValue": true
            }
          ],
          "intent": "actions.intent.MAIN"
        }
      ],
      ...
    }
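As a minimal sketch (assuming an Express-style webhook where body is the parsed Dialogflow V1 request shown above, and skipLogging is a hypothetical flag of your own), the check could look like this:
// Minimal sketch: detect the health-check ping in a Dialogflow V1 webhook body.
// `body` is assumed to be the parsed JSON request shown above.
function isHealthCheck(body) {
  const inputs = (body.originalRequest &&
                  body.originalRequest.data &&
                  body.originalRequest.data.inputs) || [];
  const args = (inputs[0] && inputs[0].arguments) || [];
  return args.some(arg => arg.name === 'is_health_check' && arg.boolValue === true);
}

// Usage (hypothetical): still return a normal welcome response,
// just skip your own analytics and logging for these pings.
// const skipLogging = isHealthCheck(req.body);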

Related

Messenger bot with dialogflow does not respond to user

I have a Messenger bot built with Google's Dialogflow engine, and I use a webhook to respond to the user. When I send a message with "hello", the Default Welcome Intent is triggered, the webhook is called, and it responds to me.
After that I need the products of my business, so I write "Comprar" (Buy); the Products intent is triggered and the webhook responds with a fulfillment message containing a list of available products.
In the History section in Dialogflow, I can see the JSON response of the webhook with the message that the user should receive.
But Facebook does not show the message. I use the same webhook for many intents, and it is not working in this case.
This is the response I got in the History section for this case:
{
  "id": "f75002cd-2f8a-422a-960d-dd2c9e00b490-74fe87bc",
  "fulfillmentText": "",
  "language_code": "es",
  "queryText": "Comprar",
  "webhookPayload": {},
  "intentDetectionConfidence": 1,
  "action": "GET_CATEGORIES",
  "webhookSource": "",
  "parameters": {},
  "fulfillmentMessages": [
    {
      "quickReplies": {
        "title": "Escoge alguna de estas Categorías:",
        "quickReplies": [
          "Volver",
          "",
          "FIRMA ELECTRONICA"
        ]
      }
    }
  ],
  "diagnosticInfo": {
    "webhook_latency_ms": "732.0"
  },
  "webhookStatus": {
    "webhookStatus": {
      "message": "Webhook execution successful"
    },
    "webhookUsed": true
  },
  "outputContexts": [
    {
      "lifespanCount": 1,
      "name": "vercatalogo-followup",
      "parameters": {}
    }
  ],
  "intent": {
    "isFallback": false,
    "displayName": "ver.catalogo",
    "id": "41dcecf2-11b6-4e50-8294-b86d849093e1"
  }
}
I solved it.
Apparently quickReplies must not contain an empty string ("") as a quick reply; that option is not valid. So I removed it and it worked.
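A minimal sketch of that fix, assuming a Node.js webhook that builds the quickReplies message by hand (categories is a hypothetical source array that may contain empty strings):
// Filter out empty strings before building the quickReplies message,
// since Facebook Messenger rejects a quick reply with an empty title.
const categories = ['Volver', '', 'FIRMA ELECTRONICA'];            // hypothetical data
const validReplies = categories.filter(c => c && c.trim().length > 0);

const responseBody = {
  fulfillmentMessages: [
    {
      quickReplies: {
        title: 'Escoge alguna de estas Categorías:',
        quickReplies: validReplies
      }
    }
  ]
};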

Actions-On-Google[Permission Intent] Get User Location / Name

const {WebhookClient} = require('dialogflow-fulfillment');
const {Text, Card, Image, Suggestion, Payload} = require('dialogflow-fulfillment');
const agent = new WebhookClient({request, response});

// Ask Actions on Google for the user's name and precise device location.
let payload = {
  "systemIntent": {
    "intent": "actions.intent.PERMISSION",
    "data": {
      "@type": "type.googleapis.com/google.actions.v2.PermissionValueSpec",
      "optContext": "To deliver your order",
      "permissions": [
        "NAME",
        "DEVICE_PRECISE_LOCATION"
      ]
    }
  }
};
agent.add('PLACEHOLDER_FOR_PERMISSION');
agent.add(new Payload(agent.ACTIONS_ON_GOOGLE, payload));
This is a simple payload to get the user's location and name using the PERMISSION intent.
The response I get to the above is:
To deliver your order, I'll need to get your name and street address from Google. Is that ok?
A follow-up intent to this intent is also set up with the event actions_intent_PERMISSION in it.
I have been trying to solve this for 2 days, trying to fire actions_intent_PERMISSION using suggestion chips etc., but nothing happens after this.
Where am I going wrong? I am not able to comprehend it; there is some silly mistake somewhere. If someone can point it out, it would help a lot.
Thanks
===========EDIT============ IMAGES ATTACHED FOR THE INTENTS ============
permissions intent (screenshot link)
permissions response with the event actions_intent_PERMISSION (screenshot link)
Edit: I can't embed images because of reputation points; the links are above. Thanks.
===========EDIT============ REQUEST/RESPONSE JSON ============
When the permissions intent is triggered, this is the request:
{
  "responseId": "54a4be35-3d0b-4cc8-b036-46fab0d09361",
  "queryResult": {
    "queryText": "permissions",
    "action": "permissions",
    "parameters": {},
    "allRequiredParamsPresent": true,
    "fulfillmentMessages": [
      {
        "text": {
          "text": [
            ""
          ]
        }
      }
    ],
    "intent": {
      "name": "projects/projectid/agent/intents/95237653-0af0-4d0c-9101-0cd8ee0db186",
      "displayName": "permissions"
    },
    "intentDetectionConfidence": 1,
    "diagnosticInfo": {},
    "languageCode": "en"
  },
  "originalDetectIntentRequest": {
    "payload": {}
  },
  "session": "projects/projectid/agent/sessions/13213e7f-dba5-4d0c-979a-f626f7ac4691"
}
The fulfillment response:
{
  "conversationToken": "[]",
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {},
      "possibleIntents": [
        {
          "intent": "actions.intent.PERMISSION",
          "inputValueData": {
            "@type": "type.googleapis.com/google.actions.v2.PermissionValueSpec",
            "permissions": [
              "NAME",
              "DEVICE_PRECISE_LOCATION"
            ],
            "optContext": "To locate you"
          }
        }
      ],
      "speechBiasingHints": [
        "$name-type",
        "$sports",
        "$gender"
      ]
    }
  ],
  "responseMetadata": {
    "status": {
      "message": "Success (200)"
    },
    "queryMatchInfo": {
      "queryMatched": true,
      "intent": "95237653-0af0-4d0c-9101-0cd8ee0db186"
    }
  }
}
This is the response from the simulator for the permissions intent.
=======================================================================
The issue has been resolved. Points to note:
The Actions on Google simulator behaves too erratically to be trusted as an indicator of whether your webhook is working or not.
Promise resolution was the real issue: the agent needs to wait for the promise to resolve before the response is passed on, and that was not happening (see the sketch below).
The correct way to test your bot is to publish it in ALPHA on the Assistant directory rather than relying on the simulator, because the simulator is very unstable: you can never predict its behavior, it will not surface the correct error to debug, and it will abruptly stop working for no reason whatsoever.
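A minimal sketch of that kind of fix with dialogflow-fulfillment (the lookupAddress helper and the intent name are hypothetical): return the promise from the intent handler so the library waits for the async work before sending the response.
// Sketch: returning the promise makes dialogflow-fulfillment wait for the
// async work to finish before it serializes and sends the response.
const intentMap = new Map();

function permissionFollowupHandler(agent) {
  // lookupAddress() is a hypothetical async helper returning a Promise.
  return lookupAddress().then(address => {
    agent.add(`Thanks! Delivering your order to ${address}.`);
  });
}

intentMap.set('permissions - followup', permissionFollowupHandler);  // hypothetical intent name
agent.handleRequest(intentMap);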

Google Home dialogFlow V2 API mediaResponse not working

I decided to upgrade my Google Assistant action to use the Dialogflow V2 API, and my webhook returns an object like this:
{
  "fulfillmentText": "Testing",
  "fulfillmentMessages": [
    {
      "text": {
        "text": [
          "fulfillmentMessages text attribute"
        ]
      }
    }
  ],
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "largeImage": {
                    "url": "https://.../640x480.jpg"
                  },
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            },
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "ssml": "simpleResponse: ssml",
              "displayText": "simpleResponse displayText"
            }
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}
But I get an error message saying my action is not available. Is mediaResponse supported by V2? Should I format my object differently? Also, when I remove the "mediaResponse" object it works just fine and the Assistant will speak the simpleResponse part.
This action was re-created in mid March 2018, and I read about the May deadline, which is why I decided to upgrade to V2. Do you think I should go back to V1? I know I will have to delete it and re-create it, but that is fine. This is a link to the JSON object I see in the debug tab. Thanks once again.
I set "API V2" in my action's Dialogflow console; this is a screenshot of that setting.
Here is a screenshot of my action's integration -> Google Assistant.
Thanks Allen. Yes, I do have "expectUserResponse": false. I added the suggestions object you recommended but, unfortunately, nothing changed; I am still getting this error.
Simulator debug tab details
First of all - this is not a problem with Dialogflow V2. You also seem to be confusing the sunset of Actions on Google V1 with the release of Dialogflow V2 - they are two completely different creatures. If your project were using AoG V1, there would be a setting on the Actions integration screen, and there isn't.
It is fine if you want to move to Dialogflow V2, but it isn't required. Media definitely works under Dialogflow V2.
The array of items must include a simpleResponse item first, before any of the other items in the RichResponse. (You also shouldn't include both ssml and textToSpeech - just one of them.) You also don't need the fulfillmentText and fulfillmentMessages components, since those are provided by the richResponse. (A minimal webhook sketch that returns such a body follows the corrected example below.)
You also need to include suggestion chips unless you have set expectUserResponse to false. Somewhere in the simulator debug output there is probably a block that says
{
  "name": "MalformedResponse",
  "debugInfo": "expected_inputs[0].input_prompt.rich_initial_prompt: Suggestions must be provided if media_response is used..",
  "subDebugEntryList": []
}
So something more like this should work:
{
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "displayText": "simpleResponse displayText"
            }
          },
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "largeImage": {
                    "url": "https://.../640x480.jpg"
                  },
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            }
          }
        ],
        "suggestions": [
          {
            "title": "This"
          },
          {
            "title": "That"
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}
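As a rough sketch of how a Dialogflow V2 webhook could return a body like that (a plain Express handler; the route and port are assumptions, and the truncated URLs are carried over from the question):
// Sketch: an Express webhook returning the richResponse above to Dialogflow V2.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {            // assumed route
  res.json({
    payload: {
      google: {
        expectUserResponse: true,
        richResponse: {
          items: [
            { simpleResponse: { textToSpeech: 'simpleResponse: testing' } },
            {
              mediaResponse: {
                mediaType: 'AUDIO',
                mediaObjects: [{
                  name: 'mediaResponse name',
                  description: 'mediaResponse description',
                  contentUrl: 'https://.../20183832714.mp3'   // truncated URL from the question
                }]
              }
            }
          ],
          suggestions: [{ title: 'This' }, { title: 'That' }]
        }
      }
    },
    source: 'webhook-play-sample'
  });
});

app.listen(3000);                                // assumed port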

IBM Watson. How to pass context from node to node?

I"m trying to string together multiple IBM Watson requests:
Request #1: Play music.
Watson responds with the following:
{
  "intents": [
    {
      "intent": "turn_on",
      "confidence": 0.9498470783233643
    }
  ],
  "entities": [
    {
      "entity": "appliance",
      "location": [
        5,
        10
      ],
      "value": "radio",
      "confidence": 1
    }
  ],
  "input": {
    "text": "play music"
  },
  "output": {
    "text": [
      "What kind of music would you like to hear?"
    ],
    "nodes_visited": [
      "node_1_1510258504338",
      "node_2_1510258615227"
    ],
    "log_messages": []
  },
  "context": {
    "conversation_id": "79e93cac-12bb-40fa-ab69-88f56d0845e4",
    "system": {
      "dialog_stack": [
        {
          "dialog_node": "node_2_1510258615227"
        }
      ],
      "dialog_turn_counter": 1,
      "dialog_request_counter": 1,
      "_node_output_map": {
        "node_2_1510258615227": [
          0
        ]
      }
    }
  }
}
Request #2: The patron would type rock.
My problem is that I'm getting an error message in the response's log_messages that states the following:
"No dialog node matched for the input at a root level." (and there is 1 more warning in the log)
I'm pretty sure I have to pass a context into the second request, but I'm not sure what I need to include. Right now I'm only passing in the conversation_id. Is there something specific from the above response that I need to pass in? For example, I'm passing this:
{
  "input": {
    "text": "rock"
  },
  "context": {
    "conversation_id": "79e93cac-12bb-40fa-ab69-88f56d0845e4"
  }
}
You send back your whole context object. In this case it would be:
{
  "input": {
    "text": "rock"
  },
  "context": {
    "conversation_id": "79e93cac-12bb-40fa-ab69-88f56d0845e4",
    "system": {
      "dialog_stack": [
        {
          "dialog_node": "node_2_1510258615227"
        }
      ],
      "dialog_turn_counter": 1,
      "dialog_request_counter": 1,
      "_node_output_map": {
        "node_2_1510258615227": [
          0
        ]
      }
    }
  }
}
But there are SDKs that will make this easier for you; see the sketch below.
https://github.com/watson-developer-cloud
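For example, a rough sketch with the watson-developer-cloud Node SDK (ConversationV1; the credentials and workspace_id are placeholders) that carries the previous response's context into the next message call:
// Sketch: keep the context from each response and send it back on the next turn.
const ConversationV1 = require('watson-developer-cloud/conversation/v1');

const conversation = new ConversationV1({
  username: 'YOUR_USERNAME',            // placeholder credentials
  password: 'YOUR_PASSWORD',
  version_date: '2017-05-26'
});

let context = {};                        // latest context, carried between turns

function sendMessage(text, callback) {
  conversation.message({
    workspace_id: 'YOUR_WORKSPACE_ID',   // placeholder
    input: { text: text },
    context: context                     // pass the whole previous context back
  }, (err, response) => {
    if (err) return callback(err);
    context = response.context;          // save for the next turn
    callback(null, response.output.text);
  });
}

// Usage: sendMessage('play music', ...) then sendMessage('rock', ...)
// lands on the child node because the context is round-tripped.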
Is the node that actions the type of music people select a child of your 'turn_on' node [node_2_1510258615227]?
If so, as Simon demonstrates above, you also need to pass back the complete context packet as part of the API call. This tells Watson Conversation where in the dialog flow you were last. Because the conversation system is stateless - it does not store any state about individual conversations - it will not know by default where it is within a conversation. That is why you need to return the context element of the previous response, so Watson knows where you were in the conversation flow.
Your error above states that Watson looked down the list of dialog nodes you have defined at your root level and could not find a matching condition, because your matching condition was within a child node.

Actions on Google Smart Home Skill: Room Hint / Home Graph

The documentation at https://developers.google.com/actions/smarthome/create-app#actiondevicessync mentions that the roomHint field of the JSON response to the sync request can be used to have Google automatically assign devices to correct rooms.
However, no matter what I return in that field, the user still has to manually assign every device to a room, and I cannot get Google to automatically recognize the correct room using this roomHint field.
Here's an example response:
{
  "requestId": "500166151965294748",
  "payload": {
    "devices": [
      {
        "id": "9",
        "type": "action.devices.types.LIGHT",
        "traits": [
          "action.devices.traits.OnOff"
        ],
        "name": {
          "name": "Light"
        },
        "willReportState": false,
        "roomHint": "Attic"
      }
    ]
  }
}
Right now, a value supplied for roomHint is not used by the Home Graph to determine which room the device is in.