Extra period appended to the context during a surface shift - actions-on-google

When there is a surface shift (actions.intent.NEW_SURFACE), if the context property ends with punctuation other than a period, such as ? or !, Google adds an additional period.

When you say context property, do you mean the context name? You can learn more about the rules of context naming in this documentation. Punctuation like ? and ! is not valid.

On the Hub, it will display "... so we can keep talking!.". Note the extra period after the !.
{
"expectUserResponse": true,
"expectedInputs": [
{
"possibleIntents": [
{
"inputValueData": {
"capabilities": [
"actions.capability.SCREEN_OUTPUT"
],
"context": "Ok, to continue, I'll need you to look at a screen. Make sure your notifications are on, so we can keep talking!",
"notificationTitle": "Click this notification to continue",
"#type": "type.googleapis.com/google.actions.v2.NewSurfaceValueSpec"
},
"intent": "actions.intent.NEW_SURFACE"
}
]
}
],
"isInSandbox": false
}
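Based on the behaviour described above (the extra period only shows up after non-period punctuation), a workaround sketch is to end the context text with a period, or with no punctuation at all, so the doubled "!." never appears. This is an assumption drawn from the observed behaviour, not an official fix:
{
  "inputValueData": {
    "capabilities": [
      "actions.capability.SCREEN_OUTPUT"
    ],
    "context": "Ok, to continue, I'll need you to look at a screen. Make sure your notifications are on, so we can keep talking.",
    "notificationTitle": "Click this notification to continue",
    "@type": "type.googleapis.com/google.actions.v2.NewSurfaceValueSpec"
  },
  "intent": "actions.intent.NEW_SURFACE"
}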

Related

How to use roomHint and structureHint with smarthome actions on Google

We are currently setting up a smart home action, and we would like to provide roomHint on the first SYNC (not on request sync), as it's really tedious to set up rooms on the first sync, but it does not work.
We tried naming rooms in English and also in Italian (it's not really clear from the documentation whether there is a list of room names we can use), but no luck.
So can you please give us a hint on how to use the roomHint field?
Also, in the API docs we found structureHint; does it work? The documentation for the SYNC intent does not mention this field.
Here is our response to the SYNC intent, with one device and room; we took "office" from the example JSON:
{
"requestId": "3582198904737125163",
"payload": {
"agentUserId": "xyz#qwertyz.com",
"devices": [
{
"id": "deviceID",
"type": "action.devices.types.LIGHT",
"traits": [
"action.devices.traits.OnOff"
],
"name": {
"name": "Lampadina",
"defaultNames": [
"Lampadina_XYZ"
],
"nicknames": [
"Lampadina"
]
},
"willReportState": false,
"customData": {
"modelType": "DEVICE"
},
"roomHint": "office"
}
]
}
}
Thanks
Unfortunately, I believe the structureHint is only in the HomeGraph API sync response.
It cannot be used in the Sync intent.
If someone can tell me I'm wrong and show me how to use it, you'd be a hero.
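For reference, the Device resource documented for the HomeGraph API does list a structureHint field next to roomHint, so the sketch below shows where that field can appear, i.e. in the device data returned by the HomeGraph devices.sync call rather than in the SYNC response you build yourself. The values are placeholders, and I have not verified that the Home app honours structureHint on first sync:
{
  "id": "deviceID",
  "type": "action.devices.types.LIGHT",
  "traits": [
    "action.devices.traits.OnOff"
  ],
  "name": {
    "name": "Lampadina"
  },
  "willReportState": false,
  "roomHint": "office",
  "structureHint": "Home"
}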

Google Home dialogFlow V2 API mediaResponse not working

I decided to upgrade my Google Assistant action to use "dialogFlow V2 API" and my webhook returns an object like this
{
"fulfillmentText": "Testing",
"fulfillmentMessages": [
{
"text": {
"text": [
"fulfillmentMessages text attribute"
]
}
}
],
"payload": {
"google": {
"richResponse": {
"items": [
{
"mediaResponse": {
"mediaType": "AUDIO",
"mediaObjects": [
{
"name": "mediaResponse name",
"description": "mediaResponse description",
"largeImage": {
"url": "https://.../640x480.jpg"
},
"contentUrl": "https://.../20183832714.mp3"
}
]
},
"simpleResponse": {
"textToSpeech": "simpleResponse: testing",
"ssml": "simpleResponse: ssml",
"displayText": "simpleResponse displayText"
}
}
]
}
}
},
"source": "webhook-play-sample"
}
But I get an error message saying my action is not available. Is mediaResponse supported by V2? Should I format my object differently? Also, when I remove the "mediaResponse" object it works just fine and the assistant will speak the simpleResponse part.
This action was re-created in mid-March 2018, and I read about the May deadline, which is why I decided to upgrade to V2. Do you think I should go back to V1? I know I will have to delete it and re-create it, but that is fine. This is a link to the JSON object I see in the debug tab. Thanks once again.
I set "API V2" in my action's Dialogflow console; this is a screenshot of that setting.
Here is a screenshot of my action's Integrations -> Google Assistant.
Thanks Allen. Yes, I do have "expectUserResponse": false. I added the suggestions object you recommended, but unfortunately nothing changed; I am still getting this error.
Simulator debug tab details
First of all - this is not a problem with Dialogflow V2. You also seem to be confusing the sunset of Actions on Google V1 with the release of Dialogflow V2 - they are two different creatures completely. If your project was using AoG V1, there would be a setting on the Actions integration screen, and there isn't.
It is fine if you want to move to Dialogflow V2, but it isn't required. Media definitely works under Dialogflow V2.
The array of items must include a simpleResponse item first, before any of the other items in the RichResponse. (You also shouldn't include both ssml and textToSpeech - just one of them.) You also don't need the fulfillmentText and fulfillmentMessages components, since those are provided by the richResponse.
You also need to include suggestion chips unless you have set expectUserResponse to false. Somewhere in the simulator debug output there is probably a block that says
{
"name": "MalformedResponse",
"debugInfo": "expected_inputs[0].input_prompt.rich_initial_prompt: Suggestions must be provided if media_response is used..",
"subDebugEntryList": []
}
So something more like this should work:
{
"payload": {
"google": {
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "simpleResponse: testing",
"displayText": "simpleResponse displayText"
}
},
{
"mediaResponse": {
"mediaType": "AUDIO",
"mediaObjects": [
{
"name": "mediaResponse name",
"description": "mediaResponse description",
"largeImage": {
"url": "https://.../640x480.jpg"
},
"contentUrl": "https://.../20183832714.mp3"
}
]
}
}
],
"suggestions": [
{
"title": "This"
},
{
"title": "That"
}
]
}
}
},
"source": "webhook-play-sample"
}
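For completeness, the other path mentioned above is to drop the suggestion chips by setting expectUserResponse to false in the google payload. A rough sketch of that variant (text and URLs are placeholders) would look something like:
{
  "payload": {
    "google": {
      "expectUserResponse": false,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing"
            }
          },
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            }
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}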

Multi-Part answers for a node in IBM Watson Conversation service

I would like to be able to split up a response into multiple messages that are sent in a row - e.g. "Hi, I am a chatbot" and "What can I do for you?" as 2 separate messages (that a client would render individually).
Is there a way to do this without separate child-nodes?
e.g.
{
"output": {
"text": {
"append": true,
"values": [
"Hi, i am a chatbot",
"What can i do for you"
],
"selection_policy": "sequential"
}
}
}
You can simply use
{
"output": {
"text": [
"Hi, i am a chatbot",
"What can i do for you"
],
"selection_policy": "sequential"
}
}
Remove "values" from the JSON.
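With the dialog node output above, the response from the /message call should then carry both strings in the output.text array (other response fields omitted in this sketch), so a client can render each array element as a separate message:
{
  "output": {
    "text": [
      "Hi, i am a chatbot",
      "What can i do for you"
    ]
  }
}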
If by "chained response" you mean several messages that are printed below each other, you can simply use \n to denote a line break:
{
"output": {
"text": {
"append": true,
"values": [
"Hi, I am a chatbot.\nWhat can I do for you?"
],
"selection_policy": "sequential"
}
}
}
This would render to:
Hi, I am a chatbot.
What can I do for you?
I have a couple of such responses in this workspace. It is part of this tutorial for building a database-backed Slackbot. In that example, several of those output lines are computed dynamically and would, e.g., produce a formatted ASCII table.

IBM Watson. How to pass context from node to node?

I"m trying to string together multiple IBM Watson requests:
Request #1: Play music.
Watson responds with the following:
{
"intents": [
{
"intent": "turn_on",
"confidence": 0.9498470783233643
}
],
"entities": [
{
"entity": "appliance",
"location": [
5,
10
],
"value": "radio",
"confidence": 1
}
],
"input": {
"text": "play music"
},
"output": {
"text": [
"What kind of music would you like to hear?"
],
"nodes_visited": [
"node_1_1510258504338",
"node_2_1510258615227"
],
"log_messages": []
},
"context": {
"conversation_id": "79e93cac-12bb-40fa-ab69-88f56d0845e4",
"system": {
"dialog_stack": [
{
"dialog_node": "node_2_1510258615227"
}
],
"dialog_turn_counter": 1,
"dialog_request_counter": 1,
"_node_output_map": {
"node_2_1510258615227": [
0
]
}
}
}
}
Request #2: The patron would type rock.
My problem is that I'm getting an error message that states the following
"No dialog node matched for the input at a root level." (and there is 1 more warning in the log), which shows up under "log_messages" in the response.
I'm pretty sure I have to pass a context into the 2nd request, but I'm not sure what I need to include. Right now I'm only passing in the conversation_id. Is there something specific from the above response that I need to pass in? For example, I'm passing this:
{
"input": {
"text": "rock"
},
"context": {
"conversation_id": "79e93cac-12bb-40fa-ab69-88f56d0845e4"
}
}
You send back your whole context object. In this case it would be:
{
"input": {
"text": "rock"
},
"context": {
"conversation_id": "79e93cac-12bb-40fa-ab69-88f56d0845e4",
"system": {
"dialog_stack": [
{
"dialog_node": "node_2_1510258615227"
}
],
"dialog_turn_counter": 1,
"dialog_request_counter": 1,
"_node_output_map": {
"node_2_1510258615227": [
0
]
}
}
}
}
But there are SDKs that will make this easier for you.
https://github.com/watson-developer-cloud
Is the node that handles the type of music people select a child of your 'turn_on' node [node_2_1510258615227]?
If so, as Simon demonstrates above, you also need to pass back the complete context packet as part of the API call. This informs Watson Conversation where in the dialog flow you were last. As the conversation system is stateless, i.e. it does not store any state information about individual conversations, it will not by default know where it is within a conversation. This is why you need to return the context element of the previous response, to let Watson know where you were in the conversation flow.
Your error above states that Watson looked down the list of dialog nodes you have defined at root level and could not find a matching condition, because your matching condition was within a child node.

Amazon Alexa Music Routine

I just picked up an Amazon Echo Dot for Christmas, and I wanted to be able to start a routine on Christmas morning where it would play Christmas music and turn on the lights. Currently you can only make routines in the app that turn on smart home devices, say the weather, and things like that, but it won't let you play music or a playlist. I'm not much of a coder, but I looked into the Amazon developer site that lets you create skills for the Echo Dot, and I just can't figure out how to make this work.
Here's the code I have so far:
{
"intents": [
{
"intent": "AMAZON.HelpIntent"
},
{
"intent": "AMAZON.StopIntent"
},
{
"intent": "AMAZON.PauseIntent"
},
{
"intent": "AMAZON.ResumeIntent"
},
{
"intent": "Start",
"slots": [
{
"name": "Play",
"type": "AudioPlayer.Play",
"audioItem":{
"stream": {
"url": "https://music.amazon.com/albums/B002R4OU2Q?do=play&ref=dm_ws_dp_ald_bb_phfa_xx_xx",
"token": "string",
"expectedPreviousToken": "string",
"offsetInMilliseconds": 0
}
}
}
]
},
{
"intent": "Christmas"
},
{
"intent": "Begin"
}
]
}
Basically, I say "Start Christmas Morning", the playlist plays, and the lights turn on.
Whenever I try to save this I get this error: There was a problem with your request: Unknown slot type 'AudioPlayer.Play' for slot 'Play'.
But AudioPlayer.Play does exist, right here: https://developer.amazon.com/docs/custom-skills/audioplayer-interface-reference.html#config
Does anyone know how to get this to work? Or is there an easier way to do what I'm trying to do?
I'm not a programmer at all, but it appears you're missing one line. Go back to the example you used and look for
"shouldEndSession": true
Without this, it appears that the Echo will just wait for a user response without playing anything.
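For what it's worth, AudioPlayer.Play is a directive that your skill's endpoint returns in its response, not a slot type you can declare in the interaction model, which is why the schema above is rejected. A minimal response sketch based on the AudioPlayer interface reference linked above (the stream URL is a placeholder and has to point to an actual HTTPS audio file, not an Amazon Music page) would look roughly like this:
{
  "version": "1.0",
  "response": {
    "shouldEndSession": true,
    "directives": [
      {
        "type": "AudioPlayer.Play",
        "playBehavior": "REPLACE_ALL",
        "audioItem": {
          "stream": {
            "url": "https://example.com/christmas-playlist.mp3",
            "token": "christmas-track-1",
            "offsetInMilliseconds": 0
          }
        }
      }
    ]
  }
}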