Microphone closing on Nest Hub - other devices work - actions-on-google

We use the Conversational Actions SDK and our action (an interactive audio book) works in the Actions console and on the iPhone Google Assistant app without any problem.
However, on Nest Hub devices its behavior is completely different: it takes a very long time before it starts playing the SSML audio, and after nearly every answer the microphone is closed, so you have to say "OK, Google" again. That really kills the flow of the game. Since everything works fine in the console, we are having a very hard time debugging this issue.
Here is an example of a response that we send back for a webhook request:
{
  "user": {
    "params": {
      "id": "google-d1d76b00-e220-11ea-bf59-123456789"
    }
  },
  "scene": {
    "next": {
      "name": "GameFlow"
    },
    "slots": {
      "GameFlowResponse": {
        "mode": "REQUIRED",
        "status": "SLOT_UNSPECIFIED"
      }
    }
  },
  "prompt": {
    "firstSimple": {
      "text": "Some text to be displayed",
      "speech": "<speak><audio src=\"https://some.audio.url\">Some text</audio><break time=\"500ms\"/><audio src=\"https://another.audio.url\">some text</audio></speak>"
    }
  }
}
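For context, a fulfillment handler that produces a prompt like the one above might look roughly like this (a sketch assuming the @assistant/conversation Node.js library for the Conversational Actions SDK; the handler name, export name, and audio URLs are placeholders, not necessarily what the asker uses):

// Sketch only: handler name, export name, and URLs are placeholders.
const {conversation, Simple} = require('@assistant/conversation');

const app = conversation();

app.handle('gameFlowResponse', (conv) => {
  // Builds the firstSimple prompt shown in the JSON above.
  conv.add(new Simple({
    speech: '<speak>' +
      '<audio src="https://some.audio.url">Some text</audio>' +
      '<break time="500ms"/>' +
      '<audio src="https://another.audio.url">some text</audio>' +
      '</speak>',
    text: 'Some text to be displayed',
  }));
});

// Exported for a webhook host such as Cloud Functions (deployment detail is an assumption).
exports.ActionsOnGoogleFulfillment = app;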
Has anyone experienced something similar? As I said, I am running out of ideas for how to debug this.

Related

TemperatureSetting "What's the temperature in my living room?" sometimes answers weird stuff

I recently released a Smart Home action for interacting with our thermostats. The device type is Thermostat, and the single implemented trait is TemperatureSetting.
When I ask my Google Assistant "What's the temperature in the living room?", sometimes it answers correctly, and sometimes it answers "Ok, this device has been updated" (translated from my language, so it may not be the exact wording).
Checking the logs on my server, I can see the QUERY intent responses being sent.
The following QUERY response produces the answer "The temperature in your living room is 20.5 degrees":
{
  "aa0c2504f896": {
    "status": "SUCCESS",
    "online": true,
    "thermostatMode": "heat",
    "thermostatTemperatureAmbient": 20.7,
    "thermostatTemperatureSetpoint": 20.5
  }
}
This one produces "Ok, this device has been updated":
{
  "aa0c2504f896": {
    "status": "SUCCESS",
    "online": true,
    "thermostatMode": "heat",
    "thermostatTemperatureAmbient": 20.7,
    "thermostatTemperatureSetpoint": 19.5
  }
}
I can reproduce it by setting the setpoint 1° lower, as shown in the logs above. When I do so, my EXECUTE intent handler sends a Report State request, so the Home Graph API is updated synchronously.
Am I doing something wrong?
Edit:
It happens every time the setpoint and the ambient temperature are different (beyond a 0.5 degree interval). The Google Home UI always renders the temperature correctly; it is the voice answer that is wrong.
Here is also my SYNC payload:
{
  "requestId": "5878230358273544341",
  "payload": {
    "agentUserId": "3101066d-b012-4780-8d77-7297aaea4e37",
    "devices": [
      {
        "id": "aa0c2504f896",
        "type": "action.devices.types.THERMOSTAT",
        "traits": [
          "action.devices.traits.TemperatureSetting"
        ],
        "name": {
          "defaultNames": [
            "Thermostat COMAP"
          ],
          "name": "Thermostat salon"
        },
        "willReportState": true,
        "attributes": {
          "availableThermostatModes": [
            "heat",
            "on"
          ],
          "thermostatTemperatureRange": {
            "minThresholdCelsius": 4,
            "maxThresholdCelsius": 30
          },
          "thermostatTemperatureUnit": "C"
        },
        "deviceInfo": {
          "manufacturer": "COMAP",
          "model": "Thermostat",
          "swVersion": "1143"
        }
      }
    ]
  }
}
The QUERY and SYNC payloads you provided look good. We have updated the thermostat implementation over the last year, and it now handles floating-point values much better. If this issue still persists, I recommend creating a ticket on the public issue tracker.
There can be one more reason why you are encountering this issue: the local language. It is possible that the Google Assistant has not yet incorporated full support for this particular language. Our NLU team is constantly working on adding these language capabilities to the system.
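For reference, a QUERY fulfillment that returns a device state like the one shown in the question would look roughly like this (a sketch assuming the actions-on-google smarthome Node.js library; the asker's actual implementation may differ):

const {smarthome} = require('actions-on-google');

const app = smarthome();

// Return the current state for each queried device (values here mirror the
// question's first QUERY response).
app.onQuery(async (body) => {
  return {
    requestId: body.requestId,
    payload: {
      devices: {
        'aa0c2504f896': {
          status: 'SUCCESS',
          online: true,
          thermostatMode: 'heat',
          thermostatTemperatureAmbient: 20.7,
          thermostatTemperatureSetpoint: 20.5,
        },
      },
    },
  };
});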

How does the Actions on Google Smart Home API work asynchronously?

I recently developed a project for the Amazon Alexa Smart Home Skill API, and we built it using the asynchronous method. In Alexa there is an Event Gateway for making POST calls asynchronously, and a deferred response to keep the event gateway open. I know that Actions on Google has the Home Graph. I was wondering whether the Home Graph works the same way as the Event Gateway.
I was also wondering how I can make execution asynchronous for Actions on Google.
From my understanding, I will need to make a POST call to the Home Graph for that purpose.
Yes, you can make a POST to the Home Graph once the state change has completed.
For certain types of devices, where a command may take a while to complete, you can return an EXECUTE response with a status of PENDING:
{
  "requestId": "ff36a3cc-ec34-11e6-b1a0-64510650abcf",
  "payload": {
    "commands": [{
      "ids": ["123"],
      "status": "PENDING",
      "states": {
        "on": false,
        "online": true
      }
    }]
  }
}
Later, once the status is correct, you can use the Report State API:
{
  "requestId": "ff36a3cc-ec34-11e6-b1a0-64510650abcf",
  "agentUserId": "1234",
  "payload": {
    "devices": {
      "states": {
        "123": {
          "on": true
        }
      }
    }
  }
}
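A Report State call with the payload above could be sent from Node.js roughly as follows (a sketch assuming the googleapis client and a service account with the HomeGraph scope; the key file path is a placeholder):

const {google} = require('googleapis');

async function reportState() {
  // Service-account credentials with the HomeGraph scope (placeholder key path).
  const auth = new google.auth.GoogleAuth({
    keyFile: 'service-account.json',
    scopes: ['https://www.googleapis.com/auth/homegraph'],
  });
  const homegraph = google.homegraph({version: 'v1', auth});

  // Same shape as the Report State payload shown above.
  await homegraph.devices.reportStateAndNotification({
    requestBody: {
      requestId: 'ff36a3cc-ec34-11e6-b1a0-64510650abcf',
      agentUserId: '1234',
      payload: {
        devices: {
          states: {
            '123': {on: true},
          },
        },
      },
    },
  });
}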

Response from Dialogflow does not reach Google Assistant

Sometimes Google Assistant does not answer me even though I receive a correct response from the fulfillment. That happens only when I use voice commands; using the keyboard, it always works fine.
Instead of the response, the Assistant just keeps 'thinking'.
After using conv.close("You've punched-in into demo as Jack"); I can see the following response in the Dialogflow history:
{
  "queryText": "Jack",
  "fulfillmentMessages": [
    {
      "text": {
        "text": [
          "[{\"type\":0,\"speech\":\"\"}]"
        ]
      }
    }
  ],
  "webhookPayload": {
    "google": {
      "userStorage": "{\"data\":{}}",
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "You've punched-in into demo as Jack"
            }
          }
        ]
      },
      "expectUserResponse": false
    }
  },
  "outputContexts": [
    ...
  ],
  "intent": {
    "id": "96f93154-0ae4-4bb4-91c3-c1b796d7cda3",
    "displayName": "punch-in"
  },
  "intentDetectionConfidence": 1,
  "languageCode": "en"
}
Has anyone experienced such an issue?
Noticed on Galaxy S7, Android 6.0.1.
actions-on-google v.2.2.0
That mostly happens to me when the internet connection is not good. With voice, there is an extra layer of voice-to-text conversion, so the same latency issue might be the cause in your case.
The Google Assistant team resolved the issues I reported to them, and since then the problem has not reproduced.
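For reference, a minimal fulfillment that produces a closing response like the one shown above would look roughly like this (a sketch assuming actions-on-google v2 with the Dialogflow integration hosted on Firebase Functions; the intent name matches the displayName in the question, everything else is a placeholder):

const functions = require('firebase-functions');
const {dialogflow} = require('actions-on-google');

const app = dialogflow();

// Closing the conversation sets expectUserResponse: false, as seen in the
// webhookPayload above.
app.intent('punch-in', (conv) => {
  conv.close("You've punched-in into demo as Jack");
});

exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);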

Continue a conversation after Browsing carousel in DialogFlow/Actions

So I have been experimenting with different response types for Dialogflow through Actions on Google: Actions Responses and Webhook/Fulfillment.
So far, I have been able to successfully generate proper responses for types like List, Basic Card, and Suggestion Chips. What I need now is a list-based response that lets the user open a link in a browser when tapped and does not generate a chat bubble. The Browsing Carousel fits these criteria: Browsing Carousel.
I have successfully created and simulated the output with 2 sample items. The issue arises when the user wants to continue the conversation. As per the Guidance section in the documentation linked above, for the browsing carousel:
By default, the mic remains closed after a browse carousel is sent. If you want to continue the conversation afterwards, we strongly recommend adding suggestion chips below the carousel.
From this, what I understood is that the user has to invoke the app again by saying "Ok Google, talk to [app]". This doesn't seem very user-friendly, as the user expects to return to the conversation she was having with the agent after she has looked through the links from the carousel. Please note, I have simulated the flow using the Actions Simulator on the Actions Console.
As soon as I invoke the intent with the Browsing Carousel, it is shown to me with the sample items. But when I enter or say the next command to continue the conversation, the agent simply returns:
We're sorry, but something went wrong. Please try again.
The REQUEST/RESPONSE window as well as the ERRORS/DEBUG tab are empty. I have logging on the webhook, and no call is received.
The question: is there a way to give the user the ability to browse an informative link from a response "list" (not a Basic Card) and return to the conversation without ending it?
Here is the response for the Browsing Carousel from the RESPONSE window in Actions > Simulator (note: I've removed non-relevant parts):
{
  "conversationToken": "[token info]",
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "You have the following 2 options:",
                "displayText": "You have the following 2 options:"
              }
            },
            {
              "carouselBrowse": {
                "items": [
                  {
                    "title": "Test 1",
                    "description": "Desc 2",
                    "image": {
                      "url": "[some url]"
                    },
                    "openUrlAction": {
                      "url": "[some url]"
                    }
                  },
                  {
                    "title": "Test 2",
                    "description": "Desc 2",
                    "image": {
                      "url": "[some url]"
                    },
                    "openUrlAction": {
                      "url": "[some url]"
                    }
                  }
                ]
              }
            }
          ],
          "suggestions": [
            {
              "title": "Continue"
            },
            {
              "title": "End"
            }
          ]
        }
      }
    }
  ],
  "responseMetadata": {
    "status": {
      "message": "Success (200)"
    },
    "queryMatchInfo": {
      "queryMatched": true
    }
  }
}
For all those who are using the Simulator to test the Browsing Carousel and find that it stops responding after the output: please use a device to test it instead. When you use a device to render the output with a carousel response, it doesn't kill the conversation; it only turns off the mic. This is the intended behavior. You can add suggestion chips to help the user continue the conversation.
Update: also make sure each webhook intent call returns a proper Simple Response. As long as the response has the required textual and audio information set up correctly, the Simulator will not fail.
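As a concrete illustration of that approach, a handler that sends a browse carousel together with suggestion chips might look roughly like this (a sketch assuming actions-on-google v2 with the Dialogflow integration; the intent name, titles, and URLs are placeholders):

const {
  dialogflow,
  BrowseCarousel,
  BrowseCarouselItem,
  Image,
  Suggestions,
} = require('actions-on-google');

const app = dialogflow();

app.intent('show-options', (conv) => {
  // A spoken/displayed prompt is required alongside the carousel.
  conv.ask('You have the following 2 options:');
  conv.ask(new BrowseCarousel({
    items: [
      new BrowseCarouselItem({
        title: 'Test 1',
        description: 'Desc 1',
        url: 'https://example.com/1',
        image: new Image({url: 'https://example.com/1.png', alt: 'Test 1'}),
      }),
      new BrowseCarouselItem({
        title: 'Test 2',
        description: 'Desc 2',
        url: 'https://example.com/2',
        image: new Image({url: 'https://example.com/2.png', alt: 'Test 2'}),
      }),
    ],
  }));
  // Suggestion chips give the user a way to continue after the mic closes.
  conv.ask(new Suggestions(['Continue', 'End']));
});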

Amazon Alexa Music Routine

I just picked up an Amazon Echo Dot for Christmas, and I want to be able to start a routine on Christmas morning that plays Christmas music and turns on the lights. Currently you can only make routines in the app that do things like turn on smart home devices or say the weather; it won't let you play music or a playlist. I'm not much of a coder, but I looked into the Amazon developer site that lets you create skills for the Echo Dot, and I just can't figure out how to make this work.
Here's the code I have so far:
{
  "intents": [
    {
      "intent": "AMAZON.HelpIntent"
    },
    {
      "intent": "AMAZON.StopIntent"
    },
    {
      "intent": "AMAZON.PauseIntent"
    },
    {
      "intent": "AMAZON.ResumeIntent"
    },
    {
      "intent": "Start",
      "slots": [
        {
          "name": "Play",
          "type": "AudioPlayer.Play",
          "audioItem": {
            "stream": {
              "url": "https://music.amazon.com/albums/B002R4OU2Q?do=play&ref=dm_ws_dp_ald_bb_phfa_xx_xx",
              "token": "string",
              "expectedPreviousToken": "string",
              "offsetInMilliseconds": 0
            }
          }
        }
      ]
    },
    {
      "intent": "Christmas"
    },
    {
      "intent": "Begin"
    }
  ]
}
Basically, I say "Start Christmas Morning", the playlist plays, and the lights turn on.
Whenever I try to save this, I get this error: There was a problem with your request: Unknown slot type 'AudioPlayer.Play' for slot 'Play'
But AudioPlayer.Play does exist, right here: https://developer.amazon.com/docs/custom-skills/audioplayer-interface-reference.html#config
Does anyone know how to get this to work, or if there is an easier way to do what I'm trying to do?
I'm not a programmer at all, but it appears you're missing one line. Go back to the example you used and look for
"shouldEndSession": true
Without it, it appears that the Echo will just wait for a user response without playing anything.
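For what it's worth, the AudioPlayer.Play in the linked docs is a response directive rather than a slot type, so it belongs in the skill's response instead of the interaction model. A handler that starts playback and ends the session (as the answer above suggests) might look roughly like this, assuming the ask-sdk-core Node.js library; the stream URL and token are placeholders, and the URL must point at an actual HTTPS audio stream, not an Amazon Music web page:

const Alexa = require('ask-sdk-core');

const StartChristmasHandler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest' && request.intent.name === 'Start';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .speak('Merry Christmas!')
      // Arguments: playBehavior, stream URL, token, offsetInMilliseconds.
      .addAudioPlayerPlayDirective(
        'REPLACE_ALL',
        'https://example.com/christmas-playlist.mp3', // placeholder audio stream
        'christmas-token',
        0)
      // Ending the session is the point the answer above makes.
      .withShouldEndSession(true)
      .getResponse();
  },
};

exports.handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(StartChristmasHandler)
  .lambda();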