Google keeps flipping between custom voice and google default voice - actions-on-google

We are using the Google Actions SDK with SSML to respond to the user in a custom voice, but Google keeps flipping between the custom voice and the Google default voice.

The switching between the custom voice and the Google voice happens randomly, and we are not able to reproduce it consistently. Below is an example of the response. The audio source URL points to a file stored in Azure storage, and it expires a few minutes after it is generated.
{
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "possibleIntents": [
        {
          "intent": "actions.intent.TEXT"
        }
      ],
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "Hi! I can help you with so many things! Like getting you an auto quote or answering general insurance questions. So what can I do you for?",
                "ssml": "<speak>\r\n <audio src=\"https://qa01storgapp01dcq.blob.core.windows.net/google-assistant-audio-permanent/LUISIntentsandGetQuoteEntities-9.mp3?sv=2019-07-07&spr=https&se=2020-08-18T19%3A30%3A28Z&sr=b&sp=r&sig=eszRZWEw6sc58xRyz1Y7PrnpP4gyxfDUm%2FGQn9QxElM%3D\">Hi! I can help you with so many things! Like getting you an auto quote or answering general insurance questions. So what can I do you for?</audio>\r\n</speak>",
                "displayText": "Hi! I can help you with so many things! Like getting you an auto quote or answering general insurance questions. So what can I do you for?"
              }
            }
          ],
          "suggestions": [
            {
              "title": "Get Quote"
            },
            {
              "title": "Ask a Question"
            }
          ]
        }
      }
    }
  ],
  "isInSandbox": false
}
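For reference, here is a minimal sketch of how such a response can be built, assuming the Node.js actions-on-google client for the Actions SDK (the URL is a placeholder). Note that the text inside the SSML <audio> element is fallback text: if the file cannot be fetched (for example, because the SAS URL has expired), the default Google voice reads that text instead; also, since SSML is XML, ampersands in a query string need to be escaped as &amp;.
const { actionssdk, SimpleResponse, Suggestions } = require('actions-on-google');

const app = actionssdk();

app.intent('actions.intent.TEXT', (conv) => {
  const text = 'Hi! I can help you with so many things! ' +
    'Like getting you an auto quote or answering general insurance questions.';
  // Placeholder URL; a real SAS URL would need its "&" escaped as "&amp;" inside SSML.
  const audioUrl = 'https://example.blob.core.windows.net/audio/greeting.mp3';
  conv.ask(new SimpleResponse({
    // The inner text is only a fallback, spoken by the default TTS voice
    // when the audio file cannot be played.
    speech: `<speak><audio src="${audioUrl}">${text}</audio></speak>`,
    text,
  }));
  conv.ask(new Suggestions('Get Quote', 'Ask a Question'));
});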

Related

How to use roomHint and structureHint with smarthome actions on Google

We are currently setting up a Smart Home action, and we would like to provide roomHint on the first SYNC (not on a request sync), as it's really tedious to set up rooms on the first sync, but it does not work.
We tried naming the rooms in English and also in Italian (it's not really clear from the documentation whether there is a list of room names we can use), but with no luck.
So can you please give us a hint on how to use the roomHint field?
Also, we've found structureHint in the API doc; does it work? The documentation for the SYNC intent does not mention this field.
Here is our SYNC response with one device and room; we took "office" from the example JSON:
{
  "requestId": "3582198904737125163",
  "payload": {
    "agentUserId": "xyz#qwertyz.com",
    "devices": [
      {
        "id": "deviceID",
        "type": "action.devices.types.LIGHT",
        "traits": [
          "action.devices.traits.OnOff"
        ],
        "name": {
          "name": "Lampadina",
          "defaultNames": [
            "Lampadina_XYZ"
          ],
          "nicknames": [
            "Lampadina"
          ]
        },
        "willReportState": false,
        "customData": {
          "modelType": "DEVICE"
        },
        "roomHint": "office"
      }
    ]
  }
}
Thanks
Unfortunately, I believe structureHint is only available in the HomeGraph API Sync response.
It cannot be used in the SYNC intent response.
If someone can tell me I'm wrong and how to use it, you'd be a hero.
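For reference, here is a minimal sketch of a SYNC handler that returns roomHint, assuming the Node.js actions-on-google smarthome client (the agentUserId and device values simply mirror the question and are illustrative):
const { smarthome } = require('actions-on-google');

const app = smarthome();

app.onSync((body) => {
  return {
    requestId: body.requestId,
    payload: {
      agentUserId: 'agent-user-id', // illustrative
      devices: [
        {
          id: 'deviceID',
          type: 'action.devices.types.LIGHT',
          traits: ['action.devices.traits.OnOff'],
          name: {
            name: 'Lampadina',
            defaultNames: ['Lampadina_XYZ'],
            nicknames: ['Lampadina'],
          },
          willReportState: false,
          // Free-form suggestion for the room assignment during first-time setup.
          roomHint: 'Office',
        },
      ],
    },
  };
});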

Response from dialogflow does not come to Google Assistant

Sometimes Google Assistant does not answer me even though I receive a correct response from the fulfillment. That happens only when I use a voice command; with the keyboard it always works fine.
Instead of the response, the Assistant just keeps 'thinking'.
After using conv.close("You've punched-in into demo as Jack"); I can see the following response in the Dialogflow history:
{
  "queryText": "Jack",
  "fulfillmentMessages": [
    {
      "text": {
        "text": [
          "[{\"type\":0,\"speech\":\"\"}]"
        ]
      }
    }
  ],
  "webhookPayload": {
    "google": {
      "userStorage": "{\"data\":{}}",
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "You've punched-in into demo as Jack"
            }
          }
        ]
      },
      "expectUserResponse": false
    }
  },
  "outputContexts": [
    ...
  ],
  "intent": {
    "id": "96f93154-0ae4-4bb4-91c3-c1b796d7cda3",
    "displayName": "punch-in"
  },
  "intentDetectionConfidence": 1,
  "languageCode": "en"
}
Has anyone experienced such an issue?
Noticed on a Galaxy S7, Android 6.0.1.
actions-on-google v2.2.0
That mostly happens to me when the internet connection is not good. With voice, there is an extra layer of voice-to-text conversion; the same latency issue might be the cause in your case.
The Google Assistant team resolved the issues I reported to them, and after that the issue no longer reproduces.
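For context, a minimal sketch of the fulfillment handler pattern the question describes, assuming the Node.js actions-on-google Dialogflow client (the intent name is taken from the history above; everything else is illustrative):
const { dialogflow } = require('actions-on-google');

const app = dialogflow();

app.intent('punch-in', (conv) => {
  // Closes the conversation; expectUserResponse becomes false in the payload.
  conv.close("You've punched-in into demo as Jack");
});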

Continue a conversation after Browsing carousel in DialogFlow/Actions

So I have been experimenting with different response types for Dialogflow through Actions: Actions Responses and Webhook/Fulfillment.
So far, I have been able to generate proper responses for types like List, Basic Card, and Suggestion Chips. What I need now is a list-based response that lets the user open a link in a browser when touched and does not generate a chat bubble. "Browsing carousel" fits the criteria: Browsing Carousel.
I have successfully created and simulated the output with 2 sample items. The issue is when the user wants to continue the conversation. As per the Guidance section in the help above, the browsing carousel:
By default, the mic remains closed after a browse carousel is sent. If you want to continue the conversation afterwards, we strongly recommend adding suggestion chips below the carousel.
From this, what I understood is that the user has to invoke the app again by saying "Ok Google, talk to [app]". This doesn't seem very user-friendly, as the user expects to return to the conversation she was having with the agent after she has looked through the links from the carousel. Please note, I have simulated the flow using the Actions Simulator on the Actions Console page.
As soon as I invoke the intent with the Browsing Carousel, it is shown to me with the sample items. But when I enter/say the next command to continue the conversation, the agent simply returns with:
We're sorry, but something went wrong. Please try again.
And the REQUEST/RESPONSE windows as well as the ERRORS/DEBUG tabs are empty. I have logged calls to the webhook, and no call is received.
The question: is there a way to give the user the ability to browse an informative link from a response "list" (not a Basic Card) and return to the conversation without ending it?
Here is the response for the Browsing Carousel from the RESPONSE window in Actions > Simulator (note: I've removed non-relevant parts):
{
  "conversationToken": "[token info]",
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "You have the following 2 options:",
                "displayText": "You have the following 2 options:"
              }
            },
            {
              "carouselBrowse": {
                "items": [
                  {
                    "title": "Test 1",
                    "description": "Desc 2",
                    "image": {
                      "url": "[some url]"
                    },
                    "openUrlAction": {
                      "url": "[some url]"
                    }
                  },
                  {
                    "title": "Test 2",
                    "description": "Desc 2",
                    "image": {
                      "url": "[some url]"
                    },
                    "openUrlAction": {
                      "url": "[some url]"
                    }
                  }
                ]
              }
            }
          ],
          "suggestions": [
            {
              "title": "Continue"
            },
            {
              "title": "End"
            }
          ]
        }
      }
    }
  ],
  "responseMetadata": {
    "status": {
      "message": "Success (200)"
    },
    "queryMatchInfo": {
      "queryMatched": true
    }
  }
}
For all those who are using the Simulator to test the Browsing Carousel and seeing that it stops responding after the output: please use a device instead to test it. When you use a device to render the output with the carousel response, it doesn't kill the conversation but turns off the mic. This is the intended behavior. You can introduce suggestion chips to help the user continue the conversation.
Update: also make sure each webhook intent call has a proper response with a Simple Response. As long as the response has the required textual and audio information correctly set up, the Simulator will not fail.
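Here is a minimal sketch of that pattern, assuming the Node.js actions-on-google Dialogflow client (the intent name, titles, and URLs are placeholders): a simple response first, then the browsing carousel, then suggestion chips so the user can continue after the mic closes.
const { dialogflow, BrowseCarousel, BrowseCarouselItem, Image, Suggestions } = require('actions-on-google');

const app = dialogflow();

app.intent('show-options', (conv) => {
  // A simple response must accompany the rich response item.
  conv.ask('You have the following 2 options:');
  conv.ask(new BrowseCarousel({
    items: [
      new BrowseCarouselItem({
        title: 'Test 1',
        url: 'https://example.com/one',
        description: 'Desc 1',
        image: new Image({ url: 'https://example.com/one.png', alt: 'Test 1' }),
      }),
      new BrowseCarouselItem({
        title: 'Test 2',
        url: 'https://example.com/two',
        description: 'Desc 2',
        image: new Image({ url: 'https://example.com/two.png', alt: 'Test 2' }),
      }),
    ],
  }));
  // Suggestion chips give the user a way to keep the conversation going.
  conv.ask(new Suggestions('Continue', 'End'));
});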

Google Home dialogFlow V2 API mediaResponse not working

I decided to upgrade my Google Assistant action to use the "Dialogflow V2 API", and my webhook returns an object like this:
{
  "fulfillmentText": "Testing",
  "fulfillmentMessages": [
    {
      "text": {
        "text": [
          "fulfillmentMessages text attribute"
        ]
      }
    }
  ],
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "largeImage": {
                    "url": "https://.../640x480.jpg"
                  },
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            },
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "ssml": "simpleResponse: ssml",
              "displayText": "simpleResponse displayText"
            }
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}
But I get an error message saying my action is not available. Is mediaResponse supported by V2? Should I format my object differently? Also, when I remove the "mediaResponse" object, it works just fine and the Assistant speaks the simpleResponse part.
This action was re-created in mid-March 2018, and I read about the May deadline, which is why I decided to upgrade to V2. Do you think I should go back to V1? I know I will have to delete it and re-create it, but that is fine. This is a link to the JSON object I see in the debug tab. Thanks once again.
I set "API V2" in my action's Dialogflow console; this is a screenshot of that setting.
Here is a screenshot of my action's integration -> Google Assistant.
Thanks Allen. Yes, I do have "expectUserResponse": false. I added the suggestion object you recommended but, unfortunately, nothing changed; I am still getting this error.
Simulator debug tab details
First of all - this is not a problem with Dialogflow V2. You also seem to be confusing the sunset of Actions on Google V1 with the release of Dialogflow V2 - they are two completely different creatures. If your project was using AoG V1, there would be a setting on the Actions integration screen, and there isn't.
It is fine if you want to move to Dialogflow V2, but it isn't required. Media definitely works under Dialogflow V2.
The array of items must include a simpleResponse item first, before any of the other items in the RichResponse. (You also shouldn't include both ssml and textToSpeech - just one of them.) You also don't need the fulfillmentText and fulfillmentMessages components, since those are provided by the richResponse.
You also need to include suggestion chips unless you have set expectUserResponse to false. Somewhere in the simulator debug output there is probably a block that says:
{
  "name": "MalformedResponse",
  "debugInfo": "expected_inputs[0].input_prompt.rich_initial_prompt: Suggestions must be provided if media_response is used..",
  "subDebugEntryList": []
}
So something more like this should work:
{
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "displayText": "simpleResponse displayText"
            }
          },
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "largeImage": {
                    "url": "https://.../640x480.jpg"
                  },
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            }
          }
        ],
        "suggestions": [
          {
            "title": "This"
          },
          {
            "title": "That"
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}
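If the webhook uses the Node.js actions-on-google client rather than raw JSON, an equivalent sketch would look roughly like this (the intent name and URLs are placeholders); the library builds the items array in the order of the conv.ask() calls:
const { dialogflow, SimpleResponse, MediaObject, Image, Suggestions } = require('actions-on-google');

const app = dialogflow();

app.intent('play-sample', (conv) => {
  // Simple response first, as required before any media response.
  conv.ask(new SimpleResponse({
    speech: 'simpleResponse: testing',
    text: 'simpleResponse displayText',
  }));
  conv.ask(new MediaObject({
    name: 'mediaResponse name',
    description: 'mediaResponse description',
    url: 'https://example.com/audio/sample.mp3',
    image: new Image({ url: 'https://example.com/images/640x480.jpg', alt: 'media image' }),
  }));
  // Suggestions are required while the mic stays open after a media response.
  conv.ask(new Suggestions('This', 'That'));
});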

Getting "Sorry, I didn't get any response." message when I tried Hand-off feature

When I tried multi-surface conversations (hand-off from Google Home to the Google Assistant on Android), I got "Sorry, I didn't get any response."
I'm using the Actions SDK and the locale is "ja".
Here is my response:
{
  "conversationToken": "(token)",
  "expectUserResponse": true,
  "isInSandbox": true,
  "expectedInputs": [
    {
      "possibleIntents": [
        {
          "intent": "actions.intent.NEW_SURFACE",
          "inputValueData": {
            "@type": "type.googleapis.com/google.actions.v2.NewSurfaceValueSpec",
            "context": "Sure, I have some sample images for you.",
            "notificationTitle": "Sample Images",
            "capabilities": [
              "actions.capability.SCREEN_OUTPUT"
            ]
          }
        }
      ],
      "inputPrompt": {
        "richInitialPrompt": {
          "items": {
            "simpleResponse": {
              "textToSpeech": "PLACEHOLDER_FOR_NEW_SURFACE"
            }
          }
        }
      }
    }
  ]
}
Does anyone know why?
It turns out that this is not a bug in a certain locale, but that askForNewSurface is currently supported for English locales only.
This is what I got from AoG support:
Hi Jan,
Thank you for your interest in Actions on Google.
askForNewSurface is indeed only available for English locales. We are in the process of changing the documentation to reflect those restrictions.
Sorry for the confusion.
We do not have any set date for the release of this feature in other locales.
Kind Regards,
Jean-Charles,
Actions on Google Support Team.
It seems to be available only in English, but I find no clear statement about this in any documentation.
I tried the exact same code in English and in French; it works in English, not in French.
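For reference, a minimal sketch of the hand-off with the Node.js actions-on-google client (the library's equivalent of askForNewSurface; prompt strings are illustrative). Per the answers above, this currently only works for English locales:
const { actionssdk, NewSurface } = require('actions-on-google');

const app = actionssdk();

app.intent('actions.intent.TEXT', (conv) => {
  const screen = 'actions.capability.SCREEN_OUTPUT';
  if (conv.available.surfaces.capabilities.has(screen)) {
    // Ask the user to move the conversation to a surface with a screen.
    conv.ask(new NewSurface({
      context: 'Sure, I have some sample images for you.',
      notification: 'Sample Images',
      capabilities: screen,
    }));
  } else {
    conv.ask('Sorry, I need a device with a screen to show you the images.');
  }
});

app.intent('actions.intent.NEW_SURFACE', (conv) => {
  // Result of the hand-off request.
  const result = conv.arguments.get('NEW_SURFACE');
  if (result && result.status === 'OK') {
    conv.close('Here are the sample images.');
  } else {
    conv.close('OK, maybe another time.');
  }
});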