Amazon Alexa Music Routine - echo

I just picked up an Amazon Echo Dot for Christmas, and I wanted to be able to start a routine on Christmas morning where it would play Christmas music and turn on the lights. Currently you can only make routines in the app that do things like turn on smart home devices, say the weather, and stuff like that, but it won't let you play music or a playlist. I'm not much of a coder, but I looked into the Amazon developer site that lets you create skills for the Echo Dot, and I just can't figure out how to make this work.
Here's the code I have so far:
{
  "intents": [
    {
      "intent": "AMAZON.HelpIntent"
    },
    {
      "intent": "AMAZON.StopIntent"
    },
    {
      "intent": "AMAZON.PauseIntent"
    },
    {
      "intent": "AMAZON.ResumeIntent"
    },
    {
      "intent": "Start",
      "slots": [
        {
          "name": "Play",
          "type": "AudioPlayer.Play",
          "audioItem": {
            "stream": {
              "url": "https://music.amazon.com/albums/B002R4OU2Q?do=play&ref=dm_ws_dp_ald_bb_phfa_xx_xx",
              "token": "string",
              "expectedPreviousToken": "string",
              "offsetInMilliseconds": 0
            }
          }
        }
      ]
    },
    {
      "intent": "Christmas"
    },
    {
      "intent": "Begin"
    }
  ]
}
Basically, I say "Start Christmas Morning," the playlist plays, and the lights turn on.
Whenever I try to save this, I get this error: There was a problem with your request: Unknown slot type 'AudioPlayer.Play' for slot 'Play'
But AudioPlayer.Play does exist, right here: https://developer.amazon.com/docs/custom-skills/audioplayer-interface-reference.html#config
Does anyone know how to get this to work, or if there is an easier way to do what I'm trying to do?

I'm not a programmer at all, but it appears you're missing one line. Go back to the example you used and look for
"shouldEndSession": true
Without it, it appears that the Echo will just wait for a user response without playing anything.
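For what it's worth, AudioPlayer.Play isn't a slot type at all; it's a directive your skill returns in its response, which is why the interaction model above is rejected. A response that starts playback would look roughly like this (a sketch only: the stream URL and token are placeholders, and the URL has to point to an HTTPS-hosted audio file, so an Amazon Music album link won't work here):
{
  "version": "1.0",
  "response": {
    "shouldEndSession": true,
    "directives": [
      {
        "type": "AudioPlayer.Play",
        "playBehavior": "REPLACE_ALL",
        "audioItem": {
          "stream": {
            "url": "https://example.com/christmas-music.mp3",
            "token": "christmas-track-1",
            "offsetInMilliseconds": 0
          }
        }
      }
    ]
  }
}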

Related

TemperatureSetting "What's the temperature in my living room?" sometimes answers weird stuff

I recently released a Smart Home action for interacting with our thermostats. The device type is Thermostat and the single implemented trait is TemperatureSetting.
When I ask my home assistant "What's the temperature in the living room?", sometimes it answers correctly and sometimes it answers "Ok, this device has been updated" (this is a translation from my language, so it may not be the exact wording).
Checking the logs on my server, I can see the QUERY intent responses being made.
The following QUERY response produces the answer "The temperature in your living room is 20.5 degrees":
{
  "aa0c2504f896": {
    "status": "SUCCESS",
    "online": true,
    "thermostatMode": "heat",
    "thermostatTemperatureAmbient": 20.7,
    "thermostatTemperatureSetpoint": 20.5
  }
}
This one produces "Ok, this device has been updated":
{
  "aa0c2504f896": {
    "status": "SUCCESS",
    "online": true,
    "thermostatMode": "heat",
    "thermostatTemperatureAmbient": 20.7,
    "thermostatTemperatureSetpoint": 19.5
  }
}
I can reproduce it by changing the setpoint 1° lower, as shown by the previous logs. When I do so, the EXECUTE intent handler sends a ReportState request, so the Home Graph API is updated synchronously.
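That ReportState call looks roughly like this (a sketch based on the Home Graph reportStateAndNotification request format, reusing the request ID, agentUserId, and device ID from the SYNC payload below):
{
  "requestId": "5878230358273544341",
  "agentUserId": "3101066d-b012-4780-8d77-7297aaea4e37",
  "payload": {
    "devices": {
      "states": {
        "aa0c2504f896": {
          "online": true,
          "thermostatMode": "heat",
          "thermostatTemperatureAmbient": 20.7,
          "thermostatTemperatureSetpoint": 19.5
        }
      }
    }
  }
}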
Am I doing something wrong?
Edit:
It happens every time the setpoint and the ambient temperature differ (by more than a 0.5 degree interval). The Google Home UI always renders it correctly; it is the voice control that answers something wrong.
Here is also my SYNC payload:
{
  "requestId": "5878230358273544341",
  "payload": {
    "agentUserId": "3101066d-b012-4780-8d77-7297aaea4e37",
    "devices": [
      {
        "id": "aa0c2504f896",
        "type": "action.devices.types.THERMOSTAT",
        "traits": [
          "action.devices.traits.TemperatureSetting"
        ],
        "name": {
          "defaultNames": [
            "Thermostat COMAP"
          ],
          "name": "Thermostat salon"
        },
        "willReportState": true,
        "attributes": {
          "availableThermostatModes": [
            "heat",
            "on"
          ],
          "thermostatTemperatureRange": {
            "minThresholdCelsius": 4,
            "maxThresholdCelsius": 30
          },
          "thermostatTemperatureUnit": "C"
        },
        "deviceInfo": {
          "manufacturer": "COMAP",
          "model": "Thermostat",
          "swVersion": "1143"
        }
      }
    ]
  }
}
The QUERY and SYNC payloads you provided look good. We have updated the thermostat implementation throughout the last year and it now works much better with floating-point values. If this issue still persists, I recommend creating a ticket on the public issue tracker.
There can also be one more reason why you have been encountering this issue: it might be due to the local language. There is a possibility that the Google Assistant hasn't yet incorporated full support for this particular language. Our NLU team is constantly working on incorporating these language capabilities into the system.

Microphone closing on Nest Hub - other devices work

We use the Conversational Actions SDK, and our action (an interactive audio book) works in the Actions console and in the Google Assistant app on iPhone without any problem.
However, on Nest Hub devices its behavior is completely different: it takes a very long time before it starts playing the SSML audio, and after nearly every answer the microphone is closed, so you have to say "OK, Google" again. That really kills the flow of the game. As everything works fine in the console, we are having a very hard time debugging this issue.
This is an example of a response that we send back for the webhook request:
{
  "user": {
    "params": {
      "id": "google-d1d76b00-e220-11ea-bf59-123456789"
    }
  },
  "scene": {
    "next": {
      "name": "GameFlow"
    },
    "slots": {
      "GameFlowResponse": {
        "mode": "REQUIRED",
        "status": "SLOT_UNSPECIFIED"
      }
    }
  },
  "prompt": {
    "firstSimple": {
      "text": "Some text to be displayed",
      "speech": "<speak><audio src=\"https://some.audio.url\">Some text</audio><break time=\"500ms\"/><audio src=\"https://another.audio.url\">some text</audio></speak>"
    }
  }
}
Has anyone maybe experienced something similar to that? As I said, I am running out of ideas for how to debug this.

Continue a conversation after Browsing carousel in DialogFlow/Actions

So I have been experimenting with different response types for Dialogflow through Actions: Actions Responses and Webhook/Fulfillment.
So far, I have been able to generate proper responses for types like List, Basic Card, and Suggestion Chips successfully. What I need now is a list-based response that lets the user open a link in a browser when touched, and that does not generate a chat bubble. The "Browsing carousel" fits the criteria: Browsing Carousel.
I have successfully created and simulated the output with 2 sample items. The issue is when the user wants to continue the conversation. As per the Guidance section in the help linked above, for the browsing carousel:
By default, the mic remains closed after a browse carousel is sent. If you want to continue the conversation afterwards, we strongly recommend adding suggestion chips below the carousel.
From this, what I understood is that the user has to invoke the app again by saying "Ok Google, talk to [app]". This doesn't seem very user-friendly, as the user expects to return to the conversation she was having with the agent after she has looked through the links from the carousel. Please note, I have simulated the flow using the Google Actions Simulator on the Console.Actions page.
As soon as I invoke the intent with the Browsing Carousel, it is shown to me with the sample items. But when I enter/say the next command to continue the conversation, the agent simply returns with:
We're sorry, but something went wrong. Please try again.
And the REQUEST/RESPONSE windows as well as ERRORS/DEBUG are empty. I have logged calls to the webhook and no call is received.
The question: is there a way to give the user the ability to browse an informative link from a response "list" (not Basic Card) and return to the conversation without ending it?
Here is the response for the Browsing Carousel from the RESPONSE window in Actions > Simulator (note I've removed non-relevant parts):
{
  "conversationToken": "[token info]",
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "You have the following 2 options:",
                "displayText": "You have the following 2 options:"
              }
            },
            {
              "carouselBrowse": {
                "items": [
                  {
                    "title": "Test 1",
                    "description": "Desc 2",
                    "image": {
                      "url": "[some url]"
                    },
                    "openUrlAction": {
                      "url": "[some url]"
                    }
                  },
                  {
                    "title": "Test 2",
                    "description": "Desc 2",
                    "image": {
                      "url": "[some url]"
                    },
                    "openUrlAction": {
                      "url": "[some url]"
                    }
                  }
                ]
              }
            }
          ],
          "suggestions": [
            {
              "title": "Continue"
            },
            {
              "title": "End"
            }
          ]
        }
      }
    }
  ],
  "responseMetadata": {
    "status": {
      "message": "Success (200)"
    },
    "queryMatchInfo": {
      "queryMatched": true
    }
  }
}
For all those who are using the Simulator to test the Browsing Carousel and seeing that it stops responding after the output: please use a device instead to test it. When you use a device to render the output and use the carousel response, it doesn't kill the conversation but turns off the mic. This is the intended behavior. One can introduce suggestion chips to assist the user in continuing the conversation.
Update: also make sure each webhook intent call has a proper response with a Simple Response. As long as the response has the required textual and audio information correctly set up, the Simulator will not fail.
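For illustration, a minimal webhook response of that shape (Dialogflow V2 payload format for Actions on Google; the texts and chip titles here are just placeholders) could look something like this:
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Here are your options.",
              "displayText": "Here are your options."
            }
          }
        ],
        "suggestions": [
          {
            "title": "Continue"
          },
          {
            "title": "End"
          }
        ]
      }
    }
  }
}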

Google Home dialogFlow V2 API mediaResponse not working

I decided to upgrade my Google Assistant action to use the Dialogflow V2 API, and my webhook returns an object like this:
{
  "fulfillmentText": "Testing",
  "fulfillmentMessages": [
    {
      "text": {
        "text": [
          "fulfillmentMessages text attribute"
        ]
      }
    }
  ],
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "largeImage": {
                    "url": "https://.../640x480.jpg"
                  },
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            },
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "ssml": "simpleResponse: ssml",
              "displayText": "simpleResponse displayText"
            }
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}
But I get an error message saying my action is not available. Is mediaResponse supported by V2? Should I format my object differently? Also, when I remove the "mediaResponse" object it works just fine and the assistant will speak the simpleResponse part.
This action was re-created in mid-March 2018, and I read about the May deadline, which is why I decided to upgrade to V2. Do you think I should go back to V1? I know I will have to delete it and re-create it, but that is fine. This is a link to the JSON object I see in the debug tab. Thanks once again.
I set "API V2" in my action's Dialogflow console; this is a screenshot of that setting.
Here is a screenshot of my action's integration -> Google Assistant.
Thanks Allen. Yes, I do have "expectUserResponse": false. I added the suggestion object you recommended but, unfortunately, nothing changed; I am still getting this error.
Simulator debug tab details
First of all, this is not a problem with Dialogflow V2. You also seem to be confusing the sunset of Actions on Google V1 with the release of Dialogflow V2; they are two completely different creatures. If your project was using AoG V1, there would be a setting on the Actions integration screen, and there isn't.
It is fine if you want to move to Dialogflow V2, but it isn't required. Media definitely works under Dialogflow V2.
The array of items must include a simpleResponse item first, before any of the other items in the RichResponse. (You also shouldn't include both ssml and textToSpeech - just one of them.) You also don't need the fulfillmentText and fulfillmentMessages components, since those are provided by the richResponse.
You also need to include suggestion chips unless you have set expectUserResponse to false. Somewhere in the simulator debug output there is probably a block that says:
{
  "name": "MalformedResponse",
  "debugInfo": "expected_inputs[0].input_prompt.rich_initial_prompt: Suggestions must be provided if media_response is used..",
  "subDebugEntryList": []
}
So something more like this should work:
{
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "displayText": "simpleResponse displayText"
            }
          },
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "largeImage": {
                    "url": "https://.../640x480.jpg"
                  },
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            }
          }
        ],
        "suggestions": [
          {
            "title": "This"
          },
          {
            "title": "That"
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}

Wit.ai node package buggy

I was working with Wit.ai today, using the node-wit module, but the responses I was getting were very weird.
When I used the node-wit module, I got this response:
{
  "msg_id": "0f4rOWRXQMIhVuf5i",
  "_text": "what is your name",
  "entities": {
    "intent": [
      {
        "confidence": 0.9425254893432,
        "value": "get_name"
      }
    ]
  }
}
Whereas when I used the cURL command, the response was very different:
{
  "msg_id": "0KJdIPedYbYwWOgOL",
  "_text": "what do you do",
  "entities": {
    "intent": [
      {
        "confidence": 0.97713342030998,
        "value": "get_job"
      }
    ]
  }
}
Can anyone tell me why this is happening, or whether I am implementing the function incorrectly?
Are you using the same service? There are two services: one for conversations/stories and another for messages.
If that is not the reason, try checking the context and take a look at the log section inside the Wit.ai web app.