I successfully created a basic application which turns the lights in a certain room on or off. This uses the built-in intent 'turn everything in living room on/off'.
Is it possible to implement custom intents? Let's say I want to implement the intent 'put the lights in party mode', an intent which is not covered by the built-in intents. How can I do this? Can I route Google Smart Home services to Dialogflow intents?
Custom intents as a concept are not supported in the smart home integration for the Google Assistant. However, if you want to have specific feature sets like "party mode", you can create a "SCENE" device in your SYNC response that has the Scene trait. When the user says something like "activate party mode", your cloud integration will get an EXECUTE intent for that scene, which you can handle in any way.
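For illustration, here is a minimal sketch of that approach, assuming the actions-on-google Node.js smart home client; the device id, agent user id, and handler wiring are made up and would need to fit your own fulfillment.

```typescript
// Minimal sketch: expose a "party mode" SCENE device and handle its activation.
// Names like 'party-mode-scene' and 'user-123' are placeholders.
import { smarthome } from 'actions-on-google';

const app = smarthome();

app.onSync((body) => ({
  requestId: body.requestId,
  payload: {
    agentUserId: 'user-123', // hypothetical user id
    devices: [
      {
        id: 'party-mode-scene',
        type: 'action.devices.types.SCENE',
        traits: ['action.devices.traits.Scene'],
        name: { name: 'Party mode' },
        willReportState: false,
        attributes: { sceneReversible: true },
      },
    ],
  },
}));

app.onExecute((body) => {
  // "Activate party mode" arrives as an ActivateScene command for that device.
  for (const input of body.inputs) {
    for (const command of input.payload.commands) {
      for (const execution of command.execution) {
        if (execution.command === 'action.devices.commands.ActivateScene') {
          // Trigger your own "party mode" device logic here.
        }
      }
    }
  }
  return {
    requestId: body.requestId,
    payload: {
      commands: [{ ids: ['party-mode-scene'], status: 'SUCCESS' }],
    },
  };
});
```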
I want my smart device to be controlled by the Google Assistant. I have checked the smart home Actions guide, but the device type I need is not available in smart home Actions.
I also need to add customization to my voice commands, so a smart home Action is not an option for my requirements.
Is a conversational Action a good choice for implementing smart home control? The Google Actions documentation lists the use cases for which conversational Actions can be used.
Smart home use cases don't seem to fit the use cases mentioned above, as it will take time to process the request.
Is there any other option to implement this?
When selecting a device type to integrate with Google, the device types and traits offer many possible combinations, so you can pick the device type closest to what you want and attach all the required traits.
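As a rough illustration (the id, name, and chosen traits below are hypothetical), a SYNC device entry combines the closest built-in type with just the traits your hardware actually supports:

```typescript
// Hypothetical SYNC device entry: closest built-in type (LIGHT here)
// plus only the traits the hardware implements.
const device = {
  id: 'strip-1',                          // illustrative id
  type: 'action.devices.types.LIGHT',     // closest matching device type
  traits: [
    'action.devices.traits.OnOff',
    'action.devices.traits.Brightness',
    'action.devices.traits.ColorSetting',
  ],
  name: { name: 'LED strip' },
  willReportState: false,
  attributes: { colorModel: 'rgb' },      // required by the ColorSetting trait
};
```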
NOTE - Conversational actions are currently deprecated and will be fully shut down on June 13, 2023. For more information, visit https://developers.google.com/assistant/ca-sunset.
Hi all,
We want to trigger Google Assistant with custom actions via a button, not via voice input (hotword).
For example, we usually invoke Google Assistant by saying "Hello, Google, show me the weather." But in our product, we want to press one specific button and have it send that sentence to Google Assistant directly.
We can't find any APIs that support this requirement. We also heard that Google plans to support a hard-key method, since Samsung made a good experience of this on the S8.
Can anyone help us fill this gap?
Thank you!
You could use an Action link without any additional parameters specified to trigger the MAIN intent, or specify the custom intent you'd like to trigger.
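For reference, an Action link is just a URL; the sketch below shows the general shape, with a placeholder UID and intent name. The canonical link for your project should be taken from the Actions console.

```typescript
// Sketch of an Assistant Action link; the project UID and intent name
// below are placeholders, not real values.
const baseLink = 'https://assistant.google.com/services/invoke/uid/0000001234567890';

// No parameters: triggers the MAIN intent.
const mainLink = baseLink;

// With an intent parameter: triggers a specific custom intent.
const customLink = `${baseLink}?intent=${encodeURIComponent('Show Weather')}`;
```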
Is there a way I can use Google's navigation card, the one shown when you ask something like "Navigate to home", in my agent? I can't find any tutorials related to this.
I was trying to open the Maps application, but this (Google's navigation card) is more suitable for my requirement, in case the user doesn't want to follow the link.
Actions on Google (the Google Assistant third-party platform) doesn't support navigation cards for third-party apps yet. You can emulate a navigation card by:
Creating a basic card
Making the image of the basic card a screenshot of the directions
Making the button of the basic card a deep link to Google Maps for the desired directions (a rough sketch of this follows below)
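A minimal sketch of that emulation, assuming the actions-on-google v2 Node.js client; the intent name, image URL, and destination are placeholders.

```typescript
// Rough emulation of a navigation card using a BasicCard with an image
// of the route and a deep link into Google Maps.
import { dialogflow, BasicCard, Button, Image } from 'actions-on-google';

const app = dialogflow();

app.intent('navigate_home', (conv) => {
  conv.ask('Here is an overview of the route.');
  conv.ask(new BasicCard({
    title: 'Directions home',
    image: new Image({
      url: 'https://example.com/route-screenshot.png', // screenshot or static map of the route
      alt: 'Route overview',
    }),
    buttons: new Button({
      title: 'Open in Google Maps',
      // Deep link into Google Maps for the desired destination.
      url: 'https://www.google.com/maps/dir/?api=1&destination=Home',
    }),
  }));
});
```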
No, the Actions on Google platform is separate from whatever capabilities are on the underlying OS platform.
You may wish to look into using the Static Maps API to provide the image and path as an overview on a card, and then include a link out to the app if the user wants to do navigation.
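A hedged sketch of that idea, with a placeholder API key, coordinates, and path; the resulting image URL would go on the card, and the Maps link on its button.

```typescript
// Build a Static Maps overview image for the card, then link out to
// Google Maps if the user wants turn-by-turn navigation.
const params = new URLSearchParams({
  size: '640x360',
  path: 'color:0x0000ff|weight:4|52.5200,13.4050|52.5163,13.3777', // rough route overview
  key: 'YOUR_STATIC_MAPS_API_KEY',
});
const overviewImageUrl = `https://maps.googleapis.com/maps/api/staticmap?${params.toString()}`;
const mapsLink = 'https://www.google.com/maps/dir/?api=1&destination=52.5163,13.3777';
```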
On Amazon Alexa, cards are displayed in the Amazon Alexa app or on the screen of an Echo Show. If I call my Google Action on my smartphone, I am also able to view the cards. But what happens if I use a different, non-screen surface, like the Google Home? Do the cards appear in the Google Home app anywhere, or do they just get lost?
Cards (and other visual elements you can add) aren't shown if the surface you're currently interacting with doesn't support them. This is intentional, since the user may not expect them there and might be surprised to find them if they opened the app later.
You can always check what surfaces are being supported in your current conversation by using app.getAvailableSurfaces() or the equivalent JSON properties. If you need to show the user something, you can prompt them to change to a surface that supports display by using app.askForNewSurface(). See the documentation about Surface Capabilities for detailed information.
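As a sketch: the method names above come from the older v1 Node.js client; the snippet below uses what I believe are the rough equivalents in the newer actions-on-google v2 client, with a made-up intent name.

```typescript
// Check whether a screen surface is available and, if so, offer to hand
// the conversation over to it; otherwise fall back to voice only.
import { dialogflow, NewSurface } from 'actions-on-google';

const app = dialogflow();

app.intent('show_details', (conv) => {
  const screenAvailable = conv.available.surfaces.capabilities.has(
    'actions.capability.SCREEN_OUTPUT'
  );
  if (screenAvailable) {
    conv.ask(new NewSurface({
      capabilities: 'actions.capability.SCREEN_OUTPUT',
      context: 'To show you the details',
      notification: 'Details about your request',
    }));
  } else {
    conv.ask('Here are the details, read aloud instead.');
  }
});
```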
In general, it is good design to expect the user to interact only with their voice and to require visual information only minimally. Visual information should supplement and enhance the voice as much as possible.
I’ve already built an Alexa skill, and now I want to make that available on Google Home. Do I have to start from scratch or can I reuse its code for Actions on Google?
Google Assistant works similarly to Amazon Alexa, although there are a few differences.
For example, you don't create your language model inside the "Actions on Google" console. Most Google Action developers use Dialogflow (formerly API.AI), which is owned by Google and offers deep integration. Dialogflow used to offer an import feature for Alexa interaction models, but it no longer works. Instead, you can take a look at this tutorial: Turn an Alexa Interaction Model into a Dialogflow Agent.
Although most of the work in developing voice apps is parsing JSON requests and returning JSON responses, the Actions on Google SDK works differently from the Alexa SDK for Node.js.
To help people build cross-platform voice apps with only one code base, we developed Jovo, an open-source framework whose API is a little closer to the Alexa SDK than to the Google Assistant one. So if you're considering porting your code over, take a look; I'm happy to help! You can find the repository here: https://github.com/jovotech/jovo-framework-nodejs
It is possible to manually convert your Alexa skill to work as an Assistant Action. Both a skill and an action have similar life cycles that involve accepting incoming HTTP requests and then responding with JSON payloads. The skill’s utterances and intents can be converted to an Action Package if you use the Actions SDK or can be configured in the API.ai web GUI. The skill’s handler function can be modified to use the Actions incoming JSON request format and create the expected Actions JSON response format. You should be able to reuse most of your skill’s logic.
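As a hedged sketch of the kind of mapping involved, assuming the legacy API.ai v1 webhook response shape; the conversion helper itself is hypothetical.

```typescript
// Take the plain-text speech an Alexa handler would have produced and wrap
// it in the (legacy) API.ai v1 webhook response shape instead.
interface AlexaResponse {
  response: {
    outputSpeech: { type: 'PlainText'; text: string };
    shouldEndSession: boolean;
  };
}

function toApiAiResponse(alexa: AlexaResponse) {
  const text = alexa.response.outputSpeech.text;
  return {
    speech: text,       // spoken response
    displayText: text,  // text shown on screens
  };
}
```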
This can be done, but it will require some work; you will not have to rewrite all of your code, though.
Check out this video on developing a Google Home Action using API.AI (which is the recommended approach).
Once you have done the basics and started understanding how Google Home Actions differ from Amazon Alexa Skills, you can transfer your logic over fairly directly. The concept of intents is very similar, but each platform has different intricacies that you must learn.
When an intent is executed, your app logic will be similar in most cases; it is mainly the setup, deployment, and running that differ.