I am new to Google Home. I have experience with Amazon Alexa custom skill development: I deployed my codebase to a Lambda function and tested my custom skill on an actual Alexa device registered with my email ID.
Now I need to develop a similar skill for the Google Home device. So far, I haven't found any good tutorials.
Is it possible to create and test a Google Home app the way you can with an Alexa skill?
The steps you go through to develop a Google Home action or app are very similar to those for creating an Alexa skill. There are a couple of differences, but logically they are the same.
If you use an NLP system such as Dialogflow (which is strongly suggested), you build the sample phrases the system responds to and the Intents they correspond to, and you specify your webhook as part of building these phrases. If you don't wish to use an NLP system, you can specify the initial Intent phrases using the Actions SDK, with the configuration given in a .json file. Other tasks you'd do in the Alexa console have close counterparts in the Actions console.
You can deploy your Action on any public server that accepts HTTPS connections. This can be AWS Lambda with an AWS API Gateway trigger, a Firebase Function, or a web server you control more directly that has a valid SSL certificate. The webhook receives a JSON body and needs to send back a valid JSON response. Google provides Node.js libraries to help with this.
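As a minimal sketch of such a webhook (assuming Dialogflow fulfillment deployed as a Firebase Function with the actions-on-google Node.js library; the "favorite color" intent and its color parameter are hypothetical placeholders for whatever you define in your own agent):

```javascript
// Minimal Dialogflow fulfillment webhook deployed as a Firebase Function.
// Intent names and the 'color' parameter are placeholders for whatever
// you configure in your own Dialogflow agent.
const { dialogflow } = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow();

// Greets the user when the conversation starts.
app.intent('Default Welcome Intent', (conv) => {
  conv.ask('Welcome! What is your favorite color?');
});

// Reads the 'color' parameter extracted by Dialogflow and ends the dialog.
app.intent('favorite color', (conv, { color }) => {
  conv.close(`Your favorite color is ${color}. Goodbye!`);
});

// Expose the handler over HTTPS; this URL is what you paste into the
// Fulfillment section of the Dialogflow console.
exports.fulfillment = functions.https.onRequest(app);
```

The handler is a plain Express-style request/response function, so it can also be wrapped for other hosts (for example behind AWS API Gateway) as long as the endpoint speaks HTTPS.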
Google has a relatively full-featured simulator you can use to test your Action. Once it is available in the simulator, it is also immediately available on every device attached to that account. You can add other accounts to the project as well and, once they have activated it through the simulator, it is available on all of their devices too.
A full set of documentation is available at https://developers.google.com/actions/. It includes links to sample code, and you can find more step-by-step codelabs at https://codelabs.developers.google.com/?cat=Assistant
If you're familiar with how to develop skills for Alexa, you might want to check out the jovo-framework. It makes it pretty easy to create skills that work on both Amazon Alexa and Google Home.
Here is a good starter template and walk-through that will get you going. https://github.com/rmtuckerphx/ask-cli-jovo-starter
I am currently developing an updated version of a voice app that was previously built with the Google Actions console as a conversational app. However, when I started to create a new project for this updated app, a banner said that conversational apps will be sunset on June 13th.
Reading through the documentation, it is not clear to me how to develop now that this option will not be available in the future. Among the alternatives, App Actions and Dialogflow CX might be the route for development. However, the app I am building requires integration with smart devices such as the Google Nest Hub and Google Nest Mini for interaction.
It seems that App Actions might not be the solution, because they add voice capabilities to an existing Android app, and I am not sure that will work with smart devices directly.
On the other hand, Dialogflow CX looks to be focused on chatbots, i.e. text-based interaction. Again, I am not sure whether Dialogflow CX can provide an app that uses voice interaction on smart devices.
In addition to these options, I also read about Dialogflow ES, Cloud-to-Cloud (smart home) integration for Google Home, Content Actions, and Media Actions. None of these solutions looks like a replacement for Google Actions. For example, I think smart home is not the option, because I am not looking to interact with home devices other than the Google Nest Hub.
I hope somebody can help me understand the development path, or lead me to information that I might be missing. Thank you.
It sounds like you are looking to create a new version of a conversational app that was previously developed using the Google Actions console, but are now facing the sunset of the conversational apps feature on June 13th.
One option you may want to consider is Dialogflow CX, a platform for building and managing natural-language conversational experiences. Dialogflow CX is a newer edition of Dialogflow alongside Dialogflow ES, designed for more complex and large-scale conversational apps.
It allows you to create a conversation flow using a visual editor, and it supports integration with a number of channels, such as web chat and telephony platforms.
Regarding the integration with smart devices like Google Nest Hub and Google Nest Mini, you can use the Google Assistant Actions API to build custom actions that can be invoked by users through the Google Assistant on these devices. This API allows you to define a conversation flow and handle user inputs, and it can be integrated with Dialogflow CX to handle natural language understanding and generation.
App Actions, on the other hand, are a way to surface your app's functionality through the Google Assistant on Android devices, and they are not a replacement for Google Actions.
Cloud-to-Cloud for smart home, Content Actions, and Media Actions are more specific solutions for different use cases, for example controlling smart home devices or surfacing media content.
In summary, Dialogflow CX along with the Google Assistant Actions API seems to be the most relevant solution for your use case. It allows you to build natural language conversational experiences that can be integrated with smart devices like Google Nest Hub and Google Nest Mini.
It's worth checking the official documentation and tutorials for more information about Dialogflow CX, the Google Assistant Actions API, and how to integrate them. You may also want to reach out to the Google support team for guidance specific to your case.
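For orientation, here is a minimal sketch of what Dialogflow CX fulfillment can look like as a Node.js/Express webhook. The route path and the greet-user fulfillment tag are hypothetical; the request and response field names follow the CX webhook format:

```javascript
// Minimal Dialogflow CX webhook: CX calls this endpoint when a page or
// route with webhook fulfillment enabled is reached, passing a tag.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/cx-webhook', (req, res) => {
  // The tag configured on the fulfillment in the CX console.
  const tag = req.body.fulfillmentInfo && req.body.fulfillmentInfo.tag;

  const reply = tag === 'greet-user'
    ? 'Hello! How can I help you today?'
    : 'Sorry, I did not understand that.';

  // CX expects the reply inside fulfillmentResponse.messages.
  res.json({
    fulfillmentResponse: {
      messages: [{ text: { text: [reply] } }],
    },
  });
});

app.listen(8080);
```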
We've been having trouble keeping Wonder available on Google Assistant and Home. We keep resubmitting and then find out there's some bug that gets us taken down.
One thing that has made it hard is that we cannot test Wonder in the simulator in the Actions Console. Here is what a session looks like (screenshot: Wonder test session).
When I expand the last message, I see the following error: "Cannot use standard Google Assistant features in the Simulator".
Is there any way you could help us get this fixed?
The Google Assistant available in the simulator has a subset of the features of the Assistant available on your phone or Google Home device. If you want to use the entire set of voice interactions, please use the Assistant on your physical devices. If you want to test your smart home action, you can use the Test Suite.
Also note that Google Home projects do not support Dialogflow anymore. I see the dialogflow-es-fulfillment tag on this question; if your project is set up as a Dialogflow project, you might need to set up a new smart home project in the Actions on Google console.
I'm a developer trying to learn how to integrate apps with the Google Assistant.
I noticed that as developers we can use App Actions (actions.xml or shortcuts.xml) to define how we want the Google Assistant to communicate with the app. In addition, there are conversational actions that can do a similar job.
I wonder which one is preferred by Google and what the differences between them are. Are the apps developed by Google using conversational actions or App Actions? And finally, how can I tell whether an app is using either of them, or both?
Broadly speaking, app actions are a way to launch an Android app (possibly into a specific Android Intent and with specific information) from the Google Assistant running on Android, while conversational actions are a way to interact with a webhook-based app through the Google Assistant, typically over multiple turns.
While the two are similar, in that they both work through the Assistant, they are rather different.
App Actions
Only work on Android devices, not everywhere that the Assistant runs.
Are generally used to launch an already installed app, possibly providing specific information to a deep link in that app.
Can also be used (in some cases, with features that are coming soon) to provide widgets from the app into the Assistant.
Once the app is launched, you are (usually) no longer interacting with the Assistant - you're in the app, with the UI from that app, which is generally not voice-driven.
Conversational Actions
Work across all platforms where the Assistant runs - from smart speakers to Android devices.
Do not require an app to be "installed" - you can invoke one by name just as you would open a web page.
Primarily use voice interaction for all of the work - there does not need to be a visual component.
Code runs "in the cloud", not on the device, which acts more like a web browser.
Google doesn't "prefer" either, and they develop both types. (For example, anything that works with a smart speaker is a conversational action, while apps like Google Maps include app action support). It depends on your use case and what you already have available:
If you have an existing Android app, then app actions may be a reasonable approach.
If you are starting from scratch, then you may want to look at conversational actions.
I am currently working on an IoT device to control lights. The device is implemented using FreeRTOS.
I am a little confused about how to provide Google Home integration for this device. Could someone shed some light on this?
You can use the Smart Home API. The Google Assistant works with a webhook, sending SYNC, QUERY, and EXECUTE intents to that URL. You will then need to relay the resulting commands to your device.
Setup happens through the Google Assistant app, where users must link with your OAuth server.
Here is a sample project for Smart Home, using virtual devices.
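As a rough sketch of what that webhook can look like (assuming the actions-on-google Node.js library deployed as a Firebase Function; the device ID, name, and single OnOff trait are hypothetical stand-ins for your light):

```javascript
// Minimal smart home fulfillment: the Assistant posts SYNC, QUERY, and
// EXECUTE intents here; each handler returns the matching JSON payload.
const { smarthome } = require('actions-on-google');
const functions = require('firebase-functions');

const app = smarthome();

// SYNC: report which devices this user has and what they can do.
app.onSync((body) => ({
  requestId: body.requestId,
  payload: {
    agentUserId: 'user-123', // placeholder for the OAuth-linked user
    devices: [{
      id: 'light-1',
      type: 'action.devices.types.LIGHT',
      traits: ['action.devices.traits.OnOff'],
      name: { name: 'Desk lamp' },
      willReportState: false,
    }],
  },
}));

// QUERY: report the current state of the requested devices.
app.onQuery((body) => ({
  requestId: body.requestId,
  payload: { devices: { 'light-1': { online: true, on: true } } },
}));

// EXECUTE: apply the command, then report the resulting state
// (simplified here to a fixed "on" state).
app.onExecute((body) => ({
  requestId: body.requestId,
  payload: {
    commands: [{
      ids: ['light-1'],
      status: 'SUCCESS',
      states: { online: true, on: true },
    }],
  },
}));

exports.smarthome = functions.https.onRequest(app);
```

In a real integration, each handler would read from your own device registry and forward the EXECUTE command to the FreeRTOS device, for example over MQTT or whatever transport your firmware supports.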
After creating a private API.AI agent, I can test it using the simulator. But is it possible to test it on Android phones?
Not yet. But you can use the "Agent Page" under Integration in API.AI to give you a web view you can use from the phone. Easy to share too!
As described at https://docs.api.ai/docs/sdks, API.AI provides SDKs for many platforms, and using them we can access our private agent.
For example, following https://github.com/api-ai/api-ai-android-sdk#getting_started, I created an Android app by providing the access_token of my agent, and now I can send requests from my Android phone; similarly, I used Node.js and can make requests to my API.AI agent from a Linux PC.
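For reference, a Node.js request against the legacy API.AI V1 REST API looked roughly like the sketch below; the client access token, session ID, and query text are placeholders, and this V1 endpoint has long since been retired in favor of the Dialogflow V2 API:

```javascript
// Query an API.AI agent over the legacy V1 REST API using only Node's
// built-in https module. Token, session ID, and query are placeholders.
const https = require('https');

const CLIENT_ACCESS_TOKEN = 'your-client-access-token';
const body = JSON.stringify({
  query: 'turn on the lights',
  lang: 'en',
  sessionId: 'test-session-1',
});

const req = https.request({
  hostname: 'api.api.ai',
  path: '/v1/query?v=20150910',
  method: 'POST',
  headers: {
    Authorization: `Bearer ${CLIENT_ACCESS_TOKEN}`,
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(body),
  },
}, (res) => {
  let data = '';
  res.on('data', (chunk) => (data += chunk));
  res.on('end', () => {
    // result.fulfillment.speech carries the agent's text reply.
    const { result } = JSON.parse(data);
    console.log(result.fulfillment.speech);
  });
});

req.on('error', console.error);
req.write(body);
req.end();
```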
You can do this now that Google has pushed the Assistant to the Google Pixel, Nexus 6P, and Nexus 5X. No other phone can currently use the Assistant.
Edit 4/17/17: After successfully deploying my own agent, I expected to be able to use it on my phone, since I had been able to test and preview it there. However, it was not working, so I contacted their support and got the following answer:
Sorry for the inconvenience, but Actions on Google are not currently supported on any device other than Google Home. The functionality you are referring to was the result of a bug that allowed access across multiple devices to a subset of users; that bug was patched, and that is why you lost the functionality. Google is planning to make Actions on Google available across multiple devices, but as per policy, we don't comment on timelines for releases in support.