Main differences between a Smart Home Action and a Dialogflow Action

I'm just trying to figure out the main differences between these two types of Actions. An Action that uses Dialogflow seems to be more conversational and more customizable. How does a Smart Home Action handle the conversation? Is it a standard conversation based on the target device type?
When should I create a Smart Home Action and when should I use Dialogflow?

To answer this, you need to understand the difference between a smart home action and a conversational action.
Conversational Actions
This is where the user initiates a conversation with "talk to X". Your action gets a WELCOME event. Then the user says more things and your action needs to process the user query and provide a text-based response.
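For illustration, here is a minimal sketch of conversational fulfillment using the actions-on-google v2 Node.js client library. The intent names and the 'item' parameter are invented for the example; Dialogflow does the parsing and hands your webhook the matched intent and parameters.

```js
const {dialogflow} = require('actions-on-google');

const app = dialogflow();

// Fires after "talk to X": your action gets the welcome event and replies with text.
app.intent('Default Welcome Intent', (conv) => {
  conv.ask('Hi! What would you like to order?');
});

// For later turns, Dialogflow matches an intent and extracts parameters,
// and your code builds the text response.
app.intent('Order Food', (conv, params) => {
  conv.ask(`Ok, ordering ${params.item}. Anything else?`);
});

// app is an HTTP handler; mount it on your web framework or cloud function.
```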
Smart Home Actions
With the smart home integration, the user just gives a command directly: "Turn on my lights," for example, without prefacing it with a "talk to" statement. Another big difference is that Google processes the user's query directly. Your smart home action does not get the user's text. Instead, there's a JSON request that specifies the user's intent.
The text that comes back is generated by Google as well, with parameters from your integration. Saying "turn on my lights" will result in "Ok, turning on lamp" or "Sorry, your lamp is offline" depending on what response your fulfillment sends.
There are a number of device types supported out of the box, as well as many traits. Traits specify the kinds of things a device can do, such as turning on/off or changing color. Traits are not explicitly tied to a device type; e.g., you could change the color of a vacuum.
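To make that concrete, here is a rough sketch of smart home fulfillment using the smarthome() support in the actions-on-google Node.js library. The device id, command, and states below are invented for illustration; the devices and traits your integration really exposes come from your SYNC response.

```js
const {smarthome} = require('actions-on-google');

const app = smarthome();

// "Turn on my lights" arrives as structured JSON: an EXECUTE intent carrying
// a command such as action.devices.commands.OnOff with params {on: true}.
// Your fulfillment never sees the user's words.
app.onExecute((body) => {
  return {
    requestId: body.requestId,
    payload: {
      commands: [{
        ids: ['lamp-1'],                   // hypothetical device id declared in your SYNC response
        status: 'SUCCESS',                 // 'OFFLINE' here is what yields "Sorry, your lamp is offline"
        states: {on: true, online: true},  // Google turns these states into the spoken confirmation
      }],
    },
  };
});

// app is an HTTP handler; mount it on your web framework or cloud function.
```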
When to use each
If you're building or integrating a device that is meant to work with the Google Assistant, I'd suggest you look at smart home first. It gives users a better experience, since they can send queries directly, and it makes fulfillment easier to build because the requests are already structured.
However, if smart home will not work for your application, look at Dialogflow, which gives you much greater flexibility in what the user can say.

Related

Skill closes and Google opens a recipe

We are developing interactive audiobooks for voice and have problems with some of our continuations with the Google Assistant.
Example: In our story "Das tapfere Schneiderlein", the user has to decide whether they want "Pflaumenmus" (plum butter) or "Apfelmus" (apple puree).
In the test console, everything works fine; both answers lead to the correct audio.
BUT with the Google Assistant on a mobile device, only "Pflaumenmus" works. If I answer "Apfelmus", the action leaves the conversation and opens apple puree recipes in Google Search. (See the example image below; it's German, but still understandable, I guess.)
As we can never know what our customers might answer, how can we prevent this from happening? (We are using Actions Builder.)
[Example screenshot: the Assistant leaves the action and shows apple puree recipe search results]
This might be a result of the change to Google Assistant Actions fallback intent behavior that Google announced on October 15, 2020.
Follow Google's guidance to make it work as you expect:
In order to provide a better experience, we now allow users to ask for some Assistant features, such as the weather or time, from within your Action. To perform this function, the Assistant detects if your Action matched a user's query with a fallback intent or NO_MATCH intent. If that is the case, and an appropriate response is available, Assistant responds to the user's request. If no response is available, or Assistant doesn't understand the query, the conversation continues within your Action.
As of October 15, 2020, this new behavior applies only if the fallback does not use a webhook. Starting January 15th 2021, we'll start enabling this feature for any Dialogflow fallback intent or Actions Builder NO_MATCH intent whether or not they use a webhook.
This change should not impact the operation of your Actions, unless you are using fallbacks as a way to collect input from your users. Going forward, you should only use fallback intents or NO_MATCH intents as a way to reprompt the user in the context of your Action. If you want your Actions to attempt to capture data from a wider range of user responses, create an intent that uses a Free form text type if you use Actions Builder. If you use Dialogflow, add an intent with a @sys.any type as the training phrase.
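For the Actions Builder case, the free-form text then arrives at your webhook as an ordinary intent parameter rather than via NO_MATCH. Here is a minimal sketch with the @assistant/conversation library; the handler name, the user_text parameter, and the Firebase export are assumptions for illustration:

```js
const {conversation} = require('@assistant/conversation');
const functions = require('firebase-functions');

const app = conversation();

// Handler wired (in Actions Builder) to an intent whose training data uses a
// "Free form text" parameter named user_text.
app.handle('capture_free_text', (conv) => {
  const text = conv.intent.params.user_text
    ? conv.intent.params.user_text.resolved
    : '';
  conv.add(`You said: ${text}`);
});

// Deployed as the webhook, e.g. as a Firebase Cloud Function.
exports.ActionsOnGoogleFulfillment = functions.https.onRequest(app);
```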

How do I create a Google Action intent that redirects Google Home to a radio station?

I work for a TV/radio broadcasting company. We stream live content to various devices through a web-based API, and we also stream through internet radio (such as iHeartRadio, TuneIn, etc.). The API can also return things like show titles and descriptions.
I've been tasked with creating a Google Action that can be used to retrieve information from the API such as what's playing, what's coming up next, what shows are available, etc. It would be fantastic if Google Actions supported live-streamed content, but I believe they do not.
Since we DO stream through internet radio, I would like to create an intent that allows the user to be redirected from my action to the internet radio stream for our station. How would I go about doing that? I could simply tell the user to start a new conversation (e.g., "Say, OK Google, play 'My-Awesome-Radio'"), but it would be more user-friendly not to have to start a new conversation.

How can the display name be used to invoke an Action?

When creating my Action, I'm asked to provide a display name for invoking the Action on the Assistant:
Display name is publicly displayed in the Actions directory. Users say or type the display name to begin interacting with your Actions. For example, if the display name is Dr. Music, users can say "Hey Google, Talk to Dr. Music", or type "Talk to Dr. Music" to invoke the Actions.
What I'm a little confused about is: in order for the user to invoke my Action, do they have to say "Talk to xxx"? Or can they just use the display name directly? I see some Actions use a name or command and others use "Talk to".
For example, if my display name is "food store" as a registered app or company, can I have the user say "order 20 carrots from food store", or does it have to be "Talk to food store"?
All Actions can be invoked using "Talk to display name". This is like going to the URL of a website directly. This is known as explicit invocation.
You can also make your Action explicitly invokable with additional parameters, so you can say something like "Ask display name to order 20 carrots". These are invocation phrases. This is like going to the URL of a website, but being able to directly type in a path on the site.
In some cases, if a user asks the Assistant a question, or asks it to do something, the Assistant may identify an Action that can do that. This is something like doing a Google search on a question, and Google either providing a link to a website or providing a portion of the site in the sidebar of the answers. This is known as implicit invocation, and while it may work, it is not guaranteed that it works for any specific invocation. In general, your action can suggest phrases that will work, and the more specific those phrases are, the better chance they have of being accepted.
You may also register your Action to handle specific built-in intents. These are phrases that have already been crafted to meet user requests, and that the Assistant will know to pass off to you, since you can handle the request. You can think of them as being similar to smart home requests for other activities. There are a limited number of these intents, but if you have an action that fits the category, it makes sense to support it.

Is it possible to send rich responses to the Google Home app?

I developed an Actions on Google app which sends a rich response. Everything works fine in the Actions on Google simulator. Now I want to test it on my Google Home Mini, but my rich responses are not spoken by the Mini. I would like to ask if it is possible to send my rich response to the Google Home app. Could the Home Mini say something like "Ok, I found these hotels, look at the Home app", with the rich responses shown there?
You can't send users to the Home app, but you can direct them to the Assistant available through their phone. The process is roughly as follows (there's a code sketch after the steps):
At some point in the conversation (decide what is best for you, but a good point is when you have results that require a display, or when the user says something like "Show me" or "Send this to my phone"), determine whether they are on a device with a screen. You do this by using the app.getSurfaceCapabilities() method or by looking at the JSON in the originalRequest.data.surface.capabilities property. If they're using a screen, you're all set. But if not...
Make sure they have a screen they can use. You'll do this by checking out the results from app.getAvailableSurfaces() or looking at the JSON in the (not fully documented) originalRequest.data.availableSurfaces array. If they don't have a screen, you'll need to figure out your best course of action. But if they do have a screen surface (such as their phone, currently) available...
You can request to transfer them to the new surface using the app.askForNewSurface() method, passing a message explaining why you want to do the switch, a message that will appear as a notification on the device, and what surface you need (the screen).
If the user approves, they'll get the notification on their mobile device (using that device's normal notification system). When they select the notification, the Assistant will open up and will send your Action an Event called actions_intent_NEW_SURFACE. You'll need to create an Intent that handles this Event and forwards it to your webhook.
Your webhook should confirm that it is on a useful surface, and then proceed with the conversation and send the results.
You can see more about handling different surfaces at https://developers.google.com/actions/assistant/surface-capabilities
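Putting the steps above together, a rough sketch of the flow with the v1 actions-on-google client library methods mentioned in the answer might look like the following. The Dialogflow action names, prompts, and hotel wording are invented; treat it as an outline rather than a drop-in implementation.

```js
const {ApiAiApp} = require('actions-on-google');

// Steps 1-3: on a voice-only device, offer to move the conversation to a screen.
function showHotels(app) {
  if (app.hasSurfaceCapability(app.SurfaceCapabilities.SCREEN_OUTPUT)) {
    app.ask('Here are the hotels I found.');  // already on a screen, just answer
    return;
  }
  if (app.hasAvailableSurfaceCapabilities(app.SurfaceCapabilities.SCREEN_OUTPUT)) {
    app.askForNewSurface(
      'I found some hotels. To show them, I need to send them to your phone.',  // spoken context
      'Hotel results',                                                           // notification title
      [app.SurfaceCapabilities.SCREEN_OUTPUT]);
  } else {
    app.ask('I found some hotels, but there is no screen available to show them on.');
  }
}

// Steps 4-5: intent mapped to the actions_intent_NEW_SURFACE event.
function newSurface(app) {
  if (app.isNewSurfaceRequestGranted()) {
    app.ask('Great, here are the hotels I found.');  // safe to send rich responses now
  } else {
    app.ask('Ok, I will keep the details here.');
  }
}

exports.fulfillment = (request, response) => {
  const app = new ApiAiApp({request, response});
  const actionMap = new Map();
  actionMap.set('show.hotels', showHotels);  // hypothetical Dialogflow action names
  actionMap.set('new.surface', newSurface);
  app.handleRequest(actionMap);
};
```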
Rich responses can appear in screen-only or audio-and-screen experiences.
They can contain the following components:
One or two simple responses (chat bubbles)
An optional basic card
Optional suggestion chips
An optional link-out chip
An option interface (list or carousel)
So you need to make sure that the text response contains all the details for voice-only cases (e.g. Google Home/Mini/Max).
However, if your users are using the Assistant from a device with a screen, you can offer them a better experience with rich responses (e.g. suggestion chips, links, etc.).
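As a rough sketch (again using the v1 actions-on-google client library; the hotel data and the Dialogflow action name are invented), you can branch on the surface and only attach the rich response when a screen is present:

```js
const {ApiAiApp} = require('actions-on-google');

exports.fulfillment = (request, response) => {
  const app = new ApiAiApp({request, response});

  function hotels(app) {
    if (!app.hasSurfaceCapability(app.SurfaceCapabilities.SCREEN_OUTPUT)) {
      // Voice-only surface (e.g. a Home Mini): put every detail in the spoken text.
      app.ask('I found two hotels: the Grand and the Plaza. Which one would you like to hear about?');
      return;
    }
    // Screen surface (e.g. the Assistant on a phone): add a card and suggestion chips.
    app.ask(app.buildRichResponse()
      .addSimpleResponse('I found these hotels.')
      .addBasicCard(app.buildBasicCard('A lovely hotel in the city centre.')
        .setTitle('The Grand')
        .setImage('https://example.com/grand.jpg', 'The Grand hotel'))
      .addSuggestions(['The Grand', 'The Plaza']));
  }

  const actionMap = new Map();
  actionMap.set('find.hotels', hotels);  // hypothetical Dialogflow action name
  app.handleRequest(actionMap);
};
```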

Reverse geolocation with an Open Graph action

I am developing an application which associates a lat/long with an action in Open Graph. Presently, the API only allows for a place, so I have created my own location object which can be a property of the action.
The problem I perceive here is that this will not form part of the action in a very Facebook-native way.
I do not want to use Facebook Places, nor do I want to have the user create a place when performing an action.
I don't need a lot of granularity; I just need "near $locality", such as a village name or a national park if no residential addresses exist. This seems to be what happens with Facebook messages.
Is there a way of getting low-fidelity data from Facebook or another (free) source which allows locality information to be attached to an action?
Cheers