I'm building an action for Actions on Google (AoG) with Dialogflow, using Node.js for the fulfillment webhook.
I use suggestions and carousels in my app and want to respond when a user clicks one. My current implementation uses Dialogflow's fallback intent, and I then check the payload manually for rawInputs.inputType = "TOUCH" for suggestions and inputs.intent = actions.intent.OPTION for carousels. I'm looking for a more elegant way to do this with Dialogflow and the webhook.
Does anyone know if there's a way to either:
Detect a carousel selection event in Dialogflow that can later be used as an intent, or
Use a built-in method in the Node.js webhook to catch this event.
Suggestion chips cannot be detected through Dialogflow directly - they are treated just as if the user had said or typed the text of the chip that was selected.
However, you can detect that a carousel option has been selected. You can't determine which carousel item was selected in Dialogflow itself; you need to do that in your fulfillment webhook.
You can create an Intent which has no training phrases set, but which will trigger on an Event named actions_intent_OPTION (based on the native Intent name actions.intent.OPTION, with the periods replaced by underscores).
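With the actions-on-google Node.js client library (v2), the handler for that Intent receives the selected option's key as its third argument. A minimal sketch, assuming the Intent is named "carousel.option" (the name is arbitrary):

    const { dialogflow } = require('actions-on-google');
    const app = dialogflow();

    // Handler for the Intent triggered by the actions_intent_OPTION event.
    // The third argument is the key of the carousel item the user selected.
    app.intent('carousel.option', (conv, params, option) => {
      conv.ask(`You selected the item with key "${option}".`);
    });

You then export app as your webhook handler (e.g., via a Cloud Function) as usual.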
I have been trying to create a chat bot that helps users interact with my website. I want Watson to be able to interact with my web server to read, write, or modify my database. In certain cases, Watson will require some very specific user inputs/Q&A which are too unordered to belong to any specific entity, for which I'm thinking an Action skill can help. After creating an Action skill that collects the desired user inputs, I'm now unable to find the option that lets you call an action skill from within a dialog node, or to call out to webhooks from an action skill.
Although some articles in the documentation say it's possible to do so, I can't see where. There used to be an option (Call an action skill) in the customize dialog box (here is a screenshot of the old customize dialog box), but this option is not there anymore. Is there any other way to achieve this?
IBM no longer provides the callout to an actions skill from a dialog skill. They posted an alert for existing users:
Note: Once disabled, call out to an action skill will no longer be available.
We are developing interactive audiobooks for voice and have problems with some of our continuations in Google Assistant.
Example: In our story "Das tapfere Schneiderlein" (The Brave Little Tailor), the user has to decide whether he wants "Pflaumenmus" (plum butter) or "Apfelmus" (apple puree).
In the test console everything works fine; both answers lead to the correct audio.
BUT with Google Assistant on a mobile device, only "Pflaumenmus" works. If I answer "Apfelmus", the action leaves the conversation and opens apple puree recipes in Google Search. (See the example image below; it's German, but still understandable, I guess.)
As we can never know what our customers might answer, how can we prevent this from happening? (We are using Actions Builder.)
[Example image: the Assistant leaving the action and showing Google Search results for "Apfelmus" recipes]
This might be a result of the update to the Google Assistant Actions fallback intent behavior that was announced on October 15, 2020.
Follow the message from Google to make it work as you expect:
In order to provide a better experience, we now allow users to ask for some Assistant features, such as the weather or time, from within your Action. To perform this function, the Assistant detects if your Action matched a user's query with a fallback intent or NO_MATCH intent. If that is the case, and an appropriate response is available, Assistant responds to the user's request. If no response is available, or Assistant doesn't understand the query, the conversation continues within your Action.
As of October 15, 2020, this new behavior applies only if the fallback does not use a webhook. Starting January 15, 2021, we'll start enabling this feature for any Dialogflow fallback intent or Actions Builder NO_MATCH intent, whether or not they use a webhook.
This change should not impact the operation of your Actions, unless you are using fallbacks as a way to collect input from your users. Going forward, you should only use fallback intents or NO_MATCH intents as a way to reprompt the user in the context of your Action. If you want your Actions to attempt to capture data from a wider range of user responses, create an intent that uses a Free form text type if you use Actions Builder. If you use Dialogflow, add an intent with a @sys.any parameter as the training phrase.
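A minimal sketch of the Dialogflow route with the actions-on-google library, assuming a hypothetical intent "capture.answer" whose single training phrase is a @sys.any parameter named "answer":

    const { dialogflow } = require('actions-on-google');
    const app = dialogflow();

    // "answer" is the @sys.any parameter that captures the user's full utterance,
    // so free-form replies like "Apfelmus" match this intent instead of a fallback.
    app.intent('capture.answer', (conv, { answer }) => {
      conv.ask(`Got it, you said: ${answer}`);
    });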
I work for a TV/radio broadcasting company; we stream live content to various devices through a web-based API, and we also stream through internet radio (such as iHeartRadio, TuneIn, etc.). The API can also return things like show titles and descriptions.
I've been tasked with creating a Google Action that can be used to retrieve information from the API such as what's playing, what's coming up next, what shows are available, etc. It would be fantastic if Google Actions supported live-streamed content, but I believe they do not.
Since we DO stream through internet radio, I would like to create an intent that allows the user to be redirected from my action to the internet radio stream for our station. How would I go about doing that? I could simply tell the user to start a new conversation (e.g., "Say, OK Google, play 'My-Awesome-Radio'"), but it would be more user-friendly not to have to start a new conversation.
I have a bit of a silly question about the Slack integration of Dialogflow.
When I use a card in the response in Slack, the buttons work perfectly. But they don't type the title or the postback of the button into the chat. Microsoft's Bot Framework works like this, and I think it's easier for the user to see what he answered if he scrolls up the conversation history.
My question is: is this just how it is, or can I change this behavior so that clicking a card button types the button's text into the chat?
For the moment, I'm only using code to execute functions when needed.
This is the "card" I'm talking about:
You can't directly change the behavior of the card, as this is internal to Slack and/or Dialogflow. There is, however, a workaround: if you are using a webhook for fulfillment, you can send multiple response messages back and thus simply include the button text before the actual card. The button text will then only show up after the request is fulfilled (not immediately after the user tapped it), but that seems about the best you can get.
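A minimal sketch of that workaround with the dialogflow-fulfillment library; the intent name "button.clicked" is hypothetical, and agent.query holds the text Dialogflow matched for the request:

    const { WebhookClient } = require('dialogflow-fulfillment');

    exports.fulfillment = (request, response) => {
      const agent = new WebhookClient({ request, response });

      function buttonClicked(agent) {
        // Echo the matched button text first so it appears in the Slack history...
        agent.add(`You chose: ${agent.query}`);
        // ...then send the actual follow-up response (or another card).
        agent.add('Here is the next step.');
      }

      const intentMap = new Map();
      intentMap.set('button.clicked', buttonClicked);
      agent.handleRequest(intentMap);
    };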
I'm just trying to figure out the main differences between these two types of actions. I mean, an action that uses Dialogflow seems to be more conversational and more customizable. How does a Smart Home action handle the conversation? Is that a standard conversation based on the target device type?
When should I create a Smart Home action, and when should I use Dialogflow?
To understand the difference, you need to look at what conversational actions and smart home actions each do.
Conversational Actions
This is where the user initiates a conversation with "talk to X". Your action gets a WELCOME event. Then the user says more things, and your action needs to process each user query and provide a text-based response.
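A minimal sketch with the actions-on-google library, assuming the default Dialogflow welcome intent name:

    const { dialogflow } = require('actions-on-google');
    const app = dialogflow();

    // Fires on the WELCOME event when the user says "talk to X".
    app.intent('Default Welcome Intent', (conv) => {
      conv.ask('Welcome! What would you like to do?');
    });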
Smart Home Actions
With the smart home integration, the user just gives a command directly, "Turn on my lights," for example, without prefacing it with a "talk to" statement. Another big difference is that Google processes the user's query directly. Your smart home action does not get the user's text. Instead, it receives a JSON request that specifies the user's intent.
The text that comes back is generated by Google as well, with parameters from your integration. Saying "turn on my lights" will result in "Ok, turning on lamp" or "Sorry, your lamp is offline," depending on what response your fulfillment sends.
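For illustration, a minimal sketch of an EXECUTE handler using the actions-on-google smarthome library; the device id and states here are hypothetical:

    const { smarthome } = require('actions-on-google');
    const app = smarthome();

    // The JSON body carries the structured intent, e.g. a command like
    // { command: 'action.devices.commands.OnOff', params: { on: true } }.
    app.onExecute((body) => {
      return {
        requestId: body.requestId,
        payload: {
          commands: [{
            ids: ['lamp-1'],  // hypothetical device id
            status: 'SUCCESS',
            states: { on: true, online: true },
          }],
        },
      };
    });

Note that your fulfillment never sees or produces user-facing text; it only reports device state, and Google phrases the spoken response.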
There are a number of device types supported out of the box, as well as many traits. Traits specify the kinds of things a device can do, such as turning on/off or changing color. Traits are not explicitly tied to a device type; e.g., you could change the color of a vacuum.
When to use each
If you're building or integrating a device that is meant to work with the Google Assistant, I'd suggest you look at smart home first. It will give users a better experience by letting them issue queries directly, and it will make it easier for you to build fulfillment, since the requests are already structured.
However, if that will not work for your application, look at Dialogflow, which will give you greater flexibility in what the user can say.