Using keyword ASK for Google Assistant - actions-on-google

Is it possible to use the keyword "ask" with Actions-sdk? (not dialogflow)
Currently, I can start the process with "Talk to My App", which connects to my web service. I then want to be able to say something like "ask John what room he is in".
What seems to happen is that the Assistant (on Android) drops the "ask" and simply sends "John what room he is in" to the webhook API.
I guess that Google Assistant is listening for "ask" as a trigger word, but does that mean we cannot use it? What other words might be affected? I've read lots of docs and don't see this clarified anywhere.
In case it is relevant, our service is trying to accept all spoken text for processing in our NLU, as the conversation paths are deep and complex, with many industry specific phrases and terms.
EDIT: I retested this several days later and it is now sending the ASK keyword from a phone. Confused.
Not sure what the SO practice is: delete the question or keep it?

I just used the simulator to run a test using what I think is your test case:
I said "talk to 'my invocation name goes here' in the simulator. My Action was launched and its "main" aka "welcome" aka "default" intent was dispatched.
Then is said "ask 'invocation name' what time is it". The text intent was fired and the whole query beginning with ask which IME is not special-cased was returned.
Does that help?
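For reference, a minimal fulfillment that simply echoes the raw query back makes it easy to see exactly what text reaches your webhook. This is a hedged sketch using the actions-on-google Node.js library in its v2-style Actions SDK form, which may differ from the library version in use when this was asked:

    // Echo fulfillment for the Actions SDK: logs and repeats the raw user query.
    import { actionssdk } from 'actions-on-google';

    const app = actionssdk({ debug: true });

    app.intent('actions.intent.MAIN', (conv) => {
      conv.ask('Welcome. Say something.');
    });

    app.intent('actions.intent.TEXT', (conv, input) => {
      // `input` is the raw utterance as the Assistant transcribed it,
      // so you can check whether a leading "ask" survives.
      console.log('Raw input:', input);
      conv.ask(`You said: ${input}`);
    });

    // Export `app` as your HTTPS handler (e.g. a Cloud Function or an Express route).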


TabBarView with heavy child tabs becomes slow to load on the home page and leads to a black screen

    body: TabBarView(
      children: [
        Home_G_HX(),
        Physical(),
        Text("1"),
      ],
    ),
Home_G_HX and Physical have many widgets, so the home screen becomes slow to load.
How to Create a Chatbot Using DialogFlow?
What is DialogFlow?
DialogFlow is a development platform made by Google that helps us create chatbots. It is based on NLP (Natural Language Processing), which makes our chatbots very powerful.
What is a chatbot?
A chatbot is an intelligent program that can interact with people like a human and serve them in the specific domain it was built for. The chatbot analyses the intent of the user and works out the most appropriate response.
Now that you know what DialogFlow and a chatbot are, let's see how we can create a chatbot using Dialogflow.
Note: You should have a Google account and be logged in to the Dialogflow platform before following these steps.
In this article, we will build a chatbot that can serve users who want to book a room in a hotel.
Step 1. Create an Agent
An Agent is the intelligent program inside the chatbot; it is what interacts with the users.
To create an Agent, go to the left side of the screen, click on the first button below the Dialogflow logo, and go down to the Create new agent button.
After that, a new screen loads and you are asked to specify the name of the Agent, the language it should speak, and the time zone. In my case I typed reservation-bot for the name and left the rest at the default values. Then click on the CREATE button and DialogFlow will create an agent for your chatbot.
Step 2. Create Intents
Intents are used by the chatbot to understand what the users want. It is inside the intents that we give the chatbot examples of phrases the users might say and some responses the chatbot should use to reply to them. Let's see how to do it.
Note: When we create a new agent, it comes with two default intents named Default Fallback Intent and Default Welcome Intent.
To create a new Intent, click on the Create Intent button.
After that, give your intent a name. Then go to the Training Phrases section and click on Add training phrases. This is where you provide example phrases representing the different questions users might ask the chatbot. We recommend giving many examples to make your chatbot more robust.
For this example, you can use the same phrases as me.
We have added a few phrases that users might ask our chatbot; for your own chatbot, feel free to add more phrases to improve its robustness.
In this picture, we can see that two expressions are highlighted: DialogFlow has recognized them as entities. DialogFlow distinguishes three kinds of entities: system entities, developer entities, and session entities. "this evening" and "today" are recognized as system entities; they refer to a date or time period, and this kind of entity is built into Dialogflow. Later we will create our own entities, which DialogFlow will treat as developer entities. For more information, check the documentation.
Now let's define the Responses the agent can use to reply to users. Go down to the Responses section, click on the Add response button, and add a few response statements.
You can see that these example responses contain expressions starting with the $ symbol. These expressions are variables that will hold the values users mention in their queries and that DialogFlow has recognized as a particular entity. In the picture above we have three variables: $time-period, $date-time, and $reservation-type. $time-period and $date-time are system entity variables, while $reservation-type is a developer entity variable, which means $reservation-type must be created by the developer before DialogFlow can recognize it. After adding a few responses for the agent to use, click on the Save button; we will come back to this later.
Step 3. Creating entities
Entities are essentially the keywords that help the Agent recognize what the user wants. To create them, just follow along.
Click on the Entities button.
creation of entities
Then click on the Create Entity button.
creation of an entity
Next, specify the name of the entity (use reservation-type as the name, since that is the variable you used when you defined the agent's responses). Then add an entry bed-room and a few synonyms, as below.
Make sure the Define synonyms box is checked first, then click on the Save button.
The role of synonyms is that when users talk about bed-room, bed, or room, all of these refer to bed-room.
Do the same for the entity reservation-activity and save it.
creation of the reservation-activity entity
Now we have two entities ready to be used.
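As an aside, the same entity can also be created through the Dialogflow API rather than the console. The sketch below is illustrative only: it assumes the @google-cloud/dialogflow Node.js client, the project ID is a placeholder, and the values simply mirror the ones defined above.

    // Hedged sketch: create the reservation-type entity via the API instead of the console.
    import { EntityTypesClient } from '@google-cloud/dialogflow';

    async function createReservationType(): Promise<void> {
      const projectId = 'my-gcp-project';            // placeholder GCP project ID
      const client = new EntityTypesClient();
      const parent = `projects/${projectId}/agent`;  // the agent's resource path

      await client.createEntityType({
        parent,
        entityType: {
          displayName: 'reservation-type',
          kind: 'KIND_MAP',                          // "map" kind = entries with synonyms
          entities: [
            { value: 'bed-room', synonyms: ['bed-room', 'bed', 'room'] },
          ],
        },
      });
    }

    createReservationType().catch(console.error);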
Step 4. Add our entities to the training phrase expressions
Go back to the reservation intent page and go to the Training Phrases section.
Once there, select an expression, and within that expression select the word bed-room, like this.
Then search for @reservation-type.
Click on it, and the color of bed-room will change.
Do the same thing for every bed-room in all the expressions.
For the words booking, reservation, and reserve, do the same thing, but instead of searching for @reservation-type you search for @reservation-activity.
adding developer entities to our training phrase expressions
Step 5. Defining actions and parameters
This is not required, but sometimes it is necessary to oblige the user to give the chatbot certain information.
Go down to the Actions and parameters section, still inside the reservation intent page. You should see the picture below.
actions and parameters
For our chatbot, we want users to give the reservation type and the date of the reservation, so make sure to mark those parameters as required.
actions and parameters
After that, we should specify the prompt text the Agent shows the user when they have not supplied the required parameters. Click on the Define prompts... field on the right side of this section; after defining the prompt text, close the dialog box.
For the date-time parameter:
define prompt text for the date-time parameter
For the reservation-type parameter:
After this, save the intent.
Now you can test your chatbot.
test section
You can test your chatbot here.
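If you would rather exercise the agent from code than from the console test panel, here is a hedged sketch using the @google-cloud/dialogflow Node.js client. The project ID and session ID are placeholders; the query text just reuses the phrases from this tutorial.

    // Send a test query to the agent and print the matched intent and parameters.
    import { SessionsClient } from '@google-cloud/dialogflow';

    async function detect(text: string): Promise<void> {
      const projectId = 'my-gcp-project';      // placeholder GCP project ID
      const sessionId = 'test-session-1';      // any unique string per conversation
      const client = new SessionsClient();
      const session = client.projectAgentSessionPath(projectId, sessionId);

      const [response] = await client.detectIntent({
        session,
        queryInput: { text: { text, languageCode: 'en' } },
      });

      const result = response.queryResult;
      console.log('Intent:', result?.intent?.displayName);
      console.log('Parameters:', JSON.stringify(result?.parameters));
      console.log('Reply:', result?.fulfillmentText);
    }

    detect('I want to book a bed-room for this evening').catch(console.error);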
Step 6. Integration with a web platform
integration
Click on the Integrations button.
You can integrate your chatbot with many platforms, such as Facebook Messenger, WhatsApp, Telegram, and so on.
For this article, we will pick the Web Demo.
integration demo
Click on the link, and test it again.
You can use DialogFlow from Google, LUIS/Bot Service from Azure, etc. for cloud-based solutions, or Rasa for simpler on-prem solutions. To get started, build a simple Flutter app that has a text box where you can type something. Once you type, send the content to the server via an explicit submit button, for instance, and let the Node.js server (or any server) return a random message. This is your first phase. You then need to replace the request-response scheme with a chatbot.
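A minimal sketch of that first phase is below. Express, the /message route, the port, and the payload shape are arbitrary choices for illustration; any HTTP server that returns a canned reply will do.

    // "Phase one" stub server: accepts the typed text and returns a random reply.
    import express from 'express';

    const app = express();
    app.use(express.json());

    const replies = ['Hello!', 'Tell me more.', 'Interesting...', 'Why do you say that?'];

    app.post('/message', (req, res) => {
      const userText = req.body.text ?? '';   // text typed in the Flutter app
      const reply = replies[Math.floor(Math.random() * replies.length)];
      res.json({ reply, echoed: userText });
    });

    app.listen(3000, () => console.log('Bot stub listening on port 3000'));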
An example of a bot using Azure system can be found here: https://github.com/pawanit17/EventBright
Another example that uses socket.io based communication to the clients can be found here:
https://github.com/pawanit17/LetsChat-A-Simple-WebSockets-Chatting-App
Experiment with them.

IBM Watson Assistant: How to train the chatbot to pick the right intent?

While developing and testing the conversation, IBM Watson Assistant identifies multiple intents and responds to the one with the highest confidence. Sometimes I want it to respond to the second intent, not the first, because it is more relevant to the current conversation context. For example, if the dialog contains nodes to handle making a transfer or a payment, during the transfer scenario the user can say "execute", which will match both execute transfer and execute payment. So I want Watson to always respond with execute transfer, which is the current context, even if it identifies execute payment with higher confidence.
So users ask generic questions assuming that the bot is aware of the current context and will reply accordingly.
For example, assume that I'm developing an FAQ bot to answer inquiries about 2 programs, Loyalty and Saving. For simplicity I'll assume there are 4 intents:
(Loyality-Define - which has examples related to what is the loyalty program)
(Loyality-Join - which has examples related to how to join loyalty program)
(Saving-Define - which has examples related to what is the saving program)
(Saving-Join - which has examples related to how to join saving program)
So users can start the conversation with an utterance like "tell me about the loyalty program". Then they will ask "how to join" (without mentioning the program, assuming that the bot is aware of it). In that case Watson will identify 2 intents (Loyalty-Join, Saving-Join), and the Saving-Join intent may have a higher confidence.
So I need to intercept the dialog (maybe by creating a parent node that checks the context and, based on that, filters out the wrong intents).
I couldn't find a way to write code in the dialog to check the context and modify the intents array, so I want to ask about the best practice for doing that.
You can't edit the intents object, so it makes what you want to do tricky but not impossible.
In your answer node, add a context variable like $topic. You fill this with a term that will denote the topic.
Then, if the user's question is not answered, you can check for the topic context and add that to a new context variable. This new variable is then picked up by the application layer to re-ask the question.
Example:
User: tell me about the loyalty program
WA-> Found #Loyality-Define
Set $topic to "loyalty"
Return answer.
User: how to join
WA-> No intent found.
$topic is not blank.
Set $reask to "$topic !! how to join"
APP-> $reask is set.
Ask question "loyalty !! how to join"
Clear $reask and $topic
WA-> Found #Loyalty-join
$topic set to "loyalty"
Return answer
Now in that last situation, if even the loaded question is not matched, clearing $topic stops it from looping.
The other thing to be aware of is that if a user changes topic you must either set the new topic or clear it, to prevent the old topic from being picked up.
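On the application side, the re-ask step described above might look roughly like the sketch below. It assumes the ibm-watson Node.js SDK (AssistantV1); the workspace ID, API key, and the reask/topic variable names are taken from the pattern above, not from any fixed API.

    // Hedged sketch of the application layer re-asking a question that the dialog
    // rewrote into the $reask context variable ("loyalty !! how to join").
    import AssistantV1 from 'ibm-watson/assistant/v1';
    import { IamAuthenticator } from 'ibm-watson/auth';

    const assistant = new AssistantV1({
      version: '2021-06-14',
      authenticator: new IamAuthenticator({ apikey: process.env.WA_APIKEY || '' }),
    });
    const workspaceId = process.env.WA_WORKSPACE_ID || '';

    async function send(text: string, context: any = {}): Promise<any> {
      const { result } = await assistant.message({ workspaceId, input: { text }, context });
      const ctx: any = result.context || {};

      // If the dialog set $reask, fire the rewritten question back at the skill
      // once, clearing the marker and the topic so the exchange cannot loop.
      if (ctx.reask) {
        return send(ctx.reask, { ...ctx, reask: null, topic: null });
      }
      return result;
    }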
NOTE: The question was changed so it is technically a different question. Leaving previous answer below
You can use the intents[] object to analyse the returned results.
So you can check the confidence difference between the first intent and second intent. If they fall inside a certain range, then you can take action.
Example condition:
intents[0].confidence > 0.24 && intents[0].confidence - intents[1].confidence < 0.05
This checks whether the top two intents are within 5% of each other. The threshold of 0.24 is there so you ignore the second intent when it would likely fall below 0.2, which normally means the intent should not be actioned on.
You may want to play with this threshold.
Just to explain why you do this, look at these two charts. In the first it is clear that only one intent stands out. The second chart shows two intents close together.
To take actual action, it's best to have a closed folder (condition = false). In that folder you look for matching intents[1]. This will lower the complexity within the dialog.
If you want something more complex, you can do k-means at the application layer. Then pass back the second intent at the application layer to have the dialog logic take action. There is an example here.
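If you do the check at the application layer instead of in a dialog condition, a small helper like the sketch below (plain TypeScript, thresholds mirroring the example condition above) is usually enough before reaching for k-means:

    // Decide whether to act on the top intent or disambiguate between the top two.
    interface IntentScore { intent: string; confidence: number; }

    function pickIntent(intents: IntentScore[]): string | 'disambiguate' | 'none' {
      if (!intents.length || intents[0].confidence < 0.2) return 'none';
      const close =
        intents.length > 1 &&
        intents[0].confidence > 0.24 &&
        intents[0].confidence - intents[1].confidence < 0.05;
      // When the top two are within 5% of each other, hand both back so the
      // dialog (or the user) can disambiguate instead of blindly taking intents[0].
      return close ? 'disambiguate' : intents[0].intent;
    }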
Watson Assistant Plus also does this automatically with the Disambiguation feature.
You can train Watson Assistant to respond accordingly. In the tool where you work on the skill, click on the User conversations page in the navigation bar. In the message overview, identify the messages that were answered incorrectly and then specify the correct intent. Watson Assistant will pick that up, retrain, and then hopefully answer correctly.
In addition, you could revisit how you define the intents. Are the examples like the real user messages? Could you provide more variations? What are the conflicts that make Watson Assistant pick the one, but not the other intent?
Added:
If you want Watson Assistant to "know" about the context, you could extract the current intent and store it as a topic in a context variable. Then, if the "join" intent is detected, switch to the dialog node based on the intent "join" and the specific topic. For that I would recommend either having only one intent for "join program" or, if really needed, putting details about the specifics into the intent. Likely there is not much difference and you end up with just one intent.

How to design stateful conversation with Dialogflow

I'm trying to make an app to reserve meeting rooms in my office using Google Home and Dialogflow.
Here's my plan:
Me: "OK Google, is there any available room now?"
Google Home: "Room 1 is available until 16:00."
Me: "Book it."
Google Home: "Booked Room 1."
The current problem is how to make Google Home's response stateful. In my plan, when I say "Book it", Google Home has to remember Room 1. But I don't know how to make it happen.
I read the Conversation API documents, but I don't understand whether it's possible to preserve variables or state within the same conversation ID.
https://developers.google.com/actions/reference/v1/conversation
Does anyone know about that?
It is absolutely possible, and even fairly easy, to preserve state as part of the conversation your users have with your Action. Dialogflow makes it particularly easy with what they call context, and it uses this in a few ways:
As part of your fulfillment, you can set a Context, the lifetime (number of steps in the conversation) that context will be good for, and any parameter/value pairs for this context. Using your example, when you have the Action replying with the room number and time, you might set a "pending_request" context with the pairs {"room": "1"} and {"time": "2017-11-15T16:00Z"} and a lifetime of 5.
You can indicate as part of the Intent what Contexts must be set for that Intent to be selected in the conversation. So asking "Who is available?" while the "pending_request" context is active might trigger an Intent that looks at who is available at that time and can meet in that room (perhaps because they're in the same building). But if the context is not active, it might trigger an Intent that looks at who is available right now that you can call (even if they're in a different building).
The parameters that you set in the Context are available to you in the Intent that is called. So you'll be able to find out what room and time have been set in the fulfillment of the Intent.
If you don't renew the Context, it will vanish after the selected number of exchanges. This means that after you inquire about the room, you could inquire if you have any appointments today (a question unrelated to the room or the time) and the phrase "Book it" would still have the context available to it.
If you're using the node.js client library from Google, you can use app.getContext() and app.setContext(). If you're doing it in JSON, you need to provide the Context information directly in the response.
Google also provides a more general app.data object that you can set properties on with the node.js client library, and these properties are retained during a conversation (although not between conversations). It uses Contexts behind the scenes, although it isn't quite as powerful as Contexts are since you can't use it as part of Intent matching.
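To make the mechanics concrete, here is a hedged sketch of the fulfillment side using the Node.js client library mentioned above. The method names follow this answer's v1-era description, while the context name, parameter values, and action names are placeholders from the example; exact signatures may differ between library versions.

    // Set and read the "pending_request" context in a Dialogflow fulfillment.
    const { DialogflowApp } = require('actions-on-google');

    export function webhook(request: any, response: any): void {
      const app = new DialogflowApp({ request, response });

      const suggestRoom = (app: any) => {
        // Remember the suggested room for the next few turns (lifetime of 5).
        app.setContext('pending_request', 5, { room: '1', time: '2017-11-15T16:00Z' });
        app.ask('Room 1 is available until 16:00. Shall I book it?');
      };

      const bookIt = (app: any) => {
        const pending = app.getContext('pending_request');
        const room = pending && pending.parameters.room;
        app.tell(room ? 'Booked Room ' + room + '.' : 'Which room would you like?');
      };

      const actionMap = new Map();
      actionMap.set('room.availability', suggestRoom);  // hypothetical Dialogflow action names
      actionMap.set('room.book', bookIt);
      app.handleRequest(actionMap);
    }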
(As an aside - the link you provided was to version 1 of the API. That version has been deprecated and will be turned off in May 2018. It was also for the Actions API rather than Dialogflow. The equivalent documentation is at https://developers.google.com/actions/reference/rest/conversation-webhook, but that probably isn't what you want if you're using Dialogflow.)

How to respond to "help" on Google Assistant?

I got the following feedback from Google team:
When a user says "help" to your agent, it does not actually provide any guidance for what a user can say or ask for, it just says "sure, assistants are here to help"
My webhook is implemented in Spring Boot. Any idea how my web service can respond to help requests?
Since you're using API.AI, that sounds like it might be one of the default responses that are built-in to the Small Talk Domain. You'll probably want to do two things:
Turn off the Small Talk Domain by clicking on the Domains menu on the left and then turning the switch on the Small Talk domain (it should be the first one) off.
Make your own Intent to handle the "help" command (and possibly a few other related statements) by setting these in the User Says section of the Intent. You can have this intent fulfilled by sending it to your webhook by checking the Use Webhook box in the Fulfillment section, but for simple text responses this probably isn't necessary. Just have the Intent return a short help message describing what can be done by adding text to the Response area.
Some suggestions and things to consider when writing your help intent or intents:
Make the response relatively short. This is text that, when read, can't be interrupted.
Consider context-sensitive help by using Input Contexts to determine the state of the conversation at that moment. A user asking for help after a particular prompt should get information that helps them at that prompt.
Allow for multiple ways to ask for help in the User Says section. Phrases like "I'm confused" may also be good to trigger help.
Allow for asking for help on specific topics by using multiple intents that provide different answers. These may be tied to the Contexts as well.
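If you do route the help Intent to your webhook, the response only needs a short spoken string. The sketch below is illustrative rather than prescriptive: it uses Node/Express instead of Spring Boot purely to show the shape of the (API.AI v1-era) payload, and the action name "help" is whatever you configure on the Intent; the same two fields can be returned from a Spring Boot controller.

    // Minimal webhook that answers the help action with a short spoken message.
    import express from 'express';

    const app = express();
    app.use(express.json());

    app.post('/webhook', (req, res) => {
      const action = req.body.result && req.body.result.action;
      const speech =
        action === 'help'
          ? 'You can ask me to check a room, book it, or say "start over" at any time.'
          : 'Sorry, I did not get that.';
      res.json({ speech, displayText: speech });
    });

    app.listen(8080);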

I'm trying to program a unique app, and use voice command to trigger specific functions within the app

If anyone can help me with this, I'd be eternally in their debt.
Without getting bogged down in details, I'm trying to program an app so that, for instance, while the application is currently launched, if I say the words "activate function A", a specific function which already exists in my app is activated.
Have I explained myself clearly? In other words, on the screen of the phone is a button which says "function A". When the software is "armed" and in listening mode, I want the user to have the ability to simply say the words "activate function A" (or any other phrase of my choice) and have the screen option selected without requiring the user to press the button with their hand; rather, the option is selected/activated via voice command.
My programmers and I have faced difficulties incorporating this new voice command capability, even though it is obviously possible to do Google searches with voice command, for instance. Other voice command apps are currently in circulation, such as SMS dictation apps, email writing apps, etc., so it is clearly possible to create voice command apps.
Does anyone know if this is possible, and if so, do you have advice on how to implement this function?
QUESTION 2
Assuming that we are unable to activate function A via voice command, is it possible to use voice command to cause the phone to place a call, with this call received by our server? The server then "pings" the iPhone and instructs it to activate function A.
For this workaround to work, I would need the ability to determine the exact phrase. In other words, the user can't be forced to use the words "call function A"; I need the ability to select the phrase which launches the function.
Hopefully I've been clear. In other words, as a potential workaround to the obstacles we've been facing with using voice command to activate a specific function within our app, is it possible to harness the voice command capability already present in the phone, i.e. to place a phone call? This call is then received by our server, and the server accordingly pings the phone which placed the call and instructs it to activate the function.
I obviously understand the necessity for the app to be currently launched before it would be possible for my application to receive the instruction from the server.
If someone can help me solve this vexing problem, it is not hyperbole to say that you would change my life!
Thanks so much in advance for any help one of you kind souls can provide!!!
Michael
I don't believe the iPhone comes with any built-in speech recognition functions. Consider speaking to Nuance about buying and embedding one of their speech recognition engines. They have DragonDictate for iPhone, but they also provide a fair number of other recognition engines that serve different functions. Embedded solutions are clearly one of their areas of expertise.
Your other path of pushing the audio to your server may be more involved than you expect. Typically this process involves end-pointing (detecting when speech is present) and identification of basic characteristics so the raw stream doesn't need to be passed. Again, investigating the speech recognition engine you intend to use may give you the data-processing details you need. Passing continuous, raw voice from all phones to your servers is probably not going to be practical.