Using "speechBiasingHints" with Dialogflow Webhook - actions-on-google

First time posting, so feel free to give me feedback if I could improve something about this post... Now on to my question.
I am currently developing a Google Action. The Action will allow the user to define important events, such as Bob's Birthday or Fred's Graduation, and save data about those events. Later, the user will be able to ask for info about an event and get it returned to them.
I am using the Dialogflow API with "Inline Editor" fulfillment to keep it as simple as possible for right now. The problem I am running into is this: the event has an entity type of @sys.any, so anything the user says is accepted as valid input. However, I would like some way to bias toward events I have already stored for the user, so that they are more likely to find the event they are looking for.
I found another answer on here discussing speech biasing (What is meant by speech bias and how to use speechBiasHints in google-actions appResponse), which defined speech biasing as the ability to "influence the speech to text recognition," which is exactly what I believe I want. While that answer provided sample code, it was for the Actions SDK, not the Dialogflow SDK, which I am using.
Can anyone provide an example of how to fill the "speechBiasingHints" section of the ExpectedInput response of the Conversation Webhook using the Dialogflow Webhook?
Note: This is for a student project, and I'm new to developing Google Actions and still very much learning about everything that is possible with them. Any feedback or suggestions are very welcome.

The question you link to does quite a few things differently than the approach you're taking. The Actions SDK provides more low-level control, but it doesn't have many Natural Language Processing (NLP) capabilities, which Dialogflow provides.
Dialogflow handles biasing a little differently through the use of Entities, so you don't need to control the speech biasing directly; Dialogflow can handle that for you, to some extent.
Since each user may have different event names, you'll probably want to use a User Entity, which is an entity you define and then populate on a user-by-user basis through Dialogflow's API. In your sample phrases, you can then use this entity name instead of @sys.any, or create an additional set of phrases that use this entity.
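As a rough illustration, here is a minimal sketch of populating such an entity with the Dialogflow V2 Node.js client, where session entities are the V2 equivalent of user entities. The project ID, session ID, the "event" entity type name, and the event list are all placeholders, not something taken from the question:

const dialogflow = require('@google-cloud/dialogflow');

// Override the agent-level "event" entity for one user's session so that
// recognition is biased toward that user's stored events.
async function biasTowardUserEvents(projectId, sessionId, userEvents) {
  const client = new dialogflow.SessionEntityTypesClient();
  const sessionPath = client.projectAgentSessionPath(projectId, sessionId);

  const [response] = await client.createSessionEntityType({
    parent: sessionPath,
    sessionEntityType: {
      name: `${sessionPath}/entityTypes/event`,
      entityOverrideMode: 'ENTITY_OVERRIDE_MODE_OVERRIDE',
      entities: userEvents.map((e) => ({ value: e, synonyms: [e] })),
    },
  });
  return response;
}

// Usage: biasTowardUserEvents('my-project', 'some-session-id', ["Bob's Birthday", "Fred's Graduation"]);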

Related

ChatBot with conditional response flow - Rasa Open Source

I'm working on a Rasa (Open Source) project, and I need to represent a flow diagram in a chatbot.
The main problem is following the conditional flow, as the user can say yes or no and modify the flow of the conversation.
I would like to know how I could build a chatbot, using Rasa, that covers all the possibilities represented in the diagram, as well as the ones outside it.
In other words, a chatbot that responds to the user according to their previous response.
[flowchart image]
A "solution" I found was to create a story for each possible path, but it is "unfeasible" due to the number of stories. (there are 9 other diagrams like this one).
Regarding the flow and how you can map your diagrams into Rasa, you can try to use one universal story/rule to make your structure more modular. Find the parts of the flow in your diagram that are repeated across the other diagrams and turn them into shared stories/rules that can be reused in different flows. Rasa also supports checkpoints, which let you connect and manage your stories in a more controlled way (see the sketch below).
For getting users' responses and acting accordingly, you can use Rasa forms and custom actions in your stories/rules to extract the entities you want from users and manipulate them.
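Here is a minimal sketch of how a repeated yes/no branch could be factored out with a checkpoint in stories.yml (Rasa 2.x/3.x syntax; the intent and action names are placeholders, not taken from the diagram):

stories:
- story: shared branch entry
  steps:
  - intent: ask_service
  - action: utter_confirm_service
  - checkpoint: after_confirmation

- story: user says yes
  steps:
  - checkpoint: after_confirmation
  - intent: affirm
  - action: utter_proceed

- story: user says no
  steps:
  - checkpoint: after_confirmation
  - intent: deny
  - action: utter_offer_alternative

Any story in any of the 9 diagrams that reaches the same decision point can end at the checkpoint instead of duplicating the yes/no continuations.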

How can I create an Action using real-time data after the Conversational Actions sunset

I am working on an Alexa Skill that provides real-time road information like road closures.
In order to make this work with Google Assistant, I was creating Actions using Conversational Actions. However, when I recently completed the Actions, I learned about the "Conversational Actions sunset overview".
I am currently searching for ways to continue providing this feature after the sunset. My requirements are as follows:
Users can obtain information by specifying the desired route among multiple routes in a common operation.
The information provided needs to be updated in real time, every 5 minutes.
As the method to provide the information, I assume HTTP requests, JSON, HTML, etc.
Any help would be appreciated!
Thank you

Watson: Dialogs. Are they required?

We are working on a micro-service that interacts with Watson.
I was presented with the following argument: "There is no need to use dialogs in a Conversation project on Watson. Declaring the intents and entities is just enough to get the work done"
Based on the documentation, I have the impression that using dialogs is a requirement in order to train Watson correctly on how to interpret the combination of intents and entities. Plus, in the Dialog section, you have the chat that allows you to make corrections.
Is there a way that I can confirm that Dialogs are or are not a requirement?
If you plan to just use intents and entities programmatically, then you don't need dialog.
You will need to create one blank dialog node with a condition of true. This is to avoid SpEL errors related to no node being found.
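For illustration, a minimal sketch of what such a node might look like in a workspace JSON export (Conversation v1 workspace format; the node name is a placeholder):

{
  "dialog_nodes": [
    {
      "dialog_node": "catch_all",
      "conditions": "true",
      "output": {}
    }
  ]
}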
From a programming point of view (and ignoring conversation for a minute): if you need to take action on intents or entities, or change context variables, the recommendation is to do that in dialog. This way your code is not split across two systems, which makes it easier to maintain.
In the phrase above, the author probably means that you only need to create #intents and #entities for your Conversation and define the purpose of your bot. This is true, but it depends on what you want your bot to do, because after that you can just create your dialog flow!
The Dialog section is where you create your dialog flow; it is absolutely needed when you want to create a conversation flow, e.g. a chatbot.
A workspace contains the following types of artifacts:
Intents: An intent represents the purpose of a user's input, such as a question about business locations or a bill payment. You define an intent for each type of user request you want your application to support. In the tool, the name of an intent is always prefixed with the # character. To train the workspace to recognize your intents, you supply lots of examples of user input and indicate which intents they map to.
Entities: An entity represents a term or object that is relevant to your intents and that provides a specific context for an intent. For example, an entity might represent a city where the user wants to find a business location, or the amount of a bill payment. In the tool, the name of an entity is always prefixed with the @ character. To train the workspace to recognize your entities, you list the possible values for each entity and synonyms that users might enter.
Dialog: A dialog is a branching conversation flow that defines how your application responds when it recognizes the defined intents and entities. You use the dialog builder in the tool to create conversations with users, providing responses based on the intents and entities that you recognize in their input.
EDIT:
As @Simon O'Doherty said, if your purpose is to just use the Intents and Entities programmatically, then you don't need the Dialog. His answer is complete.
See the documentation for Building a Dialog.
Yes, Intents and Entities may seem to be enough for you, and you can generate the answer programmatically based on them. But you should keep in mind that Dialog does not mean Response. I know it's hard to find this stated clearly in the Watson documentation.
If you need access to context variables in the next node or dialog, you should define them in slots. Without defining dialogs for each intent, context variables will not be accessible or carried over to the next dialog.
You need Dialog portion of Watson Conversation service if you want to respond to the user queries.
The Intents and entities are the understanding piece, and the Dialog portion is the response side of the conversation.
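To make the split concrete, here is a minimal sketch of using only the understanding piece programmatically, via the legacy watson-developer-cloud Node SDK (the credentials, version date, workspace ID, and sample utterance are placeholders):

const ConversationV1 = require('watson-developer-cloud/conversation/v1');

const conversation = new ConversationV1({
  username: 'YOUR_USERNAME',
  password: 'YOUR_PASSWORD',
  version_date: '2017-05-26',
});

conversation.message(
  {
    workspace_id: 'YOUR_WORKSPACE_ID',
    input: { text: 'I want to pay my bill' },
  },
  (err, response) => {
    if (err) return console.error(err);
    // Intents and entities come back even if the dialog tree is just the
    // single blank node described above.
    console.log(response.intents);  // e.g. [{ intent: 'pay_bill', confidence: 0.97 }]
    console.log(response.entities);
  }
);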

How to show a campaign based on data tracked/reported by Adobe SiteCatalyst?

We are implementing SiteCatalyst on flat HTML files. There is a requirement to show campaigns based on the data we report to Analytics. For example, there is a form with multiple fields. Whether or not the user has filled in the form, we track this event and report it to Omniture. Now, if the user presses the back button without completing the form, we need to show them a campaign/offer. The same happens when they press the submit button, only the campaign will be different this time. Can this be achieved? Can we integrate SiteCatalyst and campaigning?
I know that the reverse is possible: we can track campaigns and report the campaign IDs. But is there any way to display offers based on the analytics data, in real time?
Any help would be great!
Thanks in advance.
It sounds like what you are looking for is Adobe Target.
Adobe Target is a tool that allows you to do A/B and multivariate testing, but also to target visitors by set rules and criteria.
Very simple example:
"If user came from foo.com, show <h1>foo</h1>. If user came from bar.com, show <h1>bar</h1>"
There is a level of integration between Adobe Target and Adobe Analytics. However, it is not real-time for data that has already been collected.
For example, if you have logic that pops s.prop10 on page with "foo" then that can be integrated with Adobe Target and you can setup a rule that says something like "If s.prop10 is 'foo' then show '<h1>foo</h1>'".
But, it does not let you make a rule like "if prop10 was 'foo' for this visitor at any point in the time in the past, show '<h1>foo</h1>'". In other words, there is no real-time evaluation of data already collected on Adobe's servers.
But if you simply want to make rules based on the current visit, you can store information in cookies and look at those cookies to make rules in Adobe Target easily enough, for example:
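A minimal sketch of flagging form abandonment in a cookie that a Target audience rule could then match on (the form id and cookie names are placeholders):

const form = document.getElementById('signup-form');
let submitted = false;

form.addEventListener('submit', () => {
  submitted = true;
  // Flag a completed form for the current visit.
  document.cookie = 'form_submitted=1; path=/';
});

window.addEventListener('beforeunload', () => {
  if (!submitted && someFieldsFilled(form)) {
    // Flag an abandoned form; a Target experience can key off this cookie.
    document.cookie = 'form_abandoned=1; path=/';
  }
});

function someFieldsFilled(f) {
  return Array.from(f.elements).some((el) => el.value && el.value.trim() !== '');
}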
Also note that there are no built-in tools, hooks, or methods for the actions you described. For example, there's no way to natively say in Adobe Target (or Adobe Analytics) "If a visitor clicks the back button or does this other action, track that." You need to write your own code to define those actions and trigger the relevant tracking code at the relevant times. Adobe Analytics (and other tracking tools) can help automate some basic stuff like simple link clicks or form field focusing, in other words direct 1:1 actions, but baking in complex actions like that is not feasible for a tracking tool, because every site and scenario is unique.
I guess the TL;DR here is that there is no magic wand for this sort of thing, not for Adobe or any other analytics/tracking tool; you're going to have to write your own code (be it server-side, client-side, or mix of both) to meet your business needs.
You can use the Reporting API exposed by Adobe SiteCatalyst.
Through the Reporting API, you're able to access the reports generated for your form events. If you're using SiteCatalyst 15, you'll also be able to generate reports based on segments. Recently the Reporting API was updated and given the ability to perform multi-level breakdowns across reports. For more information on this method, see the API documentation within the Adobe Developer Connection.
Sample Real time access API:
// Real-Time Report
// Note the inclusion of "source" equals "realtime"
// Make sure you configure Real-Time reports for the report suite
https://api.omniture.com/admin/1.4/rest/?method=Report.Run
{
  "reportDescription": {
    "source": "realtime",
    "reportSuiteID": "rsid",
    "metrics": [
      { "id": "revenue" }
    ]
  }
}
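A hedged sketch of posting that reportDescription from Node.js (the X-WSSE header below follows the WSSE UsernameToken scheme the 1.4 API documents; the username, shared secret, and report suite ID are placeholders):

const crypto = require('crypto');
const fetch = require('node-fetch');

// Build the WSSE UsernameToken header: digest = base64(sha1(nonce + created + secret)).
function wsseHeader(username, secret) {
  const nonce = crypto.randomBytes(16).toString('hex');
  const created = new Date().toISOString();
  const digest = crypto
    .createHash('sha1')
    .update(nonce + created + secret)
    .digest('base64');
  return `UsernameToken Username="${username}", PasswordDigest="${digest}", ` +
    `Nonce="${Buffer.from(nonce).toString('base64')}", Created="${created}"`;
}

async function runRealtimeReport() {
  const res = await fetch('https://api.omniture.com/admin/1.4/rest/?method=Report.Run', {
    method: 'POST',
    headers: { 'X-WSSE': wsseHeader('user:company', 'sharedsecret') },
    body: JSON.stringify({
      reportDescription: {
        source: 'realtime',
        reportSuiteID: 'rsid',
        metrics: [{ id: 'revenue' }],
      },
    }),
  });
  console.log(await res.json());
}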

async autocomplete service

Call me crazy, but I'm looking for a service that will deliver autocomplete functionality similar to Google, Twitter, etc. After searching around for 20 min I thought to ask the geniuses here. Ideas?
I don't mind paying, but it would be great if it were free. Also, is there a top-notch NLP service that I can submit strings to and get back states, cities, currencies, company names, establishments, etc.? Basically I need to take unstructured data (a generic search string) and pull out key information with relevant metadata.
Big challenge, I know.
Sharing solutions I found after further research.
https://github.com/haochi/jquery.googleSuggest
http://shreyaschand.com/blog/2013/01/03/google-autocomplete-api/
If you don't want to implement it yourself, you can use a service called 'Autocomplete as a Service', which is written specifically for these purposes. You can access it here: www.aaas.io.
You can add metadata to each record, and it returns the metadata along with the matching results. Do check out the demo on the home page. It has a very simple API written specifically for autocomplete search.
It does support large datasets and you can apply filters as well while searching.
Its usage is simple: add your data and use the API URL as the autocomplete data source.
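For instance, a minimal sketch of wiring a remote endpoint into jQuery UI's autocomplete widget (the input id, endpoint URL, and response shape are placeholders, not the actual aaas.io API):

$('#search-box').autocomplete({
  minLength: 2,
  // jQuery UI sends the typed text as a "term" query parameter and expects
  // a JSON array of strings or of { label, value } objects in response.
  source: 'https://example.com/autocomplete',
  select: function (event, ui) {
    console.log('Selected:', ui.item.value);
  },
});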
Disclaimer: I am the founder of it. I will be happy to provide this service to you.