I'd like to write an automated integration test to test my DialogFlow agent, integrated with Google Assistant.
Right now, I need to go through the flows, typing what the user "says" into the Actions on Google test console.
(I guess I could write a selenium script to do this - but it seems to me there has to be a way to do this by API...)
Although Dialogflow has an API that lets you issue queries against it, this probably hits the general Dialogflow processing and does not specifically represent what the Assistant would send.
I typically suggest testing against your fulfillment service rather than testing Dialogflow's processing itself. Since your fulfillment server has to be an HTTP[S] server, you can build the JSON body yourself, change the parameters as appropriate, and verify the JSON response. If you need to, you can run a few inputs manually first to capture what the JSON looks like.
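For example, here is a minimal sketch of such a test in Node.js. The fulfillment URL is a placeholder, and the request body is the kind of Dialogflow v2 webhook JSON you would capture from a real invocation first:

```js
// Sketch of an integration test: POST a captured Dialogflow v2 webhook request
// to the fulfillment service and check the fulfillment text in the response.
const assert = require('assert');
const axios = require('axios');

const FULFILLMENT_URL = 'https://example.com/fulfillment'; // placeholder

async function testWelcomeIntent() {
  // Body captured from a real request (trimmed); adjust to match your agent.
  const webhookRequest = {
    responseId: 'test-response-id',
    session: 'projects/my-project/agent/sessions/test-session',
    queryResult: {
      queryText: 'hello',
      parameters: {},
      intent: { displayName: 'Default Welcome Intent' },
      languageCode: 'en',
    },
    originalDetectIntentRequest: { source: 'google', payload: {} },
  };

  const { data } = await axios.post(FULFILLMENT_URL, webhookRequest);
  assert.ok(data.fulfillmentText || data.payload, 'expected a fulfillment response');
  console.log('Welcome intent OK:', data.fulfillmentText);
}

testWelcomeIntent().catch((err) => {
  console.error(err);
  process.exit(1);
});
```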
Related
Is there an API exposed for Actions on Google, similar to what Dialogflow offers with their API? The only API-like flow I have found through my research is this webhook flow API, but that only deals with conversation requests, prompts, and responses, which I have already handled.
Ideally I'd like to be able to dynamically create "agents" and their conversation flows without having to use the AoG console, similar to what Amazon offers with Alexa SMAPI.
There's not a full API to do everything that you want end-to-end. Some parts, like Dialogflow and fulfillment, can be automated, but it will still require some manual work in the Actions Console.
I had a conversation with another developer on this subject once. As a workaround, which is admittedly hacky, they decided to use the Puppeteer library to programmatically control a browser instance to fill in fields and click buttons.
That may break whenever the console changes, and it isn't a good substitute for an API, but it may work for you.
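As a rough illustration of that workaround (not an official API: the URL and selectors below are hypothetical, Google account authentication is not handled, and it will break whenever the console UI changes):

```js
// Hypothetical Puppeteer sketch: drive the Actions on Google simulator in a browser.
// The URL and selectors are made up for illustration and must be adapted to the real console.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('https://console.actions.google.com/project/my-project/simulator'); // placeholder URL

  // Type what the user "says" and submit it.
  await page.type('#query-input', 'talk to my test app'); // hypothetical selector
  await page.keyboard.press('Enter');

  // Wait for the agent's reply and read it back for assertions.
  await page.waitForSelector('.response-text'); // hypothetical selector
  const reply = await page.$eval('.response-text', el => el.textContent);
  console.log('Agent replied:', reply);

  await browser.close();
})();
```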
Yes, you can do this using the Google Dialogflow REST API.
There are APIs for managing the agent itself, and many more for other operations.
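For example, a sketch of creating an intent through the v2 REST API; the project ID and access token are placeholders (in real code the token would come from a service account, e.g. via google-auth-library):

```js
// Sketch: create an intent via the Dialogflow v2 REST API.
// PROJECT_ID and ACCESS_TOKEN are placeholders.
const axios = require('axios');

const PROJECT_ID = 'my-project';
const ACCESS_TOKEN = process.env.DIALOGFLOW_ACCESS_TOKEN;

async function createIntent() {
  const url = `https://dialogflow.googleapis.com/v2/projects/${PROJECT_ID}/agent/intents`;
  const intent = {
    displayName: 'order.pizza',
    trainingPhrases: [
      { type: 'EXAMPLE', parts: [{ text: 'I want to order a pizza' }] },
    ],
    messages: [{ text: { text: ['What toppings would you like?'] } }],
  };

  const { data } = await axios.post(url, intent, {
    headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
  });
  console.log('Created intent:', data.name);
}

createIntent().catch(console.error);
```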
I am a bit confused. The requirement is that we need to create a REST API in Salesforce (an Apex class) that has one POST method. Right now, I have been testing it with the Postman tool in two steps:
First, making a POST request with the username, password, client_id, client_secret (which come from the connected app in Salesforce), and grant_type to receive an access token.
Then making another POST request in Postman to create a lead in Salesforce, using the access token received in the first step and the request body.
However, the REST API that I have in Salesforce would be called from various different web forms. So once someone fills out the web form, the back end would call this REST API in Salesforce and submit the lead request.
I am wondering how that would happen, since we can't use Postman for that.
Thanks
These "various different web forms" would have to send requests to Salesforce just like Postman does. You'd need two POST calls (one for login, one to call the service you've created). It'll be bit out of your control, you provided the SF code and proven it works, now it's for these website developers to pick it up.
What's exactly your question? There are tons of libraries to connect to SF from Java, Python, .NET, PHP... Or they could hand-craft these HTTP messages, just Google for "PHP HTTP POST" or something...
https://developer.salesforce.com/index.php?title=Getting_Started_with_the_Force.com_Toolkit_for_PHP&oldid=51397
https://github.com/developerforce/Force.com-Toolkit-for-NET
https://pypi.org/project/simple-salesforce/ / https://pypi.org/project/salesforce-python/
Depending on how much time they have, they can:
cache the session id (so they don't call login every time), try to reuse it, and call login again only if the session id is blank or they get a "session expired or invalid" error back (see the sketch after this list)
try to batch it somehow (do they need to save these leads to SF ASAP, or are, say, hourly intervals OK? How did YOU write the service: does it accept one lead or a list of records?)
be smart about storing the SF credentials (in some secure way, not hardcoded), ideally so that it's easy to point the integration at sandbox or production by changing just one config file, environment variables, or something like that
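As a rough Node.js sketch of what one of those web form back ends might do, using the username-password OAuth flow described above with simple token caching (the Apex REST path and credentials are placeholders):

```js
// Sketch: call a custom Apex REST endpoint from a web form back end.
// Step 1: get an access token (username-password OAuth flow).
// Step 2: POST the lead to the Apex REST service, reusing the cached token.
const axios = require('axios');

const LOGIN_URL = 'https://login.salesforce.com/services/oauth2/token';
const APEX_PATH = '/services/apexrest/LeadService'; // placeholder: your @RestResource urlMapping

let cached = null; // { accessToken, instanceUrl }

async function login() {
  const params = new URLSearchParams({
    grant_type: 'password',
    client_id: process.env.SF_CLIENT_ID,
    client_secret: process.env.SF_CLIENT_SECRET,
    username: process.env.SF_USERNAME,
    password: process.env.SF_PASSWORD, // password + security token if required
  });
  const { data } = await axios.post(LOGIN_URL, params);
  cached = { accessToken: data.access_token, instanceUrl: data.instance_url };
}

async function createLead(lead) {
  if (!cached) await login();
  const post = () =>
    axios.post(cached.instanceUrl + APEX_PATH, lead, {
      headers: { Authorization: `Bearer ${cached.accessToken}` },
    });
  try {
    return (await post()).data;
  } catch (err) {
    if (err.response && err.response.status === 401) {
      // Session expired or invalid: log in again and retry once.
      await login();
      return (await post()).data;
    }
    throw err;
  }
}

// Example: what the web form handler would submit.
createLead({ firstName: 'Jane', lastName: 'Doe', email: 'jane@example.com' })
  .then(console.log)
  .catch(console.error);
```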
I want to use Dialogflow fulfillment to connect to an external web service / API. One way of doing that is to use the custom webhook feature (not the inline webhook). However, when using the custom webhook it seems that you are limited to creating just one, even though you may have many intents and may want to call many endpoints. Is there a way to link to more custom webhooks (API endpoints)?
If you can only set up one webhook, then your web service will always receive a POST request from Dialogflow and will then need to interpret the body of the request, e.g. based on the intent parameter. I'm just wondering if there is a better way to work with REST web services from Dialogflow.
The other potential option is to use the inline webhook and then put logic in there to call specific endpoints; however, that might get a bit messy.
You can only set up one fulfillment that will handle the processing for all the Intents you've enabled. This can be either the built-in one through the fulfillment editor or a webhook URL you specify.
That webhook is expected to delegate the actual processing to an Intent Handler of some sort. The Dialogflow node.js fulfillment library has a way to register what handler you want for each Intent name, or you can switch on the Intent name, the Action name, or any other field provided to you in your code.
In the library, you'll typically make the REST calls from the appropriate Intent handler, which will take the parameters provided and craft the call. If you are using JavaScript, make sure you handle the call asynchronously and return a Promise.
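For example, a minimal sketch using the dialogflow-fulfillment library, where one handler calls a hypothetical external REST endpoint and returns the Promise so the library waits for it:

```js
// Sketch: one webhook, multiple intent handlers, each free to call its own REST endpoint.
// The weather API URL is a placeholder.
const { WebhookClient } = require('dialogflow-fulfillment');
const axios = require('axios');
const express = require('express');

const app = express();
app.use(express.json());

app.post('/webhook', (request, response) => {
  const agent = new WebhookClient({ request, response });

  function welcome(agent) {
    agent.add('Hi! Ask me about the weather.');
  }

  // Async handler: return the Promise so the library waits for the REST call.
  function weather(agent) {
    const city = agent.parameters.city;
    return axios
      .get(`https://api.example.com/weather?city=${encodeURIComponent(city)}`) // placeholder endpoint
      .then(res => agent.add(`It is ${res.data.temperature} degrees in ${city}.`));
  }

  const intentMap = new Map();
  intentMap.set('Default Welcome Intent', welcome);
  intentMap.set('Get Weather', weather);
  agent.handleRequest(intentMap);
});

app.listen(3000);
```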
I recommend a webhook because it gives you more control than the inline editor does. The inline editor is really just a webhook under the covers using Firebase Cloud Functions. Even putting it yourself in a Cloud Function gives you better control over it.
There may be costs depending on where you host it; however, Firebase has a free tier that is sufficient for testing and light operation. Once your Action is published, you are also eligible for a monthly cloud credit from Google.
I am trying to forward Google Smart Home events to my Dialogflow fulfillment service. I am creating 3 intents with no input or output contexts set, no training phrases, and with the following events:
action_devices_SYNC
action_devices_EXECUTE
action_devices_QUERY
See also https://imgur.com/a/4eN9S.
Is that correct? I can't find confirmation in the docs, so that's why I am asking it here.
Reasoning
The reason I asked about connecting Google Smart Home with my Dialogflow endpoint is that I already have that endpoint in place. I hoped I could do something similar to https://stackoverflow.com/a/49119822/9038652, where I bound a Dialogflow intent to the actions_intent_OPTION event.
There isn't a reason to use Dialogflow to do smart home fulfillment, and it's actually not possible.
Dialogflow is great for taking unstructured user utterances and making sense of them. However, with smart home, Google handles all of the NLU and parsing. You, as the integration, will just receive a JSON request and will be expected to provide a JSON response.
So you will skip using Dialogflow and instead just build your webhook to parse the intents and give a valid response.
Dialogflow's service does not have a way to take in an intent name and expose a single endpoint URL that can be called by the Google Assistant. It also does not have integration with an OAuth server to do the account linking step.
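For illustration, here is a minimal sketch of such a webhook using the actions-on-google library's smarthome handler; the device, its traits, and the agentUserId are placeholders:

```js
// Sketch: a smart home fulfillment webhook with the actions-on-google library.
// Google sends action.devices.SYNC / QUERY / EXECUTE as JSON; no Dialogflow involved.
const { smarthome } = require('actions-on-google');
const express = require('express');

const app = smarthome();

app.onSync((body) => ({
  requestId: body.requestId,
  payload: {
    agentUserId: 'user-123', // placeholder: the linked account's id
    devices: [{
      id: 'light-1',
      type: 'action.devices.types.LIGHT',
      traits: ['action.devices.traits.OnOff'],
      name: { name: 'Desk lamp' },
      willReportState: false,
    }],
  },
}));

app.onQuery((body) => ({
  requestId: body.requestId,
  payload: { devices: { 'light-1': { on: true, online: true } } },
}));

app.onExecute((body) => ({
  requestId: body.requestId,
  payload: {
    commands: [{ ids: ['light-1'], status: 'SUCCESS', states: { on: true, online: true } }],
  },
}));

const server = express().use(express.json());
server.post('/smarthome', app);
server.listen(3000);
```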
I need to process MailGun webhooks. I did implement a solution directly on our web servers to process the webhooks, but MailGun generates so many calls from a large campaign that it effectively becomes a DoS attack.
One solution I've been looking at is using AWS API Gateway in front of a Lambda function that then pushes onto an SQS queue. We can then poll the queue at a rate we can manage. Unfortunately, we can't get this to work, as AWS API Gateway does not support multipart/form-data content types (which some of the webhooks are). This means that our SQS messages are not well formatted / structured. The best we can do is use the $util.escapeJavaScript($input.body) function in the mapping template to create an SQS message that contains the raw string of the webhook content (with escaped JavaScript chars), which is effectively unparsable, i.e. we can't get data out of it.
I've had a go at using Zapier to process the webhook and push directly onto the SQS queue. This can parse the various content types effectively and create a nicely structured message for us, but the cost of the service is not viable.
Has anybody managed this problem in another way? Are there solutions to API Gateway not parsing the content properly? I've deliberately stayed away from MailGun's event polling API as it involves significant delays before the polled data can be 'trusted' (according to MailGun).
Basically, is there another way of getting a nicely parsed message from content types multipart/form-data and application/x-www-form-urlencoded onto the queue?
Any ideas would be much appreciated!
To add, this link highlights issues with AWS API Gateway and multipart/form-data content:
API Gateway - Post multipart/form-data
As you've mentioned, you can base64-encode the body in API Gateway and base64-decode it in the Lambda function to retrieve the original payload (there are standard libraries in every language).
Also, note that you can use multipart/form-data for non-file bodies.
Get non file body from multipart/form-data using AWS API Gateway and Lambda
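For instance, a small sketch of the decode step inside the Lambda, assuming API Gateway delivers the body base64-encoded and the payload is application/x-www-form-urlencoded (multipart bodies still need a parser, as in the answer below):

```js
// Sketch: recover the raw form body inside the Lambda handler.
const querystring = require('querystring');

exports.handler = async (event) => {
  // event.body is the base64-encoded payload from API Gateway.
  const raw = Buffer.from(event.body, 'base64').toString('utf8');

  // For application/x-www-form-urlencoded bodies this is now directly parseable.
  const fields = querystring.parse(raw);
  console.log('Mailgun event:', fields.event, fields.recipient);

  return { statusCode: 200, body: 'ok' };
};
```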
I had the same challenge when building Suet. I ended up switching to Google Cloud Functions, which I really recommend. Don't waste time on Amazon API Gateway. Use Google Cloud Functions and a middleware like multer. (You can see the source of Suet's webhook handler here.)
Not sure if you ever came to a solution, but I have this working with the following settings.
Set up your API Gateway method to use "Use Lambda Proxy integration".
In your Lambda (I use Node.js), use Busboy to work through the multipart submission from the Mailgun webhook (see this post for help with Busboy: Busboy help).
Make sure that any code you want to run after Busboy has finished parsing is executed in the 'finish' handler of the Busboy code.
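Putting that together, here is a rough sketch of such a handler; the SQS queue URL is a placeholder, and the Busboy constructor and 'finish' event match the older Busboy versions this setup was written against:

```js
// Sketch: Lambda proxy handler that parses a Mailgun multipart/form-data webhook
// with Busboy and pushes the parsed fields onto an SQS queue.
const Busboy = require('busboy');
const AWS = require('aws-sdk');

const sqs = new AWS.SQS();
const QUEUE_URL = process.env.QUEUE_URL; // placeholder: your SQS queue URL

exports.handler = (event, context, callback) => {
  const contentType = event.headers['content-type'] || event.headers['Content-Type'];
  const body = event.isBase64Encoded
    ? Buffer.from(event.body, 'base64')
    : Buffer.from(event.body);

  const busboy = new Busboy({ headers: { 'content-type': contentType } });
  const fields = {};

  busboy.on('field', (name, value) => {
    fields[name] = value;
  });

  // Anything that depends on the parsed fields must run in 'finish'.
  busboy.on('finish', () => {
    sqs.sendMessage(
      { QueueUrl: QUEUE_URL, MessageBody: JSON.stringify(fields) },
      (err) => {
        if (err) return callback(err);
        callback(null, { statusCode: 200, body: 'queued' });
      }
    );
  });

  busboy.write(body);
  busboy.end();
};
```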