Handle multiple answers from IBM Cloud Function in Watson Assistant - ibm-cloud

I need to show an unknown number of buttons in a Watson Assistant dialogue node. The data for the buttons comes from an IBM Cloud Function.
If I manually set up a response type "option" answer in my node, the JSON object looks like this:
{
  "output": {
    "generic": [
      {
        "title": "Välj mötestyp",
        "options": [
          {
            "label": "Rådgivning familjerätt 30 min",
            "value": {
              "input": {
                "text": "447472"
              }
            }
          },
          {
            "label": "Rådgivning familjerätt 60 min",
            "value": {
              "input": {
                "text": "448032"
              }
            }
          }
        ],
        "description": "Välj typ av möte du vill boka",
        "response_type": "option",
        "preference": "dropdown"
      }
    ]
  }
}
My cloud function can create this JSON with x no of options. But how can I use this data in Assistant?
The easiest approach would be to let the cloud function generate the complete JSON and then just output the returned JSON like this:
{
  $context.output
}
...but that's not allowed.
Generated output-object from my function:
[{"serviceId":447472,"serviceName":"Rådgivning Familjerätt 30 min"},{"serviceId":448032,"serviceName":"Rådgivning Familjerätt 60 min"}]
Any advice on how to do this?

I don't see a simple way of generating the entire output and options. What you could do is this:
Generate the option labels and values.
Pass them into a generic output node that has predefined structures for 1, 2, 3, etc. options. Based on the array size of your context variable, check which predefined response structure to fill.
I tested the following:
{
  "context": {
    "my": [
      {
        "label": "First option",
        "value": "one"
      },
      {
        "label": "Second",
        "value": "two"
      }
    ]
  },
  "output": {
    "generic": [
      {
        "title": "This is a test",
        "options": [
          {
            "label": "<? $my[0].label ?>",
            "value": {
              "input": {
                "text": "<? $my[0].value ?>"
              }
            }
          },
          {
            "label": "<? $my[1].label ?>",
            "value": {
              "input": {
                "text": "<? $my[1].value ?>"
              }
            }
          }
        ],
        "response_type": "option"
      }
    ]
  }
}
It defines a context variable with the options, analogous to the options structure. In the output it accesses the labels and values, which can later be modified to prove that they are used and can be changed.
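For completeness, here is a minimal sketch (not from the original answer) of how the IBM Cloud Functions action could build that label/value array from the serviceId/serviceName list shown in the question. The "services" parameter name is an assumption, and it assumes the dialog node stores the action result in a context variable named my; adapt both to your setup.

def main(params):
    # Example input, matching the question's generated output-object:
    # [{"serviceId": 447472, "serviceName": "Rådgivning Familjerätt 30 min"},
    #  {"serviceId": 448032, "serviceName": "Rådgivning Familjerätt 60 min"}]
    services = params.get("services", [])

    # Map each service onto the label/value shape the dialog node expects.
    options = [
        {"label": s["serviceName"], "value": str(s["serviceId"])}
        for s in services
    ]

    # If the dialog node stores this result under the context variable "my",
    # the predefined response structures can reference $my[0], $my[1], ...
    return {"my": options}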


GA4 (Google Analytics) sessions based on UTM params

I am trying to fetch sessions from GA4 which are relevant to specific UTM params.
In GA3 we were able to use segments (sessions::condition::ga:source==X;ga:medium==Y), but I cannot find a way to do this in GA4.
POST https://analyticsdata.googleapis.com/v1beta/#{property}:runReport
Payload like this:
body = {
  "metrics": [
    {
      "name": "sessions::condition::ga:source==X;ga:medium==Y"
    }
  ],
  "dimensions": [
    {
      "name": "date"
    }
  ],
  "dateRanges": [
    {
      "startDate": "2022-01-01",
      "endDate": "2022-01-30",
      "name": "current_year"
    }
  ]
}
Returns: Field sessions::condition::ga:source==X;ga:medium==Y is not a valid metric. Is there a way to do this via the new API?
Should I use a dimension filter to achieve that? I need to query on both source & medium, but it is not clear how to do this.
"dimensionFilter": {
"filter": {
"fieldName": "firstUserMedium",
"stringFilter": {
"value": "Y"
}
}
}
A dimension filter on sessionSource & sessionMedium returns sessions that have those specific utm_source & utm_medium values. See the dimensions & metrics page for a description of these and other dimensions & metrics.
The needed dimension filter is similar to the following. See Dimension Filters in Creating a Report for more info.
"dimensionFilter": {
"andGroup": {
"expressions": [
{
"filter": {
"fieldName": "sessionSource",
"stringFilter": {
"value": "X"
}
}
},
{
"filter": {
"fieldName": "sessionMedium",
"stringFilter": {
"value": "Y"
}
}
}
]
}
},
Segments are not yet available in the GA4 Data API.
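For reference, a minimal sketch of the same report using the official Python client (google-analytics-data); the property ID, credential setup (GOOGLE_APPLICATION_CREDENTIALS), and the X/Y values are placeholders:

from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, Filter, FilterExpression,
    FilterExpressionList, RunReportRequest,
)

client = BetaAnalyticsDataClient()  # reads GOOGLE_APPLICATION_CREDENTIALS

request = RunReportRequest(
    property="properties/GA4_PROPERTY_ID",
    dimensions=[Dimension(name="date")],
    metrics=[Metric(name="sessions")],
    date_ranges=[DateRange(start_date="2022-01-01", end_date="2022-01-30")],
    # Same andGroup filter as above: sessionSource == X AND sessionMedium == Y
    dimension_filter=FilterExpression(
        and_group=FilterExpressionList(
            expressions=[
                FilterExpression(filter=Filter(
                    field_name="sessionSource",
                    string_filter=Filter.StringFilter(value="X"),
                )),
                FilterExpression(filter=Filter(
                    field_name="sessionMedium",
                    string_filter=Filter.StringFilter(value="Y"),
                )),
            ]
        )
    ),
)

response = client.run_report(request)
for row in response.rows:
    print(row.dimension_values[0].value, row.metric_values[0].value)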
I think you should check the dimensions and metrics list for GA4; they don't start with "ga:".
POST https://analyticsdata.googleapis.com/v1beta/properties/GA4_PROPERTY_ID:runReport
{
  "dateRanges": [{ "startDate": "2020-09-01", "endDate": "2020-09-15" }],
  "dimensions": [{ "name": "country" }],
  "metrics": [{ "name": "activeUsers" }]
}
Also, at this time I don't think it supports segments.

AWS Stepfunction: Substring from input

As a general question, is it possible to do a substring operation within Step Functions?
I receive the following event:
{
  "input": {
    "version": "0",
    "id": "d9c5fec0-d08d-6abd-4ea5-0107fbbce47d",
    "detail-type": "EBS Multi-Volume Snapshots Completion Status",
    "source": "aws.ec2",
    "account": "12345678",
    "time": "2021-11-12T12:08:16Z",
    "region": "us-east-1",
    "resources": [
      "arn:aws:ec2::us-east-1:snapshot/snap-0a98c2a42ee266123"
    ]
  }
}
but I need the snapshot id as input to DescribeInstances, therefore I need to extract snap-0a98c2a42ee266123 from arn:aws:ec2::us-east-1:snapshot/snap-0a98c2a42ee266123.
Is there any simple way to do this within Step Functions? That is to say, without having to pass it to a Lambda or something equally convoluted?
This has recently become possible with the addition of new intrinsic functions. ArrayGetItem gets an item by its index. StringSplit splits a string at a delimiter. Use a Pass state to extract the snapshot name from the resource ARN:
{
  "StartAt": "ExtractSnapshotName",
  "States": {
    "ExtractSnapshotName": {
      "Type": "Pass",
      "Parameters": {
        "input.$": "$.input",
        "snapshotName.$": "States.ArrayGetItem(States.StringSplit(States.ArrayGetItem($.input.resources,0), '/'),1)"
      },
      "ResultPath": "$",
      "End": true
    }
  }
}
output:
{
  "input": {
    "version": "0",
    "id": "d9c5fec0-d08d-6abd-4ea5-0107fbbce47d",
    "detail-type": "EBS Multi-Volume Snapshots Completion Status",
    "source": "aws.ec2",
    "account": "12345678",
    "time": "2021-11-12T12:08:16Z",
    "region": "us-east-1",
    "resources": [
      "arn:aws:ec2::us-east-1:snapshot/snap-0a98c2a42ee266123"
    ]
  },
  "snapshotName": "snap-0a98c2a42ee266123"
}
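To make the nesting easier to follow, here is the same extraction written out in plain Python; it only illustrates what the two intrinsic functions compute and is not part of the state machine:

arn = "arn:aws:ec2::us-east-1:snapshot/snap-0a98c2a42ee266123"

# States.StringSplit(arn, '/') ->
#   ["arn:aws:ec2::us-east-1:snapshot", "snap-0a98c2a42ee266123"]
parts = arn.split("/")

# States.ArrayGetItem(parts, 1) -> the second element, i.e. the snapshot id
snapshot_name = parts[1]
print(snapshot_name)  # snap-0a98c2a42ee266123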

Google Action / Dialogflow : how to ask for geolocation

I'm trying to implement a simple app for Google Assistant.
All works fine, but now I have a problem with the "permission" helper:
https://developers.google.com/actions/assistant/helpers#helper_intents
I have an intent connected with a webhook to my Java application. When a user types a sentence similar to "near to me", I want to ask him for his location and then use lat/lon to perform a search.
E.g.: Brazilian restaurant near to me
My intent "searchRestaurant" is fired.
I receive the webhook request and I parse it.
If I find a parameter that is connected to a sentence like "near to me", then instead of responding with a "Card" or a "List" I return a JSON that represents the helper request:
{
  "conversationToken": "[]",
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {
        "initialPrompts": [
          {
            "textToSpeech": "PLACEHOLDER_FOR_PERMISSION"
          }
        ],
        "noInputPrompts": []
      },
      "possibleIntents": [
        {
          "intent": "actions.intent.PERMISSION",
          "inputValueData": {
            "@type": "type.googleapis.com/google.actions.v2.PermissionValueSpec",
            "optContext": "Posso accedere alla tua posizione?",
            "permission": [
              "NAME",
              "DEVICE_PRECISE_LOCATION"
            ]
          }
        }
      ]
    }
  ]
}
but something seems to be wrong, and I receive an error:
"{\n \"responseMetadata\": {\n \"status\": {\n \"code\": 10,\n \"message\": \"Failed to parse Dialogflow response into AppResponse because of empty speech response\",\n \"details\": [{\n \"#type\": \"type.googleapis.com/google.protobuf.Value\",\n \"value\": \"{\\"id\\":\\"1cc45c5e-c398-4ea7-98a5-408f31ce142d\\",\\"timestamp\\":\\"2018-08-02T14:45:05.752Z\\",\\"lang\\":\\"it\\",\\"result\\":{},\\"alternateResult\\":{},\\"status\\":{\\"code\\":206,\\"errorType\\":\\"partial_content\\",\\"errorDetails\\":\\"Webhook call failed. Error: Failed to parse webhook JSON response: Cannot find field: conversationToken in message google.cloud.dialogflow.v2.WebhookResponse.\\"},\\"sessionId\\":\\"1533221100163\\"}\"\n }]\n }\n }\n}"
The "conversationToken" is filled, so I don't understand the error message.
Maybe I'm trying to perform the operation in the wrong way.
So, which is the correct way to call this helper?
I've created a second intent "askGeolocation" that has "actions_intent_PERMISSION" as its "Event" and, if I understand the documentation correctly, it should be triggered if the helper request is correct.
How can I get this working?
UPDATE:
I found some examples of the JSON response for asking permission, and it seems it should be different from the one above that I'm using:
https://github.com/dialogflow/fulfillment-webhook-json/blob/master/responses/v2/ActionsOnGoogle/AskForPermission.json
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "systemIntent": {
        "intent": "actions.intent.PERMISSION",
        "data": {
          "@type": "type.googleapis.com/google.actions.v2.PermissionValueSpec",
          "optContext": "To deliver your order",
          "permissions": [
            "NAME",
            "DEVICE_PRECISE_LOCATION"
          ]
        }
      }
    }
  }
}
So I've implemented it, and now the response seems to be good (no more errors when parsing it), but I still receive an error on its validation:
UnparseableJsonResponse
API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: "permission: Cannot find field."
So a problem still persists.
Does anyone know the cause?
Thanks
After some tests I found the problem.
I was returning a wrong JSON response with "permission" instead of "permissions":
"permissions": [
  "NAME",
  "DEVICE_PRECISE_LOCATION"
]
So the steps to ask for location are correct. I report them here as a little tutorial, in order to help whoever is facing this for the first time:
1) In Dialogflow, add some logic to your intent in order to understand when it is OK to ask the user for his location. In my case, I've added a "parameter" that identifies sentences like "nearby" and so on.
2) When my intent is fired, my Java application receives a request like this:
...
"queryResult": {
  "queryText": "ristorante argentino qui vicino",
  "action": "bycategory",
  "parameters": {
    "askgeolocation": "qui vicino",
    "TipoRistorante": ["ristorante", "argentino"]
  },
...
3) If "askgeolocation" parameter is filled, instead to return a "simple message" o other type of message, i return a json for ask the permission to geolocation :
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "systemIntent": {
        "intent": "actions.intent.PERMISSION",
        "data": {
          "@type": "type.googleapis.com/google.actions.v2.PermissionValueSpec",
          "optContext": "To deliver your order",
          "permissions": [
            "NAME",
            "DEVICE_PRECISE_LOCATION"
          ]
        }
      }
    }
  }
}
4) You MUST have a second intent that is configured with the "actions_intent_PERMISSION" event:
No training phrases
No action and parameters
No responses
But with fulfillment active.
5) Once your response arrives to Google Assistant, the permission prompt is shown to the user.
6) Now, if the user answers "OK", you receive this JSON on your webhook:
{
"responseId": "32cf46cf-80d8-xxxxxxxxxxxxxxxxxxxxx",
"queryResult": {
"queryText": "actions_intent_PERMISSION",
"action": "geoposition",
"parameters": {
},
"allRequiredParamsPresent": true,
"fulfillmentMessages": [{
"text": {
"text": [""]
}
}],
"outputContexts": [{
"name": "projects/xxxxxxxxxxxxxxxxxxxxx"
}, {
"name": "projects/xxxxxxxxxxxxxxxxxxxxx",
"parameters": {
"PERMISSION": true
}
}, {
"name": "projects/xxxxxxxxxxxxxxxxxxxxx"
}, {
"name": "projects/xxxxxxxxxxxxxxxxxxxxx"
}, {
"name": "projects/xxxxxxxxxxxxxxxxxxxxx"
}, {
"name": "projects/xxxxxxxxxxxxxxxxxxxxx"
}],
"intent": {
"name": "projects/xxxxxxxxxxxxxxxxxxxxx",
"displayName": "geoposition"
},
"intentDetectionConfidence": 1.0,
"languageCode": "it"
},
"originalDetectIntentRequest": {
"source": "google",
"version": "2",
"payload": {
"isInSandbox": true,
"surface": {
"capabilities": [{
"name": "actions.capability.MEDIA_RESPONSE_AUDIO"
}, {
"name": "actions.capability.SCREEN_OUTPUT"
}, {
"name": "actions.capability.AUDIO_OUTPUT"
}, {
"name": "actions.capability.WEB_BROWSER"
}]
},
"requestType": "SIMULATOR",
"inputs": [{
"rawInputs": [{
"inputType": "KEYBOARD"
}],
"arguments": [{
"textValue": "true",
"name": "PERMISSION",
"boolValue": true
}],
"intent": "actions.intent.PERMISSION"
}],
"user": {
"lastSeen": "2018-08-03T08:55:20Z",
"permissions": ["NAME", "DEVICE_PRECISE_LOCATION"],
"profile": {
"displayName": ".... full name of the user ...",
"givenName": "... name ...",
"familyName": "... surname ..."
},
"locale": "it-IT",
"userId": "xxxxxxxxxxxxxxxxxxxxx"
},
"device": {
"location": {
"coordinates": {
"latitude": 45.xxxxxx,
"longitude": 9.xxxxxx
}
}
},
"conversation": {
"conversationId": "xxxxxxxxxxxxxxxxxxxxx",
"type": "ACTIVE",
"conversationToken": "[]"
},
"availableSurfaces": [{
"capabilities": [{
"name": "actions.capability.SCREEN_OUTPUT"
}, {
"name": "actions.capability.AUDIO_OUTPUT"
}, {
"name": "actions.capability.WEB_BROWSER"
}]
}]
}
},
"session": "projects/xxxxxxxxxxxxxxxxxxxxx"
}
which contains name/surname and latitude/longitude. This information can be saved in your application, in order not to perform these steps again.
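The Java webhook itself isn't shown in the answer, so here is a minimal sketch of the same flow in Python/Flask; the /webhook route is an assumption, the "askgeolocation" parameter comes from step 2, the returned payload is the one from step 3, and the coordinates are read from the step 6 request:

from flask import Flask, request, jsonify

app = Flask(__name__)

PERMISSION_RESPONSE = {
    "payload": {
        "google": {
            "expectUserResponse": True,
            "systemIntent": {
                "intent": "actions.intent.PERMISSION",
                "data": {
                    "@type": "type.googleapis.com/google.actions.v2.PermissionValueSpec",
                    "optContext": "Posso accedere alla tua posizione?",
                    "permissions": ["NAME", "DEVICE_PRECISE_LOCATION"],
                },
            },
        }
    }
}

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    params = body.get("queryResult", {}).get("parameters", {})

    # Step 3: the user said something like "qui vicino" -> ask for permission.
    if params.get("askgeolocation"):
        return jsonify(PERMISSION_RESPONSE)

    # Step 6: permission granted -> coordinates arrive in originalDetectIntentRequest.
    payload = body.get("originalDetectIntentRequest", {}).get("payload", {})
    coords = payload.get("device", {}).get("location", {}).get("coordinates", {})
    lat = coords.get("latitude")
    lng = coords.get("longitude")
    return jsonify({"fulfillmentText": f"Searching near {lat}, {lng}"})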
I hope this helps.
Davide
In your intent, you can ask for a parameter with a custom entity. You can do it like this:
define an entity such as "near"
put all the synonyms for "near" for which you want to trigger the location permission in this entity
do not mark this parameter as "required"
do not put any prompt
in the training phrases, add some phrases with this parameter
in your webhook, keep a check on the parameter; if present, ask for permission, otherwise continue
add a permission event to another intent
do your post-permission processing in that intent
[Screenshot: Entity]
[Screenshot: Intent]
I hope you get it.
There are samples on this topic specifically that guide you through exactly what's needed for requesting permissions in Node and Java.
Note: There are helper intents samples available in Node and Java as well.

How can I create a response with rich messages in Dialogflow?

Currently, when an intent is invoked, I am calling a webhook and getting a response from the web service with the JSON structure shown below.
{
  "speech": "this text is spoken out loud if the platform supports voice interactions",
  "displayText": "this text is displayed visually"
}
This is mere text. Alternatively, what response do I have to return to display, for example, a list?
I tried the rich message section of the Dialogflow documentation, but those structures didn't work.
Every time DialogFlow matches an intent, you have the possibility to ask DialogFlow to send a request to a specific endpoint. An endpoint which you’ll obviously have to code.
That will allow you to retrieve the matched intent, as well as the matched parameters and contexts, and do some useful work with those.
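As a minimal sketch of such an endpoint (Python/Flask here, with an assumed route name; the field names follow the Dialogflow v2 webhook format used in the examples below):

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/dialogflow-webhook", methods=["POST"])
def fulfill():
    body = request.get_json()
    intent = body["queryResult"]["intent"]["displayName"]
    parameters = body["queryResult"].get("parameters", {})

    # A plain-text reply; rich responses go in fulfillmentMessages or in the
    # platform-specific payload, as the examples below show.
    return jsonify({"fulfillmentText": f"Matched {intent} with {parameters}"})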
How to use Message Object in Dialogflow
Kommunicate - Custom webhook Dialogflow integration example
Example of rich message with webhook
{
"fulfillmentMessages": [{
"payload": {
"message": "Object1",
"platform": "", // Example - Facebook, Slack...etc
{
"name": "Save Promo",
"action": {
"type": "quickReply",
"payload": {
"message": "text will be sent as message",
"replyMetadata": {
"key1": "value1"
}
}
}
},
{
"name": "Save Coupon",
"action": {
"type": "quickReply",
"payload": {
"message": "text will be sent as message",
"replyMetadata": {
"key1": "value1"
}
}
}
}
}, {
"payload": {
"message": "Object2",
"platform": "" // Example - Facebook, Slack...etc
}
}]
}
To add an Actions on Google List as part of the reply, you'll need to use the data field in your response to include a richResponse with what should be said, along with a systemIntent that contains the list information.
You can see more examples at their github example repository, but here is the one for showing a list:
{
"data": {
"google": {
"expectUserResponse": true,
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "Choose an item"
}
}
]
},
"systemIntent": {
"intent": "actions.intent.OPTION",
"data": {
"#type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
"listSelect": {
"title": "Hello",
"items": [
{
"optionInfo": {
"key": "first title"
},
"description": "first description",
"image": {
"url": "https://developers.google.com/actions/images/badges/XPM_BADGING_GoogleAssistant_VER.png",
"accessibilityText": "first alt"
},
"title": "first title"
},
{
"optionInfo": {
"key": "second"
},
"description": "second description",
"image": {
"url": "https://lh3.googleusercontent.com/Nu3a6F80WfixUqf_ec_vgXy_c0-0r4VLJRXjVFF_X_CIilEu8B9fT35qyTEj_PEsKw",
"accessibilityText": "second alt"
},
"title": "second title"
}
]
}
}
}
}
}
}

Google Actions SDK not passing arguments

I have been unable to get the google-actions SDK for Node.js to pass arguments. I installed the https://github.com/actions-on-google/actionssdk-eliza-nodejs sample project and noticed arguments are not working for that project either. Any insight?
In the web simulator I entered "i am feeling sad"
Here is the request I get
{
"query": "i am feeling sad",
"accessToken": "**masked**",
"expectUserResponse": true,
"conversationToken": "CiZDIzU4O..."content_copy,
"debugInfo": {
"assistantToAgentDebug": {
"assistantToAgentJson": {
"user": {
"user_id": "**masked**"
},
"conversation": {
"conversation_id": "1484523995718",
"type": 2,
"conversation_token": "{\"elizaInstance\":{\"noRandom\":false,\"capitalizeFirstLetter\":true,\"debug\":false,\"memSize\":20,\"version\":\"1.1 (original)\",\"quit\":false,\"mem\":[],\"lastchoice\":[[-1],[-1],[-1],[-1],[-1,-1,-1],[-1,-1],[-1],[-1],[-1],[-1,-1,-1],[-1,-1,-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1,0,-1],[-1,-1,-1],[-1],[-1,0,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1],[-1,-1,-1,-1],[-1],[-1,-1],[-1,-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1,-1,-1],[-1]],\"sentence\":\"i am feeling sad\"}}"
},
"inputs": [
{
"intent": "assistant.intent.action.TEXT",
"raw_inputs": [
{
"input_type": 2,
"query": "i am feeling sad"
}
],
"arguments": [
{
"name": "text",
"raw_text": "i am feeling sad",
"text_value": "i am feeling sad"
}
]
}
]
}
}
}
}
text_value should equal "sad", not "i am feeling sad", based on eliza.json, which has this:
{
"versionLabel": "Eliza v1",
"agentInfo": {
"languageCode": "en-US",
"projectId": "**masked**",
"invocationNames": [
"eliza"
],
"voiceName": "female_1"
},
"actions": [
{
"description": "Start an Eliza consultation",
"initialTrigger": {
"intent": "assistant.intent.action.MAIN"
},
"httpExecution": {
"url": "https://**masked**"
}
},
{
"description": "Deep link to Eliza consultation",
"initialTrigger": {
"intent": "raw.input",
"queryPatterns": [
{
"queryPattern": "my emotional state is $SchemaOrg_Text:text"
},
{
"queryPattern": "I am concerned about $SchemaOrg_Text:text"
},
{
"queryPattern": "I am feeling $SchemaOrg_Text:text"
},
{
"queryPattern": "I need to talk about my feelings"
}
]
},
"httpExecution": {
"url": "**masked**"
}
}
],
"deploymentStatus": {
"state": "NEW"
},
"versionId": "1"
}
The Google Actions SDK will parse patterns and correctly bind arguments for intents that activate your action. For example, if you say
At my action I am feeling sad
You will receive arguments that look like this:
"arguments": [
{
"name": "text",
"raw_text": "sad",
"text_value": "sad"
},
{
"name": "trigger_query",
"raw_text": "i am feeling sad",
"text_value": "i am feeling sad"
}
]
However, if you request input from the action by setting expect_user_response to true, the arguments are always returned in raw form using the intent "assistant.intent.action.TEXT". It is up to you to parse those raw arguments as indicated in the other answer to this question.
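As a toy illustration of that parsing step (a hand-rolled pattern, not part of the SDK; a real NLP layer, or API.AI as recommended below, is the more robust option):

import re

def extract_feeling(raw_text: str):
    # Mirrors the "I am feeling $SchemaOrg_Text:text" query pattern by hand.
    match = re.search(r"i am feeling (.+)", raw_text, re.IGNORECASE)
    return match.group(1) if match else None

print(extract_feeling("i am feeling sad"))  # -> sad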
Also note that as far as I can tell, the Schema_Org types are not returned per the documentation. More information here.
If you are using the Actions SDK, then you have to use your own NLP for understanding the raw user query and for extracting arguments. If you don't have your own NLP, then we recommend that you use API.AI.