Dialogflow is not detecting the intent with context - REST

I am connecting to the Dialogflow REST API v2beta1 method projects.agent.sessions.detectIntent. In the first request I send a text, and the response returns the expected result, which contains outputContexts. When I make the 2nd request I send the context, and it should match the intent linked to that context, but instead it returns the Default Fallback Intent.
What may be the problem with the 2nd request?
Here are the URL and the requests with their respective responses; below I added screenshots of the intents that were expected to match.
URL
https://dialogflow.googleapis.com/v2beta1/projects/project-name/agent/sessions/12343:detectIntent
1st request
{
"queryInput":{
"text":{
"text":"play a video about love",
"languageCode":"en"
}
}
}
1st response
{
"responseId": "15a3b767-52fe-4fc2-8ffd-9d7bb9c6961a",
"queryResult": {
"queryText": "play a video about love",
"action": "video.play",
"parameters": {
"organization": "",
"tag": "Love",
"item": ""
},
"allRequiredParamsPresent": true,
"fulfillmentText": "Here is a video about Love!",
"fulfillmentMessages": [
{
"platform": "ACTIONS_ON_GOOGLE",
"simpleResponses": {
"simpleResponses": [
{
"textToSpeech": "Here is a video about Love!"
}
]
}
},
{
"text": {
"text": [
"Here is a video about Love!"
]
}
}
],
"outputContexts": [
{
"name": "projects/project-name/agent/sessions/12343/contexts/play-video",
"lifespanCount": 5,
"parameters": {
"tag": "Love",
"organization": "",
"tag.original": "love",
"item": "",
"organization.original": "",
"item.original": ""
}
}
],
"intent": {
"name": "projects/project-name/agent/intents/9e5d2bbc-81f3-4700-8740-01504b05443f",
"displayName": "video-play"
},
"intentDetectionConfidence": 1,
"languageCode": "en"
}
}
2nd request (where the problem should be)
{
"queryParams":{
"contexts":[
{
"name":"projects/project-name/agent/sessions/12342/contexts/play-video"
}
]
},
"queryInput":{
"text":{
"text":"that video matters a lot for me",
"languageCode":"en"
}
}
}
2nd response
{
"responseId": "40d1f94f-4673-4644-aa53-99c854ff2596",
"queryResult": {
"queryText": "that video matters a lot for me",
"action": "input.unknown",
"parameters": {},
"allRequiredParamsPresent": true,
"fulfillmentText": "Can you say that again?",
"fulfillmentMessages": [
{
"text": {
"text": [
"Sorry, what was that?"
]
}
}
],
"intent": {
"name": "projects/project-name/agent/intents/10c88e8d-f16a-4905-b829-f596d3b3c588",
"displayName": "Default Fallback Intent",
"isFallback": true
},
"intentDetectionConfidence": 1,
"languageCode": "en"
}
}
Screenshots of the intents expected to match
1st intent
2nd intent
Useful documentation
Doc of the method: https://dialogflow.com/docs/reference/api-v2/rest/v2beta1/projects.agent.sessions/detectIntent
Doc of the Context object: https://dialogflow.com/docs/reference/api-v2/rest/Shared.Types/Context
Doc of the Params object to be sent: https://dialogflow.com/docs/reference/api-v2/rest/v2beta1/QueryParameters

It looks like your second request has an incomplete context. Although you're specifying the name, you're not including the lifespanCount parameter. Since you're not providing it, it defaults to 0, which means the context has already expired.
You should send back exactly what you received in the outputContexts field of the previous reply.
{
"queryParams":{
"contexts":[
{
"name": "projects/project-name/agent/sessions/12343/contexts/play-video",
"lifespanCount": 5,
"parameters": {
"tag": "Love",
"organization": "",
"tag.original": "love",
"item": "",
"organization.original": "",
"item.original": ""
}
}
]
},
"queryInput":{
"text":{
"text":"that video matters a lot for me",
"languageCode":"en"
}
}
}
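For reference, here is a minimal sketch of both calls in Python with the requests library (not part of the original question: the OAuth access token, project name and session id are placeholders), showing the key point of echoing the outputContexts from the first response into the second request:

import requests

ACCESS_TOKEN = "ya29...."  # placeholder OAuth 2.0 access token
SESSION_URL = ("https://dialogflow.googleapis.com/v2beta1/"
               "projects/project-name/agent/sessions/12343:detectIntent")
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}",
           "Content-Type": "application/json"}

def detect_intent(text, contexts=None):
    """Call detectIntent, optionally passing the contexts from a previous turn."""
    body = {"queryInput": {"text": {"text": text, "languageCode": "en"}}}
    if contexts:
        body["queryParams"] = {"contexts": contexts}
    resp = requests.post(SESSION_URL, json=body, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["queryResult"]

# 1st request: no contexts yet.
first = detect_intent("play a video about love")

# 2nd request: echo back outputContexts exactly as received, so that
# name, lifespanCount and parameters are all preserved.
second = detect_intent("that video matters a lot for me",
                       contexts=first.get("outputContexts", []))
print(second["intent"]["displayName"])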

Related

Possible Notion API bug - Can't create Database Page with URL data

I have a Notion database that I'm trying to write to via the API.
I can write to text fields, but when I try to write to a URL field, I receive the following error:
{
"object": "error",
"status": 400,
"code": "validation_error",
"message": "body failed validation. Fix one:\nbody.properties.O?HT.id should be defined, instead was `undefined`.\nbody.properties.O?HT.name should be defined, instead was `undefined`.\nbody.properties.O?HT.start should be defined, instead was `undefined`."
}
This is the API description of that field:
"LinkedIn": {
"id": "O%3FHT",
"name": "LinkedIn",
"type": "url",
"url": {}
},
This is the body of the API POST request that's failing:
{
"parent": {
"database_id": "f016c02beeff4a34bf298ceb8a8a079f"
},
"properties": {
"title": [
{
"text": {
"content": "Mark Zuckerberg"
}
}
],
"%3DCzE": [
{
"text": {
"content": "CEO"
}
}
],
"HihI": [
{
"text": {
"content": "Facebook"
}
}
],
"JC%3BS": [
{
"text": {
"content": "Menlo Park, CA"
}
}
],
"O%3FHT": {
"url": "https://www.linkedin.com/in/mark-zuckerberg/"
},
"wS%7BN": {
"url": "https://www.linkedin.com/company/facebook/mycompany/verification/"
}
}
}
I'm basing the body contents on the Postman docs provided by Notion: https://www.postman.com/notionhq/workspace/notion-s-api-workspace/documentation/15568543-d990f9b7-98d3-47d3-9131-4866ab9c6df2

Error trying to use an uploaded image in LinkedIn

I'm trying to use the LinkedIn API to share a post with an image. I've followed all the steps in the documentation:
https://learn.microsoft.com/es-es/linkedin/consumer/integrations/self-serve/share-on-linkedin?context=linkedin/consumer/context#share-media
First, I register the upload with a POST to https://api.linkedin.com/v2/assets?action=registerUpload and this body:
{
"registerUploadRequest": {
"recipes": [
"urn:li:digitalmediaRecipe:feedshare-image"
],
"owner": "urn:li:person:{{owner_ID}}",
"serviceRelationships": [
{
"relationshipType": "OWNER",
"identifier": "urn:li:userGeneratedContent"
}
]
}
}
I get this response:
{
"value": {
"uploadMechanism": {
"com.linkedin.digitalmedia.uploading.MediaUploadHttpRequest": {
"uploadUrl": "https://api.linkedin.com/mediaUpload/C4E22AQF6AS1WhrD1lw/feedshare-uploadedImage/0?ca=vector_feedshare&cn=uploads&m=AQKLg3lSJnswAgAAAXSgLwWp6iAPzjh6E_5XQh8QuP1Aucf_j9bgW3m5vQ&app=74851466&sync=0&v=beta&ut=2h9vtCSeuP0ps1",
"headers": {
"media-type-family": "STILLIMAGE"
}
}
},
"asset": "urn:li:digitalmediaAsset:C4E22AQF6AS1WhrD1lw",
"mediaArtifact": "urn:li:digitalmediaMediaArtifact:(urn:li:digitalmediaAsset:C4E22AQF6AS1WhrD1lw,urn:li:digitalmediaMediaArtifactClass:feedshare-uploadedImage)"
}
}
Then I use the uploadUrl to make another POST with the image as the body, and I get a 201 Created response.
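For illustration, that upload step can be done like this in Python with the requests library (a sketch only; the access token, file name and uploadUrl value are placeholders, and the POST-with-binary-body approach simply mirrors what is described above):

import requests

ACCESS_TOKEN = "..."  # placeholder member access token
upload_url = "https://api.linkedin.com/mediaUpload/..."  # value returned by registerUpload

# Send the raw image bytes as the request body, authorized with the same token.
with open("image.png", "rb") as image_file:
    resp = requests.post(
        upload_url,
        data=image_file,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
print(resp.status_code)  # a 201 Created is expected here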
I've checked the upload with a GET to https://api.linkedin.com/v2/assets/C4E22AQF6AS1WhrD1lw, and it has been uploaded correctly:
{
"recipes": [
{
"recipe": "urn:li:digitalmediaRecipe:feedshare-image",
"status": "AVAILABLE"
}
],
"serviceRelationships": [
{
"relationshipType": "OWNER",
"identifier": "urn:li:userGeneratedContent"
}
],
"mediaTypeFamily": "STILLIMAGE",
"created": 1600415270252,
"id": "C4E22AQF6AS1WhrD1lw",
"lastModified": 1600415345915,
"status": "ALLOWED"
}
When I try to share a post referencing this image I get this error:
{"message":"com.linkedin.restli.server.RestLiServiceException [HTTP Status:401]: com.linkedin.content.common.ResponseException: Writers of type person are not authorized to modify UserGeneratedContent.","status":401}
I'm sending a POST to this URL https://api.linkedin.com/v2/ugcPosts with this body:
{
"author": "urn:li:person:{{person_id}}",
"lifecycleState": "PUBLISHED",
"specificContent": {
"com.linkedin.ugc.ShareContent": {
"shareCommentary": {
"text": "Feeling inspired after meeting so many talented individuals at this year's conference. #talentconnect"
},
"shareMediaCategory": "IMAGE",
"media": [
{
"status": "READY",
"description": {
"text": ""
},
"media": "urn:li:digitalmediaAsset:C4E22AQF6AS1WhrD1lw",
"title": {
"text": ""
}
}
]
}
},
"visibility": {
"com.linkedin.ugc.MemberNetworkVisibility": "PUBLIC"
}
}
I can share a text-only post without trouble, look up my person ID, and even like a post, but I'm stuck on this.
Can anyone tell me what I'm doing wrong?
Solved: I had an error in the person_id variable, so I was trying to create the post with the wrong ID.

Google Action / Dialogflow : how to ask for geolocation

I'm trying to implement a simple app for Google Assistant.
Everything works fine, but now I have a problem with the "permission" helper:
https://developers.google.com/actions/assistant/helpers#helper_intents
I have an intent connected via webhook to my Java application. When a user types a sentence similar to "near to me", I want to ask for their location and then use the lat/lon to perform a search.
e.g.: Brazilian restaurant near to me
my intent "searchRestaurant" is fired
I receive the webhook request and I parse it
if I find a parameter that is connected to a sentence like "near to me", then instead of responding with a "Card" or a "List" I return a JSON that represents the helper request:
{
"conversationToken": "[]",
"expectUserResponse": true,
"expectedInputs": [
{
"inputPrompt": {
"initialPrompts": [
{
"textToSpeech": "PLACEHOLDER_FOR_PERMISSION"
}
],
"noInputPrompts": []
},
"possibileIntents": [
{
"intent": "actions.intent.PERMISSION",
"inputValueData": {
"#type": "type.googleapis.com/google.actions.v2.PermissionValueSpec",
"optContext": "Posso accedere alla tua posizione?",
"permission": [
"NAME",
"DEVICE_PRECISE_LOCATION"
]
}
}
]
}
]
}
but something seems to be wrong, and I receive an error:
"{\n \"responseMetadata\": {\n \"status\": {\n \"code\": 10,\n \"message\": \"Failed to parse Dialogflow response into AppResponse because of empty speech response\",\n \"details\": [{\n \"#type\": \"type.googleapis.com/google.protobuf.Value\",\n \"value\": \"{\\"id\\":\\"1cc45c5e-c398-4ea7-98a5-408f31ce142d\\",\\"timestamp\\":\\"2018-08-02T14:45:05.752Z\\",\\"lang\\":\\"it\\",\\"result\\":{},\\"alternateResult\\":{},\\"status\\":{\\"code\\":206,\\"errorType\\":\\"partial_content\\",\\"errorDetails\\":\\"Webhook call failed. Error: Failed to parse webhook JSON response: Cannot find field: conversationToken in message google.cloud.dialogflow.v2.WebhookResponse.\\"},\\"sessionId\\":\\"1533221100163\\"}\"\n }]\n }\n }\n}"
The "conversationToken" is filled, so I don't understand the error message.
Maybe I'm performing the operation in the wrong way.
So, what is the correct way to call this helper?
I've created a second intent, "askGeolocation", that has "actions_intent_PERMISSION" as its "Event" and, if I understand the documentation correctly, should be triggered if the helper request is correct.
How can I get this working?
UPDATE:
I found some examples of the JSON response for asking permission, and it seems it should be different from the one above that I'm using:
https://github.com/dialogflow/fulfillment-webhook-json/blob/master/responses/v2/ActionsOnGoogle/AskForPermission.json
{
"payload": {
"google": {
"expectUserResponse": true,
"systemIntent": {
"intent": "actions.intent.PERMISSION",
"data": {
"#type": "type.googleapis.com/google.actions.v2.PermissionValueSpec",
"optContext": "To deliver your order",
"permissions": [
"NAME",
"DEVICE_PRECISE_LOCATION"
]
}
}
}
}
}
So I've implemented it, and now the response seems to be good (no more errors parsing it), but I still receive a validation error:
UnparseableJsonResponse
API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: "permission: Cannot find field."
So a problem still persists.
Does anyone know the cause?
Thanks
After some tests I found the problem.
I was returning a wrong JSON response, with "permission" instead of "permissions":
"permission**s**": [
"NAME",
"DEVICE_PRECISE_LOCATION"
]
So the steps to ask for the location are correct. I'll report them here as a little tutorial in order to help anyone who is facing this for the first time:
1) In Dialogflow, add some logic to your intent in order to understand when it is OK to ask the user for their location. In my case, I've added a "parameter" that identifies sentences like "nearby" and so on.
2) When my intent is fired, my Java application receives a request like this:
...
"queryResult": {
"queryText": "ristorante argentino qui vicino",
"action": "bycategory",
"parameters": {
"askgeolocation": "qui vicino",
"TipoRistorante": ["ristorante", "argentino"]
},
...
3) If "askgeolocation" parameter is filled, instead to return a "simple message" o other type of message, i return a json for ask the permission to geolocation :
{
"payload": {
"google": {
"expectUserResponse": true,
"systemIntent": {
"intent": "actions.intent.PERMISSION",
"data": {
"#type": "type.googleapis.com/google.actions.v2.PermissionValueSpec",
"optContext": "To deliver your order",
"permissions": [
"NAME",
"DEVICE_PRECISE_LOCATION"
]
}
}
}
}
}
4) You MUST have a second intent configured with the "actions_intent_PERMISSION" event:
No training phrases
No Action and params
No Responses
But with Fulfillment active:
5) Once your response arrives at Google Assistant, this is the message that appears:
6) Now, if the user answers "ok", you receive this JSON on your webhook:
{
"responseId": "32cf46cf-80d8-xxxxxxxxxxxxxxxxxxxxx",
"queryResult": {
"queryText": "actions_intent_PERMISSION",
"action": "geoposition",
"parameters": {
},
"allRequiredParamsPresent": true,
"fulfillmentMessages": [{
"text": {
"text": [""]
}
}],
"outputContexts": [{
"name": "projects/xxxxxxxxxxxxxxxxxxxxx"
}, {
"name": "projects/xxxxxxxxxxxxxxxxxxxxx",
"parameters": {
"PERMISSION": true
}
}, {
"name": "projects/xxxxxxxxxxxxxxxxxxxxx"
}, {
"name": "projects/xxxxxxxxxxxxxxxxxxxxx"
}, {
"name": "projects/xxxxxxxxxxxxxxxxxxxxx"
}, {
"name": "projects/xxxxxxxxxxxxxxxxxxxxx"
}],
"intent": {
"name": "projects/xxxxxxxxxxxxxxxxxxxxx",
"displayName": "geoposition"
},
"intentDetectionConfidence": 1.0,
"languageCode": "it"
},
"originalDetectIntentRequest": {
"source": "google",
"version": "2",
"payload": {
"isInSandbox": true,
"surface": {
"capabilities": [{
"name": "actions.capability.MEDIA_RESPONSE_AUDIO"
}, {
"name": "actions.capability.SCREEN_OUTPUT"
}, {
"name": "actions.capability.AUDIO_OUTPUT"
}, {
"name": "actions.capability.WEB_BROWSER"
}]
},
"requestType": "SIMULATOR",
"inputs": [{
"rawInputs": [{
"inputType": "KEYBOARD"
}],
"arguments": [{
"textValue": "true",
"name": "PERMISSION",
"boolValue": true
}],
"intent": "actions.intent.PERMISSION"
}],
"user": {
"lastSeen": "2018-08-03T08:55:20Z",
"permissions": ["NAME", "DEVICE_PRECISE_LOCATION"],
"profile": {
"displayName": ".... full name of the user ...",
"givenName": "... name ...",
"familyName": "... surname ..."
},
"locale": "it-IT",
"userId": "xxxxxxxxxxxxxxxxxxxxx"
},
"device": {
"location": {
"coordinates": {
"latitude": 45.xxxxxx,
"longitude": 9.xxxxxx
}
}
},
"conversation": {
"conversationId": "xxxxxxxxxxxxxxxxxxxxx",
"type": "ACTIVE",
"conversationToken": "[]"
},
"availableSurfaces": [{
"capabilities": [{
"name": "actions.capability.SCREEN_OUTPUT"
}, {
"name": "actions.capability.AUDIO_OUTPUT"
}, {
"name": "actions.capability.WEB_BROWSER"
}]
}]
}
},
"session": "projects/xxxxxxxxxxxxxxxxxxxxx"
}
which contains name/surname and latitude/longitude. This information can be saved in your application so you don't have to perform these steps again.
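To show how those fields can be read on the webhook side, here is a small illustrative sketch in Python (not the original Java code; req is assumed to be the already-parsed JSON request shown above):

def extract_location(req):
    """Return (latitude, longitude) from the webhook request above,
    or None if the user denied the permission."""
    payload = req.get("originalDetectIntentRequest", {}).get("payload", {})

    # The PERMISSION argument tells you whether the user accepted.
    granted = any(
        arg.get("name") == "PERMISSION" and arg.get("boolValue")
        for inp in payload.get("inputs", [])
        for arg in inp.get("arguments", [])
    )
    if not granted:
        return None

    coords = payload.get("device", {}).get("location", {}).get("coordinates", {})
    return coords.get("latitude"), coords.get("longitude")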
I hope this helps.
Davide
In your intent, you can ask for a parameter with a custom entity. You can do this as follows:
Define an entity, for example "near"
Put into this entity all the synonyms of "near" for which you want to trigger the location permission
Do not mark this parameter as "required"
Do not put any prompt
In the training phrases, add some phrases with this parameter
In your webhook, keep a check on the parameter: if it is present, ask for permission; if not, continue (see the sketch at the end of this answer)
Add a permission event to another intent
Do your post-permission processing in that intent
Entity
Intent
I hope you get it.
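Putting both webhook branches together, a rough sketch (in Python and framework-agnostic; the parameter name askgeolocation follows the tutorial above, and the reply text is a placeholder):

def handle_search_restaurant(req):
    """If the 'near me' parameter was matched, ask the PERMISSION helper;
    otherwise reply normally."""
    params = req["queryResult"].get("parameters", {})

    if params.get("askgeolocation"):
        # Ask Actions on Google for the name and precise device location.
        return {
            "payload": {
                "google": {
                    "expectUserResponse": True,
                    "systemIntent": {
                        "intent": "actions.intent.PERMISSION",
                        "data": {
                            "@type": "type.googleapis.com/google.actions.v2.PermissionValueSpec",
                            "optContext": "To find restaurants around you",
                            "permissions": ["NAME", "DEVICE_PRECISE_LOCATION"]
                        }
                    }
                }
            }
        }

    # No location needed: continue with a normal search / text reply.
    return {"fulfillmentText": "Here are some restaurants I found."}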
There are samples on this topic specifically that guide you through exactly what's needed for requesting permissions in Node and Java.
Note: There are helper intents samples available in Node and Java as well.

How can I create a response with rich messages in Dialogflow?

Currently, when an intent is invoked, I am calling a webhook and getting a response from the web service with the JSON structure shown below.
{
"speech": "this text is spoken out loud if the platform supports voice interactions",
"displayText": "this text is displayed visually"
}
This is mere text. What response do I have to get back to display, for example, a list?
I tried the rich messages section of the Dialogflow documentation, but those structures didn't work.
Every time Dialogflow matches an intent, you have the possibility to ask Dialogflow to send a request to a specific endpoint, an endpoint which you'll obviously have to code. That will allow you to retrieve the matched intent, as well as the matched parameters and contexts, and do some useful work with them.
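As a minimal illustration of such an endpoint (a sketch, not official client code; it assumes the Dialogflow v2 webhook request format and uses Flask, with an arbitrary route and reply text):

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json(force=True)
    intent = req["queryResult"]["intent"]["displayName"]
    params = req["queryResult"].get("parameters", {})
    # Build whatever reply (plain text or rich message payload) you need here.
    return jsonify({"fulfillmentText": f"You matched {intent} with {params}"})

if __name__ == "__main__":
    app.run(port=8080)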
How to use Message Object in Dialogflow
Kommunicate - Custom webhook Dialogflow integration example
Example of rich message with webhook
{
"fulfillmentMessages": [{
"payload": {
"message": "Object1",
"platform": "", // Example - Facebook, Slack...etc
// Note: the key holding the quick replies below ("actions") is a placeholder
// added only to make this example valid JSON; check the linked Kommunicate
// example for the exact field name your integration expects.
"actions": [
{
"name": "Save Promo",
"action": {
"type": "quickReply",
"payload": {
"message": "text will be sent as message",
"replyMetadata": {
"key1": "value1"
}
}
}
},
{
"name": "Save Coupon",
"action": {
"type": "quickReply",
"payload": {
"message": "text will be sent as message",
"replyMetadata": {
"key1": "value1"
}
}
}
}
]
}
}, {
"payload": {
"message": "Object2",
"platform": "" // Example - Facebook, Slack...etc
}
}]
}
To add an Actions on Google List as part of the reply, you'll need to use the data field in your response to include a richResponse with what should be said, along with a systemIntent that contains the list information.
You can see more examples at their GitHub example repository, but here is the one for showing a list:
{
"data": {
"google": {
"expectUserResponse": true,
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "Choose an item"
}
}
]
},
"systemIntent": {
"intent": "actions.intent.OPTION",
"data": {
"#type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
"listSelect": {
"title": "Hello",
"items": [
{
"optionInfo": {
"key": "first title"
},
"description": "first description",
"image": {
"url": "https://developers.google.com/actions/images/badges/XPM_BADGING_GoogleAssistant_VER.png",
"accessibilityText": "first alt"
},
"title": "first title"
},
{
"optionInfo": {
"key": "second"
},
"description": "second description",
"image": {
"url": "https://lh3.googleusercontent.com/Nu3a6F80WfixUqf_ec_vgXy_c0-0r4VLJRXjVFF_X_CIilEu8B9fT35qyTEj_PEsKw",
"accessibilityText": "second alt"
},
"title": "second title"
}
]
}
}
}
}
}
}

Google Actions SDK not passing arguments

I have been unable to get the google-actions SDK for Node.js to pass arguments. I installed the https://github.com/actions-on-google/actionssdk-eliza-nodejs sample project and noticed that arguments are not working for that project either. Any insight?
In the web simulator I entered "i am feeling sad"
Here is the request I get
{
"query": "i am feeling sad",
"accessToken": "**masked**",
"expectUserResponse": true,
"conversationToken": "CiZDIzU4O..."content_copy,
"debugInfo": {
"assistantToAgentDebug": {
"assistantToAgentJson": {
"user": {
"user_id": "**masked**"
},
"conversation": {
"conversation_id": "1484523995718",
"type": 2,
"conversation_token": "{\"elizaInstance\":{\"noRandom\":false,\"capitalizeFirstLetter\":true,\"debug\":false,\"memSize\":20,\"version\":\"1.1 (original)\",\"quit\":false,\"mem\":[],\"lastchoice\":[[-1],[-1],[-1],[-1],[-1,-1,-1],[-1,-1],[-1],[-1],[-1],[-1,-1,-1],[-1,-1,-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1,0,-1],[-1,-1,-1],[-1],[-1,0,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1],[-1,-1,-1,-1],[-1],[-1,-1],[-1,-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1,-1,-1],[-1]],\"sentence\":\"i am feeling sad\"}}"
},
"inputs": [
{
"intent": "assistant.intent.action.TEXT",
"raw_inputs": [
{
"input_type": 2,
"query": "i am feeling sad"
}
],
"arguments": [
{
"name": "text",
"raw_text": "i am feeling sad",
"text_value": "i am feeling sad"
}
]
}
]
}
}
}
}
text_value should be "sad", not "i am feeling sad", based on eliza.json, which has this:
{
"versionLabel": "Eliza v1",
"agentInfo": {
"languageCode": "en-US",
"projectId": "**masked**",
"invocationNames": [
"eliza"
],
"voiceName": "female_1"
},
"actions": [
{
"description": "Start an Eliza consultation",
"initialTrigger": {
"intent": "assistant.intent.action.MAIN"
},
"httpExecution": {
"url": "https://**masked**"
}
},
{
"description": "Deep link to Eliza consultation",
"initialTrigger": {
"intent": "raw.input",
"queryPatterns": [
{
"queryPattern": "my emotional state is $SchemaOrg_Text:text"
},
{
"queryPattern": "I am concerned about $SchemaOrg_Text:text"
},
{
"queryPattern": "I am feeling $SchemaOrg_Text:text"
},
{
"queryPattern": "I need to talk about my feelings"
}
]
},
"httpExecution": {
"url": "**masked**"
}
}
],
"deploymentStatus": {
"state": "NEW"
},
"versionId": "1"
}
The Google Actions SDK will parse patterns and correctly bind arguments for intents that activate your action. For example, if you say
At my action I am feeling sad
You will receive arguments that look like this:
"arguments": [
{
"name": "text",
"raw_text": "sad",
"text_value": "sad"
},
{
"name": "trigger_query",
"raw_text": "i am feeling sad",
"text_value": "i am feeling sad"
}
]
However, if you request input from the action by setting expect_user_response to true, the arguments are always returned in raw form using the intent "assistant.intent.action.TEXT". It is up to you to parse those raw arguments as indicated in the other answer to this question.
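For instance, a very small sketch of that parsing (illustrative only, not part of the SDK), pulling the feeling out of the raw text with a regular expression in Python:

import re

def extract_feeling(raw_text):
    """Naive parse of phrases like 'i am feeling sad' -> 'sad'."""
    match = re.search(r"\bfeeling\s+(\w+)", raw_text, re.IGNORECASE)
    return match.group(1) if match else None

print(extract_feeling("i am feeling sad"))  # -> "sad"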
Also note that as far as I can tell, the Schema_Org types are not returned per the documentation. More information here.
If you are using the Actions SDK, then you have to use your own NLP for understanding the raw user query and for extracting arguments. If you don't have your own NLP, then we recommend that you use API.AI.