I have passed a conversation from a primary application to a secondary application. I used the call https://graph.facebook.com/v2.6/me/pass_thread_control with the following data:
{"recipient": {"id": "xxxxxxxx"},
"target_app_id": xxxxxx,
"metadata": "test to pass to secondary receiver app",
"pass_thread_control": {
"new_owner_app_id": "xxxxx",
"metadata": "metadata to test"
}
}
It returned true, that is, the secondary application now has control.
The problem I have is returning the conversation to the primary application. I am using the call https://graph.facebook.com/v2.6/me/take_thread_control with:
{
  "recipient": {
    "id": "xxxxxxxx"
  },
  "metadata": "additional content that the caller wants to set"
}
and returns:
"error": {
"message": "(# 10) Only Main Receiver can call this API",
"type": "OAuthException",
"code": 10,
"error_subcode": 2018169,
"fbtrace_id": "DnBvWqt / 0bd"
}
What am I doing wrong?
What are the calls I must make consecutively?
How can I know which application has the conversation?
Another thing I have tried is making this call:
https://graph.facebook.com/v2.6/me/secondary_receivers
and it returns this message:
"error": {
"message": "(# 10) Only Main Receiver can call this API",
"type": "OAuthException",
"code": 10,
"error_subcode": 2018169,
"fbtrace_id": "Gt1WsVx9W22"
}
Do I need some permission?
take_thread_control and secondary_receivers can only be called by the Primary Receiver app. If you want the Secondary Receiver to pass thread control back to the Primary, you need to call pass_thread_control from the Secondary.
take_thread_control is specifically reserved for the Primary, while pass_thread_control is available to both the primary and the secondary.
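For example, here is a minimal sketch of the call the Secondary Receiver would make to hand the thread back (the page token, app ID and PSID below are placeholders):

import requests

PAGE_ACCESS_TOKEN = "EAAD..."   # placeholder: page token used by the Secondary Receiver
PRIMARY_APP_ID = 123456789      # placeholder: app ID of the Primary Receiver
USER_PSID = "xxxxxxxx"          # placeholder: the user's page-scoped ID

resp = requests.post(
    "https://graph.facebook.com/v2.6/me/pass_thread_control",
    params={"access_token": PAGE_ACCESS_TOKEN},
    json={
        "recipient": {"id": USER_PSID},
        "target_app_id": PRIMARY_APP_ID,     # hand control back to the Primary
        "metadata": "returning control to the primary app",
    },
)
print(resp.json())   # expect {"success": true} when the handover is accepted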
I'm sending WhatsApp messages from the WhatsApp Business API. I got the API collections from the Facebook WhatsApp docs (link).
My end goal is to check the blue tick, i.e. whether the recipient has seen the message.
When you call the send message endpoint (POST /v1/messages) and it succeeds (201 Created), you will receive a message ID in the return payload (e.g. 12345), like this:
{
  "messages": [{
    "id": "12345"
  }]
}
After that, at some point in the future, WhatsApp will asynchronously send notifications to the webhook server reporting every status change on that message (sent, delivered, read, failed and deleted). Those notifications will reference the same message ID returned before (e.g. 12345), like this:
{
  "statuses": [{
    "id": "12345",
    "recipient_id": "553199999999",
    "status": "delivered",
    "timestamp": "1650509418",
    "type": "message",
    "conversation": {
      ...
    },
    "pricing": {
      ...
    }
  }]
}
(Check https://developers.facebook.com/docs/whatsapp/on-premises/webhooks/outbound for more details).
So, if you need to ensure a message was read, you must capture that sent message ID and then observe all the status change notifications until you receive the proper read status for that very message, with that specific id.
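A minimal sketch of that flow, assuming a Flask webhook (the host, token and in-memory storage are placeholders):

import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
read_messages = set()   # message IDs for which a "read" status (blue tick) has arrived

def send_message(payload):
    # POST /v1/messages and remember the returned message ID.
    resp = requests.post(
        "https://YOUR_WABA_HOST/v1/messages",              # placeholder host
        json=payload,
        headers={"Authorization": "Bearer YOUR_TOKEN"},    # placeholder token
    )
    return resp.json()["messages"][0]["id"]

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    # Each outbound status notification references the original message ID.
    for status in body.get("statuses", []):
        if status.get("status") == "read":
            read_messages.add(status["id"])
    return jsonify(success=True)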
I'm trying to migrate my chatbot to the newly introduced Assistant API v2.
My chatbot infrastructure includes middleware services which modify the context after getting the response from Watson. In some cases I used to remove particular properties from the context, and it worked fine. However, I noticed that after migrating to API v2 this approach does not work anymore, as the deleted properties are somehow stored on the Watson side.
For example, I received the following context from Watson:
{
  "assistantId": "---",
  "sessionId": "---",
  "messageInput": {
    "Text": "Some text",
    "Options": {
      "Debug": "true",
      "ReturnContext": "true",
      "Restart": "false"
    }
  },
  "context": {
    "Global": "null",
    "Skills": {
      "AdditionalProperties": {
        "main skill": {
          "user_defined": {
            "id": "23",
            "description": "Dont know"
          },
          "system": {---}
        }
      }
    }
  }
}
Then I remove 'description' from the context and send a request to Watson once more. Surprisingly, 'description' is still there with the same value ('Dont know').
A possible solution would be not to remove a property but to set its value to an empty string. But even in this case my dialog does not work correctly, as Watson seems to remember the point in the dialog it visited previously (or not, these are my guesses). I assume it might be related to the system.state property, which stores an encoded state of the dialog (again, or not).
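For illustration, this is roughly what my middleware does before re-sending (a minimal sketch using the Python ibm-watson SDK; the API key, assistant ID and skill name are placeholders):

from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

assistant = AssistantV2(
    version="2020-04-01",
    authenticator=IAMAuthenticator("YOUR_APIKEY"),   # placeholder credentials
)

def send_with_cleaned_context(session_id, previous_context, text):
    # Drop 'description' from the user_defined context of the main skill
    # (or set it to "" instead of deleting the key, as tried above).
    user_defined = previous_context["skills"]["main skill"]["user_defined"]
    user_defined.pop("description", None)
    return assistant.message(
        assistant_id="YOUR_ASSISTANT_ID",            # placeholder
        session_id=session_id,
        input={"text": text, "options": {"return_context": True}},
        context=previous_context,
    ).get_result()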
My question is why is dialog behaving this way?
How does it store the context information so I can't remove properties from user_defined context?
And how can I reset dialog state to initial keeping the same conversation_id (session_id)?
P.S. I'm using Watson API v2: 2020-04-01
We are trying to implement Cloud Functions in Watson Conversation, but I am receiving the message 'Direct CloudFunctions calls are not supported on this platform'. When I googled the error, I saw that the issue could be because the regions for WA and the Cloud Functions are different, or are not US South/Germany. But I can confirm that both my WA and Cloud Functions are in US South.
I was trying it in the 'Try out' panel. Below is the mock JSON editor content for my dialog node.
{
  "context": {
    "my_credentials": {
      "user": "jgjg",
      "password": "khk"
    }
  },
  "output": {
    "text": {
      "values": [
        "response text"
      ]
    }
  },
  "actions": [
    {
      "name": "/<myIBMCloudOrganizationID>_<myIBMCloudSpace>/get-http-resource/weather",
      "type": "server",
      "parameters": {
        "location": "Austin"
      },
      "credentials": "$my_credentials",
      "result_variable": "$my_result"
    }
  ]
}
Can you please advise me on what I am doing wrong? Thanks.
I was going through the same issue. Cloud Functions are only available in some regions. If your app is hosted in Sydney, for example, you cannot use that service there. Create a new app and set the location to London.
Are you sure your user and password are set correctly?
Your user should consist of letters, numbers and some hyphens, like so: ...a-32d7-7d...
Your password should be just a string: ...gafhWhu6alirEVpD...
Both are found in your API key on your IBM Cloud Functions page: https://console.bluemix.net/openwhisk/learn/api-key
The username is the part before the ':' of the API key, and the password is the part after it.
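In other words (a tiny illustrative sketch; the key below is made up):

# Split an IBM Cloud Functions API key into the user/password pair
# expected by the $my_credentials context variable (made-up key).
api_key = "a1b2c3d4-32d7-7d8e-9f0a-b1c2d3e4f5a6:gafhWhu6alirEVpD0example"
user, password = api_key.split(":", 1)
my_credentials = {"user": user, "password": password}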
If you already know this, then I'm afraid I don't know how to help you.
Best
I want to create a chatbot with Dialogflow and Google Assistant along with Google Transactions API for enabling a user to order a chocolate box. For now my agent contains the following four intents:
Default Welcome Intent (text response: Hello, do you want to buy a chocolate box?)
Default Fallback Intent
Int1 (training phrase: Yes, I want, fulfilment: enabled webhook call)
Int2 (event: actions_intent_TRANSACTION_REQUIREMENTS_CHECK )
I am using Dialogflow JSON instead of Node.js to connect my agent with the Transactions API. I want to test that the user meets the transaction requirements (when ordering the chocolate box) by using the actions.intent.TRANSACTION_REQUIREMENTS_CHECK action of Google Actions. For this reason, following the Google docs, when Int1 is triggered I use a webhook which connects Google Assistant to the following Python script (back-end):
from flask import Flask, render_template, request, jsonify
from flask_cors import CORS
import requests

app = Flask(__name__)
CORS(app)

@app.route("/", methods=['POST'])
def index():
    data = request.get_json()
    intent = data["queryResult"]["intent"]["displayName"]
    if intent == 'Int1':
        return jsonify({"data": {
            "google": {
                "expectUserResponse": True,
                "isSsml": False,
                "noInputPrompts": [],
                "systemIntent": {
                    "data": {
                        "@type": "type.googleapis.com/google.actions.v2.TransactionRequirementsCheckSpec",
                        "paymentOptions": {
                            "actionProvidedOptions": {
                                "displayName": "VISA-1234",
                                "paymentType": "PAYMENT_CARD"
                            }
                        }
                    },
                    "intent": "actions.intent.TRANSACTION_REQUIREMENTS_CHECK"
                }
            }
        }})
    else:
        return jsonify({'message': 'HERE'})

if __name__ == "__main__":
    app.run(debug=True)
The result in the JSON response that I receive after actions.intent.TRANSACTION_REQUIREMENTS_CHECK and Int2 are triggered is:
"arguments": [
{
"extension": {
"#type": "type.googleapis.com/google.actions.v2.TransactionRequirementsCheckResult",
"resultType": "OK"
},
"name": "TRANSACTION_REQUIREMENTS_CHECK_RESULT"
}
]
The confusing fact is that even if I send:
{
"displayName": "FALSE",
"paymentType": "PAYMENT_CARD"
}
the response is the same, which means that it again returns OK.
When I send something like this:
{
"displayName": "FALSE",
"paymentType": "WRONG"
}
then I get an error:
API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: "(payment_options.action_provided_options.payment_type): invalid value "WRONG" for type TYPE_ENUM".
but this is not exactly produced by actions.intent.TRANSACTION_REQUIREMENTS_CHECK and Int2, because these two are not triggered, so I still do not get any JSON response back with a result different from OK.
Therefore, my question is: In which cases am I going to receive a result from actions.intent.TRANSACTION_REQUIREMENTS_CHECK which is different than OK?
If I am going to get an OK result for anything that I am writing then what is the point of using actions.intent.TRANSACTION_REQUIREMENTS_CHECK?
P.S.
I have in mind the following about actions.intent.TRANSACTION_REQUIREMENTS_CHECK from Google docs:
Note: The actions.intent.TRANSACTION_REQUIREMENTS_CHECK intent is
currently under development and will return a success state regardless
of the user's payment settings and locale. To test out the failure
state scenario, request the intent on a voice-activated speaker.
but I still do not see any difference when I use this app on Google Assistant with my voice on my mobile phone.
I think that we have to return to the Google docs to solve this. According to the Google docs, the possible responses of actions.intent.TRANSACTION_REQUIREMENTS_CHECK are the following: RESULT_TYPE_UNSPECIFIED, OK, USER_ACTION_REQUIRED, ASSISTANT_SURFACE_NOT_SUPPORTED, REGION_NOT_SUPPORTED.
None of them has anything to do directly with this:
"paymentOptions": {
"actionProvidedOptions": {
"displayName": "VISA-1234",
"paymentType": "PAYMENT_CARD"
}
}
This is also because your back-end cannot (and is not even allowed to) directly reach the user's payment details, so the JSON above is only inserted by your back-end if you know them; in a sense, you can write whatever you want. It is only displayed at the order preview and is not cross-checked against the payment details of the user's Google account.
In conclusion, actions.intent.TRANSACTION_REQUIREMENTS_CHECK may only return a non-OK status if the result is unspecified (RESULT_TYPE_UNSPECIFIED), if the user is expected to take action (USER_ACTION_REQUIRED), if transactions are not supported on the current device/surface (ASSISTANT_SURFACE_NOT_SUPPORTED), or if transactions are not supported in the current region/country (REGION_NOT_SUPPORTED).
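If you want your webhook to branch on that result when Int2 fires, a minimal sketch could look like this (the path to the arguments assumes the Dialogflow v2 webhook shape, originalDetectIntentRequest.payload; adjust it if you are on the v1 format):

def requirements_check_result(data):
    # Walk the Actions on Google inputs and return the reported resultType,
    # e.g. OK, USER_ACTION_REQUIRED, ASSISTANT_SURFACE_NOT_SUPPORTED, REGION_NOT_SUPPORTED.
    inputs = data.get("originalDetectIntentRequest", {}).get("payload", {}).get("inputs", [])
    for source in inputs:
        for argument in source.get("arguments", []):
            if argument.get("name") == "TRANSACTION_REQUIREMENTS_CHECK_RESULT":
                return argument["extension"]["resultType"]
    return "RESULT_TYPE_UNSPECIFIED"

# In the Flask handler above, once intent == 'Int2':
#     result = requirements_check_result(data)
#     if result != "OK":
#         return jsonify({'message': 'Transactions are not available: ' + result})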
I currently use the Facebook Graph API to get an inbox conversation. Most of the messages look like this:
{
  "id": "1452301718360191_1407164668",
  "from": {
    "id": "10203840837848742",
    "name": "Øyvind Knobloch-Bråthen"
  },
  "message": "Some message",
  "created_time": "2014-08-04T15:04:28+0000"
}
However, some of the messages in the conversation are images, and they are represented like this:
{
  "id": "1452301718360191_1407164668",
  "from": {
    "id": "10203840837848742",
    "name": "Øyvind Knobloch-Bråthen"
  },
  "created_time": "2014-08-04T15:04:28+0000"
}
So basically, what is different is that the message is gone. But since there is no mention of the attachment, no URL to the image, or anything else I can use, I'm not able to display the image in my app.
So my question is: how can I get hold of the image (or a URL to the image)? Hopefully it will be available in some way when I have the message ID.
With Graph API v2 the edge to retrieve inbox messages is /id/conversations,
where id is either a user ID or a page ID.
Each conversation message with an attachment should return an "attachments" field.
Then for each message with an attachment you should query the URL https://api.facebook.com/method/messaging.getattachment?mid=message_id&access_token=your_access_token
Note that it does not work with Graph API v2.1 unless you curl the query.
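For instance, a minimal sketch of that query (the message ID and access token below are placeholders):

import requests

# Fetch the attachment for a given message ID via the legacy endpoint above.
resp = requests.get(
    "https://api.facebook.com/method/messaging.getattachment",
    params={
        "mid": "1452301718360191_1407164668",   # placeholder message ID
        "access_token": "YOUR_ACCESS_TOKEN",    # placeholder token
    },
)
print(resp.text)   # raw response containing the attachment data / URL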