I test on https://developers.facebook.com/tools/explorer/
My query: /v2.8/11111111/leadgen_forms?fields=leads_count,leads
Response:
{ "data": [
{
"name": "My Form",
"leads_count": 37,
"leads": {
"data": [
{
"created_time": "2016-12-21T14:50:56+0000",
"id": "10000000000000",
"field_data": [
{
"name": "email",
"values": [
"hidden#gmail.com"
]
},
{
"name": "first_name",
"values": [
"Hidden"
]
},
{
"name": "last_name",
"values": [
"Hidden"
]
}
]
}
],
"paging": {
"cursors": {
"before": "MTc0ODU4Mjg2MjEyODQ1MgZDZD",
"after": "MTc0ODU4Mjg2MjEyODQ1MgZDZD"
}
}
},
"id": "10000000000000"
}
]
}
Why do I get only one lead in data when I have 37 in the leads_count field?
I actually don't see the way you are accessing leads documented anywhere.
In your request you are hitting the page's leadgen_forms edge, requesting the forms, and within that requesting the leads as a nested field.
You should probably be using the form ID's leads edge instead to retrieve the leads:
curl -G \
-d 'access_token=<ACCESS_TOKEN>' \
https://graph.facebook.com/v2.8/<FORM_ID>/leads
You can also filter them:
curl -G \
-d "filtering=[{'field':'time_created','operator':'GREATER_THAN','value':<TIMESTAMP>}]" \
-d "access_token=<ACCESS_TOKEN>" \
https://graph.facebook.com/<API_VERSION>/<AD_ID>/leads
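Note also that the nested leads field in your original response is paginated; the cursors under paging.cursors are there so you can fetch the next page. Assuming standard Graph API cursor paging applies to this edge, a sketch like the following should walk through the remaining leads (limit and after are the usual paging parameters; the cursor value is the "after" string from your response):
# Fetch leads from the form's leads edge in pages of 100;
# <AFTER_CURSOR> is the paging.cursors.after value from the previous page
curl -G \
  -d 'limit=100' \
  -d 'after=<AFTER_CURSOR>' \
  -d 'access_token=<ACCESS_TOKEN>' \
  https://graph.facebook.com/v2.8/<FORM_ID>/leads
Repeat with the new after cursor from each response until the data array comes back empty.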
We have a local instance of MarkLogic, recently pulled from Docker Hub with the following bash command:
docker run --name marklogic-test -d -it -p 8000:8000 -p 8001:8001 -p 8002:8002 \
-e MARKLOGIC_INIT=true \
-e MARKLOGIC_ADMIN_USERNAME=admin \
-e MARKLOGIC_ADMIN_PASSWORD='Areally!PowerfulPassword1337' \
marklogicdb/marklogic-db:10.0-9.4-centos-1.0.0-ea4
The "Documents" database contains only two documents:
sample1.json
{
"v1": "1234",
"v2": "ABCD",
"v3": "0123456789"
}
and sample2.json
{
"v1": "5678",
"v2": "EFGH",
"v3": "9876543210"
}
If we run the following XQuery in Query Console:
xquery version "1.0-ml";
let $query := cts:and-query((
cts:directory-query("/", "infinity"),
cts:json-property-value-query("v3", "01*", ("wildcarded", "whitespace-sensitive", "punctuation-sensitive"))
))
return (xdmp:to-json($query), cts:search(/,$query))
The result is as expected: it returns only one document:
{
"andQuery": {
"queries": [
{
"directoryQuery": {
"uris": [
"/"
],
"depth": "infinity"
}
},
{
"jsonPropertyValueQuery": {
"property": [
"v3"
],
"value": [
"01*"
],
"options": [
"punctuation-sensitive",
"whitespace-sensitive",
"wildcarded",
"lang=en"
]
}
}
]
}
}
and the single matching document:
{
"v1": "1234",
"v2": "ABCD",
"v3": "0123456789"
}
But if I make the following REST API request:
curl --location --request POST 'http://localhost:18000/LATEST/search?format=json' --user 'admin:Areally!PowerfulPassword1337' --header 'Content-Type: application/json' --data-binary "@test_api.json"
where the test_api.json has the following content:
{
"search": {
"ctsquery": {
"andQuery": {
"queries": [
{
"directoryQuery": {
"uris": [
"/"
],
"depth": "1"
}
},
{
"jsonPropertyValueQuery": {
"property": [
"v3"
],
"value": [
"01*"
],
"options": [
"punctuation-sensitive",
"wildcarded",
"whitespace-sensitive",
"lang=en"
]
}
}
]
}
},
"options": {
"return-plan": false,
"return-metrics": true,
"return-facets": true,
"return-query": false,
"transform-results": {
"apply": "raw"
},
"page-length": 10
}
}
}
The response looks like this:
{
"snippet-format": "snippet",
"total": 2,
"start": 1,
"page-length": 10,
"results": [
{
"index": 1,
"uri": "/sample1.json",
"path": "fn:doc(\"/sample1.json\")",
"score": 0,
"confidence": 0,
"fitness": 0,
"href": "/v1/documents?uri=%2Fsample1.json",
"mimetype": "application/json",
"format": "json",
"matches": [
{
"path": "fn:doc(\"/sample1.json\")/object-node()",
"match-text": [
"1234 ABCD 0123456789"
]
}
]
},
{
"index": 2,
"uri": "/sample2.json",
"path": "fn:doc(\"/sample2.json\")",
"score": 0,
"confidence": 0,
"fitness": 0,
"href": "/v1/documents?uri=%2Fsample2.json",
"mimetype": "application/json",
"format": "json",
"matches": [
{
"path": "fn:doc(\"/sample2.json\")/object-node()",
"match-text": [
"5678 EFGH 9876543210"
]
}
]
}
],
"metrics": {
"query-resolution-time": "PT0.000624S",
"snippet-resolution-time": "PT0.005684S",
"total-time": "PT0.00713S"
}
}
For some reason both documents are returned as results, even though "confidence" is 0.
How should we understand this behavior of the MarkLogic search engine?
Is it a bug in the MarkLogic REST API, or is there something we are missing?
In Query Console, look at the difference when applying the "filtered" vs. "unfiltered" option to your search.
The MarkLogic REST API performs an "unfiltered" search.
cts:search() is "filtered" by default.
A filtered search (the default). Filtered searches eliminate any false-positive matches and properly resolve cases where there are multiple candidate matches within the same fragment. Filtered search results fully satisfy the specified cts:query.
https://docs.marklogic.com/guide/performance/unfiltered#id_89797
An unfiltered search omits the filtering step, which validates whether each candidate fragment result actually meets the search criteria. Unfiltered searches, therefore, are guaranteed to be fast, while filtered searches are guaranteed to be accurate. By default, searches are filtered; you must specify the "unfiltered" option to cts:search to return an unfiltered search.
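If you need the REST API call to behave like the filtered cts:search() in Query Console, one option that may work is to ask for filtered results explicitly in the combined query. This is only a sketch; whether the "search-option" element is honored in the JSON options node of your MarkLogic version is an assumption worth verifying, and filtering re-checks every candidate fragment, so expect it to be slower on large databases:
# Same idea as test_api.json, but requesting filtered results explicitly
curl --location --request POST 'http://localhost:18000/LATEST/search?format=json' \
  --user 'admin:Areally!PowerfulPassword1337' \
  --header 'Content-Type: application/json' \
  --data-binary @- <<'EOF'
{
  "search": {
    "ctsquery": {
      "jsonPropertyValueQuery": {
        "property": ["v3"],
        "value": ["01*"],
        "options": ["wildcarded", "whitespace-sensitive", "punctuation-sensitive"]
      }
    },
    "options": {
      "search-option": ["filtered"],
      "page-length": 10
    }
  }
}
EOF
Adding a wildcard index to the database (e.g. the "three character searches" setting) would be the index-backed alternative to filtering for a query like "01*".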
I'm trying to integrate PayPal V2 Onboarding in the sandbox.
My call is:
curl -v -X POST https://api-m.sandbox.paypal.com/v2/customer/partner-referrals \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <My-Access-Token> " \
-d '{
"tracking_id": "795_123",
"operations": [
{
"operation": "API_INTEGRATION",
"api_integration_preference": {
"rest_api_integration": {
"integration_method": "PAYPAL",
"integration_type": "THIRD_PARTY",
"third_party_details": {
"features": [
"PAYMENT",
"REFUND"
]
}
}
}
}
],
"products": [
"EXPRESS_CHECKOUT",
"PPPLUS"
],
"legal_consents": [
{
"type": "SHARE_DATA_CONSENT",
"granted": true
}
]
}'
And the response is:
{
"name": "INVALID_REQUEST",
"message": "Request is not well-formed, syntactically incorrect, or violates schema.",
"debug_id": "266c1b0e09a8f",
"information_link": "",
"details": [{
"issue": "INVALID_ARRAY_LENGTH",
"description": "The number of items in an array should not be more than 1",
"field": "/products",
"location": "body"
}],
"links": []
}
Has anyone come across this error message for the "products" array, or is this a PayPal v2 Onboarding bug?
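For what it's worth, the INVALID_ARRAY_LENGTH detail explicitly says /products may contain at most one item, so a first thing to try may be sending a single product. This is only a sketch based on that error text; whether EXPRESS_CHECKOUT alone (or PPPLUS alone) is the right product for your partner account is an assumption you would need to confirm with PayPal:
# Same call as above, but with only one entry in "products"
curl -v -X POST https://api-m.sandbox.paypal.com/v2/customer/partner-referrals \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <My-Access-Token>" \
-d '{
  "tracking_id": "795_123",
  "operations": [
    {
      "operation": "API_INTEGRATION",
      "api_integration_preference": {
        "rest_api_integration": {
          "integration_method": "PAYPAL",
          "integration_type": "THIRD_PARTY",
          "third_party_details": { "features": ["PAYMENT", "REFUND"] }
        }
      }
    }
  ],
  "products": ["EXPRESS_CHECKOUT"],
  "legal_consents": [{ "type": "SHARE_DATA_CONSENT", "granted": true }]
}'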
I have a list of Classifications & Sub-classifications in Apache Atlas. I want to delete them and create a new list.
All the other classifications are getting deleted, but one of them, named "PII", gives the following error when we select Delete Classification:
Error: Given type PII has references
When we do a search via the REST API using the below URL:
http://ip.of.atlas:21000/api/atlas/v2/search/basic?classification=PII
The following result comes back:
{
"queryType": "BASIC",
"searchParameters": {
"classification": "PII",
"excludeDeletedEntities": false,
"includeClassificationAttributes": false,
"includeSubTypes": true,
"includeSubClassifications": true,
"limit": 100,
"offset": 0
},
"entities": [
{
"typeName": "hive_table",
"attributes": {
"owner": "nifi",
"createTime": 1557832055000,
"qualifiedName": "demo.test_table#demopilot",
"name": "test_table"
},
"guid": "ecb7bb24-bdde-448c-b718-07273e5ce572",
"status": "DELETED",
"displayText": "test_table",
"classificationNames": [
"PII"
],
"meaningNames": [],
"meanings": []
},
{
"typeName": "hive_table",
"attributes": {
"owner": "nifi",
"createTime": 1557832055000,
"qualifiedName": "demo.test_table#demopilot",
"name": "test_table"
},
"guid": "ed5a9284-c290-4431-ab76-27b820478e29",
"status": "DELETED",
"displayText": "test_table",
"classificationNames": [
"PII"
],
"meaningNames": [],
"meanings": []
},
{
"typeName": "hive_column",
"attributes": {
"owner": "nifi",
"qualifiedName": "demo.test_table.traffic_case#demopilot",
"name": "traffic_case"
},
"guid": "73f75a6c-9f4e-41f0-b0ef-6c05ca132639",
"status": "DELETED",
"displayText": "traffic_case",
"classificationNames": [
"PII"
],
"meaningNames": [],
"meanings": []
}
]
}
Questions:
1. Is there an API that can delete all classifications, irrespective of whether they are attached to an entity or not?
2. Can a single classification be forcefully deleted by classification name or GUID?
Running the below GET request:
http://ip.of.atlas:21000/api/atlas/v2/types/typedefs
and then deleting the GUID attached to the typedefs.
I tested it out and you can use the below API to delete a tag:
curl -k -X DELETE --negotiate -u : \
  --header 'Content-Type: application/json' \
  -d '{"classificationDefs":[{"name":"PII","superTypes":[],"attributeDefs":[]}]}' \
  'https://atlas-host:21443/api/atlas/v2/types/typedefs?type=classification'
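If the typedef deletion still fails with "has references", it is usually because entities (including DELETED ones, since excludeDeletedEntities was false in your search) still carry the tag. One approach that may help, assuming the standard Atlas v2 entity-classification endpoint is available in your version, is to detach the classification from each referencing entity first and then delete the typedef:
# Detach the PII tag from one referencing entity (repeat for each GUID returned by the basic search)
curl --negotiate -u : -X DELETE \
  'http://ip.of.atlas:21000/api/atlas/v2/entity/guid/ecb7bb24-bdde-448c-b718-07273e5ce572/classification/PII'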
When creating a subscription, it would be nice to get the subscription ID back.
For instance, the following call doesn't return anything:
curl localhost:1026/v2/subscriptions -s -S --header 'Content-Type: application/json' \
-d @- <<EOF
{
"description": "A subscription to get info about Room1",
"subject": {
"entities": [
{
"id": "Room1",
"type": "Room"
}
],
"condition": {
"attrs": [
"pressure"
]
}
},
"notification": {
"http": {
"url": "http://localhost:1028/accumulate"
},
"attrs": [
"temperature"
]
},
"expires": "2040-01-01T14:00:00.00Z",
"throttling": 5
}
EOF
In the subscription case, the resource id is generated server-side (unlike the entities endpoint, where the id is decided client-side).
It would be nice to have it returned by the POST call; is there any way to do this?
The subscription ID is returned in the Location header of the response to the subscription creation request, e.g.:
Location: /v2/subscriptions/5b991dfa12f473cee6651a1a
More details can be found in the NGSIv2 API specification (check the "Create Subscription" section).
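So, while the ID is not in the response body, you can capture it from the response headers at creation time. A minimal sketch (assuming the payload above is saved as sub.json; the header parsing is just one way to do it):
# Create the subscription and print only the Location header, which contains the new subscription ID
curl -s -o /dev/null -D - localhost:1026/v2/subscriptions \
  --header 'Content-Type: application/json' \
  -d @sub.json | awk -F': ' '/^Location/ {print $2}' | tr -d '\r'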
Attempting to make a plain GET request to one of the BigQuery REST APIs gives an error that looks like this:
curl https://www.googleapis.com/bigquery/v2/projects/$PROJECT_ID/jobs/$JOBID
Output:
{
"error": {
"errors": [
{
"domain": "global",
"reason": "required",
"message": "Login Required",
"locationType": "header",
"location": "Authorization",
...
What is the correct way to invoke one of the REST APIs from the command-line, such as the query or insert APIs? The API reference has a "Try this API" feature, but the examples don't translate directly to something you can run from the command-line.
As a disclaimer, when working from the command-line, using the bq tool will usually be sufficient, or for more complex use cases, the BigQuery client libraries enable programming with BigQuery from multiple languages. It can still be useful sometimes to make plain requests to the REST APIs to see how certain APIs work at a low level, however.
First, make sure that you have installed the Google Cloud SDK. This should include the gcloud and bq command-line tools. If you haven't already, authorize your account by running this command from your terminal:
gcloud auth login
This should prompt you to log in and then give you an access code that you can paste into your terminal. (The exact process may change over time).
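The examples below obtain a bearer token from gcloud on each request. You can run the command on its own first to confirm that the login above produced a usable access token:
# Prints an OAuth2 access token for the account you just authorized
gcloud auth print-access-token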
Now let's try a query using the BigQuery REST API, calling the jobs.query method. Modify this script with your own project name, which you can find from the Google Cloud Console, then paste the script into your terminal:
PROJECT="YOUR_PROJECT_NAME"
QUERY="\"SELECT 1 AS x, 'foo' AS y;\""
REQUEST="{\"kind\":\"bigquery#queryRequest\",\"useLegacySql\":false,\"query\":$QUERY}"
echo $REQUEST | \
curl -X POST -d @- -H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/queries
If it worked, you should see output that looks like this:
{
"kind": "bigquery#queryResponse",
"schema": {
"fields": [
{
"name": "x",
"type": "INTEGER",
"mode": "NULLABLE"
},
{
"name": "y",
"type": "STRING",
"mode": "NULLABLE"
}
]
},
"jobReference": {
"projectId": "<your project ID>",
"jobId": "<your job ID>"
},
"totalRows": "1",
"rows": [
{
"f": [
{
"v": "1"
},
{
"v": "foo"
}
]
}
],
"totalBytesProcessed": "0",
"jobComplete": true,
"cacheHit": false
}
If you haven't set up the bq command-line tool, you can use bq init from your terminal to do so. Once you have, you can try running the same query using it:
bq query --use_legacy_sql=False "SELECT 1 AS x, 'foo' AS y;"
You can also see the REST API requests that the bq tool makes by passing the --apilog= option:
bq --apilog= query --use_legacy_sql=False "SELECT [1, 2, 3] AS x;"
Now let's try an example using the jobs.insert method instead of the query API. Run this script, replacing YOUR_PROJECT_NAME with your project name:
PROJECT="YOUR_PROJECT_NAME"
QUERY="\"SELECT 1 AS x, 'foo' AS y;\""
REQUEST="{\"configuration\":{\"query\":{\"useLegacySql\":false,\"query\":${QUERY}}}}"
echo $REQUEST | \
curl -X POST -d @- -H "Content-Type: application/json" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/jobs
Unlike the query API, which returned the query results directly, here the response describes the newly created job and looks similar to this:
{
"kind": "bigquery#job",
"etag": "\"<etag string>\"",
"id": "<project name>:<job ID>",
"selfLink": "https://www.googleapis.com/bigquery/v2/projects/<project name>/jobs/<job ID>",
"jobReference": {
"projectId": "<project name>",
"jobId": "<job ID>"
},
"configuration": {
"query": {
"query": "SELECT 1 AS x, 'foo' AS y;",
"destinationTable": {
"projectId": "<project name>",
"datasetId": "<anonymous dataset>",
"tableId": "<anonymous table>"
},
"createDisposition": "CREATE_IF_NEEDED",
"writeDisposition": "WRITE_TRUNCATE",
"useLegacySql": false
}
},
"status": {
"state": "RUNNING"
},
"statistics": {
"creationTime": "<timestamp millis>",
"startTime": "<timestamp millis>"
},
"user_email": "<your email address>"
}
Notice the status:
"status": {
"state": "RUNNING"
},
If you want to check on the job now, you can use the jobs.get method. Similar to before, run this from your terminal, using the job ID from the output in the previous step:
PROJECT="YOUR_PROJECT_NAME"
JOB_ID="YOUR_JOB_ID"
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/jobs/$JOB_ID
If the query is done, you'll get a response that indicates as much:
...
"status": {
"state": "DONE"
},
...
Finally, we can fetch the query results with the jobs.getQueryResults method, also using the REST API:
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://www.googleapis.com/bigquery/v2/projects/$PROJECT/queries/$JOB_ID
The output will look similar to when we used the jobs.query method above:
{
"kind": "bigquery#getQueryResultsResponse",
"etag": "\"<etag string>\"",
"schema": {
"fields": [
{
"name": "x",
"type": "INTEGER",
"mode": "NULLABLE"
},
{
"name": "y",
"type": "STRING",
"mode": "NULLABLE"
}
]
},
"jobReference": {
"projectId": "<project ID>",
"jobId": "<job ID>"
},
"totalRows": "1",
"rows": [
{
"f": [
{
"v": "1"
},
{
"v": "foo"
}
]
}
],
"totalBytesProcessed": "0",
"jobComplete": true,
"cacheHit": true
}