Using additional request parameters in Twitter Premium Search API - matlab

I am using the Twitter API from Matlab, specifically by means of the twitter class from the Datafeed Toolbox.
I have essentially followed the example code from the official documentation. I created a Twitter app in my Twitter developer page, and obtained its API keys and access tokens. With those I can use the Twitter Standard search API from Matlab:
c = twitter(consumerkey,consumersecret,accesstoken,accesstokensecret);
% The variables 'consumerkey' etc are defined as character vectors
s = search(c,tweetquery,'count',100); % this works
Now I want to use the Premium search API. This has two endpoints for accessing Tweets:
30-day endpoint: provides Tweets from the previous 30 days.
Full-archive endpoint: provides complete and instant access to Tweets dating all the way back to the first Tweet in March 2006.
In addition, the Premium API has two tiers of access:
Free Sandbox access that enables initial testing and development.
Paid Premium access that provides increased access.
The link above specifies the restrictions associated with sandbox access as compared with paid access.
I am trying to use the full-archive endpoint with sandbox access. For that I had to create a developer environment on Twitter, which I named dev.
The search method in Matlab's twitter class (which worked for the Standard access, as described above) doesn't seem to work with the Premium access. But I noticed that search actually calls getdata, and the latter does work for Premium access as follows. First, the Premium access URL needs to be defined:
c.URL = 'https://api.twitter.com/1.1/tweets/search/fullarchive/dev.json';
and then the following syntax works:
s = getdata(c,c.URL,'query','Jimi Hendrix'); % this works
I have also been able to add operators within the query string, for example to specify a range of geographical positions or to restrict the search to tweets that contain images:
s = getdata(c,c.URL,'query','place:"Palo Alto"'); % this works
s = getdata(c,c.URL,'query','Robert Smith bounding_box:[-0.2 51.4 0.1 51.6]') % this works
However, and this is my question: I haven't been able to use the additional request parameters defined in the Twitter API to refine the search, such as fromDate, toDate or maxResults:
s = getdata(c,c.URL,'query','John Frusciante', 'fromDate', '201708130000') % doesn't work
s = getdata(c,c.URL,'query','Rob Scallon', ...
'fromDate', '201708130000', 'toDate', '201708150000') % doesn't work
s = getdata(c,c.URL,'query','Michael Lemmo', 'maxResults', '20') % doesn't work
All of the above return an HTTP/1.1 422 Unprocessable Entity error.
Is my syntax not correct? Maybe the fromDate etc parameters have to be part of the query string? Or maybe the sandbox tier of the Premium search doesn't support those parameters?
For context, I don't really know what all those terms like endpoint, tier, developer environment and token mean, but still I'd like to make this work.

Going by the description at https://developer.twitter.com/en/docs/tweets/search/api-reference/premium-search#DataParameters, what you call 'additional request parameters' are defined for requests of type POST /search/:product. These are HTTP POST requests, so can you try using postdata (https://in.mathworks.com/help/datafeed/twitter.postdata.html) instead of getdata? Their usage is almost identical.
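If it helps to see the request outside MATLAB, here is a rough sketch in Python with requests of how those data parameters travel in the JSON body of the POST, per the premium search docs linked above. The bearer token is a placeholder (app-only auth assumed), and the 'dev' environment label is the one from the question:

```python
# Sketch only: BEARER_TOKEN is a placeholder for an app-only bearer token.
import requests

BEARER_TOKEN = '<your-bearer-token>'
url = 'https://api.twitter.com/1.1/tweets/search/fullarchive/dev.json'

body = {
    'query': 'John Frusciante',   # search operators stay inside 'query'
    'fromDate': '201708130000',   # the request parameters go alongside it,
    'toDate': '201708150000',     # in the JSON body of the POST
    'maxResults': 20
}

r = requests.post(url, json=body,
                  headers={'Authorization': 'Bearer ' + BEARER_TOKEN})
r.raise_for_status()
print(len(r.json().get('results', [])))  # premium responses list Tweets under 'results'
```

If postdata forwards its name-value pairs into the request body, the same fromDate/toDate/maxResults pairs should carry over unchanged.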

Related

Facebook Server-Side API - push custom events to create Custom Audience

We are trying to use the Server-Side API to push custom events to our pixel in order to create event-based Custom Audiences in Facebook Ads (https://developers.facebook.com/docs/marketing-api/facebook-pixel/server-side-api/).
We use the _fbp cookie value to match users (it's a first-party cookie created on our website by the FB pixel).
For example (Python):
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adspixel import AdsPixel
my_app_id = 'X'
my_app_secret = 'X'
my_access_token = 'X'
my_pixel_id = 'X'
FacebookAdsApi.init(access_token=my_access_token, app_id=my_app_id, app_secret=my_app_secret)
fields = []
params = {
    'data': [{'event_name': 'icrm_test_20191113_fbp_1m', 'event_time': 1573230217, 'user_data': {'fbp': 'fb.1.1558571054389.1098115397'}}]
}
print(AdsPixel(my_pixel_id).create_event(fields=fields, params=params))
The problem is that when we create a Custom Audience in Facebook Ads, the size of the list is always < 1000, even if we push hundreds of thousands of cookie IDs, which means Facebook matched a very low percentage of the cookies that were sent.
Custom Audience definition based on a server-side event (screenshot omitted).
The list size is always < 1000, no matter how many fbp cookies are sent (screenshot omitted).
It seems like there is some kind of an issue matching _fbp cookies to Facebook user profiles. Is there any known way of improving/fixing matching results? We can't use hashes of sensitive data.
External_id matching (https://developers.facebook.com/docs/marketing-api/facebook-pixel/server-side-api/parameters) also gave us similar results.
The event time you are using there translates to 11/08/2019 @ 4:23 PM (UTC).
So those events are not going to be included in your 30-day window.
Try importing time
import time
Then set
'event_time': int(time.time())
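For example, reusing the snippet from the question with only event_time changed:

```python
import time

from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adspixel import AdsPixel

# same placeholders as in the question
my_app_id = 'X'
my_app_secret = 'X'
my_access_token = 'X'
my_pixel_id = 'X'

FacebookAdsApi.init(access_token=my_access_token, app_id=my_app_id,
                    app_secret=my_app_secret)

params = {
    'data': [{
        'event_name': 'icrm_test_20191113_fbp_1m',
        'event_time': int(time.time()),  # "now", so the event falls inside the 30-day window
        'user_data': {'fbp': 'fb.1.1558571054389.1098115397'},
    }]
}
print(AdsPixel(my_pixel_id).create_event(fields=[], params=params))
```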

Facebook Insights API calls limited to 25 entries

I'm pretty new to programming and I'm currently trying to use Facebook's Insights API in order to extract our performance data. The problem is that the response of the API call is limited to 25 entries.
I use the following code for the call:
String access_token = "xxx";
String ad_account_id = "yyy";
setApp_secret("zzz");
APIContext context = new APIContext(access_token).enableDebug(false);
APINodeList<AdsInsights> response = new AdAccount(ad_account_id, context).getInsights()
    .setLevel(AdsInsights.EnumLevel.VALUE_CAMPAIGN)
    .setBreakdowns(Arrays.asList(AdsInsights.EnumBreakdowns.VALUE_COUNTRY))
    .setTimeRange("{\"since\":\"2017-09-01\",\"until\":\"2017-09-30\"}")
    .requestField("account_id")
    .requestField("campaign_id")
    .requestField("impressions")
    .requestField("clicks")
    .execute();
How can I extend the limit of the response? I found some information about how to do this via curl, but there were no hints on how to do it in Java. It would be great if any of you could help me!
All the best,
Paul
All Graph API responses are paginated, which means you will get at most 'x' results per call, where 'x' is 25 by default at the moment.
You can specify a higher value using the limit parameter, but it is not recommended, as it is likely to cause a timeout.
You should look into using pagination instead: https://developers.facebook.com/docs/graph-api/using-graph-api/#paging
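If it helps to see the mechanics, each insights response includes a paging object whose next field is the full URL of the following page. A rough sketch of walking it against the raw Graph API (Python with requests here rather than the Java SDK; the token and account id are placeholders and the API version is arbitrary):

```python
import requests

ACCESS_TOKEN = 'xxx'          # placeholders, as in the question
AD_ACCOUNT_ID = 'act_yyy'     # ad account ids are prefixed with 'act_'

url = 'https://graph.facebook.com/v2.10/{}/insights'.format(AD_ACCOUNT_ID)
params = {
    'access_token': ACCESS_TOKEN,
    'level': 'campaign',
    'breakdowns': 'country',
    'time_range': '{"since":"2017-09-01","until":"2017-09-30"}',
    'fields': 'account_id,campaign_id,impressions,clicks',
    'limit': 100,             # page size; pagination still applies beyond this
}

rows = []
while url:
    page = requests.get(url, params=params).json()
    rows.extend(page.get('data', []))
    url = page.get('paging', {}).get('next')   # absolute URL of the next page, if any
    params = {}                                # 'next' already embeds every parameter
print(len(rows))
```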

How to limit Bing Search API V5 to search specific sections of the website

Using bing.com, I can do a search like this:
history site:berkeley.edu/about/
When I try the same using the API, I get very different results. As far as I can tell, the search results return webpages that are not hosted on berkeley.edu (see the list at the bottom).
This is the HTTP GET request being made to Azure:
https://api.cognitive.microsoft.com/bing/v5.0/search?q=history+site:berkeley.edu/about/&count=10&offset=0
This is my HTTP GET code:
$.ajax({
    url: "https://api.cognitive.microsoft.com/bing/v5.0/search",
    data: { "q": encodeURI("history+site:berkeley.edu/about/"), "count": "10", "offset": "0" },
    beforeSend: function (xhrObj) {
        xhrObj.setRequestHeader("Ocp-Apim-Subscription-Key", "supply-your-key-here");
    },
    type: "GET"
});
Any ideas what I could be doing wrong? Thanks
Edit 1: It seems my "problem" is related to the way AJAX makes the HTTP request. If I supply my key using a Firefox header plugin and type this (https://api.cognitive.microsoft.com/bing/v5.0/search?q=history+site:berkeley.edu/about/&count=10&offset=0) into my browser's URL box, I get the correct response.
Search results using the API:
Environmental Design Library | UC Berkeley Library: A branch of the UC Berkeley Library system, the Environmental Design Library supports the research and teaching of the College of Environmental Design.
Proceedings Template - WORD - ideals.illinois.edu: "(c) ACM, 2007. This is the authors’ version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution.
Trends in metadata practices: A longitudinal study of ...: Trends in metadata practices: A longitudinal study of collection federation. ... A Longitudinal Study of Collection Federation Carole Palmer Oksana ...
http://aerospaceutility.tripod.com/ · GitHub: Clone via HTTPS Clone with Git or checkout with SVN using the repository's web address.
HS RWC Colorado Sample Instructional Units - LiveBinder: Loading Livebinder HS RWC Colorado Sample Instructional Units HS Read Write Communicate Sample Instructional Units provided by the Colorado Department of Education.
Arroyo High School: News Archive: News Archive SIA Awards "As the school year comes to a close, the Students in Action club would like to honor three students for their lasting impact on our ...
English 12 (exp) | Utah Electronic High School: Please be mindful of the fact that this course is not a credit "quick fix." It is a rigorous, college-preparatory class that is both time and labor intensive.
Working SMARTer, not Harder: SOCIAL STUDIES ONLINE ...: SOCIAL STUDIES ONLINE RESOURCES AND LINKS COMPILATION beta List of Social Studies online resources and links to professional development opportunities ...
The Big List -- 20121008 - Grolier: The Big List -- 20121008: 1: EA: http://www.stanford.edu/group/bipolar.clinic/ Stanford Bipolar Disorders Clinic: 2: EA: http://www.mhsource.com/bipolar/
Spreadsheet of Conference Attendees - studylib.net: (unreadable binary snippet)
You can alternatively use Bing Custom Search to make sure you get results only from the domains/webpages you want. Here is the call: https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/search. You will need a different access key though, which you can get from customsearch.ai.
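A minimal sketch of that call (Python here for brevity; the subscription key and the customconfig ID, which comes from your customsearch.ai instance, are placeholders, and the parameter names are the ones the Bing Custom Search docs used at the time):

```python
import requests

SUBSCRIPTION_KEY = '<bing-custom-search-key>'   # not the same key as the v5.0 Web Search API
CUSTOM_CONFIG_ID = '<your-custom-config-id>'    # from customsearch.ai

resp = requests.get(
    'https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/search',
    params={'q': 'history', 'customconfig': CUSTOM_CONFIG_ID, 'count': 10},
    headers={'Ocp-Apim-Subscription-Key': SUBSCRIPTION_KEY},
)
resp.raise_for_status()
for page in resp.json().get('webPages', {}).get('value', []):
    print(page['name'], page['url'])
```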
I'm getting correct results on both v5.0 and v7.0.
There seems to be nothing wrong with your query.
https://api.cognitive.microsoft.com/bing/v5.0/search?q=history+site:berkeley.edu/about/&count=10&offset=0
Perhaps you are caching results somewhere in your browser?
Update: since this happens in IE but not in Firefox, have you disabled the cache in IE?

What API Gateway methods support Authorization?

When I create a resource/method in AWS API Gateway API I can create one of the following methods: DELETE, GET, HEAD, OPTIONS, PATCH or POST.
If I choose GET, then API Gateway doesn't pass authentication details, but for POST it does.
For GET, should I be adding the Cognito credentials to the URL? Or should I just never use GET and use POST for all authenticated calls?
My set-up in API Gateway/Lambda:
I created a Resource and two methods: GET and POST
Under Authorization Settings I set Authorization to AWS_IAM
For this example there is no Request Model
Under Method Execution I set Integration type to Lambda Function and I check Invoke with caller credentials (I also set Lambda Region and Lambda Function)
I leave Credentials cache unchecked.
For Body Mapping Templates, I set Content-Type to `application/json` and the Mapping Template to
{ "identity" : "$input.params('identity')"}
In my Python Lambda function:
def lambda_handler(event, context):
    print context.identity
    print context.identity.cognito_identity_id
    return True
Running the Python function:
For the GET context.identity is None
For the POST context.identity has a value and context.identity.cognito_identity_id has the correct value.
As mentioned in the comments: all HTTP methods support authentication. If the method is configured to require authentication, the authentication results should be included in the context for you to access via mapping templates and pass downstream as contextual information.
If this is not working for you, please update your question to reflect:
How your API methods are configured.
What your mapping template is.
What results you see in testing.
UPDATE
The code in your Lambda function is checking the context of the Lambda function itself, not the value passed in from API Gateway. To access the value passed in from API Gateway, you would need to use event.identity, not context.identity.
This would only half-solve your problem, as you are also not using the correct value to access the identity in API Gateway. That would be $context.identity.cognitoIdentityId (assuming you are using Amazon Cognito auth). Please see the mapping template reference for a full guide to the supported variables.
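Putting the two together, assuming a body mapping template along the lines of { "identity" : "$context.identity.cognitoIdentityId" }, the handler would read the value from the event, e.g.:

```python
def lambda_handler(event, context):
    # 'identity' is whatever the API Gateway body mapping template placed in the
    # request body, e.g. { "identity": "$context.identity.cognitoIdentityId" }
    print(event.get('identity'))
    return True
```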
Finally, you may want to consider using the template referenced in this question.

LinkedIn API OAuth 2.0 REST query parameters

I'm running into a problem with adding a query to the callback URL. I'm getting an invalid URI scheme error attempting to authorize the following string:
https://www.linkedin.com/uas/oauth2/authorization?response_type=code&client_id=75df1ocpxohk88&scope=rw_groups%20w_messages%20r_basicprofile%20r_contactinfo%20r_network&state=7a6c697d357e4921aeb1ba3793d7af5a&redirect_uri=http://marktest.clubexpress.com/basic_modules/club_admin/website/auth_callback.aspx?type=linkedin
I've read some conflicting information in forum posts here. Some say that it's possible to add query strings to callbacks, and others say that it results in an error.
If I remove ?type=linkedin, I can authorize just fine and receive the token. It would make my life so much easier if I could use a query string on the callback url, as I need to do some additional processing in the callback.
In short, can I append a query string to the end of the callback url?
For fun, I tried encoding the callback url in the request (obviously this is a no-no according to their documentation):
https://www.linkedin.com/uas/oauth2/authorization?response_type=code&client_id=75df1ocpxohk88&scope=rw_groups%20w_messages%20r_basicprofile%20r_contactinfo%20r_network&state=5cabef71d89149d48df523558bd12121&redirect_uri=http%3a%2f%2fmarktest.clubexpress.com%2fbasic_modules%2fclub_admin%2fwebsite%2fauth_callback.aspx%3ftype%3dlinkedin
This also resulted in an error but was worth a shot.
The documentation here, https://developer.linkedin.com/forum/oauth-20-redirect-url-faq-invalid-redirecturi-error, indicates that you CAN use query parameters, and in the first request it appears that I'm doing it correctly. Post #25 on this page, https://developer.linkedin.com/forum/error-while-getting-access-token, indicates that you have to remove the query parameters to make it work.
Does anyone have experience with successfully passing additional query parameters in the callback URL for the LinkedIn API using OAuth 2.0? If so, what am I doing wrong?
I couldn't wait around for the LinkedIn reps to respond. After much experimentation, I can only surmise that the use of additional query parameters in the callback is not allowed (thanks for making my application more complicated). As suggested in post #25 from the question, I've tucked away the things I need in the "state=" parameter of the request so that they're returned to my callback.
In my situation, I'm processing multiple APIs from my callback and requests from multiple users, so I need to know the type and the user number. As a solution, I'm attaching a random string to a prefix, so that I can extract it in my callback and process it. Each state= value will therefore be unique, as well as giving me a unique key for caching and retrieving objects from the cache.
so state="Linkedin-5hnx5322d3-543"
so, on my callback page (for you c# folks)
_stateString = Request["state"];
_receivedUserId = _stateString.Split('-')[2];
_receivedCacheKeyPrefix = _stateString.Split('-')[0];
if (_receivedCacheKeyPrefix == "Linkedin") {
    getUserDomain(_receivedUserId);
    oLinkedin.AccessTokenGet(Request["code"], _userDomain);
    if (oLinkedin.Token.Length > 0) {
        _linkedinToken = oLinkedin.Token;
        // now cache the token using the entire _stateString and user id (removed for brevity)
    }
}
You're not allowed to do that.
Refer to the doc: https://developer.linkedin.com/docs/oauth2
Please note that:
We strongly recommend using HTTPS whenever possible
URLs must be absolute (e.g. "https://example.com/auth/callback", not "/auth/callback")
URL arguments are ignored (i.e. https://example.com/?id=1 is the same as https://example.com/)
URLs cannot include #'s (i.e. "https://example.com/auth/callback#linkedin" is invalid)
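So anything extra has to ride along in the state parameter instead, roughly as in the workaround above. A small sketch (Python, reusing the client_id and redirect_uri from the question; the prefix/user-id scheme is just the one the accepted workaround describes):

```python
import uuid
from urllib.parse import urlencode

# Pack the extra context (an API "type" prefix and a user id) into state,
# since query arguments on redirect_uri are ignored.
user_id = 543
state = 'Linkedin-{}-{}'.format(uuid.uuid4().hex[:10], user_id)

auth_url = 'https://www.linkedin.com/uas/oauth2/authorization?' + urlencode({
    'response_type': 'code',
    'client_id': '75df1ocpxohk88',
    'scope': 'rw_groups w_messages r_basicprofile r_contactinfo r_network',
    'state': state,
    'redirect_uri': 'http://marktest.clubexpress.com/basic_modules/club_admin/website/auth_callback.aspx',
})

# In the callback, split state the same way the C# snippet above does.
prefix, nonce, returned_user_id = state.split('-')
```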