Can't apply for next level rate limit with the Facebook Marketing API

I have to create a script that creates ad sets and ads for a Facebook campaign, and I have to do it for a lot of items. For now, I can create every needed entity, but there is a big problem: the rate limit. I reach it pretty quickly (I can create about 15 items before getting a rate limit exception), and this is very limiting; creating everything by hand is actually much faster. I want to apply for the next level of rate limiting, but I can't. One of my coworkers contacted someone from Facebook, and we were told we did not make any API calls using my app ID. Since I am able to create campaigns, ad sets, ads, etc., and we can see them in Power Editor, I don't understand what is going on.
[Screenshot: what my dashboard looks like]
We will need to be able to create everything using the API really soon, so after some research I'm asking the question here. Did I miss something when creating my app?

You probably want to go through the official request to promote your app from a Basic level to a Standard level. The level for your app determines how heavily it is rate limited. Details here: https://developers.facebook.com/docs/marketing-api/access
It sounds as if you have not made your official request in the app dashboard. It's possible we evaluated your number of API calls before you reached the threshold, or that the data we are able to see on your API calls was from an earlier time period when you did not consistently reach the boundary.
You could also be hitting rate limits due to your error rates.
You can apply here, and if needed, reapply: https://www.facebook.com/business/standardadsapi?attachment_canonical_url=https%3A%2F%2Fwww.facebook.com%2Fbusiness%2Fstandardadsapi
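While you wait for the upgrade, you can often get further on the Basic tier by backing off and retrying when the rate-limit error appears. Below is a minimal sketch using plain cURL; it assumes error codes 17 and 613 (the usual Graph API "request limit reached" codes), and the ad account id and token are placeholders:

// Create a Marketing API entity, retrying with exponential backoff on rate limits.
function createWithBackoff(string $url, array $params, int $maxRetries = 5) {
    for ($attempt = 0; $attempt <= $maxRetries; $attempt++) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $body = json_decode(curl_exec($ch), true);
        curl_close($ch);

        $code = $body['error']['code'] ?? null;
        if ($code === null) {
            return $body; // no error object: the entity was created
        }
        if ($code == 17 || $code == 613) {
            sleep(2 ** $attempt); // rate limited: wait 1s, 2s, 4s, ... then retry
            continue;
        }
        throw new RuntimeException($body['error']['message']);
    }
    throw new RuntimeException('Still rate limited after ' . $maxRetries . ' retries');
}
// Usage (placeholders): createWithBackoff('https://graph.facebook.com/act_<AD_ACCOUNT_ID>/adsets',
//     $adsetFields + ['access_token' => $accessToken]);

This won't raise the ceiling, but it keeps a long-running creation script from dying on the first exception.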

Related

How to send custom dimensions, medium, source or referrer with an event via Measurement Protocol V2?

With v1 of the Measurement Protocol, you could use these parameters to add custom dimensions or change the medium, source, or referrer for a page view:
https://ssl.google-analytics.com/collect?v=1&tid=UA-xxxxxxxx&cid=[custom-id]&t=pageview&dp=[Url of pageview]&dh=[hostname of pageview]&cm=[new-medium]&cs=[new-source]&dr=[new-referer]&cd1=[custom-dimension-1]&cd2=[custom-dimension-2]
How is it done in measurement protocol v2?
I couldn't find any documentation about the page-view event in V2 (for example, it's just not mentioned here:
https://developers.google.com/analytics/devguides/collection/protocol/ga4/reference/events), and even the event builder (https://ga-dev-tools.web.app/ga4/event-builder/) doesn't support a simple page view.
So, all I've got so far is this:
// [custom-id] and [Url of pageview] are placeholders, as in the v1 example above
$data = json_encode([
    'client_id' => $customId,                 // [custom-id]
    'events' => [
        [
            'name'   => 'page_view',
            'params' => [
                'page_location' => $pageUrl,  // [Url of pageview]
            ],
        ],
    ],
]);
So, what are the possible parameters for a page-view event?
OK, a few things right away that you should know if you're playing with MP:
1. "Measurement protocol" is a poor name. It implies there's more than one protocol for data gathering, but there isn't; there is just the one tracking protocol.
2. MP2 is still largely MP1. Google tries to position GA4 as a new product, but it's just our good old GA UA with a simplified backend and an over-engineered front-end that tries to deliver the level of quality Site Catalyst/Omniture/Adobe Analytics has been delivering for a decade. The MP is largely the same: dr, cm, cs and a lot of other fields are still there. CDs aren't there anymore because they've been replaced with eps and ups, but more about that a bit later.
3. GA4 makes the big marketing claim that the new analytics is so wonderfully event-based, unlike the old one. When I dug into why they keep claiming this everywhere, I realized that the only difference is that pageviews are now events. Not much of a difference, really. But yes, a pageview is just an event named page_view. We'll talk about it a bit more later.
4. Custom dimensions are no more. They're now called event properties and user properties. The same thing, really; Google just tries to make it less obvious that there are no more session-level custom dimensions, or product-level CDs, though the product level is seemingly on their roadmap. (See the sketch right after this list.)
5. Make sure you're using the correct measurement id. They made it a lot harder to find in GA4; it's no longer just the property id visible in the property list, unfortunately.
6. GA's real-time reports don't include all dimensions, especially if those dimensions are involved in advanced metric/dimension calculations. Don't use real-time reports to inspect the content of your events; they're not meant for debugging. It's a vanity report, though it's still helpful for checking the volume of events when you're sending a bunch and expect to see them in GA. Google even has a warning here:
"Like the DebugView report, the Realtime report performs limited attribution analysis to ensure responsive reporting. We recommend that you refer to the Acquisition reports for the most accurate attribution information."
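To make point 4 concrete, here is a minimal sketch of where the old CDs live in an MP2 payload. The names my_event_scoped_dim and my_user_scoped_dim are made up for illustration; they also have to be registered as custom definitions in the GA4 admin UI before they show up in reports:

// Event-scoped (ep) and user-scoped (up) "custom dimensions" in a GA4 MP payload.
$payload = json_encode([
    'client_id' => '123.456',
    'user_properties' => [
        'my_user_scoped_dim' => ['value' => 'gold_member'], // was a user-level CD
    ],
    'events' => [[
        'name'   => 'page_view',
        'params' => [
            'page_location'       => 'https://example.com/page',
            'my_event_scoped_dim' => 'some-value', // was a hit-level CD
        ],
    ]],
]);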
Finally, instead of reading the still-unfinished and not-really-helpful documentation on MP2, what I often do is either use a library like this,
or, since point 1 is the case, just implement a dummy tracking setup in my test GTM, watch in the network debugger what it sends, how, and where, and then reimplement it on my side exactly the way GTM does it. No magic involved. Here is what my GTM tag would look like:
With a trigger on any click or any page load. After everything is done, I publish the container. Then I inject this GTM container's code into a local site, or my test site, or wherever else you want to test it, and trigger the tag that you need to mimic with MP.
I use this wonderful extension to show all events that fire and their details right in my console.
Now this is how the above tag looks on my test site through the extension:
It's pretty useful.
How do I know that page_referrer is sent as dr rather than as an ep in GTM? Here is the list of the fields that will never be sent as eps. But Google doesn't care enough to map them to what these fields are called in MP, so you either have to test, or know, or look it up elsewhere.
Finally, here is what the network request looks like:
I published the tag to prod (I keep a test site in prod), so you can go and look at it. Or just find any site that uses GA4 and look at its network requests. How does Google know that this is a pageview? By the event name: en=page_view.
Of course, you do the same with medium and source. Judging from the documentation I've linked to above, medium and source are called campaign_source and campaign_medium in GTM, and GTM maps them to the cs and cm fields accordingly; that's how you know these are the correct MP fields (see the sketch below). Give GA time to process these and check on them in a few days.
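Putting the mapping together, here is a minimal sketch of the full MP2 request. The /mp/collect endpoint and the measurement_id/api_secret query parameters are from the MP2 docs; sending campaign_source, campaign_medium and page_referrer as event params mirrors what the GTM-built hit above shows, so verify it against your own property:

// Send a page_view with source/medium/referrer over the GA4 Measurement Protocol.
// G-XXXXXXX and YOUR_API_SECRET are placeholders (Admin > Data Streams > MP API secrets).
$url = 'https://www.google-analytics.com/mp/collect'
     . '?measurement_id=G-XXXXXXX&api_secret=YOUR_API_SECRET';

$payload = json_encode([
    'client_id' => '123.456',
    'events' => [[
        'name'   => 'page_view',
        'params' => [
            'page_location'   => 'https://example.com/page',  // dp/dh in v1
            'page_referrer'   => 'https://referrer.example',  // dr in v1
            'campaign_source' => 'newsletter',                // cs in v1
            'campaign_medium' => 'email',                     // cm in v1
        ],
    ]],
]);

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: application/json']);
curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch); // MP answers 2xx with an empty body whether or not GA liked the hit
curl_close($ch);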
Good, now this is applicable to the enhanced ecommerce hits too; it's just that they typically have more variables and data structures in them.
Finally, if you want to simulate batch events, you can just make a few tags fire in rapid succession, and GTM will neatly pack them into one network request if they fit. You can then dissect how the packing is done using the same methods we used here, and simulate it.
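On the MP side, batching is nothing more than multiple entries in the events array of a single request (the GA4 MP docs cap it at 25 events per request). Building on the payload sketched above:

// Several events packed into one MP request, as GTM does with rapid-fire tags.
$payload = json_encode([
    'client_id' => '123.456',
    'events' => [
        ['name' => 'page_view', 'params' => ['page_location' => 'https://example.com/a']],
        ['name' => 'page_view', 'params' => ['page_location' => 'https://example.com/b']],
        ['name' => 'scroll',    'params' => ['percent_scrolled' => 90]],
    ],
]);
// POST $payload to the same /mp/collect URL as in the earlier sketch.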

How can I search for past sent emails with Sendgrid?

As Sendgrid's documentation makes clear, their web GUI activity page is only searchable for the past 7 days.
How do I search for activity from farther in the past?
The Web API documentation is here, but I can't find anything about plain searching for info on sent emails. All I see are endpoints for retrieving particular categories of emails' various fates, like blocks, bounces, and invalid emails, plus "filters", which seem to be actions rather than filters.
It's got to be possible to just find info about some particular sent email, right?
It's not possible. As you noted, the documentation clearly states that:
Email activity only shows the most recent 7 days. To access data in
real time, we recommend that you consider implementing our Event
Webhook.
If you want the full history associated with your account, you should record and save it yourself. You can record all the emails you send, provided you have an endpoint to do so. See here: https://sendgrid.com/docs/User_Guide/Settings/parse.html
Later Edit:
"real time" means "as it happens", it does not mean "history searchable at any point in time".
When you use an API, the responsibility to log all API calls and responses lies with you, the developer. While it's true that bounces aren't necessarily reported in the API call response, the SendGrid API offers several ways in which you can be notified. Personal opinion: I know this functionality is often omitted from the MVP because you need to go to market as soon as possible, but an ELK stack is not that hard to set up.
There are several ways you can be notified of bounces and other events, as you can see here: https://sendgrid.com/docs/Classroom/Track/Bounces/bounce_reports_how_can_i_be_notified.html
- Webhook for events (a receiver is sketched below): http://sendgrid.com/docs/API_Reference/Webhooks/event.html
- Enabling Bounce Forwarding on your account
- Bounce API: https://sendgrid.com/docs/API_Reference/Web_API_v3/bounces.html
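If you go the Event Webhook route, the receiving side can stay very small. Here is a minimal sketch of a logging endpoint; the field names (timestamp, event, email, sg_message_id) are the ones the Event Webhook documentation lists, but check the current docs before relying on them:

// webhook.php - minimal SendGrid Event Webhook receiver.
// SendGrid POSTs a JSON array of events to the URL you configure.
$events = json_decode(file_get_contents('php://input'), true) ?: [];

$log = fopen('/var/log/sendgrid-events.log', 'a');
foreach ($events as $e) {
    fputcsv($log, [
        $e['timestamp']     ?? '',
        $e['event']         ?? '', // processed, delivered, bounce, open, ...
        $e['email']         ?? '',
        $e['sg_message_id'] ?? '',
    ]);
}
fclose($log);
http_response_code(200); // a non-2xx response makes SendGrid retry the batch

Swap the flat file for a database or an ELK stack and you have the searchable history the GUI won't give you.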
If you really need to find out what happened on day X with email Y, you can contact their support team. They can probably look it up for you.
Personal opinion:
That 7 days is not a random number. I'm willing to bet that SendGrid does in fact log every call you make, but serving that history back is another matter. When you use the Facebook API, the Twitter API, etc., you don't expect them to provide you with historical data on every API call you made. This is an ungodly amount of data; we're talking about an API that is used to send probably upwards of millions of emails per day, maybe more. I believe they actually did the math and decided that recalling older historical data would put an unnecessary strain on the system and take a long time to answer such a request.
I'm sorry if I went on a bit of a rant, but people often don't think about the volume of data needed to store such things and how much it would cost to search it.

Google Places API - How much can I uplift the quota with uplift quota request form?

I am the manager of an iOS application that uses the Google Places API. Right now I am limited to 100,000 requests, and during our testing one or two users could use up to 2,000 requests per day (without autocomplete). This means that only about 50 to 200 people will be able to use the app per day before I run out of quota. I know I will need to fill out the uplift request form when the app launches to get more quota, but based on these test results I feel that I will need a very large quota. Can anyone help me with this issue?
Note: I do not want to launch the app until I know I will be able to get a larger quota.
First up, put your review request in sooner rather than later so I have time to review it and make sure it complies with our Terms of Service.
Secondly, how are your users burning 2,000 requests per day? Would caching results help you lower your request count? (One possible approach is sketched below.)
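If caching is an option for your use case, even a tiny TTL cache keyed on the query can collapse repeated lookups. A rough sketch (file-based and using allow_url_fopen for brevity); note that the Places ToS restricts how long most response data may be stored, with place IDs being the usual exception, so check the current terms before persisting anything:

// Tiny file-based TTL cache around a Places text search.
function cachedPlacesSearch(string $query, string $apiKey, int $ttl = 600): array {
    $file = sys_get_temp_dir() . '/places_' . md5($query) . '.json';
    if (is_file($file) && time() - filemtime($file) < $ttl) {
        return json_decode(file_get_contents($file), true); // cache hit: no quota spent
    }
    $url = 'https://maps.googleapis.com/maps/api/place/textsearch/json'
         . '?query=' . urlencode($query) . '&key=' . $apiKey;
    $response = file_get_contents($url);
    file_put_contents($file, $response);
    return json_decode($response, true);
}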
I'm facing the same problem!
Is it possible to use the Places library of the Google Maps JavaScript API, which applies the quota per end user instead of per API key, so that the quota grows as your user count grows? See here.
Theoretically I think it's possible, since it only needs a WebView or a JavaScript runtime to run the library, but I haven't seen anyone use this approach.

How do I see the aggregates of my published "news.reads" actions?

I have a Facebook application that wants to publish document reads to a user's OpenGraph.
Since read is a reserved, built-in action, my objects have to have the type article. The publishing of reads to the user's graph works fine and the last read is also shown on the user's timeline.
Additionally, I have set up some aggregations that would show the last 5 reads, the most popular authors, etc. The problem is that I cannot find those aggregations anywhere on my timeline/profile or in the app section of my profile.
Is it not possible to control/show aggregations for built-in actions and objects?
I have a feeling it should be, since I can set them up, and (for example) Spotify also uses the built-in music.song objects, as shown below; this is basically what I want as well.
All I am seeing on my app's timeline section, though, is this:
I believe you are not in control of when Facebook displays your aggregations as you have defined them in your Open Graph settings, since Facebook uses the so-called "GraphRank" to determine whether to show your aggregation or not. The calculation goes like this:
GraphRank = affinity * weight * interactions * time
affinity (score): this is the relationship between the viewing user and the creator of the action.
weight: if two users interact frequently with each other, the respective actions in the open graph are rated higher than for users who do not have the same interest and are not in close contact on Facebook.
interactions: how often does the user interact with the application and how do friends react to the activities in the social channels (if nobody clicks on the published actions it's bad for the GraphRank).
time: if an app is used irregularly or only once, actions will receive less attention in the long run and will be presented less prominently on the timeline.
See this article: http://www.insidefacebook.com/2011/12/27/edgerank-and-graph-rank-defined/
This is not a perfect answer to the actual question, but I was able to solve the problem nevertheless. In case someone else is in the same spot, you might profit from my learnings:
The application I'm building wants to push read actions to a user's OpenGraph. My aggregation problem was that reads from the built-in news.reads action did not get aggregated. To this day, I do not know why.
Instead, I managed to create my own read action. It is not connected to the built-in one and lives in my own namespace.
This action can now be connected to my own objects as well, and is not bound to the article object, as the built-in one is.
Having my own actions and objects, it was a breeze to follow the instructions for aggregations and create as many aggregations as I like. They actually show up in my test users' profiles, too. Yay! (Publishing such a custom action is sketched below.)
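For reference, publishing such a custom action is a single Graph API POST. A sketch with made-up names: myapp is a hypothetical namespace, document a hypothetical object type, and the object URL must serve the matching og:type meta tags:

// Publish a custom myapp:read action against one of your own objects.
$ch = curl_init('https://graph.facebook.com/me/myapp:read');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query([
    'document'     => 'https://example.com/docs/42', // page tagged og:type = myapp:document
    'access_token' => $userAccessToken,              // token with publish_actions permission
]));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = json_decode(curl_exec($ch), true); // contains the new action's id on success
curl_close($ch);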

Facebook application load and performance testing

So I've hit a bit of a dilemma with my application load testing. My application relies on valid Facebook logins as I create shadow records that correspond to the users who log in.
How can I load test my application while keeping the Facebook calls in place (rather than disabling them)?
I need to ensure at least 100,000 users can connect without getting bogged down.
My code runs fairly fast so far; on single loads I'm averaging 1000 ms pre-caching. But I'd like to do some more load testing before I turn on my cache.
How can I do this?
From what I've come across, everyone seems to say to just turn off the Facebook calls and load test as if the application were a regular site. I also came across something called friendrunner, which seemed like it could be the solution to my problem, except no one from there has gotten back to me yet.
You can't. Or rather, you really shouldn't, and probably can't anyway. Facebook is one of the more aggressive sites when it comes to measures designed to prevent synthetic (scripted) interaction, and if you try to get around these measures you risk Facebook taking measures against you (probably not legal ones, but they can certainly suspend your account, and if you have a corporate agreement with them it could get embarrassing).
But this shouldn't be an issue for performance testing. You simply need to stub out the Facebook calls and focus on writing scripts that only call the servers you want to load test; this is best practice for any project. In the past, I have simply used random strings to simulate the Facebook account id, and where your application requires certain user information from an account, you will need to be slightly more creative and stub that out too. As far as I can tell, friendrunner is just that: a Facebook stub. (A minimal sketch of the idea follows.)
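A minimal sketch of what such a stub can look like, assuming your app reads the Graph API base URL from configuration so you can point it at the stub during tests:

// fb-stub.php - stand-in for graph.facebook.com/me during load tests.
header('Content-Type: application/json');

// Fake a Graph API user with a random account id, as described above.
echo json_encode([
    'id'    => (string) random_int(100000000, 999999999),
    'name'  => 'Load Test User ' . random_int(1, 100000),
    'email' => 'loadtest' . random_int(1, 100000) . '@example.com',
]);

Deploy it next to the app under test, point the app's Facebook base URL at it, and your load generator never has to touch Facebook at all.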