Facebook Marketing API (#2654) Account Request Limits Reached, Sub Error Code: 1870024

We've been using the Facebook Marketing API v4.0. For one account, I keep getting the error below. We cannot find any info about this sub error code in the Facebook docs. Our client deleted some Custom Audiences in their account, but nothing changed.
{"error":
{"message":"(#2654) Account Request Limits Reached: You've reached the total number of times
you can create a Custom Audience through one or more ad accounts in this business.",
"type":"OAuthException",
"code":2654,
"error_subcode":1870024,
"fbtrace_id":"AmN2PwJB8utcYZCXu2y-9TF"}
}

You can check the Changelog of the v4.0 API under Breaking Changes - Ads Management here:
Updated the rate limit for several areas under the Marketing API. This includes:
custom_audience - Per ad account, in a one-hour time period:
Standard Tier Apps: Minimum of 190000 + 40 * Number of Active custom audiences. Maximum of 700000.
Dev Tier Apps: Minimum of 5000 + 40 * Number of Active custom audiences. Maximum of 700000.
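To see what that formula means in practice, here is a quick illustration of the arithmetic (a sketch only; the function name is ours, not part of the API):

# Standard tier: minimum of 190000 + 40 * number of active custom audiences,
# capped at a maximum of 700000 calls per ad account per hour.
def hourly_call_budget(active_custom_audiences):
    return min(190000 + 40 * active_custom_audiences, 700000)

print(hourly_call_budget(500))    # 210000
print(hourly_call_budget(20000))  # 700000, the cap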
You can also check the Business Use Case Rate Limits in the X-Business-Use-Case-Usage response header, as described here, for example:
x-business-use-case-usage: {
    "{business-object-id}": [
        {
            "type": "{rate-limit-type}",            // Type of BUC rate limit logic being applied.
            "call_count": 100,                      // Percentage of calls made.
            "total_cputime": 25,                    // Percentage of the total CPU time that has been used.
            "total_time": 25,                       // Percentage of the total time that has been used.
            "estimated_time_to_regain_access": 19   // Time in minutes to regain access.
        }
    ],
    "66782684": [
        {
            "type": "ads_management",
            "call_count": 95,
            "total_cputime": 20,
            "total_time": 20,
            "estimated_time_to_regain_access": 0
        }
    ],
    "10153848260347724": [
        {
            "type": "ads_management",
            "call_count": 97,
            "total_cputime": 23,
            "total_time": 23,
            "estimated_time_to_regain_access": 0
        }
    ],
    ...
}
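In practice you can read that header and back off before the limit trips. A minimal sketch with requests (the endpoint, ad account id, and token are placeholders, not from the original post):

import json
import requests

response = requests.get(
    "https://graph.facebook.com/v4.0/act_<AD_ACCOUNT_ID>/customaudiences",
    params={"access_token": "<ACCESS_TOKEN>"},
)

# The header value is a JSON map of business-object id -> list of usage entries.
usage = json.loads(response.headers.get("x-business-use-case-usage", "{}"))
for object_id, entries in usage.items():
    for entry in entries:
        if entry.get("estimated_time_to_regain_access", 0) > 0:
            print(object_id, "throttled, retry in ~%s min"
                  % entry["estimated_time_to_regain_access"])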
Hope this helps.


OpenSearch: Failed to set number of replicas due to no permissions

I have a problem with running an index management policy for new indices. I get the following error on the "set number_of_replicas" step:
{
    "cause": "no permissions for [indices:admin/settings/update] and associated roles [index_management_full_access, own_index, security_rest_api_access]",
    "message": "Failed to set number_of_replicas to 2 [index=sample.name-2022.10.22]"
}
The indices are created by logstash with "sample.name-YYYY.MM.DD" name template, so in the index policy I have "sample.name-*" index pattern.
My policy:
{
    "policy_id": "sample.name-*",
    "description": "sample.name-* policy",
    "schema_version": 16,
    "error_notification": null,
    "default_state": "set replicas",
    "states": [
        {
            "name": "set replicas",
            "actions": [
                {
                    "replica_count": {
                        "number_of_replicas": 2
                    }
                }
            ]
        }
    ],
    "ism_template": [
        {
            "index_patterns": [
                "sample.name-*"
            ],
            "priority": 1
        }
    ]
}
I don't understand the reason for this error. Am I doing something wrong?
Retrying the policy doesn't work.
The policy works only if I manually reassign it to the index via Dashboards or the API.
OpenSearch version: 2.3.0
The first time, I created the policy using the API under a custom internal user that had only the "security_rest_api_access" security role mapped.
So I added all_access rights to my internal user and re-created the policy, and now it works!
It seems that the policy runs under the internal user that created it.
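For reference, a minimal sketch of what worked: re-creating the policy through the ISM API as a sufficiently privileged user. The host, policy id, credentials, and the verify=False test-cluster shortcut are illustrative assumptions:

import requests

# PUT the policy under an id; the policy then runs in the security context
# of the user who created it, so that user needs the required index permissions.
response = requests.put(
    "https://localhost:9200/_plugins/_ism/policies/sample-name-policy",
    json={
        "policy": {
            "description": "sample.name-* policy",
            "default_state": "set replicas",
            "states": [
                {
                    "name": "set replicas",
                    "actions": [{"replica_count": {"number_of_replicas": 2}}]
                }
            ],
            "ism_template": [
                {"index_patterns": ["sample.name-*"], "priority": 1}
            ]
        }
    },
    auth=("admin", "<password>"),
    verify=False,  # only acceptable against a test cluster with self-signed certs
)
print(response.status_code, response.json())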

Batch create transcription always results in: The recordings URI contains invalid data

I would like to use the Azure Speech Services Batch Transcription API to create a transcription of my audio file. I've already had success using the Speech SDK (for Node.js), but I was interested in trying out one of the newer features available in the v3.1 preview version of the API (displayFormWordLevelTimestampsEnabled), so I figured I had to use the REST API to do that.
Overall, my problem is that whatever input I feed the Create Transcript API in contentUrls, I always end up getting the same error:
"error": {
"code": "InvalidData",
"message": "The recordings URI contains invalid data."
}
After a little digging, I found some tips through the Azure portal to use sox to handle transcoding the audio file in the specific format requested.
The specific format they mention in the portal documentation shows:
If you are using the REST API, make sure that it uses one of the formats in this table:

Format | Codec | Bit rate | Sample rate
WAV    | PCM   | 256 kbps | 16 kHz, mono
OGG    | OPUS  | 256 kbps | 16 kHz, mono
With the sox-specific commands being:

Activity                                                  | SoX command
Check the audio file format.                              | sox --i <filename>
Convert the audio file to single channel, 16-bit, 16 kHz. | sox <input> -b 16 -e signed-integer -c 1 -r 16k -t wav <output>.wav
I ran my mp3 through the second command and verified the result with the first; the file info looks like:
Input File : 'out5.wav'
Channels : 1
Sample Rate : 16000
Precision : 16-bit
Duration : 00:00:30.09 = 481488 samples ~ 2256.97 CDDA sectors
File Size : 963k
Bit Rate : 256k
Sample Encoding: 16-bit Signed Integer PCM
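In case it helps anyone scripting this, the same conversion can be driven from Python (a sketch; it assumes sox is on the PATH, and the input file name is illustrative):

import subprocess

# Transcode an mp3 to 16-bit signed-integer PCM, mono, 16 kHz WAV,
# matching the format table from the portal documentation.
subprocess.run(
    ["sox", "input.mp3", "-b", "16", "-e", "signed-integer",
     "-c", "1", "-r", "16k", "-t", "wav", "out5.wav"],
    check=True,
)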
Finally, I uploaded the file to a public S3 bucket, to use as my content url for my request:
POST https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions
{
    "contentUrls": [
        "https://s3.us-west-1.amazonaws.com/xxxx/out5.wav"
    ],
    "locale": "en-US",
    "displayName": "Test"
}
Still it failed with the same error that I posted above. Any insights into what might be wrong? Thanks!
Update:
The answer below mentions being able to reference a report.json file via the Get Transcript/Create Transcript API calls.
When I call the Create Transcript API, the response is:
{
    "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/transcriptions/02815462-e9c0-4fdc-8bbe-7b0e78152f95",
    "model": {
        "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/models/base/c3b008fa-eb47-4f6d-a5b9-71dd37870bb7"
    },
    "links": {
        "files": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/transcriptions/02815462-e9c0-4fdc-8bbe-7b0e78152f95/files"
    },
    "properties": {
        "diarizationEnabled": false,
        "wordLevelTimestampsEnabled": false,
        "displayFormWordLevelTimestampsEnabled": false,
        "channels": [0, 1],
        "punctuationMode": "DictatedAndAutomatic",
        "profanityFilterMode": "Masked"
    },
    "lastActionDateTime": "2022-09-13T23:37:09Z",
    "status": "NotStarted",
    "createdDateTime": "2022-09-13T23:37:09Z",
    "locale": "en-US",
    "displayName": "Test"
}
Calling the Get Transcript API, I see:
{
    "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/transcriptions/02815462-e9c0-4fdc-8bbe-7b0e78152f95",
    "model": {
        "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/models/base/c3b008fa-eb47-4f6d-a5b9-71dd37870bb7"
    },
    "links": {
        "files": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/transcriptions/02815462-e9c0-4fdc-8bbe-7b0e78152f95/files"
    },
    "properties": {
        "diarizationEnabled": false,
        "wordLevelTimestampsEnabled": false,
        "displayFormWordLevelTimestampsEnabled": false,
        "channels": [0, 1],
        "punctuationMode": "DictatedAndAutomatic",
        "profanityFilterMode": "Masked",
        "error": {
            "code": "InvalidData",
            "message": "The recordings URI contains invalid data."
        }
    },
    "lastActionDateTime": "2022-09-13T23:37:22Z",
    "status": "Failed",
    "createdDateTime": "2022-09-13T23:37:09Z",
    "locale": "en-US",
    "displayName": "Test"
}
And finally, looking at the transcript files, I get an empty list:
{
    "values": []
}
I see no reference to a report.json, or any data populated here at all.
In many cases you can get detailed error information by doing a GET on https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/<transcription_id>/files and looking at the report.json that is referenced there.
If that doesn't help, you could post the transcription id(s) of the failed transcriptions so someone from the team (I am one of them) can look at the service logs.
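To make that concrete, a minimal sketch of the suggested check (the subscription key is a placeholder; "TranscriptionReport" is the file kind the v3 files listing uses for the report, to the best of my knowledge):

import requests

FILES_URL = ("https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/"
             "transcriptions/<transcription_id>/files")
headers = {"Ocp-Apim-Subscription-Key": "<your-speech-key>"}

# List the transcription's files and fetch the error report, if present.
files = requests.get(FILES_URL, headers=headers).json()
for f in files.get("values", []):
    if f.get("kind") == "TranscriptionReport":
        report = requests.get(f["links"]["contentUrl"]).json()
        print(report)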

How to create a schedule in PagerDuty using the REST API and Python

I went through the PagerDuty documentation but still couldn't understand what parameters to send in the request body, and I'm also having trouble understanding how to make the API request. If anyone can share sample code for creating a PagerDuty schedule, that would help me a lot.
Below is sample code to create schedules in PagerDuty.
Each list can have multiple items (to add more users / layers).
import requests

url = "https://api.pagerduty.com/schedules?overflow=false"

payload = {
    "schedule": {
        "schedule_layers": [
            {
                "start": "<dateTime>",  # Start time of layer | "start": "2021-01-01T00:00:00+05:30"
                "users": [
                    {
                        "user": {
                            "id": "<string>",  # ID of user to add to the layer
                            "summary": "<string>",
                            "type": "<string>",  # "type": "user"
                            "self": "<url>",
                            "html_url": "<url>"
                        }
                    }
                ],
                "rotation_virtual_start": "<dateTime>",  # Start of rotation | "rotation_virtual_start": "2021-01-01T00:00:00+05:30"
                "rotation_turn_length_seconds": "<integer>",  # Layer rotation, for switching between multiple users | "rotation_turn_length_seconds": <seconds>
                "id": "<string>",  # Auto-generated. Only needed if you want to update an existing schedule layer
                "end": "<dateTime>",  # End time of layer | "end": "2021-01-01T00:00:00+05:30"
                "restrictions": [
                    {
                        "type": "<string>",  # To restrict a shift to certain timings, weekly, daily, etc. | "type": "daily_restriction"
                        "duration_seconds": "<integer>",  # Duration of the restriction | "duration_seconds": "300"
                        "start_time_of_day": "<partial-time>",  # Start time of the restriction | "start_time_of_day": "00:00:00"
                        "start_day_of_week": "<integer>"
                    }
                ],
                "name": "<string>"  # Name to give the layer
            }
        ],
        "time_zone": "<activesupport-time-zone>",  # Timezone for the layer and its timings | "time_zone": "Asia/Kolkata"
        "type": "schedule",
        "name": "<string>",  # Name to give the schedule
        "description": "<string>",  # Description to give the schedule
        "id": "<string>"  # Auto-generated. Only needed if you want to update an existing schedule
    }
}

headers = {
    'Authorization': 'Token token=<Your token here>',
    'Accept': 'application/vnd.pagerduty+json;version=2',
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, json=payload)
print(response.text)
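As a small follow-up check on the response (assuming the request above): PagerDuty returns 201 Created with the new schedule in the body, so you can pull out its id.

if response.status_code == 201:
    print("Created schedule:", response.json()["schedule"]["id"])
else:
    print("Request failed:", response.status_code, response.text)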
The best way to do this is to get the Postman collection for PagerDuty and edit the request to your liking. Once you get a successful response, convert it into code using Postman's built-in feature.
Using the PagerDuty API for scheduling is not easy. Creating a new schedule is okay-ish, but if you decide to update a schedule, it is definitely not trivial. You'll probably run into a bunch of limitations: the number of restrictions per layer, having to reuse current layers, etc.
As an option, you can use the Python library pdscheduling: https://github.com/skrypka/pdscheduling

Extracting Client ID as a custom dimension with audience dimensions through API

I'm looking to get some answers here regarding the matter. There is no formal documentation available, so I would like some answers to my dilemma. Currently in Analytics, I have Client ID set up as a custom dimension with session scope, and I'm trying to match this Client ID with other dimensions via the Analytics Reporting API v4. (The reason for doing so is that for Client ID to be available outside of User Explorer in Analytics, one has to set up a custom dimension for it.)
It's come to my attention that when I try to match Client ID with an audience dimension, such as Affinity, nothing comes up. But if I do so with another dimension, like PagePath + Affinity, the table exists. So I know it is possible to pull audience dimensions with other dimensions, and it's possible to pull Client ID together with other dimensions. What I'm trying to understand is why I can't pull Client ID together with audience dimensions.
Some clarification on the matter would truly be appreciated, thanks.
For example (I can't show everything, but this is the response body from the Python script), in the case where I try to match my custom dimension (Client ID, session scope) with Affinity:
request_report = {
    'viewId': VIEW_ID,
    'pageSize': 100000,
    'dateRanges': [{'startDate': '2018-12-14',
                    'endDate': 'today'}],
    'metrics': [{'expression': 'ga:users'}],
    'dimensions': [{'name': 'ga:dateHour'},
                   {'name': 'ga:dimension1'},
                   {'name': 'ga:interestAffinityCategory'}]
}
response = api_client.reports().batchGet(
    body={
        'reportRequests': [request_report]
    }).execute()
Output:
ga:dateHour ga:dimension1 ga:interestAffinityCategory ga:users
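Only the header comes back. A quick check on the v4 response structure confirms the row set is really empty (a sketch against the response object above):

# Reporting API v4 nests rows under reports -> data -> rows; a dimension
# combination the API refuses to serve simply yields no rows.
rows = response['reports'][0]['data'].get('rows', [])
print(len(rows))  # 0 for the Client ID + Affinity combination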
Changing my dimensions to pagePath + Affinity:
request_report = {
    'viewId': VIEW_ID,
    'pageSize': 100000,
    'dateRanges': [{'startDate': '2018-12-14',
                    'endDate': 'today'}],
    'metrics': [{'expression': 'ga:users'}],
    'dimensions': [{'name': 'ga:dateHour'},
                   {'name': 'ga:pagePath'},
                   {'name': 'ga:interestAffinityCategory'}]
}
response = api_client.reports().batchGet(
    body={
        'reportRequests': [request_report]
    }).execute()
Output:
ga:dateHour ga:pagePath ga:interestAffinityCategory ga:users
2018121415 homepage Business Professionals 10
2019011715 join-beta Beauty Mavens 16
2019011715 join-beta Frequently Visits Salons 21
Now say I change my combination to my custom dimension + ad content + device category:
request_report = {
    'viewId': VIEW_ID,
    'pageSize': 100000,
    'dateRanges': [{'startDate': '2018-12-14',
                    'endDate': 'today'}],
    'metrics': [{'expression': 'ga:users'}],
    'dimensions': [{'name': 'ga:dateHour'},
                   {'name': 'ga:dimension1'},
                   {'name': 'ga:adContent'},
                   {'name': 'ga:deviceCategory'}]
}
response = api_client.reports().batchGet(
    body={
        'reportRequests': [request_report]
    }).execute()
Output:
ga:dateHour ga:dimension1 ga:adContent ga:deviceCategory ga:users
2018121410 10 ad1 desktop 1
2018121410 111 ad1 mobile 1
2018121410 119 ad4 mobile 1
2018121410 15 ad3 desktop 1
2018121410 157 ad3 mobile 1
In conclusion:
What I'd like to achieve is to pair my custom dimension (Client ID) with audience dimensions so I can do segmentation. But first things first: if this permutation is not possible, I would like to understand why. Is this a limitation on the API side? Or is it a policy thing (taking a guess here, as I understand there are identity-protection policies)?
The BigQuery integration does not contain demographics/affinities, and the user interface contains a variety of mechanisms to prevent you from isolating individual users. So, in short: no.

Facebook Analytics For Apps Export API Request Running For Almost 20 hours

I have created a request to export app events using POST and got back an FB ID for the export job that was created. Now, when I poll using a GET URL as specified in the docs, I see that it shows 'RUNNING'. It has been almost 20 hours now; is this normal? Below is what gets returned.
{
    "id": "231136770698184",
    "start_ts": "2017-04-12T23:04:26+0000",
    "end_ts": "2017-04-12T23:21:06+0000",
    "status": "RUNNING",
    "column_names": [
        "server_time", "event_name", "client", "app_version", "numeric_data",
        "event_log_time",
        "custom1", "custom2", "custom3", "custom4", "custom5", "custom6",
        "custom7", "custom8", "custom9", "custom10", "custom11", "custom12",
        "custom13", "custom14", "custom15", "custom16", "custom17", "custom18",
        "custom19", "custom20", "custom21", "custom22", "custom23", "custom24",
        "custom25",
        "analytics_app_id", "ad_tracking_enabled", "usd_amount", "ext_user_agent",
        "ext_device_model", "ext_device_os", "timezone", "ext_carrier",
        "screen_dimensions", "total_disk_gb", "remaining_disk_gb",
        "invoking_ui_element", "is_device_id_anonymous", "raw_advertiser_id"
    ]
}
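For context, the polling itself is just a periodic GET on the export job id, along these lines (a sketch; the access token is a placeholder and the interval is arbitrary):

import time
import requests

EXPORT_ID = "231136770698184"  # the job id returned by the POST

# Poll the export job until it leaves the RUNNING state.
while True:
    job = requests.get(
        "https://graph.facebook.com/" + EXPORT_ID,
        params={"access_token": "<ACCESS_TOKEN>"},
    ).json()
    print(job.get("status"))
    if job.get("status") != "RUNNING":
        break
    time.sleep(60)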