Facebook Analytics For Apps Export API Request Running For Almost 20 hours - facebook

I have created a request to export app events using POST and got back a Facebook ID for the export job. Now, when I poll the status using the GET URL specified in the docs, it still shows 'RUNNING'. It has been almost 20 hours now; is this normal? Below is what gets returned.
{
"id": "231136770698184",
"start_ts": "2017-04-12T23:04:26+0000",
"end_ts": "2017-04-12T23:21:06+0000",
"status": "RUNNING",
"column_names": [
"server_time",
"event_name",
"client",
"app_version",
"numeric_data",
"event_log_time",
"custom1",
"custom2",
"custom3",
"custom4",
"custom5",
"custom6",
"custom7",
"custom8",
"custom9",
"custom10",
"custom11",
"custom12",
"custom13",
"custom14",
"custom15",
"custom16",
"custom17",
"custom18",
"custom19",
"custom20",
"custom21",
"custom22",
"custom23",
"custom24",
"custom25",
"analytics_app_id",
"ad_tracking_enabled",
"usd_amount",
"ext_user_agent",
"ext_device_model",
"ext_device_os",
"timezone",
"ext_carrier",
"screen_dimensions",
"total_disk_gb",
"remaining_disk_gb",
"invoking_ui_element",
"is_device_id_anonymous",
"raw_advertiser_id"
]
}
-Thanks
SM
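For context, this kind of status poll can be scripted. Below is a minimal sketch in Python, assuming the job ID returned by the POST is an ordinary Graph API node that returns the JSON above on a plain GET; the access token placeholder and polling interval are illustrative, not taken from the docs.

import time
import requests

ACCESS_TOKEN = "YOUR_APP_ACCESS_TOKEN"   # placeholder, not a real token
EXPORT_JOB_ID = "231136770698184"        # the ID returned by the export POST above

def poll_export_status(job_id, interval_s=300, max_wait_s=24 * 3600):
    # Poll the export job node until it leaves the RUNNING state or we give up.
    url = "https://graph.facebook.com/" + job_id
    waited = 0
    while waited <= max_wait_s:
        resp = requests.get(url, params={"access_token": ACCESS_TOKEN}, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        if body.get("status") != "RUNNING":
            return body   # finished (or failed); inspect the payload for details
        time.sleep(interval_s)
        waited += interval_s
    raise TimeoutError("export job still RUNNING after max_wait_s seconds")

# result = poll_export_status(EXPORT_JOB_ID)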

Related

Unable to upload my application to Meta Quest (App Lab)

I developed an app and wanted to upload it to the Facebook Quest (App Lab) platform, but I tried several methods and all of them failed.
Here's what I've tried, and how each attempt failed:
The upload button in the app is grayed out, so MQDH cannot upload the build.
Using the drag-and-drop upload function of the latest version of MQDH, it reports that the CLI tool does not exist or has no permission.
Using the ovr-platform-util command-line upload method, it reports "An unexpected error occurred".
Here is the log content:
Server log: {
"app_id": 8618231871550973,
"client": "COMMAND_LINE",
"log_level": "ERROR",
"event_name": "UNEXPECTED_ERROR",
"stack_trace": "RangeError [ERR_OUT_OF_RANGE]: The value of \"offset\" is out of range. It must be >= 0 and <= 1. Received 2\n at new NodeError (node:internal/errors:371:5)\n at boundsError (node:internal/buffer:86:9)\n at Buffer.readUInt16LE (node:internal/buffer:243:5)\n at /snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js\n at Object.set extra [as extra] (/snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js)\n at p (/snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js)\n at Object.get entries [as entries] (/snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js)\n at Object.getEntries (/snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js)\n at /snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js\n at /snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js",
"extra": "{\"caught\":true,\"os\":\"{\\\"platform\\\":\\\"darwin\\\",\\\"arch\\\":\\\"x64\\\",\\\"type\\\":\\\"Darwin\\\"}\",\"cli_version\":\"1.81.0.000001\",\"compatibility_version\":2,\"session_id\":\"8618231871550973_2022-12-13T07:26:42.217Z\",\"command\":\"upload-quest-build\",\"app_id\":8618231871550973,\"platform\":\"ANDROID_6DOF\"}",
"platform": "ANDROID_6DOF",
"cli_version": "1.81.0.000001",
"session_id": "8618231871550973_2022-12-13T07:26:42.217Z",
"binary_id": ""
}
Server log: {
"app_id": 8618231871550973,
"client": "COMMAND_LINE",
"log_level": "DEBUG",
"event_name": "COMMAND_FAILED",
"stack_trace": "at Object._log (/snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js)\nat Object.debug (/snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js)\nat /snapshot/Users/facebook/.scratch/dataZsandcastleZboxesZeden-trunk-hg-full-fbsource/edenfsZredirectionsZarvrZjsZtemp/build-ovr-platform-util/lib/ovr_platform_util.js\nat processTicksAndRejections (node:internal/process/task_queues:96:5)",
"extra": "{\"code\":\"ERR_OUT_OF_RANGE\",\"os\":\"{\\\"platform\\\":\\\"darwin\\\",\\\"arch\\\":\\\"x64\\\",\\\"type\\\":\\\"Darwin\\\"}\",\"cli_version\":\"1.81.0.000001\",\"compatibility_version\":2,\"session_id\":\"8618231871550973_2022-12-13T07:26:42.217Z\",\"command\":\"upload-quest-build\",\"app_id\":8618231871550973,\"platform\":\"ANDROID_6DOF\"}",
"platform": "ANDROID_6DOF",
"cli_version": "1.81.0.000001",
"session_id": "8618231871550973_2022-12-13T07:26:42.217Z",
"binary_id": ""
}

Batch create transcription always results in: The recordings URI contains invalid data

I would like to use the Azure Speech Services Batch Transcription API to create a transcription of my audio file. I've already had success using the Speech Service SDK (for Node.js), but I was interested in trying one of the newer features available in the v3.1 preview version of the API (displayFormWordLevelTimestampsEnabled), so I figured I had to use the REST API to do that.
Overall, my problem is that whatever input I feed the Create Transcription API for contentUrls, I always end up getting the same error:
"error": {
"code": "InvalidData",
"message": "The recordings URI contains invalid data."
}
After a little digging, I found some tips through the Azure portal to use sox to handle transcoding the audio file in the specific format requested.
The specific format they mention in the portal documentation shows:
If you are using REST API, make sure that it uses one of the formats in this table:
Format | Codec | Bit rate | Sample rate
WAV    | PCM   | 256 kbps | 16 kHz, mono
OGG    | OPUS  | 256 kbps | 16 kHz, mono
With the SoX-specific commands being:
Activity                                                  | SoX command
Check the audio file format.                              | sox --i <filename>
Convert the audio file to single channel, 16-bit, 16 kHz. | sox <input> -b 16 -e signed-integer -c 1 -r 16k -t wav <output>.wav
I ran my mp3 through the second command and verified the result with the first; the file info looks like:
Input File : 'out5.wav'
Channels : 1
Sample Rate : 16000
Precision : 16-bit
Duration : 00:00:30.09 = 481488 samples ~ 2256.97 CDDA sectors
File Size : 963k
Bit Rate : 256k
Sample Encoding: 16-bit Signed Integer PCM
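As an independent cross-check of the sox output, the same properties can be read with Python's standard-library wave module (a minimal sketch; the file name out5.wav is taken from the listing above):

import wave

with wave.open("out5.wav", "rb") as w:
    print("channels    :", w.getnchannels())              # expect 1 (mono)
    print("sample width:", w.getsampwidth() * 8, "bit")   # expect 16 bit
    print("sample rate :", w.getframerate())              # expect 16000 Hz
    print("frames      :", w.getnframes())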
Finally, I uploaded the file to a public S3 bucket to use as the content URL for my request:
POST https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions
{
"contentUrls": [
"https://s3.us-west-1.amazonaws.com/xxxx/out5.wav"
],
"locale": "en-US",
"displayName": "Test"
}
Still it failed with the same error that I posted above. Any insights into what might be wrong? Thanks!
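For reference, this is roughly how the request above can be issued from code. A minimal sketch, assuming authentication via the standard Ocp-Apim-Subscription-Key header against the same v3.0 endpoint; the region and key are placeholders.

import requests

REGION = "westus"                     # placeholder region
SUBSCRIPTION_KEY = "YOUR_SPEECH_KEY"  # placeholder key
ENDPOINT = f"https://{REGION}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions"

payload = {
    "contentUrls": ["https://s3.us-west-1.amazonaws.com/xxxx/out5.wav"],
    "locale": "en-US",
    "displayName": "Test",
}

resp = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
    timeout=30,
)
resp.raise_for_status()
transcription = resp.json()
# Poll the "self" URL until the status is Succeeded or Failed.
print(transcription["self"], transcription["status"])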
Update:
The answer below mentions being able to reference a report.json file via the transcription files call.
When I use the Create Transcription API, the response payload is:
{
"self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/transcriptions/02815462-e9c0-4fdc-8bbe-7b0e78152f95",
"model": {
"self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/models/base/c3b008fa-eb47-4f6d-a5b9-71dd37870bb7"
},
"links": {
"files": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/transcriptions/02815462-e9c0-4fdc-8bbe-7b0e78152f95/files"
},
"properties": {
"diarizationEnabled": false,
"wordLevelTimestampsEnabled": false,
"displayFormWordLevelTimestampsEnabled": false,
"channels": [
0,
1
],
"punctuationMode": "DictatedAndAutomatic",
"profanityFilterMode": "Masked"
},
"lastActionDateTime": "2022-09-13T23:37:09Z",
"status": "NotStarted",
"createdDateTime": "2022-09-13T23:37:09Z",
"locale": "en-US",
"displayName": "Test"
}
Calling the Get Transcription API I see:
{
"self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/transcriptions/02815462-e9c0-4fdc-8bbe-7b0e78152f95",
"model": {
"self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/models/base/c3b008fa-eb47-4f6d-a5b9-71dd37870bb7"
},
"links": {
"files": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/transcriptions/02815462-e9c0-4fdc-8bbe-7b0e78152f95/files"
},
"properties": {
"diarizationEnabled": false,
"wordLevelTimestampsEnabled": false,
"displayFormWordLevelTimestampsEnabled": false,
"channels": [
0,
1
],
"punctuationMode": "DictatedAndAutomatic",
"profanityFilterMode": "Masked",
"error": {
"code": "InvalidData",
"message": "The recordings URI contains invalid data."
}
},
"lastActionDateTime": "2022-09-13T23:37:22Z",
"status": "Failed",
"createdDateTime": "2022-09-13T23:37:09Z",
"locale": "en-US",
"displayName": "Test"
}
And finally looking at the transcript files I'm getting an empty list:
{
"values": []
}
I see no reference to a reports.json, or any data populated here at all.
In many cases you can get detailed error information by doing a GET on https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/<transcription_id>/files and looking at the report.json that is referenced there.
If that doesn't help, you could post the transcription ID(s) of the failed transcriptions so someone from the team (I am one of them) can look at the service logs.
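A minimal sketch of that diagnostic step, assuming the same v3.0 endpoint and subscription-key authentication; the transcription ID is the GUID from the "self" URL of the Create response:

import requests

REGION = "westus"
SUBSCRIPTION_KEY = "YOUR_SPEECH_KEY"   # placeholder
TRANSCRIPTION_ID = "02815462-e9c0-4fdc-8bbe-7b0e78152f95"
HEADERS = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}

files_url = (f"https://{REGION}.api.cognitive.microsoft.com/speechtotext/v3.0/"
             f"transcriptions/{TRANSCRIPTION_ID}/files")
files = requests.get(files_url, headers=HEADERS, timeout=30).json()

# Look for the transcription report entry and download it for the detailed error.
for f in files.get("values", []):
    if f.get("kind") == "TranscriptionReport":
        report = requests.get(f["links"]["contentUrl"], timeout=30).json()
        print(report)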

Azure Pipelines: Bulk approve of deployments to environments

Is there any way to approve runs via the CLI or the API (or anything else)? I'm looking for a way to bulk-approve multiple runs from different pipelines, since it's not available in the UI.
Let's say I have 100 pipelines that have a deployment job to a production environment. I would like to approve all runs that are awaiting approval.
Currently, I cannot find anything like this in the docs of the Azure DevOps REST API or the CLI.
The feature docs:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/approvals
The following question is related, but I'm looking for any way of solving it, not just via the API:
Approve a yaml pipeline deployment in Azure DevOps using REST api
I was just searching for an answer to this, specifically how to get the approval ID you would need. In fact, there is an undocumented API to approve an approval check.
This is, as Merlin explained, the following:
https://dev.azure.com/{org}/{project}/_apis/pipelines/approvals/{approvalId}
The body has to look like this
[{
"approvalId": "{approvalId}",
"status": {approvalStatus},
"comment": ""
}]
where {approvalStatus} tells the API whether you approved or not. You probably have to experiment, but I used 4 as the status. I guess there are only two possibilities, one for "approved" and one for "denied".
The question now is how to get the approval ID. I found it: you get it by using the timeline API of a build. The build API documentation says you get the build with the following:
https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}?api-version=5.1
The build timeline URL is returned in the response of the build run, but it follows this pattern:
https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}/Timeline?api-version=5.1
Besides a flat array containing the parent/child relationships between stages, phases, jobs and tasks, you can find within it something like the following:
{
"records": [
{
"previousAttempts": [
],
"id": "95f5837e-769d-5a92-9ecb-0e7edb3ac322",
"parentId": "9e7965a8-d99d-5b8f-b47b-3ee7c58a5b1c",
"type": "Checkpoint",
"name": "Checkpoint",
"startTime": "2020-08-14T13:44:03.05Z",
"finishTime": null,
"currentOperation": null,
"percentComplete": null,
"state": "inProgress",
"result": null,
"resultCode": null,
"changeId": 73,
"lastModified": "0001-01-01T00:00:00",
"workerName": null,
"details": null,
"errorCount": 0,
"warningCount": 0,
"url": null,
"log": null,
"task": null,
"attempt": 1,
"identifier": "Checkpoint"
},
{
"previousAttempts": [
],
"id": "9e7965a8-d99d-5b8f-b47b-3ee7c58a5b1c",
"parentId": null,
"type": "Stage",
"name": "Power Platform Test (orgf92be262)",
"startTime": null,
"finishTime": null,
"currentOperation": null,
"percentComplete": null,
"state": "pending",
"result": null,
"resultCode": null,
"changeId": 1,
"lastModified": "0001-01-01T00:00:00",
"workerName": null,
"order": 2,
"details": null,
"errorCount": 0,
"warningCount": 0,
"url": null,
"log": null,
"task": null,
"attempt": 1,
"identifier": "Import_Test"
},
{
"previousAttempts": [
],
"id": "e54149c5-b5a7-4b82-8468-56ad493224b5",
"parentId": "95f5837e-769d-5a92-9ecb-0e7edb3ac322",
"type": "Checkpoint.Approval",
"name": "Checkpoint.Approval",
"startTime": "2020-08-14T13:44:03.02Z",
"finishTime": null,
"currentOperation": null,
"percentComplete": null,
"state": "inProgress",
"result": null,
"resultCode": null,
"changeId": 72,
"lastModified": "0001-01-01T00:00:00",
"workerName": null,
"details": null,
"errorCount": 0,
"warningCount": 0,
"url": null,
"log": null,
"task": null,
"attempt": 1,
"identifier": "e54149c5-b5a7-4b82-8468-56ad493224b5"
}
],
"lastChangedBy": "00000002-0000-8888-8000-000000000000",
"lastChangedOn": "2020-08-14T13:44:03.057Z",
"id": "86fb4204-9c5e-4e72-bdb1-eefe230480ec",
"changeId": 73,
"url": "https://dev.azure.com/***"
}
Above you can see a record of type "Checkpoint.Approval". The id of that record IS the approval ID you need to approve everything. If you want to know which stage the approval belongs to, follow the parentId references until the parentId property is null.
That record is then the stage.
With this you can get the approval ID and use it to approve via the endpoint mentioned above.
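A minimal sketch of that lookup, assuming a PAT with build-read scope and the timeline shape shown above (the record type names come from the sample JSON):

import base64
import requests

ORG, PROJECT, BUILD_ID = "your_org", "your_project", 1234   # placeholders
PAT = "your_personal_access_token"                           # placeholder
auth = base64.b64encode(f":{PAT}".encode()).decode()
HEADERS = {"Authorization": f"Basic {auth}"}

url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/builds/"
       f"{BUILD_ID}/Timeline?api-version=5.1")
records = requests.get(url, headers=HEADERS, timeout=30).json()["records"]
by_id = {r["id"]: r for r in records}

# Every record of type 'Checkpoint.Approval' carries an approvalId as its id;
# walk parentId upwards until it is null to find the stage it belongs to.
for rec in records:
    if rec["type"] == "Checkpoint.Approval":
        node = rec
        while node.get("parentId"):
            node = by_id[node["parentId"]]
        print(f"approvalId={rec['id']} stage={node['name']} state={rec['state']}")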
jessehouwing's guess is correct. Multi-stage pipelines are still in preview, and the corresponding SDK/API/extension hasn't been expanded and made public yet.
You may be wondering about not using the API at all. I have checked the corresponding code in our backend: every operation on a multi-stage approval requires one parameter, approvalId. As you know, this value is unique, and each approval maps to a different approvalId. This means that no matter which method you try, approvalId is the sticking point. As far as I know, there is currently no API, SDK, third-party tool or extension that can retrieve this value directly.
In addition, the release process logic of multi-stage YAML pipelines is not the same as that of releases defined in the UI. So the public APIs that work with (UI) releases are not suitable for multi-stage releases.
We have one undisclosed API that can get the approval information of a multi-stage pipeline:
https://dev.azure.com/{org}/{project}/_apis/pipelines/approvals/{approvalId}
You can try listing approvals without specifying an approvalId: https://dev.azure.com/{org}/{project}/_apis/pipelines/approvals. The response message is: "Query for approvals failed. A minimum of one query parameter is required.\r\nParameter name: queryParameters". This means you must tell the system which approval you want (the sticking point I mentioned previously).
In fact, the reason approvalId is required comes from our backend code structure. I'd suggest you raise a suggestion about developing an API/SDK for multi-stage pipelines here.
I can confirm that Sebastian's answer worked for me, even in Azure DevOps 2020 on-prem.
After retrieving the approvalId from either of the methods used above (I was specifically using a service hook for my integration), I used the following API PATCH call:
https://dev.azure.com/{organization}/{project}/_apis/pipelines/approvals/?api-version=6.0-preview
and in the body:
[
{
"approvalId": "{approvalId}",
"status": {status integer}, (4 - approved; 8 - rejected)
"comment": ""
}
]
The call is sent with the application/json Content-Type, but in some situations the API did not like that I was using the [] brackets, so you may need to work around that; only then will the call work.
I was even able to integrate this call into my custom connector in MS Power Automate.
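A minimal sketch of that PATCH call, assuming the undocumented endpoint and the status integers quoted above (4 = approved, 8 = rejected); behaviour may differ between Azure DevOps versions.

import base64
import requests

ORG, PROJECT = "your_org", "your_project"              # placeholders
PAT = "your_personal_access_token"                     # placeholder
APPROVAL_ID = "e54149c5-b5a7-4b82-8468-56ad493224b5"   # e.g. from the timeline lookup
auth = base64.b64encode(f":{PAT}".encode()).decode()

resp = requests.patch(
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines/approvals/"
    "?api-version=6.0-preview",
    json=[{"approvalId": APPROVAL_ID, "status": 4, "comment": ""}],  # 4 = approved
    headers={"Authorization": f"Basic {auth}"},
    timeout=30,
)
print(resp.status_code, resp.text)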
I added support for bulk pipeline approvals to the latest version of the AzurePipelinesPS PowerShell module.
Code snippet without using the AzurePipelinesPS sessions
$instance = 'https://dev.azure.com'
$collection = 'your_project'
$project = 'your_project'
$apiVersion = '5.1-preview'
$securePat = 'your_personal_access_token' | ConvertTo-SecureString -Force -AsPlainText
Get-APPipelinePendingApprovalList -Instance $instance -Collection $collection -Project $project -PersonalAccessToken $securePat -ApiVersion $apiVersion |
    Out-GridView -PassThru |
    % { Update-APPipelineApproval -Instance $instance -Collection $collection -Project $project -PersonalAccessToken $securePat -ApiVersion $apiVersion -ApprovalId $PSItem.approvalId -Status 'approved' }
Code snippet with AzurePipelinesPS sessions
$session = 'your_session'
Get-APPipelinePendingApprovalList $session | Out-GridView -Passthru | % { Update-APPipelineApproval $session -ApprovalId $PSitem.approvalId -status 'approved'}
See the AzurePipelinesPS project page for details on secure session handling.
Function Definitions used in the code above
Get-APPipelinePendingApprovalList
Loops through pipeline build runs with the status of 'notStarted' or 'inProgress' in a project. This build lookup supports filters like pipeline definition ids or a source branch name.
For each build it then looks up the timeline record where the approval id, the stage name and the stage identifier are found.
Optionally, with the ExpandApproval switch, it can expand each approval with details.
The object returned from this function contains the following properties (the values have been mocked):
pipelineDefinitionName : MyPipeline
pipelineDefinitionId : 100
pipelineRunId : 2001
pipelineUrl : https://dev.azure.com/your_project/_build/results?
sourceBranch : refs/heads/master
stageName : Prod Deployment
stageIdentifier : Prod_Deployment
approvalId : xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx
Out-GridView
Displays data in a Grid View where the results can be filtered, ordered and selected.
%
The percent sign is shorthand for Foreach-Object
Update-APPipelineApproval
Updates the status of an approval to approved or rejected.
Credit
Thanks to Sebastian Schütze for cracking the timeline part!
The az pipelines extension doesn't support approvals yet, I suppose due to the fact that multi-stage pipelines are still in preview and the old release hub will eventually be replaced by them.
But there is a REST API you can use to list and update approvals. These can be called from PowerShell with relative ease.
Or use the vsteam powershell module and Get-VSTeamApproval and Set-VSTeamApproval.

Facebook Marketing API (#2654) Account Request Limits Reached, Sub Error Code:1870024

We've been using the Facebook Marketing API v4.0. For one account, I keep getting the error below. We cannot find any info about this sub-error code in the Facebook docs. Our client deleted some Custom Audiences in their account, but nothing changed.
{"error":
{"message":"(#2654) Account Request Limits Reached: You've reached the total number of times
you can create a Custom Audience through one or more ad accounts in this business.",
"type":"OAuthException",
"code":2654,
"error_subcode":1870024,
"fbtrace_id":"AmN2PwJB8utcYZCXu2y-9TF"}
}
You can check the Changelog of the v4.0 API under Breaking Changes - Ads Management here:
Updated the rate limit for several areas under Marketing API. This includes:
custom_audience - Per each ad account in a one-hour time period:
Standard Tier Apps: Minimum of 190000 + 40 * Number of Active custom audiences. Maximum of 700000.
Dev Tier Apps: Minimum of 5000 + 40 * Number of Active custom audiences. Maximum of 700000.
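Read literally, the quoted limits translate into a per-ad-account, per-hour call quota along these lines (a sketch of the arithmetic only, not an official formula):

def custom_audience_hourly_quota(active_custom_audiences, dev_tier=False):
    # Per the changelog quote above: base + 40 * active audiences, capped at 700000.
    base = 5000 if dev_tier else 190000
    return min(base + 40 * active_custom_audiences, 700000)

# Example: a Standard Tier app with 500 active Custom Audiences
print(custom_audience_hourly_quota(500))   # 190000 + 40*500 = 210000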
You can also check the Business Use Case rate limits in the X-Business-Use-Case-Usage header of the response, as described here, for example:
x-business-use-case-usage: {
"{business-object-id}": [
{
"type": "{rate-limit-type}", //Type of BUC rate limit logic being applied.
"call_count": 100, //Percentage of calls made.
"total_cputime": 25, //Percentage of the total CPU time that has been used.
"total_time": 25, //Percentage of the total time that has been used.
"estimated_time_to_regain_access": 19 //Time in minutes to regain access.
}
],
"66782684": [
{
"type": "ads_management",
"call_count": 95,
"total_cputime": 20,
"total_time": 20,
"estimated_time_to_regain_access": 0
}
],
"10153848260347724": [
{
"type": "ads_management",
"call_count": 97,
"total_cputime": 23,
"total_time": 23,
"estimated_time_to_regain_access": 0
}
],
...
}
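A minimal sketch of reading that header after a Marketing API call and backing off when access is throttled; the ad account ID, token and endpoint below are placeholders for whatever call your integration makes.

import json
import time
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"   # placeholder
AD_ACCOUNT_ID = "act_1234567890"     # placeholder

resp = requests.get(
    f"https://graph.facebook.com/v4.0/{AD_ACCOUNT_ID}/customaudiences",
    params={"access_token": ACCESS_TOKEN, "fields": "id,name"},
    timeout=30,
)

usage = json.loads(resp.headers.get("x-business-use-case-usage", "{}"))
for business_object_id, entries in usage.items():
    for entry in entries:
        wait_min = entry.get("estimated_time_to_regain_access", 0)
        if wait_min:
            # Throttled: wait before creating more Custom Audiences on this object.
            print(f"{business_object_id} ({entry['type']}): wait {wait_min} min")
            time.sleep(wait_min * 60)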
Hope this helps.

Drools stateful session per request

We are trying to use Drools as our rule engine service. What we have done so far is listed below:
Deployed workbench 7.2.Final
Deployed KIE server 7.2.0.Final
Configured some data objects and rules, deployed the changes to the KIE server, and we are able to execute the rules using the REST API
Most of our requirements are satisfied by a stateless session (give it a set of data, execute the rules and return the data, that's it). But with a stateless session we have to give up many of the important features provided by Drools stateful sessions.
So we are trying to use a stateful session per request, which means the session should be disposed of as soon as the request ends. Also, parallel requests should not interfere with each other even if the session name is the same.
We found the container runtime strategy configuration (Workbench > Deploy > {any container} > Process Configuration > Runtime strategy).
But even after configuring the container strategy to Per Request, it still behaves the same as Singleton (the session is not disposed after each request).
In a few places we read that the runtime strategy is only implemented for jBPM.
The way we make the request to the KIE server is shown below.
Request: POST {HOST}/kie-server/services/rest/server/containers/instances/TestRequest_1.0.4
{
"lookup": "ab-session", //stateful session
"commands": [
{
"insert": {
"out-identifier": "125",
"object": {
"com.myteam.testrequest.Product": {
"id": "123",
"name": "Hoo Hoo",
"count": 0
}
},
"return-object": "true"
}
},
{
"insert": {
"out-identifier": "126",
"object": {
"com.myteam.testrequest.Product": {
"id": "123",
"name": "Hoo Hoo",
"count": 0
}
},
"return-object": "true"
}
},
{"fire-all-rules": "hf2"}
]
}
We need help achieving this requirement. Also, please help us understand if we have done something wrong.
In kmodule.xml you may try adding the "prototype" scope, because the default is "singleton":
<ksession name="SessionName" type="stateful" default="false" clockType="realtime" scope="prototype"/>
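For completeness, here is a minimal sketch of driving the same batch request from code, assuming KIE server basic authentication and the container name and session lookup from the question (host, credentials and identifiers are placeholders); once the per-request/prototype setup works, each such call should get its own session.

import requests

KIE_BASE = "http://localhost:8080/kie-server/services/rest/server"  # placeholder host
CONTAINER = "TestRequest_1.0.4"
AUTH = ("kie_user", "kie_password")                                  # placeholders

payload = {
    "lookup": "ab-session",   # the stateful session declared in kmodule.xml
    "commands": [
        {"insert": {"out-identifier": "125",
                    "object": {"com.myteam.testrequest.Product":
                               {"id": "123", "name": "Hoo Hoo", "count": 0}},
                    "return-object": "true"}},
        {"fire-all-rules": {"out-identifier": "fired"}},
    ],
}

resp = requests.post(
    f"{KIE_BASE}/containers/instances/{CONTAINER}",
    json=payload,
    auth=AUTH,
    headers={"Accept": "application/json"},
    timeout=30,
)
print(resp.status_code, resp.json())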