I went through the PagerDuty documentation but still could not figure out what parameters to send in the request body, and I am also having trouble understanding how to make the API request. If anyone can share sample code for creating a PagerDuty schedule, that would help me a lot.
Below is sample code to create a schedule in PagerDuty.
Each list can hold multiple items (to add more users / layers):
import requests

url = "https://api.pagerduty.com/schedules?overflow=false"

payload = {
    "schedule": {
        "schedule_layers": [
            {
                "start": "<dateTime>",  # Start time of the layer, e.g. "2021-01-01T00:00:00+05:30"
                "users": [
                    {
                        "user": {
                            "id": "<string>",       # ID of the user to add to the layer
                            "summary": "<string>",
                            "type": "user",
                            "self": "<url>",
                            "html_url": "<url>"
                        }
                    }
                ],
                "rotation_virtual_start": "<dateTime>",      # Start of the rotation, e.g. "2021-01-01T00:00:00+05:30"
                "rotation_turn_length_seconds": "<integer>", # Length of one rotation turn in seconds, for switching between multiple users
                "id": "<string>",    # Auto-generated. Only needed if you want to update an existing schedule layer
                "end": "<dateTime>", # End time of the layer, e.g. "2021-01-01T00:00:00+05:30"
                "restrictions": [
                    {
                        "type": "<string>",                    # Restricts shifts to certain times, e.g. "daily_restriction"
                        "duration_seconds": "<integer>",       # Duration of the restriction, e.g. 300
                        "start_time_of_day": "<partial-time>", # Start time of the restriction, e.g. "00:00:00"
                        "start_day_of_week": "<integer>"
                    }
                ],
                "name": "<string>"  # Name to give the layer
            }
        ],
        "time_zone": "<activesupport-time-zone>",  # Time zone for the schedule and its timings, e.g. "Asia/Kolkata"
        "type": "schedule",
        "name": "<string>",         # Name to give the schedule
        "description": "<string>",  # Description to give the schedule
        "id": "<string>"            # Auto-generated. Only needed if you want to update an existing schedule
    }
}

headers = {
    'Authorization': 'Token token=<Your token here>',
    'Accept': 'application/vnd.pagerduty+json;version=2',
    'Content-Type': 'application/json'
}

response = requests.post(url, headers=headers, json=payload)
print(response.text)
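For reference, a minimal filled-in payload (the user ID, names, and times are made up) could look like this:

payload = {
    "schedule": {
        "name": "On-Call Schedule",                # made-up name
        "type": "schedule",
        "time_zone": "Asia/Kolkata",
        "description": "Weekly on-call rotation",
        "schedule_layers": [
            {
                "name": "Layer 1",
                "start": "2021-01-01T00:00:00+05:30",
                "rotation_virtual_start": "2021-01-01T00:00:00+05:30",
                "rotation_turn_length_seconds": 604800,  # one-week turns
                "users": [
                    {"user": {"id": "PABC123", "type": "user"}}  # made-up user ID
                ]
            }
        ]
    }
}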
The best way to do this is to get the Postman collection for PagerDuty and edit the request to your liking. Once you get a successful response, convert it into code using Postman's built-in code-generation feature.
Using the PagerDuty API for scheduling is not easy. Creating a new schedule is manageable, but updating an existing schedule is definitely not trivial. You will probably run into a bunch of limitations: the number of restrictions per layer, having to reuse the current layers, etc.
As an option, you can use the Python library pdscheduling: https://github.com/skrypka/pdscheduling
I would like to use the Azure Speech Services Batch Transcription API to create a transcription of my audio file. I've already had success using the Speech Service SDK (for Node.js), but I was interested in trying one of the newer features available in the v3.1 preview version of the API (displayFormWordLevelTimestampsEnabled), so I figured I had to use the REST API to do that.
Overall, my problem is that whatever input I feed the Create Transcription API for contentUrls, I always end up getting the same error:
"error": {
"code": "InvalidData",
"message": "The recordings URI contains invalid data."
}
After a little digging, I found some tips through the Azure portal suggesting the use of sox to transcode the audio file into the specific format requested.
The specific format they mention in the portal documentation is:
If you are using the REST API, make sure that it uses one of the formats in this table:
Format | Codec | Bit rate | Sample rate
WAV    | PCM   | 256 kbps | 16 kHz, mono
OGG    | OPUS  | 256 kbps | 16 kHz, mono
With the sox-specific commands being:

Activity                                                  | SoX command
Check the audio file format.                              | sox --i <filename>
Convert the audio file to single channel, 16-bit, 16 kHz. | sox <input> -b 16 -e signed-integer -c 1 -r 16k -t wav <output>.wav
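If you want to script the conversion, here is a small sketch with Python's subprocess module (input.mp3 and out.wav are hypothetical filenames, and sox must be on your PATH):

import subprocess

# Convert to single-channel, 16-bit signed PCM, 16 kHz WAV
subprocess.run(
    ["sox", "input.mp3", "-b", "16", "-e", "signed-integer",
     "-c", "1", "-r", "16k", "-t", "wav", "out.wav"],
    check=True,
)

# Verify the resulting file's format
subprocess.run(["sox", "--i", "out.wav"], check=True)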
I ran my mp3 through the second command and verified the result with the first; the file's metadata looks like:
Input File : 'out5.wav'
Channels : 1
Sample Rate : 16000
Precision : 16-bit
Duration : 00:00:30.09 = 481488 samples ~ 2256.97 CDDA sectors
File Size : 963k
Bit Rate : 256k
Sample Encoding: 16-bit Signed Integer PCM
Finally, I uploaded the file to a public S3 bucket, to use as my content url for my request:
POST https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions
{
  "contentUrls": [
    "https://s3.us-west-1.amazonaws.com/xxxx/out5.wav"
  ],
  "locale": "en-US",
  "displayName": "Test"
}
Still it failed with the same error that I posted above. Any insights into what might be wrong? Thanks!
Update:
The answer below mentions being able to reference a report.json file from the Get Transcription / Create Transcription API calls.
When I use the Create Transcription API, my payload is:
{
  "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/transcriptions/02815462-e9c0-4fdc-8bbe-7b0e78152f95",
  "model": {
    "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/models/base/c3b008fa-eb47-4f6d-a5b9-71dd37870bb7"
  },
  "links": {
    "files": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/transcriptions/02815462-e9c0-4fdc-8bbe-7b0e78152f95/files"
  },
  "properties": {
    "diarizationEnabled": false,
    "wordLevelTimestampsEnabled": false,
    "displayFormWordLevelTimestampsEnabled": false,
    "channels": [
      0,
      1
    ],
    "punctuationMode": "DictatedAndAutomatic",
    "profanityFilterMode": "Masked"
  },
  "lastActionDateTime": "2022-09-13T23:37:09Z",
  "status": "NotStarted",
  "createdDateTime": "2022-09-13T23:37:09Z",
  "locale": "en-US",
  "displayName": "Test"
}
Calling the Get Transcription API I see:
{
  "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/transcriptions/02815462-e9c0-4fdc-8bbe-7b0e78152f95",
  "model": {
    "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/models/base/c3b008fa-eb47-4f6d-a5b9-71dd37870bb7"
  },
  "links": {
    "files": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/transcriptions/02815462-e9c0-4fdc-8bbe-7b0e78152f95/files"
  },
  "properties": {
    "diarizationEnabled": false,
    "wordLevelTimestampsEnabled": false,
    "displayFormWordLevelTimestampsEnabled": false,
    "channels": [
      0,
      1
    ],
    "punctuationMode": "DictatedAndAutomatic",
    "profanityFilterMode": "Masked",
    "error": {
      "code": "InvalidData",
      "message": "The recordings URI contains invalid data."
    }
  },
  "lastActionDateTime": "2022-09-13T23:37:22Z",
  "status": "Failed",
  "createdDateTime": "2022-09-13T23:37:09Z",
  "locale": "en-US",
  "displayName": "Test"
}
And finally looking at the transcript files I'm getting an empty list:
{
  "values": []
}
I see no reference to a report.json, or any data populated here at all.
In many cases you can get detailed error information by doing a GET on https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/<transcription_id>/files and looking at the report.json that is referenced there.
If that doesn't help, you could post the transcription ID(s) of the failed transcriptions so someone from the team (I am one of them) can look at the service logs.
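As a minimal sketch of pulling that report with Python requests (region, key, and transcription ID are placeholders; the report is assumed to appear in the file list with kind "TranscriptionReport"):

import requests

region = "westus"                        # placeholder region
key = "<your_speech_service_key>"        # placeholder subscription key
transcription_id = "<transcription_id>"  # placeholder ID

files_url = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/{transcription_id}/files"
headers = {"Ocp-Apim-Subscription-Key": key}

for f in requests.get(files_url, headers=headers).json().get("values", []):
    if f.get("kind") == "TranscriptionReport":
        # the contentUrl is a pre-signed URL, so no auth header is needed here
        print(requests.get(f["links"]["contentUrl"]).json())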
I use a custom logger to log who is currently doing what in JupyterHub.
logging_config: dict = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "company": {
            "()": lambda: MyFormatter(user=os.environ.get("JUPYTERHUB_USER", "Unknown"))
        },
    },
    ....
}

c.Application.logging_config = logging_config
Output:
{"asctime": "2022-06-29 14:13:43,773", "level": "WARNING", "name": "JupyterHub", "message": "Updating Hub route http://127.0.0.1:8081 \u2192 http://jupyterhub:8081", "user": "Unknown"
The logger itself works fine, but I am not able to log who was performing the action. In the image I start, there is a JUPYTERHUB_USER env variable available. This seems to get passed from JupyterHub (I don't know exactly how this is done), but in JupyterHub itself I don't have this variable available.
Is there a way to use it in JupyterHub, not just in the JupyterLab container?
This doesn't get you all the way there, but it's a start: we add extra pod annotations/labels through KubeSpawner's extra_annotations using the cluster_options hook (see our Helm chart for our complete daskhub setup):
dask-gateway:
  gateway:
    extraConfig:
      optionHandler: |
        from dask_gateway_server.options import Options, String, Select, Mapping, Float, Bool
        from math import ceil

        def cluster_options(user):
            def option_handler(options):
                extra_annotations = {
                    "hub.jupyter.org/username": user.name
                }
                default_extra_labels = {
                    "hub.jupyter.org/username": user.name,
                }
            return Options(
                Select(
                    ...
                ),
                ...,
                handler=option_handler,
            )

        c.Backend.cluster_options = cluster_options
You can then poll pods with these labels to get real-time usage. There may be a more direct way to do this though - not sure.
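For the polling side, here is a minimal sketch with the official kubernetes Python client (the namespace and username are placeholders):

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in the cluster
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod(
    namespace="daskhub",  # placeholder namespace
    label_selector="hub.jupyter.org/username=some-user",  # placeholder username
)
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)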
I have written several Azure Functions over the past year in both PowerShell and C#. I am currently writing an API that extracts rows from a Storage Account table and returns that data in JSON format.
The data pulls fine.
The data converts to JSON just fine.
A JSON-formatted response is displayed - which is fine - but Push-OutputBinding shoves additional data into my original JSON data: account information, environment information, subscription information, and tenant information.
I've tried a number of different strategies for getting past this. I gave up on using C# to interact with the tables because the whole Azure.Data.Tables and Cosmos tables packages are a hot mess, with breaking changes, package conflicts, and .NET 6 requirements for new Functions apps. So please don't offer up a C# solution unless you have a working example with specific versions for packages, etc.
Here is the code:
Note that I have verified that $certData and $certJson are properly formatted JSON and contain only the data I want to return.
using namespace System.Net

# Input bindings are passed in via param block.
param($Request, $TriggerMetadata)

# Write to the Azure Functions log stream.
Write-Host "PowerShell HTTP trigger function processed a request."

# Interact with query parameters or the body of the request.
$filter = $Request.Query.Filter
if (-not $filter) {
    $filter = "ALL"
}

$certData = GetCerts $filter | ConvertTo-Json
#$certJson = $('{ "CertData":"' + $certData + '" }')
$body = "${CertData}"

# Associate values to output bindings by calling 'Push-OutputBinding'.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    ContentType = "application/json"
    Body = $body
})
When I call the httpTrigger function, the response looks like this:
{ "CertData":"[
{
"Name": "MySubscriptionName blah blah",
"Account": {
"Id": "my user id",
"Type": "User",
....
},
"Environment": {
"Name": "AzureCloud",
"Type": "Built-in",
...
},
"Subscription": {
"Id": "SubscriptionID",
"Name": "SubscriptionName",
....
},
"Tenant": {
"Id": "TenandID",
"TenantId": "TenantId",
"ExtendedProperties": "System.Collections.Generic.Dictionary`2[System.String,System.String]",
...
},
"TokenCache": null,
"VersionProfile": null,
"ExtendedProperties": {}
},
{
"AlertFlag": 1,
"CertID": "abc123",
"CertName": "A cert Name",
"CertType": "an assigned cert type",
"DaysToExpire": 666,
"Domain": "WWW.MYDOMAIN.COM",
"Expiration": "2033-10-04T21:31:03Z",
"PrimaryDomain": "WWW.MYDOMAIN.COM",
"ResourceGroup": "RANDOM-RESOURCES",
"ResourceName": "SOMERESOURCE",
"Status": "OK",
"Subscription": "MYSUBSCRIPTIONNAME",
"Thumbprint": "ABC123ABC123ABC123ABC123ABC123",
"PartitionKey": "PARKEY1",
"RowKey": "ID666",
"TableTimestamp": "2022-02-03T09:00:28.7516797-05:00",
"Etag": "W/\"datetime'2022-02-03T14%3A00%3A28.7516797Z'\""
},
...
Not only do the returned values expose data I don't want exposed, they also make parsing the data I actually want problematic when I make API calls.
How do I get rid of the data added by the Push-OutputBinding?
I was able to resolve the issue by modifying run.ps1 as follows. The extra account/environment/subscription/tenant information was the first object in the pipeline output of GetCerts (the Azure context, by the looks of it), so skipping the first object leaves only the table data:
using namespace System.Net

# Input bindings are passed in via param block.
param($Request, $TriggerMetadata)

# Write to the Azure Functions log stream.
Write-Host "PowerShell HTTP trigger function processed a request."

# Interact with query parameters or the body of the request.
$filter = $Request.Query.Filter
if (-not $filter) {
    $filter = "ALL"
}

# Skip the first object in the pipeline output (the Azure context data)
$certData = ( GetCerts $filter | Select-Object -Skip 1 )
#write-information $certData | Format-List

# Associate values to output bindings by calling 'Push-OutputBinding'.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body = $certData
})
Is there any way to approve runs via the CLI or the API (or anything else)? I'm looking for a way to bulk-approve multiple runs from different pipelines, but it's not available in the UI.
Let's say I have 100 pipelines that each have a deployment job to a production environment. I would like to approve all runs awaiting approval.
Currently, I cannot find anything like this in the docs for the Azure DevOps REST API or the CLI.
The feature docs:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/approvals
The following question is related, but I'm looking for any way of solving it, not just via the API:
Approve a yaml pipeline deployment in Azure DevOps using REST api
I was just searching for an answer regarding how to get the approval ID that you would need. In fact, there is an undocumented API to approve an approval check.
As Merlin explained, it is the following:
https://dev.azure.com/{org}/{project}/_apis/pipelines/approvals/{approvalId}
The body has to look like this:
[{
  "approvalId": "{approvalId}",
  "status": {approvalStatus},
  "comment": ""
}]
where {approvalStatus} tells the API whether you approved or not. You will probably have to experiment, but I used 4 as the status. I guess there are only two possibilities, one for "approved" and one for "denied".
The question now is how to get the approval ID. I found it: you get it by using the timeline API of a classic build. The build API documentation says that you get the build with the following:
https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}?api-version=5.1
The build timeline is referenced in the response of the build run, and its URL follows this pattern:
https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}/Timeline?api-version=5.1
Besides a flat array containing parent/child relationships between stages, phases, jobs, and tasks, you can find within it something like the following:
{
  "records": [
    {
      "previousAttempts": [
      ],
      "id": "95f5837e-769d-5a92-9ecb-0e7edb3ac322",
      "parentId": "9e7965a8-d99d-5b8f-b47b-3ee7c58a5b1c",
      "type": "Checkpoint",
      "name": "Checkpoint",
      "startTime": "2020-08-14T13:44:03.05Z",
      "finishTime": null,
      "currentOperation": null,
      "percentComplete": null,
      "state": "inProgress",
      "result": null,
      "resultCode": null,
      "changeId": 73,
      "lastModified": "0001-01-01T00:00:00",
      "workerName": null,
      "details": null,
      "errorCount": 0,
      "warningCount": 0,
      "url": null,
      "log": null,
      "task": null,
      "attempt": 1,
      "identifier": "Checkpoint"
    },
    {
      "previousAttempts": [
      ],
      "id": "9e7965a8-d99d-5b8f-b47b-3ee7c58a5b1c",
      "parentId": null,
      "type": "Stage",
      "name": "Power Platform Test (orgf92be262)",
      "startTime": null,
      "finishTime": null,
      "currentOperation": null,
      "percentComplete": null,
      "state": "pending",
      "result": null,
      "resultCode": null,
      "changeId": 1,
      "lastModified": "0001-01-01T00:00:00",
      "workerName": null,
      "order": 2,
      "details": null,
      "errorCount": 0,
      "warningCount": 0,
      "url": null,
      "log": null,
      "task": null,
      "attempt": 1,
      "identifier": "Import_Test"
    },
    {
      "previousAttempts": [
      ],
      "id": "e54149c5-b5a7-4b82-8468-56ad493224b5",
      "parentId": "95f5837e-769d-5a92-9ecb-0e7edb3ac322",
      "type": "Checkpoint.Approval",
      "name": "Checkpoint.Approval",
      "startTime": "2020-08-14T13:44:03.02Z",
      "finishTime": null,
      "currentOperation": null,
      "percentComplete": null,
      "state": "inProgress",
      "result": null,
      "resultCode": null,
      "changeId": 72,
      "lastModified": "0001-01-01T00:00:00",
      "workerName": null,
      "details": null,
      "errorCount": 0,
      "warningCount": 0,
      "url": null,
      "log": null,
      "task": null,
      "attempt": 1,
      "identifier": "e54149c5-b5a7-4b82-8468-56ad493224b5"
    }
  ],
  "lastChangedBy": "00000002-0000-8888-8000-000000000000",
  "lastChangedOn": "2020-08-14T13:44:03.057Z",
  "id": "86fb4204-9c5e-4e72-bdb1-eefe230480ec",
  "changeId": 73,
  "url": "https://dev.azure.com/***"
}
Below you can see a record called "Checkpoint.Approval". The id of that record IS the approval ID you need to approve everything. If you want to know which stage the approval belongs to, follow the parentId properties up until the parentId is null.
That record is the stage.
With this you can successfully get the approval ID and use it to approve with the API mentioned above.
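For illustration, here is a minimal sketch of that lookup with Python requests (organization, project, build ID, and PAT are placeholders; the PAT goes in via basic auth with an empty username):

import requests

org, project, build_id = "your_org", "your_project", 1234  # placeholders
pat = "your_personal_access_token"

url = f"https://dev.azure.com/{org}/{project}/_apis/build/builds/{build_id}/Timeline?api-version=5.1"
records = requests.get(url, auth=("", pat)).json()["records"]

# every record of type "Checkpoint.Approval" carries an approval ID in its "id" field
approval_ids = [r["id"] for r in records if r["type"] == "Checkpoint.Approval"]
print(approval_ids)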
Jessehouwing's guess is correct. Multi-stage pipelines are still in preview, and the corresponding SDK/API/extension has not yet been expanded and made public.
You may wonder about doing this without the API. I have checked the corresponding code in our backend: all operations on multi-stage approvals contain one required parameter, approvalId. As you probably know, this value is unique, and each approval maps to a different approvalId value. This means that no matter which method you try, approvalId is the big obstacle. To my knowledge, there is currently no API, SDK, third-party tool, or extension that can retrieve this value directly.
In addition, for multi-stage YAML, the release process logic is not the same as for releases defined in the UI. So none of the public APIs that work with (UI-defined) releases are suitable for multi-stage releases.
We have one undisclosed API that can get the approval message of a multi-stage run:
https://dev.azure.com/{org}/{project}/_apis/pipelines/approvals/{approvalId}
You can try listing approvals without specifying an approvalId: https://dev.azure.com/{org}/{project}/_apis/pipelines/approvals. Its response message is: Query for approvals failed. A minimum of one query parameter is required.\r\nParameter name: queryParameters. This shows that you must tell the system the specific approval (the big obstacle I mentioned previously).
In fact, the reason approvalId is a required part comes from our backend code structure. I'd suggest you raise a suggestion for developing an API/SDK for multi-stage here.
I can confirm that Sebastian's answer worked for me, even in Azure DevOps 2020 on-prem.
After retrieving the approvalId via either of the methods above (I was specifically using a service hook for my integration), I used the following API PATCH call:
https://dev.azure.com/{organization}/{project}/_apis/pipelines/approvals/?api-version=6.0-preview
and in the body:
[
  {
    "approvalId": "{approvalId}",
    "status": {status integer},  (4 = approved; 8 = rejected)
    "comment": ""
  }
]
The call is passed with the application/json Content-Type, but in some situations it did not like that I was using the [] brackets, so you will need to work around that; only then will the call work.
I was even able to integrate this call into my custom connector in MS Power Automate.
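For reference, a minimal sketch of that PATCH call with Python requests (organization, project, approval ID, and PAT are placeholders):

import base64
import json
import requests

org, project = "your_org", "your_project"             # placeholders
approval_id = "00000000-0000-0000-0000-000000000000"  # from the timeline record
pat = "your_personal_access_token"

url = f"https://dev.azure.com/{org}/{project}/_apis/pipelines/approvals?api-version=6.0-preview"
headers = {
    "Content-Type": "application/json",
    # Azure DevOps PATs are sent as basic auth with an empty username
    "Authorization": "Basic " + base64.b64encode(f":{pat}".encode()).decode(),
}
body = [{"approvalId": approval_id, "status": 4, "comment": ""}]  # 4 = approved, 8 = rejected

resp = requests.patch(url, headers=headers, data=json.dumps(body))
print(resp.status_code, resp.text)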
I added support to the latest version of the AzurePipelinesPS PowerShell module for bulk pipeline approvals.
Code snippet without using AzurePipelinesPS sessions:
$instance = 'https://dev.azure.com'
$collection = 'your_collection'
$project = 'your_project'
$apiVersion = '5.1-preview'
$securePat = 'your_personal_access_token' | ConvertTo-SecureString -Force -AsPlainText
Get-APPipelinePendingApprovalList -Instance $instance -Collection $collection -Project $project -PersonalAccessToken $securePat -ApiVersion $apiVersion |
    Out-GridView -PassThru |
    % { Update-APPipelineApproval -Instance $instance -Collection $collection -Project $project -PersonalAccessToken $securePat -ApiVersion $apiVersion -ApprovalId $PSItem.approvalId -Status 'approved' }
Code snippet with AzurePipelinesPS sessions
$session = 'your_session'
Get-APPipelinePendingApprovalList $session |
    Out-GridView -PassThru |
    % { Update-APPipelineApproval $session -ApprovalId $PSItem.approvalId -Status 'approved' }
See the AzurePipelinesPS project page for details on secure session handling.
Function Definitions used in the code above
Get-APPipelinePendingApprovalList
Loops through pipeline build runs with a status of 'notStarted' or 'inProgress' in a project. The build lookup supports filters such as pipeline definition IDs or a source branch name.
For each build it then looks up the timeline record where the approval ID, the stage name, and the stage identifier are found.
Optionally, with the ExpandApproval switch, it can expand each approval with details.
The object returned from this function contains the following properties (the values have been mocked):
pipelineDefinitionName : MyPipeline
pipelineDefinitionId : 100
pipelineRunId : 2001
pipelineUrl : https://dev.azure.com/your_project/_build/results?
sourceBranch : refs/heads/master
stageName : Prod Deployment
stageIdentifier : Prod_Deployment
approvalId : xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx
Out-GridView
Displays data in a Grid View where the results can be filtered, ordered and selected.
%
The percent sign is shorthand for Foreach-Object
Update-APPipelineApproval
Updates the status of an approval to approved or rejected.
Credit
Thanks to Sebastian Schütze for cracking the timeline part!
The az pipelines extension doesn't support approvals yet, I suppose due to the fact that multi-stage pipelines are still in preview and the old release hub will eventually be replaced by them.
But there is a REST API you can use to list and update approvals. These can be called from PowerShell with relative ease.
Or use the VSTeam PowerShell module and its Get-VSTeamApproval and Set-VSTeamApproval functions.
I tried:
curl -v -u j123:j321 -X POST "http://localhost:8088/ari/channels/1421226074.4874/snoop?spy=SIP/695"
In response I receive:
"message": "Invalid direction specified for spy"
I also tried:
SIP/695; SIP:695, SIP#695, localhost#695, channel, channelName
None of these work.
A call comes into the queue from SIP 416 to queue_1 and is distributed to 694. I need to connect 695 to wiretap channel 1421226074.4874.
I only need to listen, not to whisper.
Help me please)
The error message is telling you what the problem is:
"message": "Invalid direction specified for spy"
The spy parameter is a direction for spying, not the channel to spy on (see the reference documentation here). You've already specified the channel to snoop on in the URI path; you need to specify the direction of the media in the spy parameter.
As an aside, the auto-generated wiki apparently isn't displaying enum values, which is unfortunate. We'll have to fix that.
For reference, here's the parameter in the Swagger JSON:
"name": "spy",
"description": "Direction of audio to spy on",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string",
"defaultValue": "none",
"allowableValues": {
"valueType": "LIST",
"values": [
"none",
"both",
"out",
"in"
]
}
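So, to listen to both directions of the conversation without whispering, the request could look like the following; a minimal sketch with Python requests, reusing the credentials and channel from the question ("my_app" is a placeholder for your ARI application name, which the snoop operation also expects):

import requests

resp = requests.post(
    "http://localhost:8088/ari/channels/1421226074.4874/snoop",
    params={
        "spy": "both",      # direction of audio to spy on: none, both, out, or in
        "whisper": "none",  # listen only; do not inject audio
        "app": "my_app",    # placeholder ARI application that receives the snoop channel
    },
    auth=("j123", "j321"),
)
print(resp.status_code, resp.json())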