Get run id after triggering a GitHub workflow dispatch event

I am triggering a workflow run via GitHub's REST API, but GitHub doesn't send any data in the response body (204).
How do I get the run id of the trigger request I made?
I know about the getRunsList API, which returns runs for a workflow id; I could then take the latest run, but this can cause issues when two requests are submitted at almost the same time.

It is not currently possible to get the run_id associated with a dispatch API call in the dispatch response itself, but there is a way to find it if you can edit your workflow file a little.
You need to dispatch the workflow with an input, like this:
curl "https://api.github.com/repos/$OWNER/$REPO/actions/workflows/$WORKFLOW/dispatches" -s \
-H "Authorization: Token $TOKEN" \
-d '{
"ref":"master",
"inputs":{
"id":"12345678"
}
}'
Also edit your workflow YAML file to accept an optional input (named id here), and make the first job a job with a single step whose name is the input's id value (this is how we will get the id back using the API!):
name: ID Example
on:
  workflow_dispatch:
    inputs:
      id:
        description: 'run identifier'
        required: false
jobs:
  id:
    name: Workflow ID Provider
    runs-on: ubuntu-latest
    steps:
      - name: ${{ github.event.inputs.id }}
        run: echo run identifier ${{ inputs.id }}
The trick here is to use name: ${{github.event.inputs.id}}
https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#inputs
Then the flow is the following:
1. Run the dispatch API call along with the input (named id in this case) set to a random value:
POST https://api.github.com/repos/$OWNER/$REPO/actions/workflows/$WORKFLOW/dispatches
2. In a loop, get the runs that have been created since now minus 5 minutes (the delta is to avoid any issue with timings), URL-encoding the > as %3E:
GET https://api.github.com/repos/$OWNER/$REPO/actions/runs?created=%3E$run_date_filter
3. In each run API response, you will get a jobs_url that you will call:
GET https://api.github.com/repos/$OWNER/$REPO/actions/runs/[RUN_ID]/jobs
4. The jobs API call above returns the list of jobs; as you declared the id job as the first job, it will be in first position. It also gives you the steps with the names of the steps, something like this:
{
  "id": 3840520726,
  "run_id": 1321007088,
  "run_url": "https://api.github.com/repos/$OWNER/$REPO/actions/runs/1321007088",
  "run_attempt": 1,
  "node_id": "CR_kwDOEi1ZxM7k6bIW",
  "head_sha": "4687a9bb5090b0aadddb69cc335b7d9e80a1601d",
  "url": "https://api.github.com/repos/$OWNER/$REPO/actions/jobs/3840520726",
  "html_url": "https://github.com/$OWNER/$REPO/runs/3840520726",
  "status": "completed",
  "conclusion": "success",
  "started_at": "2021-10-08T15:54:40Z",
  "completed_at": "2021-10-08T15:54:43Z",
  "name": "Hello world",
  "steps": [
    {
      "name": "Set up job",
      "status": "completed",
      "conclusion": "success",
      "number": 1,
      "started_at": "2021-10-08T17:54:40.000+02:00",
      "completed_at": "2021-10-08T17:54:42.000+02:00"
    },
    {
      "name": "12345678", <=============== HERE
      "status": "completed",
      "conclusion": "success",
      "number": 2,
      "started_at": "2021-10-08T17:54:42.000+02:00",
      "completed_at": "2021-10-08T17:54:43.000+02:00"
    },
    {
      "name": "Complete job",
      "status": "completed",
      "conclusion": "success",
      "number": 3,
      "started_at": "2021-10-08T17:54:43.000+02:00",
      "completed_at": "2021-10-08T17:54:43.000+02:00"
    }
  ],
  "check_run_url": "https://api.github.com/repos/$OWNER/$REPO/check-runs/3840520726",
  "labels": [
    "ubuntu-latest"
  ],
  "runner_id": 1,
  "runner_name": "Hosted Agent",
  "runner_group_id": 2,
  "runner_group_name": "GitHub Actions"
}
The name of the id step returns your input value, so you can safely confirm that this is the run that was triggered by your dispatch call.
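For a quick manual check, the same confirmation can be done with curl and jq (a minimal sketch assuming the $OWNER, $REPO and $TOKEN variables from above, with RUN_ID taken from the runs listing):
# print the name of the second step of the first job;
# steps[0] is the auto-inserted "Set up job", so the id step sits at steps[1]
curl -s -H "Authorization: Token $TOKEN" \
  "https://api.github.com/repos/$OWNER/$REPO/actions/runs/$RUN_ID/jobs" \
  | jq -r '.jobs[0].steps[1].name'
If the printed name equals the id you dispatched, this is your run.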
Here is an implementation of this flow in Python; it will return the workflow run id:
import random
import string
import datetime
import requests
import time

# edit the following variables
owner = "YOUR_ORG"
repo = "YOUR_REPO"
workflow = "dispatch.yaml"
token = "YOUR_TOKEN"

authHeader = {"Authorization": f"Token {token}"}

# generate a random id
run_identifier = ''.join(random.choices(string.ascii_uppercase + string.digits, k=15))

# filter runs that were created after this date minus 5 minutes
delta_time = datetime.timedelta(minutes=5)
run_date_filter = (datetime.datetime.utcnow() - delta_time).strftime("%Y-%m-%dT%H:%M")

r = requests.post(f"https://api.github.com/repos/{owner}/{repo}/actions/workflows/{workflow}/dispatches",
                  headers=authHeader,
                  json={
                      "ref": "master",
                      "inputs": {
                          "id": run_identifier
                      }
                  })
print(f"dispatch workflow status: {r.status_code} | workflow identifier: {run_identifier}")

workflow_id = ""
while workflow_id == "":
    # the > of the created filter must be URL-encoded as %3E
    r = requests.get(f"https://api.github.com/repos/{owner}/{repo}/actions/runs?created=%3E{run_date_filter}",
                     headers=authHeader)
    runs = r.json()["workflow_runs"]
    if len(runs) > 0:
        for run in runs:  # renamed from "workflow" to avoid shadowing the variable above
            jobs_url = run["jobs_url"]
            print(f"get jobs_url {jobs_url}")
            r = requests.get(jobs_url, headers=authHeader)
            jobs = r.json()["jobs"]
            if len(jobs) > 0:
                # we only take the first job, edit this if you need multiple jobs
                job = jobs[0]
                steps = job["steps"]
                if len(steps) >= 2:
                    # steps[0] is the auto-inserted "Set up job" step,
                    # so the run_identifier step sits in second position
                    second_step = steps[1]
                    if second_step["name"] == run_identifier:
                        workflow_id = job["run_id"]
                        break
                else:
                    print("waiting for steps to be executed...")
                    time.sleep(3)
            else:
                print("waiting for jobs to popup...")
                time.sleep(3)
    else:
        print("waiting for workflows to popup...")
        time.sleep(3)
print(f"workflow_id: {workflow_id}")
gist link
Sample output
$ python3 github_action_dispatch_runid.py
dispatch workflow status: 204 | workflow identifier: Z7YPF6DD1YP2PTM
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321463229/jobs
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321463229/jobs
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321463229/jobs
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321475221/jobs
waiting for steps to be executed...
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321463229/jobs
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321475221/jobs
waiting for steps to be executed...
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321463229/jobs
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321475221/jobs
waiting for steps to be executed...
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321463229/jobs
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321475221/jobs
get jobs_url https://api.github.com/repos/OWNER/REPO/actions/runs/1321463229/jobs
workflow_id: 1321475221
It would have been easier if there was a way to retrieve the workflow inputs via the API, but there is no way to do this at the moment.
Note that in the workflow file, I use ${{ github.event.inputs.id }} because ${{ inputs.id }} doesn't work; it seems inputs is not evaluated when used as the step name.

Get WORKFLOWID:
gh workflow list --repo <repo-name>
Trigger a workflow of type workflow_dispatch:
gh workflow run $WORKFLOWID --repo <repo-name>
This does not return the run id, which is required to get the status of the execution.
Get the latest run id, WORKFLOW_RUNID:
gh run list -w $WORKFLOWID --repo <repo> -L 1 --json databaseId | jq '.[]| .databaseId'
Get the workflow run details:
gh run view --repo <repo> $WORKFLOW_RUNID
This is the workaround that we use. It is not perfect, but it should work.
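Put together, a minimal end-to-end sketch of this workaround (the sleep and the "latest run" assumption are exactly the race condition the question warns about, so treat it as best-effort):
REPO="<repo-name>"
WORKFLOWID="<workflow-file-or-id>"
gh workflow run "$WORKFLOWID" --repo "$REPO"
sleep 5  # give GitHub a moment to register the new run
WORKFLOW_RUNID=$(gh run list -w "$WORKFLOWID" --repo "$REPO" -L 1 --json databaseId --jq '.[0].databaseId')
gh run watch "$WORKFLOW_RUNID" --repo "$REPO"  # follow the run until completion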

Inspired by the answer above, I made a /bin/bash script which gets your $run_id.
name: ID Example
on:
  workflow_dispatch:
    inputs:
      id:
        description: 'run identifier'
        required: false
jobs:
  id:
    name: Workflow ID Provider
    runs-on: ubuntu-latest
    steps:
      - name: ${{ github.event.inputs.id }}
        run: echo run identifier ${{ inputs.id }}
The script does the following:
1. WORKFLOW_ID: generates a random 8-digit number
2. DATE_FILTER: now minus 5 minutes, used as the time filter for the runs query
3. POSTs to the dispatches endpoint to trigger the workflow, passing the random id as input
4. GETs actions/runs (newest first) and takes the first .workflow_runs[].id
5. keeps looping until a step name matches the random id from step 1
6. echoes the run_id
TOKEN="" \
GH_USER="" \
REPO="" \
REF=""
WORKFLOW_ID=$(tr -dc '0-9' </dev/urandom | head -c 8) \
NOW=$(date +"%Y-%m-%dT%H:%M") \
LATER=$(date -d "-5 minutes" +"%Y-%m-%dT%H:%M") \
DATE_FILTER=$(echo "$NOW-$LATER") \
JSON=$(cat <<-EOF
{"ref":"$REF","inputs":{"id":"$WORKFLOW_ID"}}
EOF
) && \
curl -s \
-X POST \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer $TOKEN" \
"https://api.github.com/repos/$GH_USER/$REPO/actions/workflows/main.yml/dispatches" \
-d $JSON && \
INFO="null" \
COUNT=1 \
ATTEMPTS=10 && \
until [[ $CHECK -eq $WORKFLOW_ID ]] || [[ $COUNT -eq $ATTEMPTS ]];do
echo -e "$(( COUNT++ ))..."
INFO=$(curl -s \
-X GET \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer $TOKEN" \
"https://api.github.com/repos/$GH_USER/$REPO/actions/runs?created:<$DATE_FILTER" | jq -r '.workflow_runs[].id' | grep -m1 "")
CHECK=$(curl -s \
-X GET \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer $TOKEN" \
"https://api.github.com/repos/$GH_USER/$REPO/actions/runs/$INFO/jobs" | jq -r '.jobs[].steps[].name' | grep -o '[[:digit:]]*')
sleep 5s
done
echo "Your run_id is $CHECK"
output:
1...
2...
3...
Your run_id is 67530050

I recommend using the convictional/trigger-workflow-and-wait action:
- name: Example
  uses: convictional/trigger-workflow-and-wait@v1.6.5
  with:
    owner: my-org
    repo: other-repo
    workflow_file_name: other-workflow.yaml
    github_token: ${{ secrets.GH_TOKEN }}
    client_payload: '{"key1": "value1", "key2": "value2"}'
This takes care of waiting for the other job and returning a success or failure according to whether the other workflow succeeded or failed. It does so in a robust way that handles multiple runs being triggered at almost the same time.

The whole idea is to know which run was dispatched. When the id input was suggested for the dispatch, the expectation was that this id could be found in the response of the GET call to the actions/runs URL, so the user would be able to identify the proper run to monitor. The injected id is not part of that response, so having to extract another URL just to find your id does not really help: this is precisely the point where the id is needed to identify the run for monitoring.

Related

How to refer environment variable inside GitHub workflow?

I'm trying to read GitHub environment variables inside the JSON request payload while making a curl request, but the variables are not resolving and it gives an error. The values I'm trying to read are KEY_VAULT and ACR_PATH:SNAPSHOT_VERSION inside the Create Container Web step. I've attached the GitHub workflow sample below.
name: Pull Request
on:
  pull_request:
    types: [review_requested]
env:
  KEY_VAULT: "some vault"
  SNAPSHOT_VERSION: ${{ format('{0}-SNAPSHOT', github.event.number) }}
  GITHUB_ISSUE_NUMBER: ${{ github.event.number }}
  GITHUB_REPO: ${{ github.event.repository.name }}
  DEPLOYMENT_NOTIFICATION_URL_TOKEN: ${{ secrets.SOME_TOKEN }}
  DEPLOYMENT_URL_TOKEN: 123
  ENVIRONMENT: sandbox
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - name: Docker login
        #if: steps.pr-label.outputs.result == 'true'
        uses: azure/docker-login@v1
        with:
          login-server: acr-login.com
          username: user
          password: pwd
      - name: Publish Snapshot To ACR
        #if: steps.pr-label.outputs.result == 'true'
        run: |
          echo steps.pr-label.outputs.result
          echo Publishing to $ACR_PATH:$SNAPSHOT_VERSION
          docker build . -t $ACR_PATH:$SNAPSHOT_VERSION
          docker push $ACR_PATH:$SNAPSHOT_VERSION
      - name: Create Container Web
        #if: steps.pr-label.outputs.result == 'true'
        run: |
          AUTH_HEADER="Authorization: token $DEPLOYMENT_URL_TOKEN"
          CONTAINER_WEB_NAME="CONTAINER"
          PROJECT_NAME="tirumalesh-automate"
          REGION="US"
          URL="https://abcd.com/$REGION/$PROJECT_NAME/container-web/$CONTAINER_WEB_NAME"
          PAYLOAD='{
            "spec": {
              "image": "${{env.ACR_PATH}}:${{env.SNAPSHOT_VERSION}}",
              "secrets": {
                "key_vaults": [
                  {
                    "name": "${{env.KEY_VAULT}}",
                    "secrets": [
                      {
                        "name": "mysql-pwd",
                        "environment_variable": "mysql_pwd"
                      },
                    ]
                  }
                ],
              },
            }
          }'
          curl --location --request PUT 'https://abcd/us/projects/tirumalesh-automate/resources/container-web/configuration-service' \
            --header "$AUTH_HEADER" \
            --header 'Content-Type: application/json' \
            --data-raw "$PAYLOAD"
You have single quotes around PAYLOAD, which means that the shell takes the string literally and does not expand anything.
Use double quotes and escape the inner quotes.
PAYLOAD="{
\"spec\": {
\"image\": \"${{env.ACR_PATH}}:${{env.SNAPSHOT_VERSION}}\",
\"secrets\": {
\"key_vaults\": [
{
\"name\": \"${{env.KEY_VAULT}}\",
\"secrets\": [
{
\"name\": \"mysql-pwd\",
\"environment_variable\": \"mysql_pwd\"
},
]
}
],
},
}
}"
curl --location --request PUT 'https://abcd/us/projects/tirumalesh-automate/resources/container-web/configuration-service' \
--header "$AUTH_HEADER" \
--header 'Content-Type: application/json' \
--data-raw "$PAYLOAD"
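An alternative sketch that avoids all the backslash escaping is to build the payload with an unquoted heredoc, which also expands variables (shown here with the shell-level env variables from the workflow's env block; treat the exact payload shape as an assumption taken from the question):
# an unquoted heredoc delimiter (EOF, not 'EOF') keeps variable expansion on
PAYLOAD=$(cat <<EOF
{
  "spec": {
    "image": "$ACR_PATH:$SNAPSHOT_VERSION",
    "secrets": {
      "key_vaults": [
        {
          "name": "$KEY_VAULT",
          "secrets": [
            { "name": "mysql-pwd", "environment_variable": "mysql_pwd" }
          ]
        }
      ]
    }
  }
}
EOF
)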

How to create a tag for specific branch using Jenkinsfile

I have a scenario where I need to build and push a Docker image only when a tag is built from the master branch.
stage('Tag') {
  when {
    expression { sh([returnStdout: true, script: 'echo $TAG_NAME | tr -d \'\n\'']) }
  }
  steps {
    script {
      echo "tag"
    }
  }
}
The above code works, but the when condition is also satisfied if we create a tag from the develop branch or the test branch. Can anyone help me figure out how to overcome this issue?
If it is of any help, this is an excerpt of a sequential Jenkinsfile I use to tag branches on repos. It is a two-stage process: find the current commit for the branch, then tag that commit:
steps.withCredentials([steps.usernamePassword(credentialsId: httpCredentials,
    usernameVariable: 'GITHUB_USERNAME', passwordVariable: 'GITHUB_TOKEN')]) {
  // Need to do this as a two stage activity (need commit sha) but *warning* this api assumes ref is a branch
  def json_info = steps.sh(script: "curl -X GET -H \"Authorization: token \$GITHUB_TOKEN\" -H \"Accept: application/vnd.github.v3+json\" ${httpUrlForHost}/repos/${slugForRepo}/branches/${ref}",
                           returnStdout: true)
  def branch_info = steps.readJSON text: json_info
  def sha = branch_info?.commit?.sha
  if (!sha) {
    steps.error("Unexpected sha for branch ${ref} (${json_info})")
  }
  def jsonPayload = "{\"ref\":\"refs/tags/${new_tag}\",\"sha\":\"${sha}\"}"
  steps.sh "curl -X POST -H \"Authorization: token \$GITHUB_TOKEN\" -H \"Accept: application/vnd.github.v3+json\" -d '${jsonPayload}' ${httpUrlForHost}/repos/${slugForRepo}/git/refs"
}
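For the original problem (only proceeding when the tag points at a commit on master), a hedged shell-level sketch that could back the when expression is to test whether the tagged commit is an ancestor of origin/master:
# exit code 0 only if the commit the tag points at is reachable from origin/master
git fetch origin master
git merge-base --is-ancestor "$(git rev-parse "$TAG_NAME^{commit}")" origin/master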

Partition JSON data using jq and then send Rest query

I have a JSON file like below:
[
  {
    "field": {
      "empID": "sapid",
      "location": "India"
    }
  },
  {
    "field": {
      "empID": "sapid",
      "location": "India"
    }
  },
  {
    "field": {
      "empID": "sapid",
      "location": "India"
    }
  },
  {
    "field": {
      "empID": "sapid",
      "location": "India"
    }
  },
  {
    "field": {
      "empID": "sapid",
      "location": "India"
    }
  },
  {
    "field": {
      "empID": "sapid",
      "location": "India"
    }
  }
  .... up to 1 million
]
I have to use this JSON as the input for a REST request, for example:
curl <REST Server URL with temp.json as input> -H "Content-Type: application/json" -d @temp.json
My server will not accept 1 million JSON objects at a time.
I am looking for an approach where I extract the first 500 objects from the main JSON and send them in one REST query, then the next 500 objects in a second REST query, and so on.
Can you please suggest how I can achieve this with jq?
There's an intrinsic tradeoff here between space and time efficiency. In the following, the focus is on the latter.
Assuming that each call to curl must send a JSON array, a time-efficient solution can be constructed along the following lines:
< array.json jq -c '
  def batch($n): range(0; length; $n) as $i | .[$i: $i+$n];
  batch(500)
' | while read -r json
do
  echo "$json" | curl -X POST -H "Content-Type: application/json" -d @- ....
done
Here .... signifies additional appropriate curl arguments.
GNU parallel
You might also want to consider using GNU parallel, e.g.:
< array.json jq -c '
  def batch($n):
    range(0; length; $n) as $i
    | .[$i: $i+$n];
  batch(500)
' | parallel --pipe -N1 curl -X POST -H "Content-Type: application/json" -d @- ....
You have not shared any HW information about the system you are running this on. At the minimum you need some sort of multiprocessing to make this faster, instead of running (1000000/500) curl requests sequentially.
One way would be to use GNU xargs, which has a built-in way to run a number of parallel instances of a given process using the -P flag, and to control the number of lines of input read at a time with the -L flag.
To start with, you can do something like the following to instruct curl to consume 500 lines at a time and to run 20 such invocations in parallel, so at any given tick approximately (500 * 20) lines of input are processed. You can tune the numbers depending on the HW capability on both the host and the server side.
xargs -L 500 -P 20 curl -X POST -H "Content-Type: application/json" http://sample-url -d @- < <(jq -c 'range(0;length;500) as $i | .[$i: $i+500]' json)
Modified the jq filter to pack the JSON payload as an array of objects (credit: peak's answer). The earlier version, jq -c '.[]' json, might not work, as the individual chunks of lines passed at a time don't represent valid JSON.
Note: not tested due to performance constraints.
Assuming you have this formatting, splitting can be done by unpacking the array and saving the desired number of objects to separate files, e.g.:
<input.json jq -c '.[]' | split -l500
This creates xaa with the first 500 objects, xab with the next 500 objects, etc. If you want to repackage the objects in an array, use the -s option of jq, e.g.: jq -s . xaa.
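A hedged sketch of how the chunks could then be re-packed and sent, assuming the xa* files produced by split above and a placeholder $REST_SERVER_URL:
# re-assemble each 500-object chunk into a JSON array and POST it
for f in xa*; do
  jq -s . "$f" | curl -X POST -H "Content-Type: application/json" -d @- "$REST_SERVER_URL"
done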
If you want to do this from the shell, you could use jq to split your JSON and pass it to xargs to call curl for each object returned.
jq -c '.[]' temp.json | xargs -I {} curl <REST Server URL with temp.json as input> -H "Content-Type: application/json" -d '{}'
This will send one curl request for each object. However, if you e.g. want to send the first 500 objects in a single curl request, you can specify a subarray in the jq filter. To send all of your JSON objects you will then somehow need to repeat the command (see the loop sketch after this list), as AFAIK jq has no built-in way to split the input into chunks of objects.
jq -c '.[0:500]' temp.json | xargs -I {} curl <REST Server URL with temp.json as input> -H "Content-Type: application/json" -d '{}'
jq -c '.[500:1000]' temp.json | xargs -I {} curl <REST Server URL with temp.json as input> -H "Content-Type: application/json" -d '{}'
jq -c '.[1000:1500]' temp.json | xargs -I {} curl <REST Server URL with temp.json as input> -H "Content-Type: application/json" -d '{}'
[...]
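Since that repetition is mechanical, a small shell loop can generate the slices (a sketch assuming bash and the same placeholder $REST_SERVER_URL; note it re-reads temp.json on every iteration, so the batching approaches above scale better):
# slice the array in windows of 500 objects and POST each slice
total=$(jq 'length' temp.json)
for ((i = 0; i < total; i += 500)); do
  jq -c ".[$i:$((i + 500))]" temp.json | curl -X POST -H "Content-Type: application/json" -d @- "$REST_SERVER_URL"
done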

How to create a folder & a project under it with Deployment Manager (Google Cloud Platform)

In a single Deployment Manager template, how do I create a new folder and a new project underneath it? The problem is that the reference to the folder includes a name in the format folders/123456, but the project requires a parent field in the format {'type': 'folder', 'id': 123456}. Using a $(ref.new-folder.name) won't work for the ID field in the parent record for the new project.
It feels like I need to do string manipulation on the $(ref.new-folder.name) like this:
# DOES NOT WORK
# but if it did, I could extract the numeric id from 'folders/123456'
parent_id = '$(ref.new-folder.name)'.replace('folders/', '')
But, of course, that won't work.
Here is my (non-working) attempt:
# Template for new folder & new project
folder_resource = {
    'name': 'new-folder',
    'type': 'gcp-types/cloudresourcemanager-v2:folders',
    'properties': {
        'parent': 'organizations/99999',
        'displayName': 'new-folder'
    }
}
project_resource = {
    'name': 'new-project',
    'type': 'cloudresourcemanager.v1.project',
    'metadata': {'dependsOn': ['new-folder']},
    'properties': {
        'name': 'new-project',
        'parent': {
            'type': 'folder',
            # HERE it is -- the problem!
            'id': '$(ref.new-folder.name)'
        }
    }
}
return {'resources': [folder_resource, project_resource]}
So, to reiterate, I'm getting hung-up on extracting the numeric folder id from the reference to the folder's name. The name is in the format folders/123456 but I just need the 123456 part to use in the parent field for the new project.
This question is specific to folder & project creation, but a more generalized question would be: is there a way to do string-manipulation on the value of references?
For creating and managing folders, document [a] might be helpful. The folder name must meet the following requirements:
The name may contain letters, digits, spaces, hyphens and underscores.
The folder's display name must start and end with a letter or digit.
The name must be 30 characters or less.
The name must be distinct from all other folders that share its parent.
To create a folder:
Folders can be created with an API request. The request JSON:
request_json='{
  display_name: "[DISPLAY_NAME]"
}'
The Create Folder curl request:
curl -X POST -H "Content-Type: application/json" \
-H "Authorization: Bearer ${bearer_token}" \
-d "$request_json" \
https://cloudresourcemanager.googleapis.com/v2/folders?parent=[ORGANIZATION_NAME]
Where:
-[DISPLAY_NAME] is the new folder's display name, for example "My Awesome Folder".
-[ORGANIZATION_NAME] is the name of the organization under which you're creating the folder, for example organizations/123.
The Create Folder response:
{
  "name": "operations/fc.123456789",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.resourcemanager.v2.FolderOperation",
    "displayName": "[DISPLAY_NAME]",
    "operationType": "CREATE"
  }
}
The Get Operation curl request:
curl -H "Authorization: Bearer ${bearer_token}" \
https://cloudresourcemanager.googleapis.com/v1/operations/fc.123456789
The Get Operation response:
{
"name": "operations/fc.123456789",
"metadata": {
"#type": "type.googleapis.com/google.cloud.resourcemanager.v2.FolderOperation",
"displayName": "[DISPLAY_NAME]",
"operationType": "CREATE"
},
"done": true,
"response": {
"#type": "type.googleapis.com/google.cloud.resourcemanager.v2.Folder",
"name": "folders/12345",
"parent": "organizations/123",
"displayName": "[DISPLAY_NAME]",
"lifecycleState": "ACTIVE",
"createTime": "2017-07-19T23:29:26.018Z",
"updateTime": "2017-07-19T23:29:26.046Z"
}
}
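Coming back to the original question: the numeric folder id only appears inside response.name above ("folders/12345"). A minimal shell sketch to extract it with jq, assuming the operation name returned by the create call:
FOLDER_ID=$(curl -s -H "Authorization: Bearer ${bearer_token}" \
  "https://cloudresourcemanager.googleapis.com/v1/operations/fc.123456789" \
  | jq -r '.response.name | sub("^folders/"; "")')
echo "$FOLDER_ID"  # e.g. 12345, usable as the project's parent id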
Configuring access to folders
setIamPolicy sets the access control policy on a folder, replacing any existing policy. The resource field should be the folder's resource name, for example folders/1234.
request_json='{
  policy: {
    version: "1",
    bindings: [
      {
        role: "roles/resourcemanager.folderEditor",
        members: [
          "user:email1@example.com",
          "user:email2@example.com"
        ]
      }
    ]
  }
}'
The curl request:
curl -X POST -H "Content-Type: application/json" \
-H "Authorization: Bearer ${bearer_token}" \
-d "$request_json" \
https://cloudresourcemanager.googleapis.com/v2/[FOLDER_NAME]:setIamPolicy
Where:
-[FOLDER_NAME] is the name of the folder whose IAM policy is being set, for example folders/123.
Creating a project in a folder
request_json='{
  name: "[DISPLAY_NAME]",
  projectId: "[PROJECT_ID]",
  parent: {id: [PARENT_ID], type: "[PARENT_TYPE]"}
}'
The curl request:
curl -X POST -H "Content-Type: application/json" \
-H "Authorization: Bearer ${bearer_token}" \
-d "$request_json" \
https://cloudresourcemanager.googleapis.com/v1/projects
Where:
-[PROJECT_ID] is the id of the project being created, e.g. my-awesome-proj-123.
-[DISPLAY_NAME] is the display name of the project being created.
-[PARENT_ID] is the id of the parent it is created under, e.g. 123.
-[PARENT_TYPE] is the type of the parent, like "folder" or "organization".
When we create a reference to a resource, we also create a dependency between resources; document [b] might be helpful for this.
[a] https://cloud.google.com/resource-manager/docs/creating-managing-folders
[b] https://cloud.google.com/deployment-manager/docs/configuration/use-references

How to update the origination_urls when creating a new Trunk using twilio API

Thanks to this tutorial: https://www.twilio.com/docs/sip-trunking/api/trunks#action-create I am able to create, read, update and delete (CRUD) trunks on my Twilio account.
To create a new trunk I do it like so:
curl -XPOST https://trunking.twilio.com/v1/Trunks \
-d "FriendlyName=MyTrunk" \
-u '{twilio account sid}:{twilio auth token}'
and this is the response I get when creating a new trunk:
{
  "trunks": [
    {
      "sid": "TKfa1e5a85f63bfc475c2c753c0f289932",
      "account_sid": "ACxxx",
      ....
      ....
      "date_updated": "2015-09-02T23:23:11Z",
      "url": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932",
      "links": {
        "origination_urls": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls",
        "credential_lists": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/CredentialLists",
        "ip_access_control_lists": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/IpAccessControlLists",
        "phone_numbers": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/PhoneNumbers"
      }
    }],
  "meta": {
    "page": 0,
    "page_size": 50,
    ... more
  }
}
What I am interested in from the response is:
"links": {
"origination_urls": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls",
Now, if I perform a GET on that link, like:
curl -G "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls" -u '{twilio account sid}:{twilio auth token}'
I get back this:
{
  "meta":
  {
    "page": 0,
    "page_size": 50,
    "first_page_url":
    ....
  },
  "origination_urls": []
}
Now my goal is to update the origination_urls. So, using the same approach I used to update a trunk, I have tried:
curl -XPOST https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls \
-d "origination_urls=sip:200@somedomain.com" \
-u '{twilio account sid}:{twilio auth token}'
But that fails. I have also tried:
curl -XPOST https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls \
-d "origination_urls=['someUrl']" \
-u '{twilio account sid}:{twilio auth token}'
and that fails too. How can I update the origination_urls?
I was missing Priority, FriendlyName, SipUrl, Weight and Enabled on my POST request. I finally got it to work by doing:
curl -XPOST "https://trunking.twilio.com/v1/Trunks/TKfae10...../OriginationUrls" \
-d "Priority=10" \
-d "FriendlyName=Org1" \
-d "Enabled=true" \
-d "SipUrl=sip:test@domain.com" \
-d "Weight=10" \
-u '{twilio account sid}:{twilio auth token}'