How to update the origination_urls when creating a new Trunk using the Twilio API

Thanks to this tutorial: https://www.twilio.com/docs/sip-trunking/api/trunks#action-create I am able to create, read, update and delete (CRUD) trunks on my Twilio account.
To create a new trunk I do it like so:
curl -XPOST https://trunking.twilio.com/v1/Trunks \
-d "FriendlyName=MyTrunk" \
-u '{twilio account sid}:{twilio auth token}'
and this is the response I get when creating a new trunk:
{
  "trunks": [
    {
      "sid": "TKfa1e5a85f63bfc475c2c753c0f289932",
      "account_sid": "ACxxx",
      ....
      ....
      "date_updated": "2015-09-02T23:23:11Z",
      "url": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932",
      "links": {
        "origination_urls": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls",
        "credential_lists": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/CredentialLists",
        "ip_access_control_lists": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/IpAccessControlLists",
        "phone_numbers": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/PhoneNumbers"
      }
    }
  ],
  "meta": {
    "page": 0,
    "page_size": 50,
    ... more
  }
}
What I am interested in from the response is:
"links": {
"origination_urls": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls",
Now if I perform a GET on that link, like:
curl -G "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls" -u '{twilio account sid}:{twilio auth token}'
I get back this:
{
  "meta": {
    "page": 0,
    "page_size": 50,
    "first_page_url": ....
  },
  "origination_urls": []
}
Now my goal is to update the origination_urls. So, using the same approach I used for the trunk itself, I have tried:
curl -XPOST https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls \
-d "origination_urls=sip:200#somedomain.com" \
-u '{twilio account sid}:{twilio auth token}'
But that fails. I have also tried:
curl -XPOST https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls \
-d "origination_urls=['someUrl']" \
-u '{twilio account sid}:{twilio auth token}'
and that fails too. How can I update the origination_urls?

I was missing the Priority, FriendlyName, SipUrl, Weight and Enabled parameters in my POST request. I finally got it to work with:
curl -XPOST "https://trunking.twilio.com/v1/Trunks/TKfae10...../OriginationUrls" -d "Priority=10" -d "FriendlyName=Org1" -d "Enabled=true" -d "SipUrl=sip:test#domain.com" -d "Weight=10" -u '{twilio account sid}:{twilio auth token}'

Related

Partition JSON data using jq and then send REST query

I have a JSON file like the one below:
[
  {
    "field": {
      "empID": "sapid",
      "location": "India"
    }
  },
  {
    "field": {
      "empID": "sapid",
      "location": "India"
    }
  },
  {
    "field": {
      "empID": "sapid",
      "location": "India"
    }
  }
  .... up to 1 million objects
]
I have to use this JSON as the input for a REST request, for example:
curl <REST Server URL with temp.json as input> -H "Content-Type: application/json" -d @temp.json
My server will not accept 1 million JSON objects at a time.
I am looking for an approach where I extract the first 500 objects from the main JSON and send them in one REST query, then the next 500 objects in a second query, and so on.
Can you please suggest how I can achieve this with jq?
There's an intrinsic tradeoff here between space and time efficiency. In the following, the focus is on the latter.
Assuming that each call to curl must send a JSON array, a time-efficient solution can be constructed along the following lines:
< array.json jq -c '
  def batch($n): length as $l | range(0;length;$n) as $i | .[$i: $i+$n];
  batch(500)
' | while read -r json
do
  echo "$json" | curl -X POST -H "Content-Type: application/json" -d @- ....
done
Here .... signifies additional appropriate curl arguments.
GNU parallel
You might also want to consider using GNU parallel, e.g.:
< array.json jq -c '
  def batch($n):
    length as $l
    | range(0;length;$n) as $i
    | .[$i: $i+$n];
  batch(500)
' | parallel --pipe -N1 curl -X POST -H "Content-Type: application/json" -d @- ....
You have not shared any information about the hardware you are running this on. At a minimum you need some sort of multiprocessing to make this faster, rather than issuing the (1000000/500) curl requests one after another.
One way would be to use GNU xargs, which has built-in support for running a number of parallel instances of a given process with the -P flag, and for limiting the number of lines of input read at a time with the -L flag.
To start with, you can do something like the following to instruct curl to consume 500 lines at a time and to run 20 such invocations in parallel. So at any given tick, approximately (500 * 20) lines of input are being processed. You can tune the numbers depending on the hardware capability on both the host and the server side.
xargs -L 500 -P 20 curl -X POST -H "Content-Type: application/json" http://sample-url -d @- < <(jq -c 'range(0;length;500) as $i | .[$i: $i+500]' json)
The jq filter was modified to pack the JSON payload as an array of objects (credit peak's answer). The earlier version, jq -c '.[]' json, might not work because the chunk of lines passed at a time does not represent valid JSON.
Note: Not tested due to performance constraints.
Assuming you have this formatting, splitting can be done by unpacking the array and saving the desired number of objects to separate files, e.g.:
<input.json jq -c '.[]' | split -l500
This creates xaa with the first 500 objects, xab with the next 500 objects, and so on. If you want to repackage the objects in an array, use the -s option to jq, e.g. jq -s . xaa.
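If each chunk should then go out as a single JSON array per request, a rough sketch along these lines repackages every split file and posts it (assuming the chunks were produced with a longer suffix, e.g. split -l 500 -a 4, so that a million objects fit into the available file names; the endpoint URL is a placeholder):
for f in x????; do
  # repackage each chunk's 500 objects into an array and post it
  jq -s . "$f" | curl -X POST -H "Content-Type: application/json" -d @- "http://example.com/endpoint"
done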
If you want to do this from the shell, you could use jq to split your JSON and pass it to xargs to call curl for each object returned.
jq -c '.[]' temp.json | xargs -I {} curl <REST Server URL with temp.json as input> -H "Content-Type: application/json" -d '{}'
This will send one curl request for each object. However, if you want to e.g. send the first 500 objects in a single curl request, you can specify a subarray in the jq filter. To send all of your JSON objects you will then need to repeat the command for each offset, as far as I know jq has no built-in way to split the input into chunks of objects (a shell loop that generates the offsets is sketched after these commands).
jq -c '.[0:500]' temp.json | xargs -I {} curl <REST Server URL with temp.json as input> -H "Content-Type: application/json" -d '{}'
jq -c '.[500:1000]' temp.json | xargs -I {} curl <REST Server URL with temp.json as input> -H "Content-Type: application/json" -d '{}'
jq -c '.[1000:1500]' temp.json | xargs -I {} curl <REST Server URL with temp.json as input> -H "Content-Type: application/json" -d '{}'
[...]
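A rough sketch of such a loop, assuming 1,000,000 objects, 500 per request, and a placeholder endpoint URL:
total=1000000
step=500
for ((i = 0; i < total; i += step)); do
  # slice objects [i, i+step) out of the array and post them as one JSON array
  jq -c ".[$i:$((i + step))]" temp.json | curl -X POST -H "Content-Type: application/json" -d @- "http://example.com/endpoint"
done
Re-parsing the full file for every slice is slow at this scale, which is one reason the batching filters in the other answers are preferable.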

How to add the column to Google Sheets using API and provide the name and type of the column in the same call?

So, what I could achieve using the Google Sheets API is creating a new column with the following curl-based call:
curl -v \
-H 'Authorization: Bearer ya29.GlxUB9K_96tyQFyQ64eaYOeImtJt32213zjosf6LW1Inv6MOqQCCodA7CycvL5EFKIpeX4dVEebS4rUl24U1J7euhMjqBZq0QEU7ZK1B64THQXNwBpDvTzoUT9hTRg' \
-H 'Content-Type: application/json' \
-d '{
  "requests": [
    {
      "insertDimension": {
        "range": {
          "sheetId": 2052094881,
          "dimension": "COLUMNS",
          "startIndex": 0,
          "endIndex": 1
        }
      }
    }
  ]
}' \
https://sheets.googleapis.com/v4/spreadsheets/1mHrPXQILuprO4NdqTgrVKlGazvvzgCFqIphGdsmptD8:batchUpdate
While this call is useful, it does not completely help, because the reason I wanted to add a column was to give it a name and to set the type (or format) of its values. With this call alone, the inserted column has neither.
Is there a way to create and add name and type to the column in a single API call?
Thanks a lot!
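For what it's worth, batchUpdate accepts several requests in one call and applies them in order, so a sketch along the following lines combines the insertDimension above with a repeatCell that writes a header and sets a number format on the new column (the access token is a placeholder, and MyColumnName plus the NUMBER format are assumptions standing in for whatever name and type you actually want):
curl -v \
-H 'Authorization: Bearer {access token}' \
-H 'Content-Type: application/json' \
-d '{
  "requests": [
    {
      "insertDimension": {
        "range": {
          "sheetId": 2052094881,
          "dimension": "COLUMNS",
          "startIndex": 0,
          "endIndex": 1
        }
      }
    },
    {
      "repeatCell": {
        "range": {
          "sheetId": 2052094881,
          "startRowIndex": 0,
          "endRowIndex": 1,
          "startColumnIndex": 0,
          "endColumnIndex": 1
        },
        "cell": {
          "userEnteredValue": { "stringValue": "MyColumnName" },
          "userEnteredFormat": { "numberFormat": { "type": "NUMBER" } }
        },
        "fields": "userEnteredValue,userEnteredFormat.numberFormat"
      }
    }
  ]
}' \
https://sheets.googleapis.com/v4/spreadsheets/1mHrPXQILuprO4NdqTgrVKlGazvvzgCFqIphGdsmptD8:batchUpdate
A second repeatCell over the same column with endRowIndex omitted could apply the number format to the data cells as well.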

Why does DESCRIBE EXTENDED in Kafka KSQL return error ShowColumns not supported?

I have a simple KTABLE in KSQL called DIMAGE.
When I run the following request
{
"ksql": "DESCRIBE EXTENDED DIMAGE ;"
}
I receive the following error
{
  "#type": "generic_error",
  "error_code": 40000,
  "message": "Statement type `io.confluent.ksql.parser.tree.ShowColumns' not supported for this resource",
  "stackTrace": []
}
I receive a similar error when trying to describe a stream, and the same error if I remove the EXTENDED keyword.
You're using the wrong REST endpoint. If you use the query endpoint (/query) you'll get this error:
$ curl -s -X "POST" "http://localhost:8088/query" \
-H "Content-Type: application/vnd.ksql.v1+json; charset=utf-8" \
-d '{
"ksql": "DESCRIBE EXTENDED COMPUTER_T;"
}'
{"#type":"generic_error","error_code":40000,"message":"Statement type `io.confluent.ksql.parser.tree.ShowColumns' not supported for this resource","stackTrace":[]}⏎
If you use the statement endpoint (/ksql) it works fine:
$ curl -s -X "POST" "http://localhost:8088/ksql" \
-H "Content-Type: application/vnd.ksql.v1+json; charset=utf-8" \
-d '{
"ksql": "DESCRIBE EXTENDED COMPUTER_T;"
}'|jq '.'
[
  {
    "#type": "sourceDescription",
    "statementText": "DESCRIBE EXTENDED COMPUTER_T;",
    "sourceDescription": {
      "name": "COMPUTER_T",
      "readQueries": [
        {
          "sinks": [
            "COMP_WATCH_BY_EMP_ID_T"
          ],
          "id": "CTAS_COMP_WATCH_BY_EMP_ID_T_0",
[...]
I've logged #2362 so that we can improve the UX of this.

Parse: Creating a New Class Programmatically

Is it possible to create a new Class programmatically (i.e. not from the dashboard) via any of the APIs or the Parse CLI?
The REST API appears to have functionality to fetch, modify and delete individual Schemas (classes) but not to add them. (https://parse.com/docs/rest/guide#schemas).
Hoping for something like the following:
curl -X ADD \
-H "X-Parse-Application-Id: XXXXXX" \
-H "X-Parse-Master-Key: XXXXXXXX" \
-H "Content-Type: application/json" \
https://api.parse.com/1/schemas/City
You seem to have skipped the part of the documentation which deals with adding a schema. To create a new class, according to the documentation, you use the following method in cURL:
curl -X POST \
-H "X-Parse-Application-Id: Your APP Id" \
-H "X-Parse-Master-Key: Your master key" \
-H "Content-Type: application/json" \
-d '
{
  "className": "Your Class name goes here",
  "fields": {
    "Your field name here": {
      "type": "The field data type, e.g. String, Number etc. Add multiple fields if you want"
    }
  }
}' \
https://api.parse.com/1/schemas/[Your class name]
Or in Python:
import json,httplib
connection = httplib.HTTPSConnection('api.parse.com', 443)
connection.connect()
connection.request('POST', '/1/schemas/Game', json.dumps({
"className":"[Your class name]","fields":{"Your field name":{"type":"your field's data type"} }
}), {
"X-Parse-Application-Id": "7Lo3U5Ei75dragCphTineRMoCfwD7UJjd1apkPKX",
"X-Parse-Master-Key": "ssOXw9z1ni1unx8tW5iuaHCmhIObOn4nSW9GHj5W",
"Content-Type": "application/json"
})
result = json.loads(connection.getresponse().read())
print result
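Either way, you can verify that the class was created by fetching its schema back, which the schemas section of the REST guide linked above also supports:
curl -X GET \
-H "X-Parse-Application-Id: Your APP Id" \
-H "X-Parse-Master-Key: Your master key" \
https://api.parse.com/1/schemas/[Your class name]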

Unable to run sensu check in a docker-compose context

I am dockerizing a Sensu infrastructure. Everything goes fine except the execution of checks.
I am using docker-compose according to this structure (docker-compose.yml):
sensu-core:
  build: sensu-core/
  links:
    - redis
    - rabbitmq
sensors-production:
  build: sensors-production/
  links:
    - rabbitmq
uchiwa:
  build: sensu-uchiwa
  links:
    - sensu-core
  ports:
    - "3000:3000"
rabbitmq:
  build: rabbitmq/
redis:
  image: redis
  command: redis-server
My rabbitmq Dockerfile is pretty straightforward:
FROM ubuntu:latest
RUN apt-get -y install wget
RUN wget http://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb
RUN dpkg -i erlang-solutions_1.0_all.deb
RUN wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
RUN apt-key add rabbitmq-signing-key-public.asc
RUN echo "deb http://www.rabbitmq.com/debian/ testing main" | sudo tee /etc/apt/sources.list.d/rabbitmq.list
RUN apt-get update
RUN apt-get -y install erlang rabbitmq-server
CMD /etc/init.d/rabbitmq-server start && \
rabbitmqctl add_vhost /sensu && \
rabbitmqctl add_user sensu secret && \
rabbitmqctl set_permissions -p /sensu sensu ".*" ".*" ".*" && \
cd /var/log/rabbitmq/ && \
ls -1 * | xargs tail -f
So is the uchiwa Dockerfile:
FROM podbox/sensu
RUN apt-get -y install uchiwa
RUN echo ' \
{ \
  "sensu": [ \
    { \
      "name": "Sensu", \
      "host": "sensu-core", \
      "port": 4567, \
      "timeout": 5 \
    } \
  ], \
  "uchiwa": { \
    "host": "0.0.0.0", \
    "port": 3000, \
    "interval": 5 \
  } \
}' > /etc/sensu/uchiwa.json
EXPOSE 3000
CMD /etc/init.d/uchiwa start && \
tail -f /var/log/uchiwa.log
Sensu core runs sensu-server and sensu-api. Here is its Dockerfile:
FROM podbox/sensu
RUN apt-get -y install sensu
RUN echo '{ \
  "rabbitmq": { \
    "host": "rabbitmq", \
    "vhost": "/sensu", \
    "user": "sensu", \
    "password": "secret" \
  }, \
  "redis": { \
    "host": "redis", \
    "port": 6379 \
  }, \
  "api": { \
    "host": "localhost", \
    "port": 4567 \
  } \
}' >> /etc/sensu/config.json
CMD /etc/init.d/sensu-server start && \
/etc/init.d/sensu-api start && \
tail -f /var/log/sensu/sensu-server.log -f /var/log/sensu-api.log
sensors-production runs sensu-client along with a dumb check; here is its Dockerfile:
FROM podbox/sensu
RUN apt-get -y install sensu
RUN echo '{ \
  "rabbitmq": { \
    "host": "rabbitmq", \
    "vhost": "/sensu", \
    "user": "sensu", \
    "password": "secret" \
  } \
}' >> /etc/sensu/config.json
RUN mkdir -p /etc/sensu/conf.d
RUN echo '{ \
  "client": { \
    "name": "wise_oracle", \
    "address": "prod_sensors", \
    "subscriptions": [ \
      "web", "aws" \
    ] \
  } \
}' >> /etc/sensu/conf.d/client.json
RUN echo '{ \
  "checks": { \
    "dumb": { \
      "command": "ls", \
      "subscribers": [ \
        "web" \
      ], \
      "interval": 10 \
    } \
  } \
}' >> /etc/sensu/conf.d/dumb.json
CMD /etc/init.d/sensu-client start && \
tail -f /var/log/sensu/sensu-client.log
Running
docker-compose up -d
Everything goes OK: there are no errors in the logs, and I can access the Uchiwa dashboard, which shows me the defined client alright (keepalive requests seem to be OK). However, no check is available.
I noticed that no check request or check result appears in the logs, as if the Sensu server considers there are no checks to run, and I have no idea why that is.
Could someone tell me what's going on more precisely? Thank you.
Check requests/results are delivered via RabbitMQ; you can browse http://yourrabbitmqserver:15672 to see the queues and subscribed consumers.
Also make sure your server has some check JSON files placed in /etc/sensu/conf.d so that it schedules the checks based on their interval.
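For instance, a minimal sketch of that last point, assuming the server should schedule the same dumb check the client subscribes to, is to append the check definition to the sensu-core Dockerfile as well:
RUN mkdir -p /etc/sensu/conf.d
RUN echo '{ \
  "checks": { \
    "dumb": { \
      "command": "ls", \
      "subscribers": [ \
        "web" \
      ], \
      "interval": 10 \
    } \
  } \
}' >> /etc/sensu/conf.d/dumb.json
With the definition present on the server side, sensu-server should start publishing check requests for the web subscription, and the client's results should then appear in Uchiwa.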