Does HTTPie have the equivalent of curl's -d option?

I want to query a REST API with HTTPie. I usually do this with curl, which lets me specify maxKeys and startAfterFilename, e.g.
curl --location --request GET -G \
"https://some.thing.some.where/data/v1/datasets/mydataset/versions/2/files" \
-d maxKeys=100 \
-d startAfterFilename=YYYMMDD_HHMMSS.file \
--header "Authorization: verylongtoken"
How can I use those -d options in HTTPie?

In your case the command looks like this:
http -F https://some.thing.some.where/data/v1/datasets/mydataset/versions/2/files \
    Authorization:verylongtoken \
    startAfterFilename=="YYYMMDD_HHMMSS.file" \
    maxKeys=="100"
Here -F (--follow) is the counterpart of curl's --location, and param==value pairs are sent as URL query parameters, which is exactly what curl's -G -d combination builds.
There is also a whole bunch of ways to pass data with HTTPie. For example:
http POST http://example.com/posts/3 \
    Origin:example.com \   # :   HTTP headers
    name="John Doe" \      # =   string
    q=="search" \          # ==  URL parameters (?q=search)
    age:=29 \              # :=  for non-strings
    list:='[1,3,4]' \      # :=  JSON
    file@file.bin \        # @   attach file
    token=@token.txt \     # =@  read from file (text)
    user:=@user.json       # :=@ read from file (JSON)
Or, in the case of forms:
http --form POST example.com \
    name="John Smith" \
    cv=@document.txt
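To see why the == syntax matches curl's -G -d, here is a small illustration (not HTTPie internals) of how the query string in the original curl command is assembled:

```python
from urllib.parse import urlencode

# HTTPie's `param==value` pairs end up as URL query parameters,
# the same thing curl's `-G -d` combination produces.
base = "https://some.thing.some.where/data/v1/datasets/mydataset/versions/2/files"
params = {"maxKeys": "100", "startAfterFilename": "YYYMMDD_HHMMSS.file"}
url = base + "?" + urlencode(params)
print(url)
```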

Related

How can we filter the Insights data for multiple specific campaigns in Facebook Marketing?

I'm trying to get the Insights of an ad account filtered by multiple specific campaigns. I was able to filter by a single campaign.
Here is the code I tried for a single campaign:
https://graph.facebook.com/v12.0/act_YOUR_ACCOUNT_ID/insights?fields=actions,reach,impressions,clicks,cpc,spend&filtering=[{field: "campaign.id",operator:"CONTAIN", value: '123456789'}]
You have 2 options:
You can build a list of campaigns matching your condition and then use the IN operator instead of the CONTAIN operator, like this:
https://graph.facebook.com/v12.0/act_YOUR_ACCOUNT_ID/insights?fields=actions,reach,impressions,clicks,cpc,spend&filtering=[{field: "campaign.id",operator:"IN", value: ['id1', 'id2', 'id3']}]
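Building that filtering parameter by hand is error-prone; a small sketch of assembling it in code (the campaign IDs are placeholders):

```python
import json

# Build the `filtering` query parameter for the IN variant.
campaign_ids = ["id1", "id2", "id3"]
filtering = json.dumps(
    [{"field": "campaign.id", "operator": "IN", "value": campaign_ids}]
)
print(filtering)
```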
You can try to use a batch request (see the Graph API batch requests documentation), like this:
curl \
  -F 'access_token=<ACCESS_TOKEN>' \
  -F 'batch=[
    {
      "method": "GET",
      "relative_url": "v12.0/act_YOUR_ACCOUNT_ID/insights?fields=impressions,spend,ad_id,adset_id&filtering=[{field: \"campaign.id\", operator: \"CONTAIN\", value: \"123456789\"}]"
    },
    {
      "method": "GET",
      "relative_url": "v12.0/act_YOUR_ACCOUNT_ID/insights?fields=impressions,spend,ad_id,adset_id&filtering=[{field: \"campaign.id\", operator: \"CONTAIN\", value: \"222222222\"}]"
    }
  ]' \
  'https://graph.facebook.com'
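If you assemble the batch in code instead of by hand, the quoting issues go away. A sketch, assuming the same two campaigns (account and campaign IDs are placeholders):

```python
import json

# Build the `batch` form field for the Graph API batch endpoint.
def insight_request(campaign_id):
    filtering = json.dumps(
        [{"field": "campaign.id", "operator": "CONTAIN", "value": campaign_id}]
    )
    return {
        "method": "GET",
        "relative_url": (
            "v12.0/act_YOUR_ACCOUNT_ID/insights"
            "?fields=impressions,spend,ad_id,adset_id&filtering=" + filtering
        ),
    }

batch = json.dumps([insight_request(c) for c in ("123456789", "222222222")])
print(batch)
```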

Yocto - 'tools-profile' in IMAGE_FEATURES (added via EXTRA_IMAGE_FEATURES) is not a valid image feature

I am trying to install tools-profile in Yocto, but I get an error saying that tools-profile is not a valid option. How can I debug this? How can I check why it is failing? Here is how I tried it.
Here is my bblayers.conf
LCONF_VERSION = "7"
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS ?= " \
  ${TOPDIR}/../poky/meta \
  ${TOPDIR}/../poky/meta-poky \
  ${TOPDIR}/../poky/meta-yocto-bsp \
  ${TOPDIR}/../layers/meta-gplv2 \
  ${TOPDIR}/../layers/meta-xilinx/meta-xilinx-bsp \
  ${TOPDIR}/../layers/openembedded-core/meta \
  ${TOPDIR}/../layers/meta-openembedded/meta-oe \
  ${TOPDIR}/../layers/meta-openembedded/meta-multimedia \
  ${TOPDIR}/../layers/meta-openembedded/meta-networking \
  ${TOPDIR}/../layers/meta-openembedded/meta-python \
  ${TOPDIR}/../layers/meta-custom \
  "
BBLAYERS_NON_REMOVABLE ?= " \
  ${TOPDIR}/../poky/meta \
  ${TOPDIR}/../poky/meta-poky \
  "
In the local.conf, I have added the following.
EXTRA_IMAGE_FEATURES = "debug-tweaks tools-profile"
Probably too late, but I will answer this question since it happened to me yesterday.
The issue is that you have an image that inherits only image.bbclass. If you look into image.bbclass you will see that it doesn't know anything about tools-profile, but core-image.bbclass does.
All you have to do is change inherit image to inherit core-image in the image recipe that is throwing the error.
In my case it was swupdate-image.
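For example, in a hypothetical image recipe (the recipe name and contents are illustrative):

```
# my-image.bb -- illustrative recipe name
# Before (triggers "'tools-profile' in IMAGE_FEATURES ... is not a valid image feature"):
#inherit image

# After (core-image.bbclass declares tools-profile as a valid feature):
inherit core-image
```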

How to update the origination_urls when creating a new Trunk using the Twilio API

Thanks to this tutorial: https://www.twilio.com/docs/sip-trunking/api/trunks#action-create I am able to create, read, update and delete (CRUD) trunks on my Twilio account.
To create a new trunk I do it like so:
curl -XPOST https://trunking.twilio.com/v1/Trunks \
-d "FriendlyName=MyTrunk" \
-u '{twilio account sid}:{twilio auth token}'
and this is the response I get when creating a new trunk:
{
  "trunks": [
    {
      "sid": "TKfa1e5a85f63bfc475c2c753c0f289932",
      "account_sid": "ACxxx",
      ....
      ....
      "date_updated": "2015-09-02T23:23:11Z",
      "url": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932",
      "links": {
        "origination_urls": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls",
        "credential_lists": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/CredentialLists",
        "ip_access_control_lists": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/IpAccessControlLists",
        "phone_numbers": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/PhoneNumbers"
      }
    }
  ],
  "meta": {
    "page": 0,
    "page_size": 50,
    ... more
  }
}
What I am interested in from the response is:
"links": {
"origination_urls": "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls",
Now if I perform a GET request on that link like:
curl -G "https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls" -u '{twilio account sid}:{twilio auth token}'
I get back this:
{
  "meta": {
    "page": 0,
    "page_size": 50,
    "first_page_url":
    ....
  },
  "origination_urls": []
}
Now my goal is to update the origination_urls. So using the same approach I used to update a trunk I have tried:
curl -XPOST https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls \
-d "origination_urls=sip:200@somedomain.com" \
-u '{twilio account sid}:{twilio auth token}'
But that fails. I have also tried:
curl -XPOST https://trunking.twilio.com/v1/Trunks/TKfa1e5a85f63bfc475c2c753c0f289932/OriginationUrls \
-d "origination_urls=['someUrl']" \
-u '{twilio account sid}:{twilio auth token}'
and that fails too. How can I update the origination_urls?
I was missing Priority, FriendlyName, SipUrl, Weight and Enabled in my POST request. I finally got it to work by doing:
curl -XPOST "https://trunking.twilio.com/v1/Trunks/TKfae10...../OriginationUrls" \
  -d "Priority=10" \
  -d "FriendlyName=Org1" \
  -d "Enabled=true" \
  -d "SipUrl=sip:test@domain.com" \
  -d "Weight=10" \
  -u '{twilio account sid}:{twilio auth token}'
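A sketch of the form body those -d options produce (all five fields were required in my case; the values are placeholders):

```python
from urllib.parse import urlencode

# Form fields for the OriginationUrls POST, URL-encoded the way curl -d sends them.
fields = {
    "Priority": "10",
    "FriendlyName": "Org1",
    "Enabled": "true",
    "SipUrl": "sip:test@domain.com",
    "Weight": "10",
}
body = urlencode(fields)
print(body)
```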

Parse: Creating a New Class Programmatically

Is it possible to create a new Class programmatically (i.e. not from the dashboard) via any of the API's or the Parse CLI?
The REST API appears to have functionality to fetch, modify and delete individual Schemas (classes) but not to add them. (https://parse.com/docs/rest/guide#schemas).
Hoping for something like the following:
curl -X ADD \
-H "X-Parse-Application-Id: XXXXXX" \
-H "X-Parse-Master-Key: XXXXXXXX" \
-H "Content-Type: application/json" \
https://api.parse.com/1/schemas/City
You seem to have skipped the part of the documentation that deals with adding a schema. To create a new class, according to the documentation, you use the following method in cURL:
curl -X POST \
-H "X-Parse-Application-Id: Your APP Id" \
-H "X-Parse-Master-Key: Your master key" \
-H "Content-Type: application/json" \
-d '
{
  "className": "Your class name goes here",
  "fields": {
    "Your field name here": {
      "type": "The field data type, e.g. String, Number. Add multiple fields if you want"
    }
  }
}' \
https://api.parse.com/1/schemas/[Your class name]
Or in Python:
import json, http.client

connection = http.client.HTTPSConnection('api.parse.com', 443)
connection.connect()
connection.request('POST', '/1/schemas/Game', json.dumps({
    "className": "[Your class name]",
    "fields": {"Your field name": {"type": "your field data type"}}
}), {
    "X-Parse-Application-Id": "7Lo3U5Ei75dragCphTineRMoCfwD7UJjd1apkPKX",
    "X-Parse-Master-Key": "ssOXw9z1ni1unx8tW5iuaHCmhIObOn4nSW9GHj5W",
    "Content-Type": "application/json"
})
result = json.loads(connection.getresponse().read())
print(result)
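The payload itself is easy to get wrong when written inline; a sketch of the schema document that POST /1/schemas/<ClassName> expects (class and field names are placeholders):

```python
import json

# Schema payload for creating a Parse class programmatically.
payload = {
    "className": "City",
    "fields": {
        "name": {"type": "String"},
        "population": {"type": "Number"},
    },
}
body = json.dumps(payload)
print(body)
```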

Unable to run sensu check in a docker-compose context

I am dockerizing sensu infrastructure. Everything goes fine except the execution of checks.
I am using docker-compose according to this structure (docker-compose.yml):
sensu-core:
  build: sensu-core/
  links:
    - redis
    - rabbitmq
sensors-production:
  build: sensors-production/
  links:
    - rabbitmq
uchiwa:
  build: sensu-uchiwa
  links:
    - sensu-core
  ports:
    - "3000:3000"
rabbitmq:
  build: rabbitmq/
redis:
  image: redis
  command: redis-server
My rabbitmq Dockerfile is pretty straightforward:
FROM ubuntu:latest
RUN apt-get -y install wget
RUN wget http://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb
RUN dpkg -i erlang-solutions_1.0_all.deb
RUN wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
RUN apt-key add rabbitmq-signing-key-public.asc
RUN echo "deb http://www.rabbitmq.com/debian/ testing main" | tee /etc/apt/sources.list.d/rabbitmq.list
RUN apt-get update
RUN apt-get -y install erlang rabbitmq-server
CMD /etc/init.d/rabbitmq-server start && \
rabbitmqctl add_vhost /sensu && \
rabbitmqctl add_user sensu secret && \
rabbitmqctl set_permissions -p /sensu sensu ".*" ".*" ".*" && \
cd /var/log/rabbitmq/ && \
ls -1 * | xargs tail -f
So does the uchiwa Dockerfile:
FROM podbox/sensu
RUN apt-get -y install uchiwa
RUN echo ' \
{ \
"sensu": [ \
{ \
"name": "Sensu", \
"host": "sensu-core", \
"port": 4567, \
"timeout": 5 \
} \
], \
"uchiwa": { \
"host": "0.0.0.0", \
"port": 3000, \
"interval": 5 \
} \
}' > /etc/sensu/uchiwa.json
EXPOSE 3000
CMD /etc/init.d/uchiwa start && \
tail -f /var/log/uchiwa.log
Sensu core runs sensu-server and sensu-api. Here is its Dockerfile:
FROM podbox/sensu
RUN apt-get -y install sensu
RUN echo '{ \
"rabbitmq": { \
"host": "rabbitmq", \
"vhost": "/sensu", \
"user": "sensu", \
"password": "secret" \
}, \
"redis": { \
"host": "redis", \
"port": 6379 \
}, \
"api": { \
"host": "localhost", \
"port": 4567 \
} \
}' >> /etc/sensu/config.json
CMD /etc/init.d/sensu-server start && \
/etc/init.d/sensu-api start && \
tail -f /var/log/sensu/sensu-server.log /var/log/sensu/sensu-api.log
sensors-production runs sensu-client along with a dumb metric. Here is its Dockerfile:
FROM podbox/sensu
RUN apt-get -y install sensu
RUN echo '{ \
"rabbitmq": { \
"host": "rabbitmq", \
"vhost": "/sensu", \
"user": "sensu", \
"password": "secret" \
} \
}' >> /etc/sensu/config.json
RUN mkdir -p /etc/sensu/conf.d
RUN echo '{ \
"client": { \
"name": "wise_oracle", \
"address": "prod_sensors", \
"subscriptions": [ \
"web", "aws" \
] \
} \
}' >> /etc/sensu/conf.d/client.json
RUN echo '{ \
"checks": { \
"dumb": { \
"command": "ls", \
"subscribers": [ \
"web" \
], \
"interval": 10 \
} \
} \
}' >> /etc/sensu/conf.d/dumb.json
CMD /etc/init.d/sensu-client start && \
tail -f /var/log/sensu/sensu-client.log
Running
docker-compose up -d
Everything goes OK: no errors in the logs, and I can access the uchiwa dashboard, which shows the defined client (keepalive requests seem to be OK). However, no check is available.
I noticed that no check request / check result appears in the logs, as if the sensu server considered there were no checks to run, and I have no idea why that is.
Could someone tell me, more precisely, what's going on? Thank you.
Check requests and results are delivered via RabbitMQ; you can go to http://yourrabbitmqserver:15672 to see the queues and subscribed consumers.
Also make sure your server has check JSON files placed in /etc/sensu/conf.d so that it can schedule the checks based on their interval.
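In classic Sensu, subscription (non-standalone) checks are scheduled by the server, so the dumb check from the question would also need a definition on the sensu-core side. A sketch of such a file, e.g. /etc/sensu/conf.d/dumb.json, mirroring the question's check (this is an assumption about the intended setup):

```json
{
  "checks": {
    "dumb": {
      "command": "ls",
      "subscribers": ["web"],
      "interval": 10
    }
  }
}
```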