Clear output from rest api with jq - rest

I want to use a REST API but am discouraged by the barely readable JSON output. Can jq format JSON into simpler, clearer data? If jq does not do this, does anyone know of a tool that would make REST API output more readable to humans?

There are many tools that will allow you to "pretty-print" JSON to make it easier to read. In the case of jq, simply presenting the JSON to jq . (e.g. curl ... | jq .) will pretty-print it, unless it is ridiculously large (i.e., too large to fit into the computer's memory), in which case you probably won't want to pretty-print it anyway.
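For instance, jq's pretty-printing can be tried locally before wiring it up to curl; a minimal sketch with an illustrative inline blob (in practice the input would come from curl):

```shell
# Pretty-print a compact JSON blob; jq . is the identity filter,
# so the data passes through unchanged but nicely indented.
echo '{"name":"jq","tags":["cli","json"]}' | jq .
```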
If the issue is that even pretty-printing the JSON is insufficient to make it comprehensible, then you might find a simple "schema inference engine" appropriate. The one I wrote at https://gist.github.com/pkoppstein/a5abb4ebef3b0f72a6ed (schema.jq) produces an easy-to-understand structural schema. Examples, including examples of how to invoke it, are available at https://github.com/stedolan/jq/wiki/X---Experimental-Benchmarks
The simplest way to use it with curl would be to download it as schema.jq, uncomment the very last line, and then invoke it along the following lines:
curl .... | jq -f schema.jq
E.g.:
curl -Ss https://gitlab.cern.ch/slac_sandbox/ubjson/-/raw/504d419d3e6a4ab87488fcc750bb79c6f5471491/benchmarks/files/jeopardy/jeopardy.json |
jq --arg nullable true -f schema.jq
{
  "air_date": "string",
  "answer": "string",
  "category": "string",
  "question": "string",
  "round": "string",
  "show_number": "string",
  "value": "string"
}

Related

Rest API Best Practice - Single action and Bulk

Let my server have the ability to perform an action called 'A'.
Now my server needs an extra ability: performing bulk 'A' actions.
The route on the server is:
/entity/:entityId/'A'/:'A'Id
Adding the bulk ability left me with two approaches:
1) Expose 2 routes for each method:
/entity/:entityId/'A'/:'A'Id and
/entity/:entityId/'A' with a list of 'A' ids in the request's body.
2) Drop the 'A'Id parameter and add a boolean query parameter called bulk to the first route:
/entity/:entityId/'A'/?bulk=boolean
If bulk == true, look for 'A'Id[] in the request's body;
else if bulk == false, look for an id entry in the request's body.
I feel the 1st approach is better, but I'd love to hear thoughts, or maybe a very different approach.
Any opinion is welcome,
Thanks.
Query params are good for GET methods like:
curl -X GET 'host.com/megacorp/employee?employee_id[]=1&employee_id[]=2'
But for POST and PUT methods it's better to use something like this:
curl -XPOST host.com/megacorp/employee/_bulk -d '{"data":[
{"id":"1", "name": "John Doe"},
{"id":"2", "name": "Jane Doe"}
]}'
And to POST or PUT 1 resource - simply provide 1 object in request, like:
curl -XPUT host.com/megacorp/employee/1 -d '{
"name": "JOHN DOE"
}'
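When scripting such calls, the bulk payload can be assembled with jq from a list of ids instead of hand-concatenating JSON strings; a sketch reusing the hypothetical host and route from the example above:

```shell
# Build {"data":[{"id":"1"},{"id":"2"}]} from a JSON array of ids.
ids='["1","2"]'
payload=$(echo "$ids" | jq -c '{data: [.[] | {id: .}]}')
echo "$payload"
# The result would then be sent as the request body:
#   curl -XPOST host.com/megacorp/employee/_bulk -d "$payload"
```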

GCS storage create folder API

Having trouble creating a folder in a bucket in Google Cloud Storage through an API.
I have already tried the curl call for the API with all varying possibilities for the request JSON format.
ABC is not really the organization; I used it to obfuscate real data. I have also set up the variable $access_token using a gcloud call to get an access token.
curl -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $access_token" -d '{"displayName":"[vicks]"}' "https://cloudresourcemanager.googleapis.com/v2/folders?parent=ABC"
{
  "error": {
    "code": 400,
    "message": "field [Folder.display_name] has issue [invalid format]",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.BadRequest",
        "fieldViolations": [
          {
            "field": "Folder.display_name",
            "description": "invalid format"
          }
        ]
      }
    ]
  }
}
I am expecting the API call to create the directory, but it fails with an error about the display_name format even though I have followed the document at https://cloud.google.com/resource-manager/docs/creating-managing-folders
Unfortunately, the link you provided is about the API that will help you create a folder within your organization and not inside a Cloud Storage bucket. This can be seen by looking at the first graph in the page you linked.
Thankfully, there is still a solution that might help you achieve what you’re looking for. While Google Cloud Storage objects are stored in a flat namespace, tools like gsutil or even the Google Cloud Console can provide a hierarchical view of the objects by following simple naming rules (it is essentially an emulation of subdirectories).
1) To treat an object as a directory, you can create an empty object that ends with “/”. If you want a subdirectory called ‘abc’ then you can call the object ‘abc/’ and gsutil will treat it like an empty directory. To insert an object/file into the subdirectory, you may simply copy an object/file to the destination URL including ‘abc’ such as “gs://your-bucket/abc”.
2) Another less commonly used way is to create an empty object that ends with “$folder$”. If you want a subdirectory called ‘abc’ then you can call the object ‘abc$folder$’ and gsutil will treat it like an empty directory. Similarly to the previous point, to insert an object/file into the subdirectory, you may simply copy an object/file to the destination URL including ‘abc’ such as “gs://your-bucket/abc”.
I would highly suggest you read through this link to get a great understanding of how Google Cloud Storage subdirectories work. Additionally, I’ve found another previously answered stack question that is very relevant to your question and can be of great help to you as well.
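As a concrete sketch of approach 1), the placeholder object can be created directly with the Cloud Storage JSON API; the bucket name below is hypothetical, and note that the trailing slash in the object name must be percent-encoded in the query string:

```shell
# URL-encode the placeholder object name "abc/" (the slash must become %2F).
name=$(printf 'abc/' | jq -sRr '@uri')
url="https://storage.googleapis.com/upload/storage/v1/b/my-bucket/o?uploadType=media&name=${name}"
echo "$url"
# Then create the zero-byte object (requires a valid $access_token):
#   curl -X POST -H "Authorization: Bearer $access_token" -H "Content-Length: 0" "$url"
```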

Can't define date type index on Elasticsearch

I've got a document sent to elasticsearch that looks something like this:
{
  "created": 1543247749419,
  "name": "something",
  "person": {
    "created": 1543247012491,
    ...
  }
}
Both created fields are epoch_millis date format (Timestamp in milliseconds). I tried basically 3 things:
Add the document using curl like this:
curl -H "Content-Type: application/json" -X POST "http://ipaddress:9200/somedb" -d "@/some/path"
So far so good, but the index set the type of my created as long, not date.
Copy the mapping from the Kibana interface, change the long to date, and create a new db for it:
{
  "mapping": {
    "somedb2": {
      "properties": {
        "created": {
          "type": "date",
          "format": "x"
        },
and send the data like this:
curl -H "Content-Type: application/json" -X POST "http://ipaddress:9200/somedb2" -d "@/some/path"
Then I received this error message from elasticsearch
{ "error": {
"root_cause": [
{
"type": "mapper_parsing_exception",
"reason":"Root mapping definition has unsupported parameters: [mapping : {properties={created={type=date, format=x},
Right now I don't really know what to do. Searching on the interwebz basically only talks about the formatting section and not much about configuring or creating the index. Do I need a plugin for elasticsearch to handle date?
JSON (which is the data format of Elasticsearch) doesn't have an explicit date type; dates are always treated as strings or numbers, even when delivered in another way.
So, actually, if you do NOT specify a format for a date field, the default "strict_date_optional_time||epoch_millis" is taken into consideration - which includes epoch millis, and is therefore already correct in your case.
That's why everybody is just talking about formatting and not converting ;)
I figured it out.
It turned out the curl command I made had an error: the db name in the URL didn't match the name used in the mapping. There was an error in the JSON too, which made it harder to understand. I also changed the format to epoch_millis instead of x.
Now it works like a charm, and I made my first dashboard in Kibana.
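For reference, a sketch of what a working create-index request could look like: the mapper_parsing_exception above came from wrapping the properties in extra "mapping"/index-name levels, whereas the create index API takes "mappings" at the root (host and index name are placeholders; on Elasticsearch versions before 7 a type name level is also required under "mappings"):

```shell
# Validate the mapping body locally with jq before sending it.
mapping='{
  "mappings": {
    "properties": {
      "created": { "type": "date", "format": "epoch_millis" }
    }
  }
}'
echo "$mapping" | jq -e '.mappings.properties.created.type == "date"'
# Create the index with it (placeholder host):
#   curl -X PUT -H "Content-Type: application/json" "http://ipaddress:9200/somedb2" -d "$mapping"
```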

Azure remaining credit value from API

Whenever I log on to the Azure web portal I get a notification saying $xx.xx remaining credit.
I would like to understand how I can retrieve this value using the Azure API, az cli, or PowerShell.
I have tried the following, as advised by support, but unfortunately without luck:
accesstoken=$(curl -s --header "accept: application/json" --request POST "https://login.windows.net/$TennantID/oauth2/token" \
  --data-urlencode "resource=https://management.core.windows.net/" \
  --data-urlencode "client_id=$ClientID" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "client_secret=$ClientSecret" | jq -r '.access_token')
subscriptionURI="https://management.azure.com/subscriptions/$SubscriptionID?api-version=2016-09-01"
curl -s --header "authorization: Bearer $accesstoken" --request GET "$subscriptionURI" | jq .
The spendingLimit field just says "On":
{
  "authorizationSource": "RoleBased",
  "subscriptionPolicies": {
    "spendingLimit": "On",
    ...redacted (too long)
When you see "$xx credit remaining", it means your account has a spending limit. There are many offerings which use spending limit mode - for example, Azure Pass or Microsoft Azure for Students Starter. Here is the list for reference (https://azure.microsoft.com/en-us/support/legal/offer-details/).
In fact, Azure provides some REST APIs to let you work with Billing and Consumption, but they are limited for spending limit offerings. The advanced billing and consumption REST APIs are only fully supported for CSP (Cloud Service Provider) partner and Enterprise Agreement customers. That said, with your current subscription you cannot retrieve the balance summary (e.g. https://learn.microsoft.com/en-us/rest/api/billing/enterprise/billing-enterprise-api-balance-summary).
However, there is a workaround: get the usage details and then perform a SUM. To do so, first retrieve the billing period name:
https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Billing/billingPeriods?api-version=2017-04-24-preview
Here is the sample response of my billing periods:
"value": [
  {
    "id": "/subscriptions/2dd8cb59-ed12-4755-a2bc-356c212fbafc/providers/Microsoft.Billing/billingPeriods/201805-1",
    "type": "Microsoft.Billing/billingPeriods",
    "name": "201805-1",
    "properties": {
      "billingPeriodStartDate": "2018-02-06",
      "billingPeriodEndDate": "2018-03-05"
    }
  },
  {
    "id": "/subscriptions/2dd8cb59-ed12-4755-a2bc-356c212fbafc/providers/Microsoft.Billing/billingPeriods/201804-1",
    "type": "Microsoft.Billing/billingPeriods",
    "name": "201804-1",
    "properties": {
      "billingPeriodStartDate": "2018-01-06",
      "billingPeriodEndDate": "2018-02-05"
    }
  },
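The billing period names can be pulled out of that response with jq; a sketch with inline sample data mimicking the response shape:

```shell
# List billing period names from the billingPeriods response.
resp='{"value":[{"name":"201805-1"},{"name":"201804-1"}]}'
echo "$resp" | jq -r '.value[].name'
```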
...then get the usage details with the billing period name:
https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Billing/billingPeriods/201805-1/providers/Microsoft.Consumption/usageDetails?api-version=2017-04-24-preview
{
  "id": "subscriptions/2dd8cb59-ed12-4755-a2bc-356c212fbafc/providers/Microsoft.Billing/billingPeriods/201805-1/providers/Microsoft.Consumption/usageDetails/cdd74390-374b-53cf-2260-fc8ef10a2be6",
  "name": "cdd74390-374b-53cf-2260-fc8ef10a2be6",
  "type": "Microsoft.Consumption/usageDetails",
  "tags": null,
  "properties": {
    "billingPeriodId": "subscriptions/2dd8cb59-ed12-4755-a2bc-356c212fbafc/providers/Microsoft.Billing/billingPeriods/201805-1",
    "usageStart": "2018-02-06T00:00:00Z",
    "usageEnd": "2018-02-07T00:00:00Z",
    "instanceId": "/subscriptions/2dd8cb59-ed12-4755-a2bc-356c212fbafc/resourceGroups/securitydata/providers/Microsoft.Storage/storageAccounts/2d1993southeastasia",
    "instanceName": "2d1993southeastasia",
    "meterId": "c1635534-1c1d-4fc4-b090-88fc2672ef87",
    "usageQuantity": 0.002976,
    "pretaxCost": 7.1424E-05,
    "currency": "USD",
    "isEstimated": false,
    "subscriptionGuid": "2dd8cb59-ed12-4755-a2bc-356c212fbafc",
    "meterDetails": null
  }
},
This returns a list of each resource's usage and cost in JSON format, and you need to SUM all the pretaxCost values. Doing this in PowerShell is more complicated because you need to initialize objects, deserialize the JSON, and then do the math; it is technically possible, but requires some C# experience.
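Since curl and jq are already in the pipeline above, the SUM can also be done without PowerShell; a sketch with inline sample data mimicking the usageDetails response shape:

```shell
# Sum pretaxCost across all usage detail entries with jq.
usage='{"value":[{"properties":{"pretaxCost":0.5}},{"properties":{"pretaxCost":1.25}}]}'
echo "$usage" | jq '[.value[].properties.pretaxCost] | add'
```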

Marketo "Import Lead" fails with error 610 Requested resource not found

I'm trying to batch update a bunch of existing records through Marketo's REST API. According to the documentation, the Import Lead function seems to be ideal for this.
In short, I'm getting the error "610 Resource Not Found" upon using the curl sample from the documentation. Here are some steps I've taken.
Fetching the auth_token is not a problem:
$ curl "https://<identity_path>/identity/oauth/token?grant_type=client_credentials&client_id=<my_client_id>&client_secret=<my_client_secret>"
Proving the token is valid, fetching a single lead isn't a problem either:
# Fetch the record - outputs just fine
$ curl "https://<rest_path>/rest/v1/lead/1.json?access_token=<access_token>"
# output:
{
  "requestId": "ab9d#12345abc45",
  "result": [
    {
      "id": 1,
      "updatedAt": "2014-09-18T13:00:00+0000",
      "lastName": "Potter",
      "email": "harry@hogwartz.co.uk",
      "createdAt": "2014-09-18T12:00:00+0000",
      "firstName": "Harry"
    }
  ],
  "success": true
}
Now here's the pain: when I try to upload a CSV file using the Import Lead function, like so:
# "Import Lead" function
$ curl -i -F format=csv -F file=@test.csv -F access_token=<access_token> \
  "https://<rest_path>/rest/bulk/v1/leads.json"
# results in the following error
{
  "requestId": "f2b6#14888a7385a",
  "success": false,
  "errors": [
    {
      "code": "610",
      "message": "Requested resource not found"
    }
  ]
}
The error codes documentation only states Requested resource not found, nothing else. So my question is: what is causing the 610 error code - and how can I fix it?
Further steps I've tried, with no success:
Placing the access_token as a URL parameter (e.g. appending '?access_token=xxx' to the URL), with no effect.
Stripping down the CSV (yes, it's comma separated) to a bare minimum (e.g. only the fields 'id' and 'lastName')
Looked at the question Marketo API and Python, Post request failing
Verified that the CSV doesn't have some funky line endings
I have no idea if there are specific requirements for the CSV file, like column order, though...
Any tips or suggestions?
Error code 610 can represent something akin to a '404' for URLs under the REST endpoint, i.e. your rest_path. I'm guessing this is why you are getting it: Marketo's docs show REST paths as starting with '/rest', yet the REST endpoint itself already ends with '/rest', so if you follow their directions literally you get a URL like xxxx.mktorest.com/rest/rest/v1/lead/..., i.e. with '/rest' twice. This is not correct - your URL must have only one '/rest'.
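A small sketch of de-duplicating that prefix when composing the URL in shell (the instance host below is hypothetical):

```shell
# Join the base endpoint and the documented path without doubling "/rest".
rest_base="https://123-ABC-456.mktorest.com/rest"
doc_path="/rest/v1/leads.json"       # path as printed in the docs
url="${rest_base}${doc_path#/rest}"  # strip the duplicated prefix before joining
echo "$url"
```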
I went through the same trouble, just want to share some points that help resolve my problem.
Bulk API endpoints are not prefixed with ‘/rest’ like other endpoints.
Bulk Import uses the same permissions model as the Marketo REST API and does not require any additional special permissions in order to use, though specific permissions are required for each set of endpoints.
As @Ethan Herdrick suggested, the endpoints in the documentation are sometimes prefixed with an extra /rest; make sure to remove that.
If you're a beginner and need step-by-step instructions to set up permissions for Marketo REST API: Quick Start Guide for Marketo REST API