I am following the instructions at: http://apiaxle.com/docs/statistics-and-analytics-in-apiaxle/ . Unfortunately, currently (May 17, 2014) ApiAxle is redirecting me to the endpoint server and I am not getting statistics.
menelaos:~$ curl 'http://localhost:3000/v/api/test/stats?granularity=hour&format_timestamp=ISO'
Response:
{"meta":{"version":1,"status_code":404},"results":{"error":
{"type":"ApiUnknown","message":"No api specified (via subdomain)"}}}
I also tried using the subdomain but that didn't work either:
menelaos:~$ curl 'http://test.api.localhost:3000/v/api/test/stats?granularity=hour&format_timestamp=ISO'
Typically you run multiple instances of apiaxle-proxy (which provides access to your endpoints), and a single instance of apiaxle-api (which provides access to statistics, key creation, and other API management functionality).
For example, you might be running the proxy like this:
apiaxle-proxy -f 1 -p 3000 -q
To run the API, you would run something like this:
apiaxle-api -f 1 -p 5000 -q
Note that the API needs to run on a separate port. Also note that it shouldn't be accessible to the outside world as it doesn't have any authentication.
Using the above example, your curl command would look like this:
curl -H 'content-type: application/json' \
-X GET \
'http://localhost:5000/v1/api/test/stats' \
-d '{"granularity":"hour","format_timestamp":"ISO"}'
Note that the parameters need to be sent as JSON.
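The stats come back in the same meta/results envelope as the error response above; if you have jq installed, piping the output through it makes the result easier to read (the .results path is an assumption based on that envelope):
curl -s -H 'content-type: application/json' -X GET 'http://localhost:5000/v1/api/test/stats' -d '{"granularity":"hour","format_timestamp":"ISO"}' | jq '.results'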
Related
I am developing a dashboard that connects to Splunk via the REST API and displays data on various charts/graphs, etc. In order to get the data I have to make a POST request via curl (Node.js). Everything is working great. However, when I try to make a POST request with a dbxquery, it fails and returns 'fatal dbxquery unknown command.' I was wondering if anyone had encountered this before.
curl -H 'Authorization: Basic auth token' -k https://devfg.com:8089/services/search/jobs -d search=" | dbxquery query=\"SELECT count(*) FROM db.table\" connection=\"connection\"" -d output_mode=json
Are the permissions for the dbxquery command set to be executable from any app? Check under app permissions to see if the command is globally exported.
Alternatively, you may need to escape the *, so \*.
Otherwise, you should be able to run the dbxquery via a curl command.
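For example, the escaped variant of your request would look something like this (only the * inside the SPL changes; everything else is as in your original command):
curl -H 'Authorization: Basic auth token' -k https://devfg.com:8089/services/search/jobs -d search=" | dbxquery query=\"SELECT count(\*) FROM db.table\" connection=\"connection\"" -d output_mode=json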
We would like to get the metadata out of the file system. Is there anything like fsImage which stores such metadata information? We used the following command:
curl -i -X GET -H 'Authorization: Bearer <REDACTED>' 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/?op=LISTSTATUS'
But this only lists one level of metadata. As per the HDFS API documentation, I tried the following command:
curl -i -X GET -H 'Authorization: Bearer <REDACTED>' 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/?op=LISTSTATUS_BATCH&startAfter=<CHILD>'
But it gives an error that it is not implemented.
I checked with Azure support and they mentioned that not all of the methods provided by Hadoop are implemented. So in my case LISTSTATUS_BATCH is not implemented.
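As a workaround, you could walk the tree yourself by calling LISTSTATUS once per directory. A minimal sketch, assuming jq is installed, $TOKEN holds the bearer token, and the store returns the standard WebHDFS FileStatuses/FileStatus fields (pathSuffix and type):
# Recursively print every path under the given directory using LISTSTATUS only.
list_recursive() {
  local path="$1"
  curl -s -H "Authorization: Bearer $TOKEN" \
    "https://<yourstorename>.azuredatalakestore.net/webhdfs/v1${path}?op=LISTSTATUS" |
  jq -r '.FileStatuses.FileStatus[] | .type + " " + .pathSuffix' |
  while read -r type name; do
    echo "${path%/}/${name}"
    # Descend into sub-directories to pick up the next level of metadata.
    if [ "$type" = "DIRECTORY" ]; then
      list_recursive "${path%/}/${name}"
    fi
  done
}
list_recursive "/"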
I am automating some continuous delivery processes that use OpenShift 3.5. They work fine from the command line, but I can hardly find any documentation of how the oc commands map to the OCP REST API. I've figured out how to talk to the API and use what it directly offers. For example, I have a line:
oc process build-template -p APPLICATION_NAME=worldcontrol -n openshift | oc create -f - -n conspiracyspace
That takes a template named "build-template" from the "openshift" namespace and processes it, piping the resulting definition into another namespace to create a few objects, such as the application image. I would appreciate an example of how this could be expressed in HTTP request terms.
Edit:
Following @Graham's hint, here is what I got. The first request gets the contents of the template:
curl -k -v -XGET -H "User-Agent: oc/v3.5.5.15 (linux/amd64) openshift/4b5f317" -H "Authorization: Bearer ...." -H "Accept: application/json, */*" https://example.com/oapi/v1/namespaces/openshift/templates/build-template
Then apparently the oc client expands the parameters internally, and feeds the result into the POST:
curl -k -v -XPOST -H "Content-Type: application/json" -H "User-Agent: oc/v3.5.5.15 (linux/amd64) openshift/4b5f317" -H "Accept: application/json, */*" -H "Authorization: Bearer ...." https://example.com/oapi/v1/namespaces/openshift/processedtemplates
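Presumably the body of that POST is the template JSON itself with the parameter values filled in (the default log level doesn't show it); if you dumped it to a file, the equivalent curl would add something like -d @processed-template.json, where the file name here is just an example.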
Run the oc command with the option --loglevel=10. This will show you what REST API calls it makes underneath and thus you can work out what you need to do to do the same thing with just the REST API. Do note that certain things may be partly done in the oc client, rather than delegating to a REST API endpoint call.
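For example, to watch the calls behind the pipeline above, you could run something like:
oc process build-template -p APPLICATION_NAME=worldcontrol -n openshift --loglevel=10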
I did this, and at the very end of the output from the CLI, I saw this:
service "trade4-65869977-9d56-49a5-afa2-4a547df82d5c" created
deploymentconfig "trade4-65869977-9d56-49a5-afa2-4a547df82d5c" created
When piping to oc create -f -, then, the CLI must be inspecting the resulting template and creating each object in the objects array. No evidence of those calls was output to my command window, other than the two "created" statements.
So to fully automate this through the REST API, we would still need to parse that objects array returned by processtemplates and POST to the appropriate endpoints, correct?
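For what it's worth, a rough sketch of that last step, assuming the processed template was saved to processed.json and that (as in the output above) it only contains Service and DeploymentConfig objects, which on 3.5 should sit under /api/v1 and /oapi/v1 respectively:
# Create each object from the processed template's objects array (hypothetical sketch).
jq -c '.objects[]' processed.json | while read -r obj; do
  kind=$(printf '%s' "$obj" | jq -r '.kind')
  case "$kind" in
    Service)          url="https://example.com/api/v1/namespaces/conspiracyspace/services" ;;
    DeploymentConfig) url="https://example.com/oapi/v1/namespaces/conspiracyspace/deploymentconfigs" ;;
    *)                echo "unhandled kind: $kind" >&2; continue ;;
  esac
  curl -k -XPOST -H "Content-Type: application/json" \
       -H "Authorization: Bearer ...." \
       -d "$obj" "$url"
done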
I am trying to debug a REST extension in MarkLogic by using xdmp:log() inside the XQuery. It seems I am having some issues invoking a POST call in general, actually...? A bit confused now.
I used the exact example code from the MarkLogic documentation here
I installed it via Roxy > deploy > ext
It is there when I look into http://host:port/v1/config/resources
The PUT command provided in the doc works and returns "Done".
But I cannot get the POST statement to dump the xdmp:log messages into ErrorLog.txt on the server.
I tried several curl commands:
curl --anyauth --user admin:admin -X POST http://host:8040/LATEST/resources/example
curl: (52) Empty reply from server
Question: What is the correct curl command to trigger the example's POST function so something shows up in the log?
This is a curl issue. You need to specify a request body for curl to send.
curl --anyauth --user admin:admin -X POST -d '{"key":"value"}' http://host:8040/LATEST/resources/example
Or if you want to send an empty body just do this:
curl --anyauth --user admin:admin -X POST -d '' http://host:8040/LATEST/resources/example
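Depending on how the extension reads the request body, it can also help to say what you are sending, e.g. with a JSON body (this is just the same call plus a Content-Type header):
curl --anyauth --user admin:admin -X POST -H "Content-Type: application/json" -d '{"key":"value"}' http://host:8040/LATEST/resources/example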
I'm trying to use the REST API on Couchbase 2.2 and I'm finding two things that I cannot seem to do via REST:
Init a new cluster when no other nodes exist.
CLI version:
couchbase-cli cluster-init -u admin -p mypw -c localhost:8091 --cluster-init-ramsize=1024
Remove a healthy node from the cluster.
CLI version:
couchbase-cli rebalance -u admin -p mypw -c 10.10.1.10:8091 --server-remove=10.10.1.12
As for removing a node, I've tried:
curl -u admin:mypw -d otpNode=ns_1#10.10.1.12 \
http://10.10.1.10:8091/controller/ejectNode
Which returns: "Cannot remove active server."
I've also tried:
curl -s -u Administrator:myclusterpw \
-d 'ejectedNodes=ns_1%4010.10.1.12&knownNodes=ns_1%4010.10.1.10%2Cns_1%4010.10.1.11' \
http://10.10.1.10:8091/controller/rebalance
Which returns: {"mismatch":1} (presumably due to the node actually not being marked for ejection?)
Am I crazy, or are there no ways to do these things using curl?
I spun up a two-node cluster on AWS (10.170.76.236 and 10.182.151.86), and I was able to remove node 10.182.151.86 using the below curl request:
curl -v -u Administrator:password -X POST 'http://10.182.151.86:8091/controller/rebalance' -d 'ejectedNodes=ns_1#10.182.151.86&knownNodes=ns_1#10.182.151.86,ns_1#10.170.76.236'
That removes the node and performs the rebalance, leaving only 10.170.76.236 as the single node. Running the request below results in 'Cannot remove active server', as you have experienced.
curl -u Administrator:password -d otpNode=ns_1#10.170.76.236 http://10.170.76.236:8091/controller/ejectNode
This is because you can't remove the last node, as you can't perform a rebalance; this issue is covered here: http://www.couchbase.com/issues/browse/MB-7517
I left in the real IPs that I used so the curl requests are as clear as possible; I've terminated the nodes now, though :)
For the cluster init, it's a combo of:
curl -X POST -u admin:password -d username=Administrator \
-d password=letmein \
-d port=8091 \
http://localhost:8091/settings/web
and
curl -X POST -u admin:password -d memoryQuota=400 \
http://localhost:8091/pools/default
The ticket raised against this indicates that the ejectNode command itself won't work by design.
Seemingly, the server needs to be in either pending or failover state to use that command.
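So if you do want to go the ejectNode route, one sketch that should satisfy that requirement is to fail the node over first and then eject it (assuming the /controller/failOver endpoint is available on your version; the rebalance approach above is usually the cleaner route):
curl -u Administrator:password -d otpNode=ns_1#10.10.1.12 \
http://10.10.1.10:8091/controller/failOver
curl -u Administrator:password -d otpNode=ns_1#10.10.1.12 \
http://10.10.1.10:8091/controller/ejectNode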