We would like to get the metadata out of the file system. Is there anything like fsimage which stores such metadata information? We used the following command:
curl -i -X GET -H 'Authorization: Bearer <REDACTED>' 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/?op=LISTSTATUS'
But this lists only one level of metadata. As per the HDFS API documentation, we tried the following command:
curl -i -X GET -H 'Authorization: Bearer <REDACTED>' 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/?op=LISTSTATUS_BATCH&startAfter=<CHILD>'
But it returns an error saying that it is not implemented.
I checked with Azure support and they mentioned that not all of the methods provided by Hadoop are implemented; in my case, LISTSTATUS_BATCH is not implemented.
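As a workaround, since only LISTSTATUS is available, the directory tree can be walked client-side with repeated calls. A minimal bash sketch, assuming jq is installed and that $STORE (the store name) and $TOKEN (a valid bearer token) are set; both are placeholders:

#!/usr/bin/env bash
# Recursively list every path in the store using repeated LISTSTATUS calls.
# Assumes: $STORE = store name, $TOKEN = bearer token, jq installed.
list_recursive() {
  local path="$1"
  curl -s -H "Authorization: Bearer $TOKEN" \
    "https://$STORE.azuredatalakestore.net/webhdfs/v1$path?op=LISTSTATUS" |
  jq -r '.FileStatuses.FileStatus[] | "\(.type) \(.pathSuffix)"' |
  while read -r type name; do   # naive split: breaks on names containing spaces
    echo "$path/$name"
    [ "$type" = "DIRECTORY" ] && list_recursive "$path/$name"   # descend into subdirectories
  done
}
list_recursive ""   # start at the root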
I want to know how to get this data from the GitHub API.
I know I can get this data as raw, but is there any way to get it using the GitHub API?
Here is the requested file:
https://github.com/graphql-compose/graphql-compose-examples/blob/master/examples/northwind/data/csv/employees.csv
As seen here, try
curl -GLOf -H "Authorization: token ${GITHUB_TOKEN?not set}" \
-H "Accept: application/vnd.github.v4.raw" \
"https://api.github.com/repos/$ORG/$REPO/contents/$FILEPATH" -d ref="$REVISION"
In your case, for a public repository:
curl -GLOf -H "Accept: application/vnd.github.v4.raw" \
"https://api.github.com/repos/graphql-compose/graphql-compose-examples/contents/examples/northwind/data/csv/employees.csv"
I just tested it, and got a file employees.csv with its raw content in it.
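For completeness: without the raw media type, the contents API wraps the file in JSON, with the body base64-encoded in a content field, so the same result can be had by decoding that field. A sketch, assuming jq and GNU base64 are available (macOS base64 uses -D instead of -d):

curl -s "https://api.github.com/repos/graphql-compose/graphql-compose-examples/contents/examples/northwind/data/csv/employees.csv" \
  | jq -r '.content' | base64 -d > employees.csv   # .content is base64 with embedded newlines; base64 -d ignores them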
I need to programmatically retrieve and set the default route of a Kibana space. In the Kibana application, this can be set on the Stack Management -> Advanced Settings page. I looked at the Elasticsearch REST documentation, but could not find a suitable API. Any help is appreciated.
Not sure which version you are referring to, but the documentation link in the original post points to 7.13, and the suggestion below (from https://discuss.elastic.co/t/kibana-7-5-server-defaultroute-parameter/212191/4) works with 7.13:
these advanced settings are stored as a Saved Object of type config. So you can update them using the Saved Object APIs
So here's how I used the API to 'set' and 'retrieve' the defaultRoute in and from my space:
$ curl --user elastic:******* -X PUT "localhost:${ES_KB_PORT}/s/${my_space}/api/saved_objects/config/7.13.2" -H "kbn-xsrf: true" -H "Content-Type: application/json" -d '{"attributes": {"defaultRoute": "/app/dashboards#/view/285828e0-b713-11eb-aba9-211e5623385d"}}'
{"id":"7.13.2","type":"config","updated_at":"2021-08-02T08:53:19.824Z","version":"WzEzOTksMl0=","namespaces":["mz81"],"attributes":{"defaultRoute":"/app/dashboards#/view/285828e0-b713-11eb-aba9-211e5623385d"}}
$ curl --user elastic:******* -X POST "localhost:${ES_KB_PORT}/s/${my_space}/api/saved_objects/_export" -H "kbn-xsrf: true" -H "Content-Type: application/json" -d '{"type": "config"}'
{"attributes":{"buildNum":40943,"defaultRoute":"/app/dashboards#/view/285828e0-b713-11eb-aba9-211e5623385d"},"coreMigrationVersion":"7.13.2","id":"7.13.2","migrationVersion":{"config":"7.13.0"},"references":[],"sort":[1627896123137,107],"type":"config","updated_at":"2021-08-02T09:22:03.137Z","version":"WzE0MDYsMl0="}
{"exportedCount":1,"missingRefCount":0,"missingReferences":[]}
I'm trying out Rest Proxy in Kafka.
When I type the following URL into my browser, http://192.168.0.30:8082/topics,
I get the expected results :
["__confluent.support.metrics","_confluent-command","_confluent-controlcenter-5-
2-2-1-MetricsAggregateStore-changelog","_confluent-controlcenter-5-2-2-1-actual-
group-consumption-rekey","_confluent-controlcenter-5-2-2-1-expected-group-
consumption-rekey","_confluent-controlcenter-5-2-2-1-metrics-trigger-measurement-
rekey","_confluent-ksql-default__command_topic","_confluent-metrics","_confluent-
monitoring","_schemas","connect-configs","connect-offsets","connect-
statuses","default_ksql_processing_log","test","test1"]
My question: I would rather not use curl. I have the following curl command examples. If I want to use only my browser, as above, how can I change them?
I tried this, but... (How can I consume my topic test?)
**Just an example from a document:**
# Create a consumer for binary data, starting at the beginning of the topic's
# log. Then consume some data from a topic.
$ curl -X POST -H "Content-Type: application/vnd.kafka.v1+json" \
--data '{"id": "my_instance", "format": "binary", "auto.offset.reset": "smallest"}' \
http://localhost:8082/consumers/my_binary_consumer
{"instance_id":"my_instance","base_uri":"http://localhost:8082/consumers/my_binar
y_consumer/instances/my_instance"}
$ curl -X GET -H "Accept: application/vnd.kafka.binary.v1+json" \
http://localhost:8082/consumers/my_binary_consumer/instances/my_instance/topics/test
[{"key":null,"value":"S2Fma2E=","partition":0,"offset":0}]
Browsers can only issue GET requests.
You could use tools like Postman or Insomnia to issue other HTTP requests.
(For further reference)
I used Postman in order to use REST Proxy for Kafka.
1. I subscribed to a topic test.
$ curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" \
  --data '{"topics":["test"]}' \
  http://192.168.0.30:8082/consumers/my_json_consumer/instances/my_consumer_instance/subscription
(I changed this curl to fit into Postman.)
2. Then, I consumed the topic.
$ curl -X GET -H "Accept: application/vnd.kafka.json.v2+json" \
  http://192.168.0.30:8082/consumers/my_json_consumer/instances/my_consumer_instance/records
(I changed this curl to fit into Postman.)
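Note that in the REST Proxy v2 flow the consumer instance has to be created before step 1 (that call is not shown above), and it is good practice to delete the instance when finished. Hedged curl equivalents, reusing the same host and names:

# Create the consumer instance (prerequisite for the subscription call in step 1)
curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" \
  --data '{"name": "my_consumer_instance", "format": "json", "auto.offset.reset": "earliest"}' \
  http://192.168.0.30:8082/consumers/my_json_consumer

# Delete the instance when done, so the proxy releases its resources
curl -X DELETE -H "Content-Type: application/vnd.kafka.v2+json" \
  http://192.168.0.30:8082/consumers/my_json_consumer/instances/my_consumer_instance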
I am automating some continuous delivery processes that use OpenShift 3.5. They work fine from the command line, but I can hardly find any documentation of how the oc commands map to the OCP REST API. I've figured out how to talk to the API and use what it directly offers. For example, I have a line:
oc process build-template -p APPLICATION_NAME=worldcontrol -n openshift | oc create -f - -n conspiracyspace
That takes a template named "build-template" from the "openshift" namespace and processes it, piping the resulting definition into another namespace to build a few objects, such as the application image. I would appreciate an example of how this could be expressed in HTTP request terms.
Edit:
Following @Graham's hint, here is what I got. The first request gets the contents of the template:
curl -k -v -XGET -H "User-Agent: oc/v3.5.5.15 (linux/amd64) openshift/4b5f317" -H "Authorization: Bearer ...." -H "Accept: application/json, */*" https://example.com/oapi/v1/namespaces/openshift/templates/build-template
Then apparently the oc client expands the parameters internally, and feeds the result into the POST:
curl -k -v -XPOST -H "Content-Type: application/json" -H "User-Agent: oc/v3.5.5.15 (linux/amd64) openshift/4b5f317" -H "Accept: application/json, */*" -H "Authorization: Bearer ...." https://example.com/oapi/v1/namespaces/openshift/processedtemplates
Run the oc command with the option --loglevel=10. This will show you what REST API calls it makes underneath and thus you can work out what you need to do to do the same thing with just the REST API. Do note that certain things may be partly done in the oc client, rather than delegating to a REST API endpoint call.
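For example, applied to the first half of the pipeline from the question:

oc process build-template -p APPLICATION_NAME=worldcontrol -n openshift --loglevel=10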
I did this, and at the very end of the output from the CLI, I saw this:
service "trade4-65869977-9d56-49a5-afa2-4a547df82d5c" created
deploymentconfig "trade4-65869977-9d56-49a5-afa2-4a547df82d5c" created
When piping to oc create -f -, the CLI must then be inspecting the resulting template and creating each object in the objects array. No evidence of those calls was output to my command window, other than the two "created" statements.
So to fully automate this through the REST API, we would still need to parse that objects array returned by processedtemplates and POST each object to the appropriate endpoint, correct?
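A rough sketch of that parse-and-POST step, assuming jq is available, that processed-template.json holds the request body, and that every kind maps to a core v1 endpoint by naive lowercasing plus an "s" (it does not in general: DeploymentConfig, for one, lives under oapi/v1, so real code needs a kind-to-endpoint lookup table):

# POST the parameters to processedtemplates, then create each returned object.
curl -sk -XPOST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d @processed-template.json \
  https://example.com/oapi/v1/namespaces/openshift/processedtemplates |
jq -c '.objects[]' |
while read -r obj; do
  # Naive kind -> endpoint mapping; replace with a proper lookup table.
  kind=$(printf '%s' "$obj" | jq -r '.kind | ascii_downcase + "s"')
  curl -sk -XPOST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
    -d "$obj" "https://example.com/api/v1/namespaces/conspiracyspace/$kind"
done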
Frequently when developing with MessageHub, I find that I want to purge my development data from a topic.
How can I purge a MessageHub topic?
This question is similar to Purge Kafka Queue but differs because that question is directed at Apache Kafka, and I'm not sure if Message Hub supports the Kafka command-line tools.
The only way to purge a Kafka topic from within Message Hub is to delete and recreate the topic. You can do this manually using the Web UI provided by the Message Hub service. Alternatively you can use the REST API for administering Kafka topics. The advantage of using the REST API is that it can be scripted.
The Message Hub REST API is documented in Swagger here: https://github.com/ibm-messaging/message-hub-docs/blob/master/kafka-administration-api/KafkaTopicManagement.yaml. If you are not a Swagger guru, the REST call to delete a topic is:
DELETE /admin/topics/<TOPICNAME>
You will need to specify your Message Hub API key (from VCAP_SERVICES) using the X-Auth-Token header to authenticate the request. So a sample curl implementation would look like:
curl -k -v -X DELETE -H 'Content-Type: application/json' -H 'Accept: */*' \
-H 'X-Auth-Token: yourapikeyhere' \
https://admin-endpoint-goes-here/admin/topics/<TOPICNAME>
The one gotcha is that Kafka topic deletion is asynchronous. So before you can re-create the topic, you need to make sure that the deletion process for the original topic has completed. This can be achieved by polling the following endpoint until it returns a 404 (Not Found) status code:
GET /topics/<TOPICNAME>
(Again the X-Auth-Token header must be present).
In curl:
curl -k -v -H 'Accept: application/json' \
-H 'X-Auth-Token: yourapikeyhere' \
https://admin-endpoint-goes-here/topics/<TOPICNAME>
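A minimal polling sketch for that check, using curl's -w to extract the HTTP status code (TOPICNAME, the API key, and the endpoint are placeholders, as above):

# Poll until the topic is gone (404); then it is safe to re-create it.
until [ "$(curl -ks -o /dev/null -w '%{http_code}' \
           -H 'X-Auth-Token: yourapikeyhere' \
           https://admin-endpoint-goes-here/topics/TOPICNAME)" = "404" ]; do
  sleep 2   # brief pause between polls
done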
To (re-)create a topic requires the following REST request (also with an X-Auth-Token):
POST /admin/topics
The body of the request contains a JSON document with parameters describing the topic to create. For example:
{
"name": "TOPICNAME",
"partitions": 2
}
In curl this would be:
curl -k -v -H 'Content-Type: application/json' -H 'Accept: */*' \
-H 'X-Auth-Token: yourapikeyhere' \
-d '{ "name": "TOPICNAME", "partitions": 2 }' \
https://admin-endpoint-goes-here/admin/topics