Frequently when developing with MessageHub, I find that I want to purge my development data from a topic.
How can I purge a MessageHub topic?
This question is similar to Purge Kafka Queue but differs because that question is directed at Apache Kafka and I'm not sure if Message Hub supports the Kafka command line tools.
The only way to purge a Kafka topic from within Message Hub is to delete and recreate the topic. You can do this manually using the Web UI provided by the Message Hub service. Alternatively you can use the REST API for administering Kafka topics. The advantage of using the REST API is that it can be scripted.
The Message Hub REST API is documented in Swagger here: https://github.com/ibm-messaging/message-hub-docs/blob/master/kafka-administration-api/KafkaTopicManagement.yaml. If you are not a Swagger guru, the REST call to delete a topic is:
DELETE /admin/topics/<TOPICNAME>
You will need to specify your Message Hub API key (from VCAP_SERVICES) in the X-Auth-Token header to authenticate the request. A sample curl invocation would look like:
curl -k -v -X DELETE -H 'Content-Type: application/json' -H 'Accept: */*' \
-H 'X-Auth-Token: yourapikeyhere' \
https://admin-endpoint-goes-here/admin/topics/<TOPICNAME>
The one gotcha is that Kafka topic deletion is asynchronous. So before you can re-create the topic, you need to make sure that the deletion process for the original topic has completed. This can be achieved by polling the following endpoint until it returns a 404 (Not Found) status code:
GET /topics/<TOPICNAME>
(Again the X-Auth-Token header must be present).
In curl:
curl -k -v -H 'Accept: application/json' \
-H 'X-Auth-Token: yourapikeyhere' \
https://admin-endpoint-goes-here/topics/<TOPICNAME>
To (re-)create a topic requires the following REST request (also with an X-Auth-Token):
POST /admin/topics
The body of the request contains a JSON document with parameters describing the topic to create. For example:
{
"name": "TOPICNAME",
"partitions": 2
}
In curl this would be:
curl -k -v -H 'Content-Type: application/json' -H 'Accept: */*' \
-H 'X-Auth-Token: yourapikeyhere' \
-d '{ "name": "TOPICNAME", "partitions": 2 }' \
https://admin-endpoint-goes-here/admin/topics
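Since the advantage of the REST API is that it can be scripted, the whole purge (delete, wait for the asynchronous deletion to finish, then recreate) can be wrapped in a small shell script. This is only a sketch, using the same placeholder endpoint, API key, topic name and partition count as the curl examples above, with no error handling:
#!/bin/sh
TOPIC="TOPICNAME"
APIKEY="yourapikeyhere"
ADMIN="https://admin-endpoint-goes-here"

# 1. Delete the topic.
curl -k -s -X DELETE -H "X-Auth-Token: $APIKEY" "$ADMIN/admin/topics/$TOPIC"

# 2. Topic deletion is asynchronous, so poll until the topic returns 404.
until [ "$(curl -k -s -o /dev/null -w '%{http_code}' \
           -H "X-Auth-Token: $APIKEY" "$ADMIN/topics/$TOPIC")" = "404" ]; do
  sleep 5
done

# 3. Recreate the topic.
curl -k -s -X POST -H 'Content-Type: application/json' -H "X-Auth-Token: $APIKEY" \
     -d "{ \"name\": \"$TOPIC\", \"partitions\": 2 }" \
     "$ADMIN/admin/topics"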
While trying to use the encryption/decryption feature of Spring Cloud Config Server with Pivotal Cloud Foundry's p-config-server service, which is configured with a symmetric key for encryption, I am getting a 403 Forbidden response when calling /decrypt on the config server.
I am able to call the /encrypt endpoint successfully to encrypt values using the sample curl below:
curl --location --request POST 'https://config-xxxx.apps.xxx.com/encrypt' \
--header 'Authorization: bearer <cf oauth_token here>' \
--header 'Content-Type: text/plain' \
--data-raw 'sample data'
But when trying to decrypt those values using /decrypt, I am getting a 403 Forbidden error:
{
"error": "access_denied",
"error_description": "invalid issuer"
}
Sample curl for decryption -
curl --location --request POST 'https://config-xxxx.apps.xxx.com/decrypt' \
--header 'Authorization: bearer <cf oauth_token here>' \
--header 'Content-Type: text/plain' \
--data-raw '<encrypted value from previous step>'
In Pivotal's config server documentation there are references to /encrypt, but nothing related to /decrypt (Pivotal config server docs).
Any pointers?
In the Pivotal/Tanzu Spring Cloud Services (commercial product), the /encrypt API is exposed to anyone with the admin scope or to anyone that is a Space Developer in the space of the service.
The /decrypt endpoint is not specifically exposed, so you're not allowed to access it with the same credentials, which is why you're seeing a 403.
My quick read of the code is that you'd need a token with the scope config_server_<guid>.read to access that endpoint, where <guid> is the config server's service instance guid (run cf service --guid <name> to obtain it).
To make that work, you'd need to get a token from the bound service or from a service key; the latter is easier:
1. Run cf create-service-key <service_instance> decrypt-key
2. Run cf service-key <service_instance> decrypt-key
3. Run export TOKEN=$(curl -vv <access_token_uri> -d 'grant_type=client_credentials' -d 'client_id=<client_id>' -d 'client_secret=<client_secret>' | jq -r .access_token) where the values in <...> are from the output of #2.
4. Run curl -vv '<uri>/decrypt' -H "Authorization: bearer $TOKEN" -H 'Content-type: text/plain' -d '<encrypted-value>'
These depend on a Bash shell. You can do them on Windows, but the commands will vary. It also uses jq to make extracting the token easier. You could split the command in step #3 into two steps, fetching with curl and manually exporting TOKEN.
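Putting those steps together, a rough sketch in shell (assuming a Bash-compatible shell with the cf CLI, curl and jq installed; the service instance name is a placeholder, and the number of banner lines that cf service-key prints before the JSON may vary by CLI version):
#!/bin/sh
SERVICE_INSTANCE="my-config-server"   # placeholder: your config server instance name

cf create-service-key "$SERVICE_INSTANCE" decrypt-key

# Strip the banner lines printed before the JSON credentials block.
KEY_JSON=$(cf service-key "$SERVICE_INSTANCE" decrypt-key | tail -n +3)

ACCESS_TOKEN_URI=$(echo "$KEY_JSON" | jq -r .access_token_uri)
CLIENT_ID=$(echo "$KEY_JSON" | jq -r .client_id)
CLIENT_SECRET=$(echo "$KEY_JSON" | jq -r .client_secret)
URI=$(echo "$KEY_JSON" | jq -r .uri)

# Fetch a client_credentials token, then call /decrypt with it.
TOKEN=$(curl -s "$ACCESS_TOKEN_URI" \
  -d 'grant_type=client_credentials' \
  -d "client_id=$CLIENT_ID" \
  -d "client_secret=$CLIENT_SECRET" | jq -r .access_token)

curl -v "$URI/decrypt" \
  -H "Authorization: bearer $TOKEN" \
  -H 'Content-Type: text/plain' \
  -d '<encrypted value from previous step>'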
I'm trying out Rest Proxy in Kafka.
When I type the following URL in my browser, http://192.168.0.30:8082/topics, I get the expected results:
["__confluent.support.metrics","_confluent-command","_confluent-controlcenter-5-
2-2-1-MetricsAggregateStore-changelog","_confluent-controlcenter-5-2-2-1-actual-
group-consumption-rekey","_confluent-controlcenter-5-2-2-1-expected-group-
consumption-rekey","_confluent-controlcenter-5-2-2-1-metrics-trigger-measurement-
rekey","_confluent-ksql-default__command_topic","_confluent-metrics","_confluent-
monitoring","_schemas","connect-configs","connect-offsets","connect-
statuses","default_ksql_processing_log","test","test1"]
My question: I'd prefer not to use curl. I have the following curl command examples. If I want to use only my browser, as above, how can I adapt them?
I tried this, but... (How can I consume my topic test?)
Just an example from a document:
# Create a consumer for binary data, starting at the beginning of the topic's
# log. Then consume some data from a topic.
$ curl -X POST -H "Content-Type: application/vnd.kafka.v1+json" \
--data '{"id": "my_instance", "format": "binary", "auto.offset.reset": "smallest"}' \
http://localhost:8082/consumers/my_binary_consumer
{"instance_id":"my_instance","base_uri":"http://localhost:8082/consumers/my_binar
y_consumer/instances/my_instance"}
$ curl -X GET -H "Accept: application/vnd.kafka.binary.v1+json" \
http://localhost:8082/consumers/my_binary_consumer/instances/my_instance/topics/test
[{"key":null,"value":"S2Fma2E=","partition":0,"offset":0}]
From the address bar, a browser can only issue GET requests.
You could use tools like Postman or Insomnia to issue other HTTP requests.
(For further reference)
I used Postman in order to use REST Proxy for Kafka.
1. I subscribed to a topic test.
$ curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" --data
'{"topics":["test"]}' \
http://192.168.0.30:8082/consumers/my_json_consumer/instances/my_consumer_instanc
e/subscription
( I changed this CURL to fit into Postman. )
2. Then, I consumed the topic.
$ curl -X GET -H "Accept: application/vnd.kafka.json.v2+json" \
http://192.168.0.30:8082/consumers/my_json_consumer/instances/my_consumer_instanc
e/records
(I changed this CURL to fit into Postman.)
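For reference, the complete v2 flow in curl also includes creating the consumer instance before subscribing. A sketch against the same host, consumer group and instance names as above (the auto.offset.reset value is an assumption):
# 1. Create the consumer instance (required before subscribing).
curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" \
  --data '{"name": "my_consumer_instance", "format": "json", "auto.offset.reset": "earliest"}' \
  http://192.168.0.30:8082/consumers/my_json_consumer

# 2. Subscribe the instance to the topic "test".
curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" \
  --data '{"topics":["test"]}' \
  http://192.168.0.30:8082/consumers/my_json_consumer/instances/my_consumer_instance/subscription

# 3. Poll for records (it may take more than one call before data is returned).
curl -X GET -H "Accept: application/vnd.kafka.json.v2+json" \
  http://192.168.0.30:8082/consumers/my_json_consumer/instances/my_consumer_instance/records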
We would like to get the metadata out of the file system. Is there anything like fsImage that stores such metadata? We used the following command:
curl -i -X GET -H 'Authorization: Bearer <REDACTED>' 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/?op=LISTSTATUS'
But this lists only one level of metadata. As per the HDFS API documentation, we tried the following command:
curl -i -X GET -H 'Authorization: Bearer <REDACTED>' 'https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/?op=LISTSTATUS_BATCH&startAfter=<CHILD>'
But it gives an error that it is not implemented.
I checked with Azure support and they mentioned that not all the methods provided by Hadoop are implemented. So in my case LISTSTATUS_BATCH is not implemented.
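Given that only LISTSTATUS is available, one possible workaround is to list the store one directory level at a time and recurse into subdirectories. A hypothetical sketch (assuming bash, curl and jq, and a valid bearer token; the store name and token are placeholders as above):
#!/bin/bash
STORE="https://<yourstorename>.azuredatalakestore.net/webhdfs/v1"
TOKEN="<REDACTED>"

# List "$1" (a directory path ending in /) and recurse into any subdirectories.
list_recursive() {
  curl -s -H "Authorization: Bearer $TOKEN" "$STORE$1?op=LISTSTATUS" \
    | jq -c '.FileStatuses.FileStatus[]' \
    | while read -r entry; do
        name=$(jq -r .pathSuffix <<< "$entry")
        type=$(jq -r .type <<< "$entry")
        echo "$1$name"
        if [ "$type" = "DIRECTORY" ]; then
          list_recursive "$1$name/"
        fi
      done
}

list_recursive "/"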
I want to change the number of replications (pods) for a Deployment using the Kubernetes API (v1beta1).
For now I'm able to increase the replicas from CLI using the command:
kubectl scale --replicas=3 deployment my-deployment
In the Kubernetes API documentation it's mentioned that there is a PUT request to do the same
PUT /apis/extensions/v1beta1/namespaces/{namespace}/deployments/{name}/scale
but there is no example of how to do it.
I'm not sure what to send in the request body in order to perform the update.
The easiest way is to retrieve the actual data first with:
GET /apis/extensions/v1beta1/namespaces/{namespace}/deployments/{name}/scale
This will give you a YAML or JSON object which you can modify and send back with the PUT request.
With curl the roundtrip looks like this:
API_URL="http://kubernetes:8080/apis/extensions/v1beta1/namespaces/{namespace}/deployments/{name}/scale"
curl -H 'Accept: application/json' $API_URL > scale.json
# edit scale.json
curl -X PUT -d#scale.json -H 'Content-Type: application/json' $API_URL
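For illustration, the Scale object returned by the GET (and sent back with the PUT after editing spec.replicas) looks roughly like the following for the extensions/v1beta1 API; the names and values are placeholders, and the real response also carries a status section:
{
  "kind": "Scale",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "my-deployment",
    "namespace": "default"
  },
  "spec": {
    "replicas": 3
  }
}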
Alternatively you could just use a PATCH call:
PAYLOAD='[{"op":"replace","path":"/spec/replicas","value":"3"}]'
curl -X PATCH -d$PAYLOAD -H 'Content-Type: application/json-patch+json' $API_URL
The previous solution didn't work for me on kubernetes 1.14. I had to use a different API endpoint.
Here's the full example:
#!/bin/sh
set -e
NUMBER_OF_REPLICAS="$1"
CURRENT_NAMESPACE="$2"
DEPLOYMENT_NAME="$3"
KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
KUBE_CACRT_PATH="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
PAYLOAD="{\"spec\":{\"replicas\":$NUMBER_OF_REPLICAS}}"
curl --cacert $KUBE_CACRT_PATH \
-X PATCH \
-H "Content-Type: application/strategic-merge-patch+json" \
-H "Authorization: Bearer $KUBE_TOKEN" \
--data "$PAYLOAD" \
https://$KUBERNETES_SERVICE_HOST/apis/apps/v1/namespaces/$CURRENT_NAMESPACE/deployments/$DEPLOYMENT_NAME
Note that $KUBERNETES_SERVICE_HOST is automatically set by Kubernetes inside each pod.
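For instance, if the script above were saved as scale.sh (a hypothetical name) inside a pod whose service account is allowed to patch deployments, it could be invoked like this:
# Scale my-deployment in my-namespace to 3 replicas (placeholder names).
./scale.sh 3 my-namespace my-deployment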
There appears to be an endpoint on the Wunderlist API for updating the Wunderlist user (at https://a.wunderlist.com/api/v1/user) which does not appear in the official docs.
User information can be updated by sending a PUT request to it, e.g.:
curl \
-H "X-Access-Token: $WL_ACCESS_TOKEN" \
-H "X-Client-ID: $WL_CLIENT_ID" \
-H "Content-Type: application/json" \
-XPUT -d '{"revision":1234,"name":"some-name"}' \
https://a.wunderlist.com/api/v1/user
Assuming the data is valid, this returns with HTTP 200.
Is this endpoint supported?