UI console to browse topics on Message Hub - ibm-cloud

I have a Message Hub instance on Bluemix, and am able to produce / consume messages off it. I was looking for a quick, reasonable way to browse topics / messages to see what's going on. Something along the lines of kafka-topics-ui.
I installed kafka-topics-ui locally, but could not get it to connect to Message Hub. I used the kafka-rest-url value from the MessageHub credentials in the kafka-topics-ui configuration file (env.js), but could not figure out where to provide the API key.
Alternatively, in the Bluemix UI, under Kibana, I can see log entries for creating the topic. Unfortunately, I could not see log entries for messages in the topic (perhaps I'm looking in the wrong place or have the wrong filters?).
My guess is I'm missing something basic. Is there a way to either:
configure a tool such as kafka-topics-ui to connect to Message Hub, or
browse topic messages easily?
Cheers.

According to Using the Kafka REST API on Bluemix you need an additional header in all API requests:
-H "X-Auth-Token: APIKEY"
A quick solution is to edit the kafka-topics-ui code and include your token in every request. Another solution would be to use a Chrome plugin that can inject the above header. For a more formal solution, I have opened a ticket on GitHub.
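To verify the key works before wiring it into the UI, you can hit the REST endpoint directly (a sketch, assuming Message Hub follows the Kafka REST Proxy conventions; replace the placeholders with the kafka-rest-url and api_key values from your service credentials):
# List topics via the Message Hub Kafka REST endpoint
curl -H "X-Auth-Token: APIKEY" https://<kafka-rest-url>/topics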

Related

Subject does not have subject-level compatibility configured

We use Kafka, Kafka Connect and Schema Registry in our stack. The version is 2.8.1 (Confluent 6.2.1).
We use Kafka Connect's configs (key.converter and value.converter) with the value io.confluent.connect.avro.AvroConverter.
It registers a new schema for topics automatically. But there's an issue: AvroConverter doesn't specify subject-level compatibility for a new schema,
and an error appears when we try to get the config for the schema via the REST API /config endpoint: Subject 'schema-value' does not have subject-level compatibility configured
If we specify the request parameter defaultToGlobal then the global compatibility is returned. But that doesn't work for us because we cannot specify it in the request; we are using a 3rd-party UI: AKHQ.
How can I specify subject-level compatibility when registering a new schema via AvroConverter?
Last I checked, the only properties that can be provided to any of the Avro serializer configs that affect the Registry HTTP client are the url, whether to auto-register, and whether to use the latest schema version.
There's no property (or even method call) that sets either the subject-level or the global config during schema registration.
You're welcome to check out the source code to verify this.
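For illustration, these are roughly the Connect-side knobs in question (a sketch of connector properties; note that none of them touch compatibility):
# Registry-related settings the AvroConverter accepts (registry URL is a placeholder)
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
value.converter.auto.register.schemas=true
value.converter.use.latest.version=false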
But it doesn't work for us because we cannot specify it in the request. We are using 3rd party UI: AKHQ
Doesn't sound like a Connect problem. Create a PR for the AKHQ project to fix the request.
As of 2021-10-26, using the akhq 0.18.0 jar with confluent-6.2.0, the schema registry view in AKHQ works fine.
Note: I also tried confluent-6.2.1 and saw exactly the same error. So, you may want to switch back to 6.2.0 to give it a try.
P.S.: I'm using all of this only for my local dev env (VirtualBox, Ubuntu).
@OneCricketeer is correct.
Unfortunately, there is no way to specify subject-level compatibility in AvroConverter.
I see only two solutions:
Override AvroConverter to add a property and the functionality to send an additional request to the /config/{subject} API after registering the schema.
Contribute to AKHQ to support the defaultToGlobal parameter. But in this case, we would also need to backport the schema-registry RestClient. GitHub issue
The second solution is preferable until the converter lets the user specify the compatibility level in its settings. Without that setting in the native AvroConverter, we would have to use a custom converter for every client that writes a schema, which takes a lot of effort.
To me it seems strange that the client cannot set the compatibility at the moment of registering the schema and instead has to use a separate request for it.
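For reference, the request that the first solution would automate can also be issued by hand (a sketch, assuming a registry at localhost:8081 and the subject schema-value from the question; pick the compatibility level you actually need):
# Set subject-level compatibility for an existing subject via the Schema Registry REST API
curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"compatibility": "BACKWARD"}' \
  http://localhost:8081/config/schema-value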

Spark monitoring REST API in YARN cluster mode

When Spark is deployed in YARN cluster mode, how should I issue the Spark monitoring REST API calls described at http://spark.apache.org/docs/latest/monitoring.html ?
Does YARN have an API that takes a REST call such as (I already know the app-id)
http://localhost:4040/api/v1/applications/[app-id]/jobs
proxies it to the correct driver port, and returns the JSON back to me? By "me" I mean my client.
Assume (or take it as by design) that I cannot talk to the driver machine directly for security reasons.
Please have a look at the Spark docs:
- REST API
Yes, with the latest API it's available.
From this article:
It turns out there is a third, surprisingly easy option which is not documented. Spark has a hidden REST API which handles application submission, status checking and cancellation.
In addition to viewing the metrics in the UI, they are also available as JSON. This gives developers an easy way to create new visualizations and monitoring tools for Spark. The JSON is available for both running applications and in the history server. The endpoints are mounted at /api/v1. E.g., for the history server, they would typically be accessible at http://<server-url>:18080/api/v1, and for a running application, at http://localhost:4040/api/v1.
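For example, you can list the applications the history server knows about and then drill into a running app's jobs (a sketch; the history-server host is a placeholder):
# List applications known to the history server (placeholder host)
curl http://<server-url>:18080/api/v1/applications
# Jobs of a running application, queried against the driver's UI port
curl http://localhost:4040/api/v1/applications/[app-id]/jobs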
These are the other options available:
Livy jobserver
Submit Spark jobs remotely to an Apache Spark cluster Linux using Livy
Other options include
Triggering spark jobs with REST
This is what worked for me:
In the YARN Resource Manager UI, click on the "application manager" link for the running application and note the URL that it directs to.
For me the link was something like
http://RM:20888/proxy/application_1547506848892_0002/
Append "api/v1/applications/application_1547506848892_0002" to the URL for the API.
For the above case, the API call is
curl "http://RM:20888/proxy/application_1547506848892_0002/api/v1/applications/application_1547506848892_0002"

spring cloud auto refresh config server property

I have configured spring cloud config which picks up property from Github. If I post to /refresh, I am also able to get the updated value in my application.
Now I want to get properties updated automatically. That means I don't want to hit refresh API to get the changes reflected in my application from Github property file to my application.
Do I need to implement Rabbitmq and cloud bus for it or there is any other simple way to do it?
Also, the documentation says that we need to add a dependency on the spring-cloud-config-monitor library for push notifications.
http://projects.spring.io/spring-cloud/spring-cloud.html#_push_notifications_and_spring_cloud_bus
But I did not find any such dependency in Maven to be added. Not sure if my understanding is wrong. Please help.
You would need a Config server with Spring Cloud Bus and RabbitMQ (or Kafka or Redis) support.
RabbitMQ with the following exchange:
name: springCloudBus
type: topic
durable: true
autoDelete: false
internal: false
The config server would send data to the topic once it receives push events from Git (GitHub, Bitbucket, GitLab) via a webhook to http://<config-server>/monitor
And a client application with Config and RabbitMQ libraries, subscribed to the previous exchange to receive messages of the properties that need to be refreshed.
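For a quick local test, the Git push can also be simulated by POSTing to the monitor endpoint by hand (a sketch; the header and payload shape assume the GitHub-style webhook that spring-cloud-config-monitor understands, and a config server on localhost:8888):
# Simulate a GitHub push webhook against the config server's monitor endpoint
curl -X POST http://localhost:8888/monitor \
  -H "Content-Type: application/json" \
  -H "X-Github-Event: push" \
  -d '{"commits": [{"modified": ["application.yml"]}]}'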
More can be found in my blog post at http://tech.asimio.net/2017/02/02/Refreshable-Configuration-using-Spring-Cloud-Config-Server-Spring-Cloud-Bus-RabbitMQ-and-Git.html, with a brief explanation of the configuration, logs and full source code for the Config server and client app.
They are not generally available yet. You need to add http://repo.spring.io/milestone/ as a Maven repository and use a milestone release.
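In pom.xml that would look roughly like this (a sketch; the version is assumed to come from your Spring Cloud BOM, otherwise set it to a milestone release explicitly):
<!-- Milestone repository for pre-GA Spring Cloud artifacts -->
<repositories>
  <repository>
    <id>spring-milestones</id>
    <url>http://repo.spring.io/milestone/</url>
  </repository>
</repositories>
<!-- The monitor library itself -->
<dependencies>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-config-monitor</artifactId>
  </dependency>
</dependencies>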

Finding latest TeamCity Backup via REST API

I found plenty of information and example about triggering TeamCity 8.1.2 backups via the REST API.
But leaving the backup files on the same server is pretty useless for disaster recovery.
So I'm looking for a way to copy over the generated backup file to another location.
My question is about finding the name of the latest available backup file via the REST API -
The Web GUI includes this information under "Last Backup Report" under the "Backup" page of the Server Administration.
I've dug through https://confluence.jetbrains.com/display/TCD8/REST+API#RESTAPI-DataBackup and the /httpAuth/app/rest/application.wadl on my server. I didn't find any mention of a way to get this info through the REST API.
I also managed to trigger a backup in the hope that the response would give this information, but it's not there - the response body is empty and the headers don't include this info.
Right now I intend to fetch the HTML page and extract this information from there, but this feels very hackish and fragile (the structure of the web page could change any time).
Is there a recommended way to get this information automatically?
Thanks.
JetBrains support got back to me with the right answer - I should use a POST method, not GET, even if the request body is empty.
Here is an example of a working request:
curl -u user:password --request POST "http://localhost:8111/httpAuth/app/rest/server/backup?includeConfigs=true&includeDatabase=true&fileName=testBackup"
And the response to that contains the plain file name in text: testBackup_20150108_141924.zip
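Building on that, a short script can trigger the backup, capture the returned file name, and pull the archive off the box (a sketch; the hosts, credentials and data-directory path are placeholders, and since the backup runs asynchronously you may need to wait, or poll the same URL with GET for the backup status, before copying):
# Trigger a backup and capture the generated file name from the response body
FILE=$(curl -s -u user:password --request POST "http://localhost:8111/httpAuth/app/rest/server/backup?includeConfigs=true&includeDatabase=true&fileName=testBackup")
# Copy the archive off the server; <data-dir>/backup is the default backup location
scp teamcity-host:"<data-dir>/backup/$FILE" /mnt/dr-storage/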

Trouble adding a new service

I have followed the instructions at https://github.com/cloudfoundry/oss-docs/tree/master/vcap/adding_a_system_service and copied the echo service and created my new service. (That document is somewhat out-of-date in that "excluded components" no longer exists.)
In any case, my service shows up as running with a gateway and a node when I look at 'vcap status' on the server. However, when I look at 'vmc services' from the client my service is not in the list. Where is this list maintained and why is my service not on the list?
Various services, including blob, filesystem, mongodb, etc., are shown in the 'vmc services' list even though they have never been included in my config. Where is this maintained and why are other services on this list?
The cloud_controller.log file shows a "Create service request:" for echo every minute. This service is not in my config file (it was once but it was removed and I repeated the deployment). What is prompting this request for a service that was not defined in the config?
The _gateway.log for my service shows the following:
INFO -- Sending info to cloud controller: ...api.vcap.me/services/v1/offerings
INFO -- Fetching handles from cloud controller .../offerings/.../handles
ERROR -- Failed registering with cloud controller, status=400
DEBUG -- [GaaS-Provisioner] Connected to node mbus..
ERROR -- Failed fetching handles, status=404
Why does my gateway fail to register with the cloud controller? I have found some reports that suggest that the problem is with domain name mapping. I have verified that the server can find itself:
$ curl api.vcap.me
Welcome to VMware's Cloud Application Platform
What can I do to register my service?
You can also try asking your question on the vcap_dev google group.
https://groups.google.com/a/cloudfoundry.org/forum/?fromgroups#!forum/vcap-dev
They are focused on answering and discussing OSS subjects for Cloud Foundry!
If you follow the document correctly things should work just fine. I understand that the mechanism for maintaining the excluded list of components has changed and can be a point of confusion when following the steps mentioned in the article (just ignore that step totally).
ERROR -- Failed registering with cloud controller, status=400
Well this is a point of worry. I recently followed the article step by step and was able to add a new service.
Is the echo service showing up in vmc services?
Have you copied the yml files for node and gateway at ./cloudfoundry/.deployments/devbox/config?
Are the tokens for your gateway unique, and matching in the two files ./cloudfoundry/.deployments/devbox/config/cloud_controller.yml and ./cloudfoundry/.deployments/devbox/config/*_gateway.yml?
I would recommend that you first concentrate on getting the echo service to be listed in the vmc services output. Once done with this, you should replicate the steps (taking absolute care to modify things like the token) to get your custom service working.
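A quick way to eyeball the tokens (a sketch, assuming the default devbox layout from the checklist above):
# The token values printed by these two commands should match
grep -n token ./cloudfoundry/.deployments/devbox/config/cloud_controller.yml
grep -n token ./cloudfoundry/.deployments/devbox/config/*_gateway.yml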
Cheers,
Ankit
You should follow this guide.
It worked for me.
Regards.