I have created a sample Play Framework API which has one endpoint:
http://play-demo-broker.cfapps.io/say?number=20
which just returns the number that I pass.
I am able to successfully deploy the service. Next, I want this service to act like a service broker.
To do that, I want to register it using the command below:
cf create-service-broker play-demo-broker admin admin http://play-demo-broker.cfapps.io --space-scoped
This command is giving me the error below:
The service broker rejected the request. Status Code: 404 Not Found
I am not sure what is causing this issue, as there is not much information available on setting up a Play Framework service broker.
The Play Framework is implemented on top of the Akka packages, and Akka rejects paths that are not implemented.
If I am not mistaken, the cf create-service-broker command accesses the / endpoint. If you implemented only the say?number=20 endpoint, then by default all other paths, such as the empty path, are rejected by Akka.
In order to open that endpoint, you need to add it to the routes file.
For example, you can add:
GET / controllers.ControllerName.GetEmptyPath
And implement the GetEmptyPath method in ControllerName.
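For example, a minimal sketch of that controller using Play's Java API (the response body here is arbitrary; what matters is that GET / returns a 200):

// app/controllers/ControllerName.java
package controllers;

import play.mvc.Controller;
import play.mvc.Result;

public class ControllerName extends Controller {

    // Answer GET / with a 200 so the broker registration request is not rejected with a 404
    public Result GetEmptyPath() {
        return ok("OK");
    }
}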
I have a Message Hub instance on Bluemix, and am able to produce / consume messages off it. I was looking for a quick, reasonable way to browse topics / messages to see what's going on. Something along the lines of kafka-topics-ui.
I installed kafka-topics-ui locally, but could not get it to connect to Message Hub. I used the kafka-rest-url value from the MessageHub credentials in the kafka-topics-ui configuration file (env.js), but could not figure out where to provide the API key.
Alternatively, in the Bluemix UI, under Kibana, I can see log entries for creating the topic. Unfortunately, I could not see log entries for messages in the topic (perhaps I'm looking in the wrong place or have the wrong filters?).
My guess is I'm missing something basic. Is there a way to either:
configure a tool such as kafka-topics-ui to connect to Message Hub, or
browse topic messages easily?
Cheers.
According to Using the Kafka REST API on Bluemix you need an additional header in all API requests:
-H "X-Auth-Token: APIKEY"
A quick solution is to edit the kafka-topics-ui code and include your token in every request. Another solution would be to use a Chrome plugin that can inject the above header. For a more formal solution, I have opened a ticket on GitHub.
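As a quick sanity check from the command line, the request would look roughly like this (a sketch: substitute the kafka-rest-url value from your Message Hub credentials and your real API key; /topics is just one example of a REST call):

curl -H "X-Auth-Token: APIKEY" "https://<kafka-rest-url>/topics"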
When Spark is deployed in YARN cluster mode, how should I issue the Spark monitoring REST API calls (http://spark.apache.org/docs/latest/monitoring.html)?
Does YARN have an API that takes a REST call such as
http://localhost:4040/api/v1/applications/[app-id]/jobs
(I already know the app-id), proxies it to the correct driver port, and returns the JSON back to me? By "me" I mean my client.
Assume (as is already the case by design) that I cannot talk directly to the driver machine for security reasons.
Please have a look at the Spark docs:
- REST API
Yes, with the latest API it is available.
According to this article:
It turns out there is a third surprisingly easy option which is not documented. Spark has a hidden REST API which handles application submission, status checking and cancellation.
In addition to viewing the metrics in the UI, they are also available as JSON. This gives developers an easy way to create new visualizations and monitoring tools for Spark. The JSON is available for both running applications, and in the history server. The endpoints are mounted at /api/v1. Eg., for the history server, they would typically be accessible at http://<server-url>:18080/api/v1, and for a running application, at http://localhost:4040/api/v1.
These are the other options available:
Livy jobserver
Submit Spark jobs remotely to an Apache Spark cluster Linux using Livy
Other options include
Triggering spark jobs with REST
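For a quick test of those endpoints, plain curl is enough (a sketch assuming the default ports mentioned above; replace <server-url> and [app-id] with your own values):

curl http://localhost:4040/api/v1/applications
curl http://<server-url>:18080/api/v1/applications/[app-id]/jobs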
This is what worked for me:
In the YARN Resource Manager UI, click the "ApplicationMaster" link for the running application and note the URL that it redirects to.
For me the link was something like
http://RM:20888/proxy/application_1547506848892_0002/
Append "api/v1/applications/application_1547506848892_0002" to that URL to reach the API.
For the above case the API call is:
curl "http://RM:20888/proxy/application_1547506848892_0002/api/v1/applications/application_1547506848892_0002"
I am using the Bluemix environment and the Node-RED flow editor. While trying to use the Feature Extract node that comes built into Node-RED for the AlchemyAPI service, I am finding it hard to use.
I tried connecting it to the HTTP request node, HTTP response node, etc., but got no result. Maybe I am not wiring the connections correctly?
I need this flow to get tweets and news (via the AlchemyAPI news data) for specific companies, give each item a sentiment score, and store the results in IBM HDFS.
Here is the flow:
[{"id":"8bd03bb4.742fc8","type":"twitter
in","z":"5fa9e76b.a05618","twitter":"","tags":"Ashok Leyland, Tata
Communication, Welspun, HCL Info,Fortis H, JSW Steel, Unichem Lab,
Graphite India, D B Realty, Eveready Ind, Birla Corporation, Camlin
Fine Sc, Indian Economy, Reserve Bank of India, Solar Power,
Telecommunication, Telecom Regulatory Authority of
India","user":"false","name":"Tweets","topic":"tweets","x":93,"y":92,"wires":[["f84ebc6a.07b14"]]},{"id":"db13f5f.f24ec08","type":"ibm
hdfs","z":"5fa9e76b.a05618","name":"Dec12Alchem","filename":"/12dec_alchem","appendNewline":true,"overwriteFile":false,"x":564,"y":226,"wires":[]},{"id":"4a1ed314.b5e12c","type":"debug","z":"5fa9e76b.a05618","name":"","active":true,"console":"false","complete":"false","x":315,"y":388,"wires":[]},{"id":"f84ebc6a.07b14","type":"alchemy-feature-extract","z":"5fa9e76b.a05618","name":"TrailRun","page-image":"","image-kw":"","feed":true,"entity":true,"keyword":true,"title":true,"author":"","taxonomy":true,"concept":true,"relation":"","pub-date":"","doc-sentiment":true,"x":246,"y":160,"wires":[["c0d3872.f3f2c78"]]},{"id":"c0d3872.f3f2c78","type":"function","z":"5fa9e76b.a05618","name":"To
mark tweets","func":"msg.payload={tweet:
msg.payload,score:msg.features};\nreturn
msg;\n","outputs":1,"noerr":0,"x":405,"y":217,"wires":[["db13f5f.f24ec08","4a1ed314.b5e12c"]]},{"id":"4181cf8.fbe7e3","type":"http
request","z":"5fa9e76b.a05618","name":"News","method":"GET","ret":"obj","url":"https://gateway-a.watsonplatform.net/calls/data/GetNews?apikey=&outputMode=json&start=now-1d&end=now&count=1&q.enriched.url.enrichedTitle.relations.relation=|action.verb.text=acquire,object.entities.entity.type=Company|&return=enriched.url.title","x":105,"y":229,"wires":[["f84ebc6a.07b14"]]},{"id":"53cc794e.ac3388","type":"inject","z":"5fa9e76b.a05618","name":"GetNews","topic":"News","payload":"","payloadType":"string","repeat":"","crontab":"","once":false,"x":75,"y":379,"wires":[["4181cf8.fbe7e3"]]}]
First, you have to bind an Alchemy service instance to your Node-RED application, for example from the cf CLI as shown below.
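A minimal sketch of that binding step (the app and service instance names are placeholders):

cf bind-service your-node-red-app your-alchemy-instance
cf restage your-node-red-app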
Then you can develop your application. Here is an example using the HTTP and Feature Extract nodes:
Here is the node flow for this basic sample if you want to try it:
[{"id":"e191029.f1e6f","type":"function","z":"2fc2a93f.d03d56","name":"","func":"msg.payload = msg.payload.url;\nreturn msg;","outputs":1,"noerr":0,"x":276,"y":202,"wires":[["12082910.edf7d7"]]},{"id":"12082910.edf7d7","type":"alchemy-feature-extract","z":"2fc2a93f.d03d56","name":"","page-image":"","image-kw":"","feed":"","entity":true,"keyword":true,"title":true,"author":true,"taxonomy":true,"concept":true,"relation":true,"pub-date":true,"doc-sentiment":true,"x":484,"y":203,"wires":[["8a3837f.f75c7c8","d164d2af.2e9b3"]]},{"id":"8a3837f.f75c7c8","type":"debug","z":"2fc2a93f.d03d56","name":"Alchemy Debug","active":true,"console":"true","complete":"true","x":736,"y":156,"wires":[]},{"id":"fb988171.04678","type":"http in","z":"2fc2a93f.d03d56","name":"Test Alchemy","url":"/test_alchemy","method":"get","swaggerDoc":"","x":103.5,"y":200,"wires":[["e191029.f1e6f"]]},{"id":"d164d2af.2e9b3","type":"http response","z":"2fc2a93f.d03d56","name":"End Test Alchemy","x":749,"y":253,"wires":[]}]
You can use curl to test it, for example:
curl -G http://yourapp.mybluemix.net/test_alchemy?url=<your url here>
or use your browser as well:
http://yourapp.mybluemix.net/test_alchemy?url=http://myurl_to_test_alchemy
You can see the results in the Node-RED debug tab, or you can see them in the application logs:
$ cf logs yourapp --recent
I have configured Spring Cloud Config, which picks up properties from GitHub. If I POST to /refresh, I am also able to get the updated value in my application.
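For reference, the manual refresh I am doing is just a POST to the refresh endpoint, roughly like this (host and port are placeholders; on newer Spring Boot versions the path is /actuator/refresh):

curl -X POST http://localhost:8080/refresh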
Now I want the properties to be updated automatically. That is, I don't want to have to hit the refresh API to get changes from the GitHub property file reflected in my application.
Do I need to implement RabbitMQ and Spring Cloud Bus for this, or is there another simple way to do it?
Also, the documentation says that we need to add a dependency on the spring-cloud-config-monitor library for push notifications:
http://projects.spring.io/spring-cloud/spring-cloud.html#_push_notifications_and_spring_cloud_bus
But I did not find any such dependency in Maven to add. I am not sure if my understanding is wrong. Please help.
You would need a Config server with Spring Cloud Bus and RabbitMQ (or Kafka or Redis) support.
RabbitMQ with the following exchange:
name: springCloudBus
type: topic
durable: true
autoDelete: false
internal: false
The Config server would send data to the topic once it receives push events from Git (GitHub, Bitbucket, GitLab) via a webhook to http://<config-server>/monitor.
And you need a client application with the Config and RabbitMQ libraries, subscribed to the previous exchange, to receive messages about the properties that need to be refreshed.
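On the client side, a refreshable bean could look roughly like this (a hedged sketch; the class, property, and endpoint names are only illustrative):

package demo;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Beans in refresh scope are rebound to the latest configuration when a
// refresh event arrives over the bus.
@RefreshScope
@RestController
public class MessageController {

    // Property served by the Config server, e.g. message=Hello from Git
    @Value("${message:default}")
    private String message;

    @GetMapping("/message")
    public String getMessage() {
        return message;
    }
}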
More can be found in my blog post at http://tech.asimio.net/2017/02/02/Refreshable-Configuration-using-Spring-Cloud-Config-Server-Spring-Cloud-Bus-RabbitMQ-and-Git.html, with a brief explanation of the configuration, logs, and the full source code for the Config server and the client app.
They are not generally available yet. You need to add http://repo.spring.io/milestone/ as a Maven repository and use a milestone release.
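In the pom.xml that would look roughly like this (a sketch; it assumes the Spring Cloud BOM is imported so the milestone version is resolved for you, otherwise add an explicit version):

<repositories>
  <repository>
    <id>spring-milestones</id>
    <name>Spring Milestones</name>
    <url>http://repo.spring.io/milestone/</url>
  </repository>
</repositories>

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-config-monitor</artifactId>
</dependency>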