How to use AlchemyAPI news data in Bluemix Node-RED? - ibm-cloud

I am using the Bluemix environment and the Node-RED flow editor. I am trying to use the Feature Extract node that comes built into Node-RED for the AlchemyAPI service, but I am finding it hard to use.
I tried connecting it to the HTTP request node, HTTP response node, etc., but got no result. Maybe I am not wiring the nodes correctly?
I need this flow to fetch tweets and news (via the AlchemyAPI news data service) for specific companies, attach a sentiment score to each item, and store the result in IBM HDFS.
Here is the code:
[{"id":"8bd03bb4.742fc8","type":"twitter
in","z":"5fa9e76b.a05618","twitter":"","tags":"Ashok Leyland, Tata
Communication, Welspun, HCL Info,Fortis H, JSW Steel, Unichem Lab,
Graphite India, D B Realty, Eveready Ind, Birla Corporation, Camlin
Fine Sc, Indian Economy, Reserve Bank of India, Solar Power,
Telecommunication, Telecom Regulatory Authority of
India","user":"false","name":"Tweets","topic":"tweets","x":93,"y":92,"wires":[["f84ebc6a.07b14"]]},{"id":"db13f5f.f24ec08","type":"ibm
hdfs","z":"5fa9e76b.a05618","name":"Dec12Alchem","filename":"/12dec_alchem","appendNewline":true,"overwriteFile":false,"x":564,"y":226,"wires":[]},{"id":"4a1ed314.b5e12c","type":"debug","z":"5fa9e76b.a05618","name":"","active":true,"console":"false","complete":"false","x":315,"y":388,"wires":[]},{"id":"f84ebc6a.07b14","type":"alchemy-feature-extract","z":"5fa9e76b.a05618","name":"TrailRun","page-image":"","image-kw":"","feed":true,"entity":true,"keyword":true,"title":true,"author":"","taxonomy":true,"concept":true,"relation":"","pub-date":"","doc-sentiment":true,"x":246,"y":160,"wires":[["c0d3872.f3f2c78"]]},{"id":"c0d3872.f3f2c78","type":"function","z":"5fa9e76b.a05618","name":"To
mark tweets","func":"msg.payload={tweet:
msg.payload,score:msg.features};\nreturn
msg;\n","outputs":1,"noerr":0,"x":405,"y":217,"wires":[["db13f5f.f24ec08","4a1ed314.b5e12c"]]},{"id":"4181cf8.fbe7e3","type":"http
request","z":"5fa9e76b.a05618","name":"News","method":"GET","ret":"obj","url":"https://gateway-a.watsonplatform.net/calls/data/GetNews?apikey=&outputMode=json&start=now-1d&end=now&count=1&q.enriched.url.enrichedTitle.relations.relation=|action.verb.text=acquire,object.entities.entity.type=Company|&return=enriched.url.title","x":105,"y":229,"wires":[["f84ebc6a.07b14"]]},{"id":"53cc794e.ac3388","type":"inject","z":"5fa9e76b.a05618","name":"GetNews","topic":"News","payload":"","payloadType":"string","repeat":"","crontab":"","once":false,"x":75,"y":379,"wires":[["4181cf8.fbe7e3"]]}]

First you have to bind an AlchemyAPI service instance to your Node-RED application.
Then you can develop your application; here is an example using the HTTP and Feature Extract nodes.
Here is the node flow for this basic sample if you want to try it:
[{"id":"e191029.f1e6f","type":"function","z":"2fc2a93f.d03d56","name":"","func":"msg.payload = msg.payload.url;\nreturn msg;","outputs":1,"noerr":0,"x":276,"y":202,"wires":[["12082910.edf7d7"]]},{"id":"12082910.edf7d7","type":"alchemy-feature-extract","z":"2fc2a93f.d03d56","name":"","page-image":"","image-kw":"","feed":"","entity":true,"keyword":true,"title":true,"author":true,"taxonomy":true,"concept":true,"relation":true,"pub-date":true,"doc-sentiment":true,"x":484,"y":203,"wires":[["8a3837f.f75c7c8","d164d2af.2e9b3"]]},{"id":"8a3837f.f75c7c8","type":"debug","z":"2fc2a93f.d03d56","name":"Alchemy Debug","active":true,"console":"true","complete":"true","x":736,"y":156,"wires":[]},{"id":"fb988171.04678","type":"http in","z":"2fc2a93f.d03d56","name":"Test Alchemy","url":"/test_alchemy","method":"get","swaggerDoc":"","x":103.5,"y":200,"wires":[["e191029.f1e6f"]]},{"id":"d164d2af.2e9b3","type":"http response","z":"2fc2a93f.d03d56","name":"End Test Alchemy","x":749,"y":253,"wires":[]}]
You can use curl to test it, for example:
curl -G http://yourapp.mybluemix.net/test_alchemy?url=<your url here>
or use your browser as well:
http://yourapp.mybluemix.net/test_alchemy?url=http://myurl_to_test_alchemy
You can see the results in the Node-RED debug tab, or in the application logs:
$ cf logs yourapp --recent
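If the Feature Extract node stays silent, it is also worth verifying the AlchemyAPI key itself from the command line. A minimal sketch using the same GetNews endpoint that the flow in the question calls, where YOUR_API_KEY is a placeholder for the key from your bound service's credentials:

$ curl "https://gateway-a.watsonplatform.net/calls/data/GetNews?apikey=YOUR_API_KEY&outputMode=json&start=now-1d&end=now&count=1&return=enriched.url.title"

If this returns an error status instead of JSON results, the problem is the service binding or key rather than the flow wiring.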

Related

InfluxDB http calls sending credentials (username & password) in URL as query params

For a sample project using the Weather service, I needed to store time-series data. This is the first time I am using any time-series database. I did some reading on the options and their comparisons and found that InfluxDB is open source and one of the best, so I decided to use it.
For my PoC I installed it locally on my machine and connected to it from my application. However, when I looked at the logs for various queries run against InfluxDB, I found that it makes HTTP calls to InfluxDB and passes the username and password as query params in the URL. This certainly seems like bad practice, passing credentials as plain text in the URL while making the HTTP call. Can someone comment on why it is designed like this, and whether it is supposed to be like this in real-world scenarios as well?
Logs:
2019-07-19 12:01:00.304 INFO 69709 --- [pool-1-thread-1] okhttp3.OkHttpClient : --> POST http://127.0.0.1:8086/write?u=admin&p=admin&db=weatherdata&rp=defaultPolicy&precision=n&consistency=one (78-byte body)
2019-07-19 13:48:28.461 INFO 69709 --- [nio-8080-exec-9] okhttp3.OkHttpClient : --> GET http://127.0.0.1:8086/query?u=admin&p=admin&db=weatherdata&q=Select+*+from+weather
2019-07-19 13:48:28.530 INFO 69709 --- [nio-8080-exec-9] okhttp3.OkHttpClient : <-- 200 OK http://127.0.0.1:8086/query?u=admin&p=admin&db=weatherdata&q=Select+*+from+weather (68ms, unknown-length body)
InfluxDB supports HTTP Basic Auth, where the username and password are passed via HTTP auth headers instead of the URL. I suspect you just need to configure your client to do that instead of using the URL parameters. The credentials are still in plaintext, but if you set up HTTPS, Basic Auth is reasonably secure.
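For example, a minimal sketch with curl against the same local instance as in your logs (admin/admin taken from the logs above); the -u flag sends the credentials in the Authorization header rather than the query string:

$ curl -G -u admin:admin "http://127.0.0.1:8086/query" \
    --data-urlencode "db=weatherdata" \
    --data-urlencode "q=SELECT * FROM weather"

Whether your Java client library exposes an equivalent option depends on the library, so check its auth configuration rather than assuming the query-param behavior is the only mode.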
In general I don't think the Influx Devs expect InfluxDB to be a standalone, public-facing service. Instead, they expect that you're going to use InfluxDB as a backend, and then use something like Chronograf (which is their own visualization tool) or Grafana as a front end. So if they're going to spend time on more sophisticated authentication protocols, they're going to do it on the front end side.
The expectation would be that the front end and back end run on the same network, and communications between them can be secured via network segmentation.

Spark monitoring REST API in YARN cluster mode

When Spark is deployed in YARN cluster mode, how should I issue calls to the Spark monitoring REST API (http://spark.apache.org/docs/latest/monitoring.html)?
Does YARN have an API that takes a REST call such as (I already know the app-id)
http://localhost:4040/api/v1/applications/[app-id]/jobs
proxies it to the correct driver port, and returns the JSON back to me? By "me" I mean my client.
Assume (or take it as given by design) that I cannot talk to the driver machine directly for security reasons.
Please have a look at the Spark docs:
- REST API
Yes, with the latest API it is available.
According to this article:
It turns out there is a third surprisingly easy option which is not documented. Spark has a hidden REST API which handles application submission, status checking and cancellation.
In addition to viewing the metrics in the UI, they are also available as JSON. This gives developers an easy way to create new visualizations and monitoring tools for Spark. The JSON is available for both running applications and in the history server. The endpoints are mounted at /api/v1. E.g., for the history server, they would typically be accessible at http://<server-url>:18080/api/v1, and for a running application, at http://localhost:4040/api/v1.
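For example, assuming your application's UI is reachable on localhost:4040 (the [app-id] placeholder is kept from the question):

$ curl http://localhost:4040/api/v1/applications
$ curl http://localhost:4040/api/v1/applications/[app-id]/jobs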
These are the other options available:
Livy jobserver (see the sketch after this list)
Submit Spark jobs remotely to an Apache Spark cluster Linux using Livy
Other options include
Triggering spark jobs with REST
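As a hedged sketch of the Livy option above: Livy exposes a REST endpoint for batch submission, by default on port 8998. The host name, jar path, and class name below are placeholders, not values from the question:

$ curl -X POST -H "Content-Type: application/json" \
    -d '{"file": "/path/to/app.jar", "className": "com.example.Main"}' \
    "http://livy-host:8998/batches"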
This is what worked for me:
In the YARN Resource Manager UI, click the "application manager" link for the running application and note the URL it directs to.
For me the link was something like:
http://RM:20888/proxy/application_1547506848892_0002/
Append "api/v1/applications/application_1547506848892_0002" to that URL to form the API endpoint.
For the above case the API URL is:
curl "http://RM:20888/proxy/application_1547506848892_0002/api/v1/applications/application_1547506848892_0002"

Can I use multiple chaincode using a single Bluemix blockchain service?

I'm new to the IBM Bluemix Blockchain service. I wonder if I can create multiple chaincodes. This is because I got the following error:
! looks like an error loading the chaincode or network, app will fail
{ name: 'register() error',
code: 401,
details: { Error: 'rpc error: code = 13 desc = \'server closed the stream without sending trailers\'' } }
Here is what I did:
Created a Blockchain service named 'blockchain'.
Ran the cp-web example => Success
Ran the marbles demo using the existing Blockchain service ('blockchain') => Gives me the above error
Created a new Blockchain service named 'mbblochchain'
Re-pushed the marbles demo with the new service name => Success
So I wonder whether I can put multiple chaincodes into a peer network or not. It is likely I am misunderstanding how it works or how it should behave.
Yes you can deploy multiple chaincodes on the same network. The issue you are having is because each app is registering users differently.
Currently only 1 username (aka enrollID) can be registered against 1 peer. If you try to register the same username against two peers, the 2nd registration will fail. This is what is happening to you.
The Bluemix blockchain service is returning two type1 usernames (type1 is the type of enrollID these apps want to use).
cp-web will register the first and second enrollID against peer vp1
marbles will register the first enrollID against vp1 and the 2nd enrollID against vp2
Therefore, when you ran marbles after cp-web, it tried to register the 2nd enrollID against vp2 when that enrollID had already been registered with vp1, thus giving you an error.
In general, you can deploy multiple chaincode apps to a single instance of the Bluemix Blockchain service and more broadly speaking multiple chaincode apps to a single peer network.
Were you deploying the web apps directly using "cf push" and trying to bind to an existing Blockchain service instance, or were you trying to use the "Deploy to Bluemix" functionality?

How to integrate the Ambari REST API for cluster monitoring - examples

I have a use case to integrate and import Ambari alerts that get generated in the Ambari web interface into the centralized monitoring environment we use for managing clusters. I am using HDP. Is there any detailed documentation on how to do this? Here are some examples of what I want to accomplish:
How to make a REST API call to see if the HDFS file system is more than 90% full, or how to check if one of the services, like HDFS or HBase, is down and has raised an alarm in the Ambari GUI.
Check out this page for links to the Ambari REST API for alerts:
https://cwiki.apache.org/confluence/display/AMBARI/Alerts
Also check out slides 4-20 in this SlideShare; slide 13 in particular highlights the Alerts REST API:
http://www.slideshare.net/hortonworks/apache-ambari-whats-new-in-200
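As a hedged starting point, the current alerts for a cluster can be listed with a single call; the admin:admin credentials, ambari-host, and CLUSTER_NAME below are placeholders for your environment:

$ curl -u admin:admin "http://ambari-host:8080/api/v1/clusters/CLUSTER_NAME/alerts?fields=*"

The response is JSON, so your central monitoring tool can poll this endpoint and filter for the specific alert definitions you care about, such as an HDFS capacity alert or a service-down alert.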

Trouble adding a new service

I have followed the instructions at https://github.com/cloudfoundry/oss-docs/tree/master/vcap/adding_a_system_service and copied the echo service and created my new service. (That document is somewhat out of date, in that "excluded components" no longer exists.)
In any case, my service shows up as running with a gateway and a node when I look at 'vcap status' on the server. However, when I look at 'vmc services' from the client, my service is not in the list. Where is this list maintained, and why is my service not on it?
Various services, including blob, filesystem, mongodb, etc., are shown in the 'vmc services' list even though they have never been included in my config. Where is this maintained, and why are these other services on the list?
The cloud_controller.log file shows a "Create service request:" for echo every minute. This service is not in my config file (it was there once, but it was removed and I repeated the deployment). What is prompting this request for a service that was not defined in the config?
The _gateway.log for my service shows the following:
INFO -- Sending info to cloud controller: ...api.vcap.me/services/v1/offerings
INFO -- Fetching handles from cloud controller .../offerings/.../handles
ERROR -- Failed registering with cloud controller, status=400
DEBUG -- [GaaS-Provisioner] Connected to node mbus..
ERROR -- Failed fetching handles, status=404
Why does my gateway fail to register with the cloud controller? I have found some reports that suggest that the problem is with domain name mapping. I have verified that the server can find itself:
$ curl api.vcap.me
Welcome to VMware's Cloud Application Platform
What can I do to register my service?
You can also try asking your question on the vcap_dev google group.
https://groups.google.com/a/cloudfoundry.org/forum/?fromgroups#!forum/vcap-dev
They are focused on answering and discussing OSS subjects for Cloud Foundry!
If you follow the document correctly, things should work just fine. I understand that the mechanism for maintaining the excluded list of components has changed and can be a point of confusion when following the steps mentioned in the article (just ignore that step entirely).
ERROR -- Failed registering with cloud controller, status=400
Well this is a point of worry. I recently followed the article step by step and was able to add a new service.
Is the echo service showing up in vmc services?
Have you copied the yml files for the node and gateway to ./cloudfoundry/.deployments/devbox/config?
Are the tokens for your gateway unique, and matching in the two files ./cloudfoundry/.deployments/devbox/config/cloud_controller.yml and ./cloudfoundry/.deployments/devbox/config/<service>_gateway.yml?
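A quick way to compare the tokens in those two files, assuming your gateway config is named echo_gateway.yml (adjust the file name for your own service):

$ grep -i token ./cloudfoundry/.deployments/devbox/config/cloud_controller.yml \
    ./cloudfoundry/.deployments/devbox/config/echo_gateway.yml

grep prefixes each match with its file name, so a mismatch between the two token values is easy to spot.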
I would recommend that you first concentrate on getting the echo service listed in the vmc services output. Once that is done, you should replicate the steps (taking absolute care to modify things like the token) to get your custom service working.
Cheers,
Ankit
You should follow this guide.
It worked for me.
Regards.