Automatically pulling REST API data to visualize it in Apache Superset

I work in a large enterprise and have a project to build some custom automated dashboards for our IT department; the small amount of data we need is only available from REST API endpoints. The process needs to be fully automated, and there is not enough time to build a custom API wrapper. My plan was to use Apache Airflow + Apache Superset. I have been googling for a couple of days for an easier open-source alternative to Apache Airflow for moving data from the REST API endpoints into Superset for visualization. Please share your experience: what would you choose instead of Apache Airflow?

I chose to go with the following solution:
Apache Airflow + PostgreSQL + Grafana (instead of Superset, because in Grafana you can actually create a drill-down option using a workaround).
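For reference, here is a minimal sketch of the Airflow piece of that pipeline, using the Airflow 2.x TaskFlow API. The https://example.com/api/metrics endpoint, the metrics_db connection id, and the it_metrics table are hypothetical placeholders; Grafana (or Superset) then just charts the table the load task fills.

```python
# A minimal sketch, not production code: endpoint, connection id, and table name are placeholders.
from datetime import datetime

import requests
from airflow.decorators import dag, task
from airflow.providers.postgres.hooks.postgres import PostgresHook

API_URL = "https://example.com/api/metrics"   # hypothetical REST endpoint
CONN_ID = "metrics_db"                        # Postgres connection configured in Airflow
TABLE = "it_metrics"                          # table the dashboards read from

@dag(schedule="@hourly", start_date=datetime(2024, 1, 1), catchup=False)
def rest_to_postgres():

    @task
    def fetch():
        # Pull a small JSON payload from the REST endpoint.
        resp = requests.get(API_URL, timeout=30)
        resp.raise_for_status()
        return resp.json()  # e.g. a list of {"name": ..., "value": ..., "ts": ...}

    @task
    def load(records):
        # Insert rows into Postgres; Grafana/Superset query this table.
        hook = PostgresHook(postgres_conn_id=CONN_ID)
        rows = [(r["name"], r["value"], r["ts"]) for r in records]
        hook.insert_rows(TABLE, rows, target_fields=["name", "value", "ts"])

    load(fetch())

rest_to_postgres()
```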

Related

Integration and data upload - MuleSoft CloudHub to Tableau Server

I am using MuleSoft CloudHub with Runtime v4.4 to upload CSV data to Tableau Server. The documentation page https://docs.mulesoft.com/tableau-specialist-connector/1.1/ confirms that it is not possible to use the Hyper configuration and its operations in CloudHub, because CloudHub does not allow executing external code for security reasons. That is why I am trying to use the REST configuration and its available operations.
With all my attempts, I am able to connect to the Tableau server and successfully perform simple operations like Initial file upload, Query project, Append to file upload, etc. But these operations do not help me publish my CSV content to Tableau. The Publish workbook operation also requires file content in *.twbx format, and I am not sure how to convert CSV/JSON/XML to TWBX content.
I have referred to some websites and MuleSoft technical videos where an HTTPS connection is used to upload data to Tableau, but those examples use a *.hyper file from the classpath.
So, basically, I am stuck at two different paths:
How can I transform CSV content into hyper file content in a Mule flow? If this transformation is possible, then I can use an HTTPS connection to Tableau and upload my data.
Using the MuleSoft Tableau connector v1.1 with the REST configuration, is it possible to upload data to Tableau?
If there is any other solution, I am happy to change my implementation strategy. Can somebody please point me in the right direction?
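This does not remove the CloudHub limitation, but as a reference point for the first path, here is a minimal sketch of the CSV -> .hyper -> publish sequence outside of a Mule flow, in Python (pandas/pantab for the conversion, tableauserverclient for the publish over HTTPS). The server URL, credentials, project id, and file names are placeholders, and these libraries are assumptions for illustration, not something the Mule connector provides.

```python
# A sketch of the two steps outside of Mule: convert CSV to a .hyper extract,
# then publish it to Tableau Server over HTTPS. All names below are placeholders.
import pandas as pd
import pantab
import tableauserverclient as TSC

# 1. CSV -> .hyper (pantab writes a pandas DataFrame into a Hyper extract).
df = pd.read_csv("data.csv")
pantab.frame_to_hyper(df, "data.hyper", table="Extract")

# 2. Publish the .hyper file as a data source via the Tableau REST API.
auth = TSC.TableauAuth("username", "password", site_id="")
server = TSC.Server("https://tableau.example.com", use_server_version=True)
with server.auth.sign_in(auth):
    datasource = TSC.DatasourceItem(project_id="PROJECT-ID")
    server.datasources.publish(datasource, "data.hyper",
                               mode=TSC.Server.PublishMode.Overwrite)
```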

Containers (Kubernetes) vs Web service (REST APIs)

I have a single-screen desktop application developed in Java. It is a file-conversion tool: given a file in .abc format, it converts it to .xyz format. Basically, the tool works offline and acts as a translator to convert a file from one form to another.
Now, to improve the infrastructure, there are discussions about moving the tool to Kubernetes or providing REST services for the file conversion. I have no experience with containers or REST APIs, as I am a front-end developer.
More about the tool: as I said earlier, it is a single-screen application, very light, doing a very minimal job, and used by approximately 200 users in total. Given the shape and size of the application, which approach would be best, and why? Basically, I am looking for a short evaluation of Kubernetes vs. a REST service and an architecture recommendation with reasons.
Currently your application is a standalone application, which is quite an old-fashioned setup.
I can outline the high-level changes needed to expose your file-conversion logic over a REST API in the Kubernetes world.
You can go through the following areas one by one to get a better design-level understanding:
Your Java code becomes the backend, and its public methods that take inputs from UI actions are exposed over a REST API (see the sketch after this list).
There are multiple REST frameworks (Jersey, RESTEasy, etc.; Spring/Spring Boot also provides REST support); you can go through any of them to get an understanding.
Once your backend is exposed over the REST API, it needs to be containerized, meaning your backend will run inside a container. You can go through the Docker documentation and build a sample containerized app; there is a huge amount of material in this area.
Once your backend is containerized, it can be deployed to a Kubernetes cluster.
Kubernetes is basically a container orchestration tool, and it is quite a wide topic; you can go through its official documentation for a basic understanding.
The UI will still run on the client machine, just as you launch it from your desktop today, but it will communicate with the Kubernetes cluster where your application runs packaged in a container.
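For illustration only, here is the pattern as a minimal Python/Flask sketch; the Java frameworks mentioned above (Jersey, Spring Boot) follow the same request/response shape. The /convert path and the convert_abc_to_xyz() function are hypothetical stand-ins for your existing conversion logic.

```python
# A minimal sketch of wrapping the conversion logic in a REST endpoint.
# The /convert path and convert_abc_to_xyz() are hypothetical placeholders.
import io

from flask import Flask, request, send_file

app = Flask(__name__)

def convert_abc_to_xyz(data: bytes) -> bytes:
    # Placeholder for the existing file-conversion logic.
    return data

@app.route("/convert", methods=["POST"])
def convert():
    # The UI uploads an .abc file; the service returns the converted .xyz file.
    uploaded = request.files["file"].read()
    converted = convert_abc_to_xyz(uploaded)
    return send_file(io.BytesIO(converted),
                     download_name="output.xyz",
                     mimetype="application/octet-stream")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```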
References:
Docker: https://docs.docker.com/
Kubernetes: https://kubernetes.io/

How to connect Kafka to Thingsboard Platform

I want to activate the Kafka-Spark pipeline for the ThingsBoard platform (Community Edition).
As per the Stack Overflow question "Couldn't able to find plugins in ThingsBoard 2.0.3 Home screen", it's been said that we can do this via rule chains, since the plugin section has been removed, but I am not able to understand how to configure it using rule chains. I cannot find complete documentation on configuring Kafka via rule chains, so I need help with that.
I figured it out. It can be done easily by following this guide: https://thingsboard.io/docs/samples/analytics/kafka-streams/
The thing is that with ThingsBoard CE you can get data into a Kafka topic. However, to fetch data back from Kafka you will need a ThingsBoard Professional Edition integration.
The alternative to ThingsBoard PE is to write your own script that pushes the insights back to ThingsBoard through its REST API, as sketched below.
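A minimal sketch of such a script, assuming kafka-python on the consumer side and ThingsBoard's device HTTP telemetry API (POST /api/v1/{accessToken}/telemetry); the broker address, topic name, host, and device access token are placeholders.

```python
# A sketch: consume insights from a Kafka topic and push them back to
# ThingsBoard as device telemetry over its HTTP API. All names below are placeholders.
import json

import requests
from kafka import KafkaConsumer

KAFKA_BROKER = "localhost:9092"
TOPIC = "tb-insights"
TB_HOST = "http://thingsboard.example.com:8080"
DEVICE_TOKEN = "DEVICE_ACCESS_TOKEN"

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=[KAFKA_BROKER],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    # ThingsBoard device HTTP API: POST /api/v1/{accessToken}/telemetry
    url = f"{TB_HOST}/api/v1/{DEVICE_TOKEN}/telemetry"
    resp = requests.post(url, json=message.value, timeout=10)
    resp.raise_for_status()
```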

REST endpoint registration and bootstrap (creating range indexes) using UDeploy

I have my code in a Git repository. I am using UDeploy to deploy my code into a MarkLogic environment. I am able to move all my modules successfully but am facing two problems:
1. Creating new indexes
2. REST endpoint creation
Please let me know if there is any way to implement these two.
For creating indexes, I have tried doing it with the Admin API functions (admin:database-range-element-index()) and was successful in that part. But is there any way to do it from UDeploy or a DevOps pipeline?
For registering the REST endpoint, I couldn't find any way to try.
Have you looked at MarkLogic's REST Management API (https://docs.marklogic.com/REST/management)? In particular, see whether https://docs.marklogic.com/REST/POST/manage/v2/databases will help you create indexes via the REST Management API.
The most common way to deploy MarkLogic code and configuration is ml-gradle, a plugin for the widely used Gradle build tool. ml-gradle uses MarkLogic's Management API, mentioned by Ganesh, and is scriptable.
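For illustration, here is a minimal sketch (in Python with requests; a curl step driven from UDeploy would look much the same) of the two Management API calls involved: adding a range element index through the database properties endpoint and creating a REST API instance via /v1/rest-apis. Host, credentials, database name, and index definition are placeholders, so verify the payloads against the Management API documentation.

```python
# A sketch of two Management API calls a deployment step could make.
# Host, credentials, database and index names are placeholders.
import requests
from requests.auth import HTTPDigestAuth

AUTH = HTTPDigestAuth("admin", "admin")
MANAGE = "http://marklogic.example.com:8002"

# 1. Add a range element index by updating the database properties
#    (note: this sets the range-element-index list as a whole).
index_payload = {
    "range-element-index": [{
        "scalar-type": "string",
        "namespace-uri": "",
        "localname": "status",
        "collation": "http://marklogic.com/collation/",
        "range-value-positions": False,
    }]
}
r = requests.put(f"{MANAGE}/manage/v2/databases/my-database/properties",
                 json=index_payload, auth=AUTH)
r.raise_for_status()

# 2. Create a REST API instance (the "REST endpoint") for the database.
rest_payload = {"rest-api": {"name": "my-rest-api", "database": "my-database", "port": 8011}}
r = requests.post(f"{MANAGE}/v1/rest-apis", json=rest_payload, auth=AUTH)
r.raise_for_status()
```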

What's the anatomy of a Bluemix/Cloud Foundry Node-RED project?

There's lots of documentation and a kludgy console to set up continuous deployment in Cloud Foundry, but I haven't found any documentation on what the artifacts inside a repository need to be.
I don't want to cut-and-paste flows from the Node-RED editor. If that's the only way, then IBM is not ready for prime time. I am also aware that almost everything about my flows lives in the Cloudant nodered db.
A Node-RED application is more than the flows, though. What about the _design docs for my dbs?
I need the device info and other configuration from the Watson console, the Cloudant info, and my flows packaged up into something deployable.
Has anyone scripted this?
What I mean by this is that I can clone a Docker project, an npm project, and all sorts of projects that implement a build -> test -> push mechanism. They employ a configuration script of some sort (e.g. package.json) and contain a bunch of source files for the actual application, test scripts, db scripts, and whatever else is necessary to deploy the application and its environment onto a host. I see lots of documentation on the toolchain and its features, but I'm not clear on whether it's possible to make use of it for my hosted Node-RED application, or whether I have to write the scripting mechanisms to offload flow info from the nodered db and query all my other dbs for their respective _design docs and all the other configuration information required to set up an IoT Node-RED application.
I forgot to mention: the copy/paste method loses information; you get no tab-level metadata. The only way to get all the flow information is to pull it from the nodered flow record.
Node-RED will release a new version in a couple of days that will introduce projects, so you'll be able to use GitHub and all the usual tools to handle your app: https://twitter.com/NodeRED/status/956934949784956931 and https://nodered.org/docs/user-guide/projects/
While it doesn't address your short-term needs, I think it's the best long-term solution. Hopefully that helps.
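For the short term, here is a minimal sketch of the scripting approach the question describes: pulling the flow document out of the Cloudant nodered database over its HTTP API. The account URL, credentials, and the "<appname>/flow" document id are assumptions about the Bluemix boilerplate's storage layout, so check them against your own Cloudant instance.

```python
# A sketch of exporting the flow document from the Cloudant "nodered" database.
# URL, credentials, and the document id are assumptions; check your own Cloudant
# instance for the actual database name and doc id used by your app.
import json

import requests

CLOUDANT_URL = "https://ACCOUNT.cloudant.com"
DB = "nodered"
DOC_ID = "myapp/flow"          # assumed doc id pattern: "<app name>/flow"
AUTH = ("api_key", "api_password")

resp = requests.get(f"{CLOUDANT_URL}/{DB}/{DOC_ID}", auth=AUTH, timeout=30)
resp.raise_for_status()
doc = resp.json()

# The flow (including tab-level metadata) lives in this document; write it out
# so it can be committed to source control alongside the rest of the app.
with open("flows.json", "w") as f:
    json.dump(doc.get("flow", doc), f, indent=2)
```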