Is it possible to use a live connection with the web data connector in Tableau? Or do I always have to build an extract? Currently I am using the trial version, and in that version the live connection option is greyed out.
The web data connector always creates an extract. See this link for more details: https://onlinehelp.tableau.com/current/api/wdc/en-us/help.htm#WDC/wdc_phases.htm
I am using MuleSoft CloudHub with Runtime v4.4 to upload CSV data to Tableau Server. The documentation page https://docs.mulesoft.com/tableau-specialist-connector/1.1/ confirms that it's not possible to use the Hyper configuration and its operations in CloudHub, because CloudHub does not allow executing external code for security reasons. That is why I am trying to use the REST configuration and its available operations.
In all my attempts, I am able to connect to Tableau Server and successfully perform simple operations like Initial file upload, Query project, Append to file upload, etc. But these operations do not help me publish my CSV content to Tableau. The Publish workbook operation also requires the file content in *.twbx format, and I am not sure how to convert CSV/JSON/XML to the TWBX content type.
I have looked at some websites and MuleSoft technical videos where an HTTPS connection is used to upload data to Tableau, but those examples use a *.hyper file from the classpath.
So, basically, I am stuck on two different questions:
How can I transform CSV content to Hyper file content in a Mule flow (see the sketch after this question)? If this transformation is possible, then I can use an HTTPS connection to Tableau and upload my data.
Using the MuleSoft Tableau connector v1.1 with the REST configuration, is it possible to upload data to Tableau?
If there is any other solution, I am happy to change my implementation strategy. Can somebody please point me in the right direction?
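For reference, the CSV-to-Hyper conversion itself is what Tableau's Hyper API is for; below is a minimal Python sketch (the two-column schema and file paths are placeholders, not your actual data). Note that it still launches the Hyper process, so it cannot run inside CloudHub either; it would have to run on a separate worker or service that the Mule flow calls out to.

```python
# Minimal sketch: convert a CSV file into a .hyper extract with the
# Tableau Hyper API (pip install tableauhyperapi).
# The schema below is a placeholder; adjust it to match your CSV.
from tableauhyperapi import (
    HyperProcess, Telemetry, Connection, CreateMode,
    TableDefinition, TableName, SqlType, escape_string_literal,
)

def csv_to_hyper(csv_path: str, hyper_path: str) -> None:
    table = TableDefinition(
        table_name=TableName("Extract", "Extract"),
        columns=[
            TableDefinition.Column("id", SqlType.int()),      # placeholder column
            TableDefinition.Column("name", SqlType.text()),   # placeholder column
        ],
    )
    with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
        with Connection(endpoint=hyper.endpoint,
                        database=hyper_path,
                        create_mode=CreateMode.CREATE_AND_REPLACE) as connection:
            connection.catalog.create_schema(table.table_name.schema_name)
            connection.catalog.create_table(table)
            # Hyper bulk-loads the CSV directly via a SQL COPY command.
            count = connection.execute_command(
                f"COPY {table.table_name} FROM {escape_string_literal(csv_path)} "
                f"WITH (format csv, header)"
            )
            print(f"Loaded {count} rows into {hyper_path}")
```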
I want to activate the Kafka-Spark pipeline for the ThingsBoard platform (Community Edition).
As per the Stack Overflow question "Couldn't able to find plugins in ThingsBoard 2.0.3 Home screen", this can be done via rule chains themselves, since the plugin section has been removed, but I am not able to understand how to configure it using rule chains. I cannot find complete documentation on configuring Kafka via rule chains, so I need help with that.
I figured it out. It can be done easily by following this link: https://thingsboard.io/docs/samples/analytics/kafka-streams/
The thing is that with ThingsBoard CE we can get data into a Kafka topic. However, to fetch data from Kafka back into ThingsBoard, you will need a ThingsBoard Professional Edition integration.
The alternative to ThingsBoard PE is to write your own REST API script that pushes the insights back to ThingsBoard.
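A hedged sketch of such a script in Python: it consumes the processed results from Kafka and posts them to ThingsBoard's device telemetry HTTP endpoint. The topic name, broker address, host, and device token are all placeholders.

```python
# Sketch: consume processed insights from a Kafka topic and push them back
# into ThingsBoard CE via its device telemetry REST API.
# Topic, broker, host, and device token below are all placeholders.
import json

import requests
from kafka import KafkaConsumer  # pip install kafka-python

TB_HOST = "http://localhost:8080"          # placeholder ThingsBoard CE host
DEVICE_TOKEN = "YOUR_DEVICE_ACCESS_TOKEN"  # placeholder device access token

consumer = KafkaConsumer(
    "analytics-results",                   # placeholder topic written by Spark
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    # ThingsBoard accepts telemetry as JSON on /api/v1/{token}/telemetry
    resp = requests.post(
        f"{TB_HOST}/api/v1/{DEVICE_TOKEN}/telemetry",
        json=message.value,
        timeout=10,
    )
    resp.raise_for_status()
```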
I have a Tableau report which I created in Tableau Desktop using the Spark SQL connector (Simba). I am using Databricks as the Spark execution engine. When I try to publish the same report and view it on Tableau Server, it gives a driver/access issue (screenshot attached). I do have admin access to the server, and the server is running as well.
Do I need to install additional drivers on the server to get the same connection working? Apologies for my limited knowledge of Tableau Server.
Screenshot of Error
There are a few experiences I want to share regarding the above problem:
The driver (Simba) needs to be installed on Tableau Server as well as on Tableau Desktop.
Once you install the driver on the server, re-publish the report that needs the Databricks connection; otherwise you will keep getting the same error. Tableau Server only reflects/reads a report published after the necessary driver is present on the server (see the sketch after this list).
The version of your Tableau Desktop should be the same as or lower than the version of Tableau Server; otherwise the report won't be published, and you will get a version-mismatch error.
Feel free to share your feedback/experience.
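For completeness, the re-publish step can also be scripted. Here is a minimal sketch using the tableauserverclient Python library; the server URL, credentials, project id, and workbook file name are placeholders.

```python
# Sketch: re-publish a workbook after the driver has been installed on the
# server, using tableauserverclient (pip install tableauserverclient).
# URL, credentials, project id, and file name are placeholders.
import tableauserverclient as TSC

tableau_auth = TSC.TableauAuth("admin_user", "password", site_id="")  # placeholders
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(tableau_auth):
    workbook = TSC.WorkbookItem(project_id="my-project-id")           # placeholder
    server.workbooks.publish(
        workbook,
        "spark_report.twbx",                    # the report using the Databricks connection
        mode=TSC.Server.PublishMode.Overwrite,  # replace the previously failing copy
    )
```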
We are using the Maven plugin org.sonarsource.scanner.maven.sonar-maven-plugin to store our reports in our SonarQube instance. For company reasons, our CI server is in a different network zone than this instance, and the Postgres default port is closed. I wonder if there is an option to store the reports in a different way than having them written directly to the database via JDBC, as opening ports is a tedious task here ;)
Furthermore, we also have some older pieces of software that need to be analyzed with a local Sonar runner instance, and the same question applies there (is there another way to store the reports?).
Since version 5.2, the scanners no longer connect to the database; they send the analysis report to the SonarQube web server over HTTP. So the easiest/safest course of action here would be to upgrade to the latest LTS (at the time of writing, 5.6).
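That means the only connectivity the CI box needs is HTTP(S) access to the server configured in sonar.host.url. A quick, hedged way to verify that from the CI box before running the scanner (the URL is a placeholder):

```python
# Sanity check: since SonarQube 5.2 the scanner talks only to the web server
# over HTTP(S) (sonar.host.url), not to the database, so this is the only
# connectivity the CI box needs. URL below is a placeholder.
import requests

SONAR_URL = "http://sonarqube.example.com:9000"  # placeholder

resp = requests.get(f"{SONAR_URL}/api/server/version", timeout=5)
resp.raise_for_status()
print(f"SonarQube reachable, server version {resp.text}")
```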
I am trying to create a "one-click solution" with a Hadoop cluster, an Ambari server, and Talend, via Apache Brooklyn in the cloud.
I can create all of these components, but now I have to connect them.
I am able to create the project/connection between the Ambari server and Talend manually: I have the URL of the Ambari server, so I can open Talend and create the connection with the Hadoop cluster using Talend's wizard.
The question is: is there any way to do it without opening Talend? I mean, manually creating the files that are needed and placing them in the corresponding folders. If so, which files would I need to create, and what should their content be?
I'm not familiar with Talend, but a few Google searches, as well as this answer, suggest that Talend Open Studio does not come with a REST API.
As for a configuration file, I could not find any results. So my conclusion is that this cannot be automated.
When you think about it, it actually makes sense, as Talend Open Studio is meant to be a graphical, visual tool for building complex jobs.