How to publish blended data sources? - tableau-api

I have a workbook with blended data sources and calculated fields referencing those data sources.
I get an error alert when publishing to Tableau Server.
How do I publish the data sources?
I am using Tableau 2019.

Create an extract of the blended data source, and publish that extract to the server. This extract will include all the calculated fields that are included in the current workbook.

Generally, for this type of use case, I suggest publishing your data extracts separately. They can contain your calculated fields. Publishing the calculated fields is actually more performant anyway, since many of them become materialised in the extract.
You can connect to both server data sources in Desktop and perform the data blending within the Tableau workbook using the published data sources. You can create additional calculated fields, including those based on the data blend, within the workbook. You're then able to publish the workbook, and those calculated fields will remain at the workbook level, not in the data source.
Other users connecting to the data sources will only see the calculations published into the extract, not the calculations published with the workbook.
Hope that makes sense.
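If you'd rather script the publish step than use Desktop's Server menu, a minimal sketch with Tableau's tableauserverclient Python library might look like the following. The server URL, site, project ID, credentials, and file name are all placeholders you'd replace with your own:

```python
import tableauserverclient as TSC

# Hypothetical credentials and server - substitute your own values.
auth = TSC.TableauAuth('username', 'password', site_id='my_site')
server = TSC.Server('https://tableau.example.com', use_server_version=True)

with server.auth.sign_in(auth):
    # Publish the extracted blend source (.tdsx) into a project,
    # overwriting any existing data source with the same name.
    ds_item = TSC.DatasourceItem(project_id='my-project-id')
    ds_item = server.datasources.publish(
        ds_item, 'blended_source.tdsx', TSC.Server.PublishMode.Overwrite)
    print('Published:', ds_item.name)
```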

Related

Mapping data flow in Synapse with a parameterized dynamic source needs to import the projection dynamically

I am trying to build a cloud data warehouse where I have staged the on-prem tables as Parquet files in the data lake.
I implemented a metadata-driven incremental load.
In the data flow above, I am trying to implement a merge query, passing the table name as a parameter so that the data flow dynamically locates the respective Parquet files for the full and incremental data, and then goes through some ETL steps to implement the merge.
The merge query is working fine, but I found that the projection is not correct. Since the source files are dynamic, I also want to "import projection" dynamically at runtime, so that the same data flow can be used to implement the merge query for any table.
In the picture, you can see it is showing 104 columns (a static projection imported at development time), when for this table it should actually be 38 columns.
Can I assign the projection dynamically (i.e., at runtime)? If so, how?
Or does anyone have any suggestions regarding this?
Thanks,
Muntasir Joarder
Enable schema drift in your source transformation when the metadata changes often. This removes or adds columns at runtime.
The source projection displays what was imported at design time, but the actual column set adjusts to the source schema at runtime.
Refer to this document for more details with examples.

How to copy Tableau Data Extract logic?

Someone in my org created a Data Extract. There is an issue in one of the worksheets that uses it, and we suspect it's due to a mistake in how the Union was built.
But since it's a Data Extract, I can't see the UI for the data merge. Is there any way to take a current Data Extract and view the logic that created it?
Download the extract from the server (I'm assuming you're using Tableau Server), then open that extract in Tableau Desktop. You should be able to see its details.
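The download step can also be scripted. Here is a rough sketch using the tableauserverclient Python library; the credentials, URL, and data source name are assumptions:

```python
import tableauserverclient as TSC

auth = TSC.TableauAuth('username', 'password', site_id='my_site')
server = TSC.Server('https://tableau.example.com', use_server_version=True)

with server.auth.sign_in(auth):
    # Find the published data source by name, then download it.
    # It arrives locally as a .tdsx containing the extract.
    all_ds, _ = server.datasources.get()
    target = next(ds for ds in all_ds if ds.name == 'Suspect Extract')
    path = server.datasources.download(target.id)
    print('Saved to', path)
```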
Before going too deep into extract details, note that extracts are not intended to be permanent systems of record for data - just an efficient way to work with query results for optimized reporting. So in general, you should always be able to throw away the extract and look at the original source - or recreate the extract on command. But life isn't always perfect so ...
If you use Tableau Desktop to look at your worksheet, and look at the data source icon at the top of the data pane in the left sidebar, do you see an icon for your data source that looks like two databases with one on top of (shadowing) the other? If so, you can right-click on the data source icon and view its properties to see the source database table or file path. You can then even try disabling the extract to view the original source data.
If instead you see a single database icon, you have a "naked" extract where you've discarded the reference to the original source (unless it is stored in the Catalog mentioned below).
If your organization purchased the Data Management Add-on for Tableau Server (strongly recommended), and your data source is published to Tableau Server, you can trace its history and origin by exploring the Tableau Catalog. That is especially valuable if the extract was built by a Tableau Prep Flow.
If instead, someone built the extract another way, say by writing a custom app using the Tableau Data Extract API, then the answer is to find that program.
One last point: in recent versions of Tableau, extracts are stored in an efficient relational database file format called Hyper. A Hyper extract can either hold a single table (say, serializing the results of a query that joins multiple tables) or multiple tables (say, caching individual tables and deferring the join until later).
That may not be relevant to your question, but could turn out to matter as you reverse engineer how the extract was created.
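If you want to check whether a downloaded extract is single-table or multi-table, one option is the Tableau Hyper API. A minimal sketch, assuming you have already pulled the .hyper file out of the downloaded .tdsx (which is a zip archive); the file name is a placeholder:

```python
from tableauhyperapi import Connection, HyperProcess, Telemetry

# List every schema, table, and column in the extract to see how it
# was assembled (one flattened table vs. several deferred-join tables).
with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(endpoint=hyper.endpoint, database='extract.hyper') as conn:
        for schema in conn.catalog.get_schema_names():
            for table in conn.catalog.get_table_names(schema):
                cols = conn.catalog.get_table_definition(table).columns
                print(table, '->', [str(c.name) for c in cols])
```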

Changing Datasource to SQL Server in TABLEAU

We are changing our data source from MS Access to SQL Server. Two questions:
Will we need to redevelop the worksheets and dashboards?
Is there a way to point an existing worksheet at the new data source?
The tables are the same between MS Access and SQL Server.
Thanks
No, you don't need to redevelop all the worksheets and dashboards; just change the data source you use in Tableau. Create your new data source, which hopefully has very similar field names and data types to your original data source. Then go to the Data menu and choose Replace Data Source. Tableau will change your existing worksheets to reference the new data source.
Once that is done, go to each of your worksheets and fix any problems; usually you'll see a few fields that differ for some reason. You can replace the references to those fields if necessary (right-click on any dimension or measure and choose Replace References). You might also need to do some other minor surgery: delete old fields, repair a group, or something similar.
It should then be all good. When you're sure you're done with the old data source, you can close it from the Data menu.

Programmatically replace data table or data field in Tableau

In my company we have 1K+ Tableau workbooks, all using the same Vertica data source via multi-table connections or custom SQL. Often we end up in a situation where reports stop working because the underlying data source changed: a table was renamed, a field was removed, etc.
My question is how we can proactively react to these changes.
Can we edit the source code of the Tableau workbooks to batch-replace deprecated query parts?
Or can we monitor which data tables are used in a workbook, with or without parsing the workbook's source code, to create an alerting system?
Thanks
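One starting point for the monitoring side of this, sketched here as a suggestion rather than a tested solution, is Tableau's open-source Document API (the tableaudocumentapi Python package), which parses workbook XML and exposes each data source's connection details. The directory path is a placeholder:

```python
from glob import glob
from tableaudocumentapi import Workbook

# Inventory the server/database each workbook points at, so schema
# changes in Vertica can be cross-checked against this list.
for path in glob('workbooks/**/*.twb*', recursive=True):
    wb = Workbook(path)
    for ds in wb.datasources:
        for conn in ds.connections:
            print(path, conn.dbclass, conn.server, conn.dbname)
```

Note that, as far as I know, this API does not surface custom SQL text, so batch-rewriting deprecated query parts would likely mean editing the workbook XML directly.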

How to prevent MicroStrategy from pulling all data

We are building a dashboard with many reports. The relationships between tables are defined in MicroStrategy. We found that MicroStrategy is not generating different SQL for different reports. It is pulling all the data from the database (46 million rows) and then post-processing it to generate the individual reports.
This takes a lot of time, and it does not use the database's query engine.
How can we configure MicroStrategy so that it generates a different query for each report and collects only the data required for that report, NOT all the data?
One way to do that is to use free-form SQL, but we want to keep the drag-and-drop style of report building.
How can we achieve this?
We are using MicroStrategy 10.1.
From your description, it sounds like MicroStrategy is first pulling all 46 million records from the database using its SQL Engine and then applying the filtering afterwards.
If your reports have been created in MicroStrategy Developer (or Web) using attribute filters, then each report should execute SQL with explicit WHERE conditions translated from those attribute filters. For example, if you have a report with an attribute titled 'Fruit' and you want to display only apples, you would put an attribute filter on that report that keeps only rows where 'Fruit' = 'Apple'; the SQL Engine translates this into a WHERE condition when the report is executed. However, if you are applying a view filter to the report, the SQL Engine will first fetch everything and the Analytical Engine will then filter the entire dataset in memory, which is slow, especially when multiple reports run on the same dashboard.
It's important to know how you are bringing the dataset into the dashboard - is it using a cube as a dataset, or a report, or something else? There are a few ways of achieving the performance you are looking for; here are a couple:
Option 1: Develop each report in Microstrategy developer using attribute filters as desired. This would require that you have all your attribute relationships defined correctly.
Option 2: Pull all 46 million records into a cube. Use the cube as the dataset for the dashboard and then use view filters however you want on the various reports you place on the dashboard.
Options 1 + 2: You can combine both of the above. Store the entire dataset in a cube, define several reports (normal reports, not cube reports) that dynamically source from the cube using filters as required, and then add these reports to your dashboard.
These are the things I would do as first steps:
Check that your attributes and attribute relationships are defined and working
Create a test report and try to filter based on one of these attributes
Try to create a few reports, each with different filter conditions based on one of the attributes
Put these reports into the dashboard and see whether each one generates different SQL statements.
This sounds like you have either:
built the reports using view filters (which apply filtering after query execution) rather than applying the filter in the generated SQL, or
not defined attribute relationships, so the system doesn't think the filters you've defined are relevant to the fact tables containing the data.
Are you using cubes? I assume that is what you mean by executing the query once.
You need to replace the individual reports with new, regular reports, not ones made out of cubes. That's the only way.