I have a tabular cube deployed on a server in In-Memory mode. When I tried to change it to DirectQuery mode, I got this error:
"Row Level Security is not supported in a database with DirectQuery property.
An error occurred when loading the DimensionPermission.
(Microsoft.AnalysisServices) "
I connected to the cube in SSMS and changed it to DirectQuery by right-clicking the cube -> Properties.
Please help to resolve this issue.
Regards
Rajnish
Which SSAS version are you using? If you are using SSAS 2014 or a lower version, Row-Level Security is not supported in a database in DirectQuery mode.
In those versions, DirectQuery can only be used if there are no calculated columns in the tabular model.
If you are using SSAS 2016 or a higher version, you should check the remaining restrictions for a database in DirectQuery mode. See the restrictions in the document below:
https://learn.microsoft.com/en-us/sql/analysis-services/tabular-models/directquery-mode-ssas-tabular
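If you are not sure what the server is running at, you can check the deployed database's compatibility level with a DMV query from an MDX query window in SSMS; a minimal sketch (it lists every deployed database with its level, e.g. 1103 for SSAS 2014 or 1200 for SSAS 2016):

SELECT [CATALOG_NAME], [COMPATIBILITY_LEVEL]
FROM $SYSTEM.DBSCHEMA_CATALOGS

At compatibility level 1200 and higher, row-level security can be combined with DirectQuery, so upgrading the model's compatibility level (and, if needed, the server) is the usual way past this error.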
I have a simple test setup:
A SQL Server (2017) with one database containing one table
A SQL Server Analysis Services instance (2017, compatibility level 1400)
I have created a simple tabular model in Visual Studio with one data source (the database with its one table) and one table
This is my Power Query:
let
    // Structured data source of the 1400 model: instance MYCOMPUTER\SQLDEV, database SampleDatabase
    Source = #"SQL/MYCOMPUTER\SQLDEV;SampleDatabase",
    // Navigate to dbo.testTable and take its data
    dbo_testTable = Source{[Schema="dbo",Item="testTable"]}[Data]
in
    dbo_testTable
I have deployed this tabular model to my SSAS instance...
Now my question: if the table in my SQL Server is updated (records are added), how can I see these updates reflected in the tabular model? Do I have to reprocess the tabular model somehow?
I have tried "Process Table" in SSMS on the tabular model table, but it does not pick up the new records...
Processing a table processes whichever dimension or fact table you selected, and it only reads data from the database objects used by that table. What processing is actually performed depends on the processing type you chose. As for the question in the answer you posted: Process Full on an entire tabular model removes all data from the deployed model, then reloads everything and processes the hierarchies and measures as well. So yes, after processing with that option the new data from the underlying tables will be in the model for every table in it. There are multiple processing types that can be run at the database, table, or partition level; you can view additional details on these in the Microsoft reference.
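If you would rather script this than click through SSMS dialogs, a model at compatibility level 1200 or higher (yours is 1400) can be processed with a TMSL refresh command from an XMLA query window in SSMS; a minimal sketch, assuming the deployed database is named SampleTabularModel:

{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "SampleTabularModel",
        "table": "testTable"
      }
    ]
  }
}

Dropping the "table" entry refreshes the whole database, which is the scripted equivalent of the Process Database -> Process Full option mentioned below, and the same script can be scheduled from a SQL Server Agent job.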
I have found that, at the database level in the SSAS instance, there is a "Process Database" option with a "Process Full" mode, which does update all the underlying tables.
But maybe there is a better way to do this?
I am evaluating OrientDB as a replacement for MS SQL Server. One of the SQL Server tables I need to import into OrientDB contains time-series data whose value column uses the SQL_VARIANT data type. I'm struggling to identify the best data type to use for the equivalent property in a new OrientDB vertex. I'm hesitant to convert it to a STRING, but I don't see an equivalent variant type. Any recommendations?
OrientDB Teleporter is a tool that synchronizes an RDBMS to an OrientDB database. You can use Teleporter to:
Import your existing RDBMS into OrientDB
Keep your OrientDB database synchronized with changes from the RDBMS. In this case the RDBMS remains the primary database and the OrientDB database is a synchronized copy. Synchronization is one-way, so changes made in the OrientDB database are not propagated back to the RDBMS
Teleporter is fully compatible with several RDBMSs that provide a JDBC driver: it has been successfully tested with Oracle, SQL Server, MySQL, PostgreSQL and HyperSQL. Teleporter manages all the necessary type conversions between the different DBMSs and imports all your data as a graph in OrientDB.
NOTE: This feature is available for both the OrientDB Enterprise Edition and the OrientDB Community Edition. But beware: with the Community Edition you can migrate your source relational database, but you cannot use the synchronization feature, which is only available in the Enterprise Edition.
How Teleporter works
Teleporter looks at the specific DBMS metadata in order to perform a logical inference of the source DB schema and build a corresponding graph model. The data import phase is then performed.
Teleporter has a pluggable import strategy. Two strategies are provided out of the box:
naive strategy, the simplest one
naive-aggregate strategy, which performs a "naive" import of the data source: the schema is translated semi-directly into a corresponding, coherent graph model, using an aggregation policy on junction tables of dimension equal to 2
To learn more about the two execution strategies, see the Teleporter documentation; an example invocation is sketched below.
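As a sketch of what an invocation looks like (the JDBC connection details are placeholders for your own SQL Server instance; verify the exact flags against the documentation linked below):

./oteleporter.sh -jdriver sqlserver \
                 -jurl "jdbc:sqlserver://localhost:1433;databaseName=SourceDb" \
                 -juser username \
                 -jpasswd password \
                 -ourl plocal:../databases/SourceDb \
                 -s naive-aggregate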
For more information: http://orientdb.com/docs/3.0.x/teleporter/Teleporter-Home.html
Hope it helps
Regards
We have developed a model using the Tabular Object Model (TOM), <= 3.5 GB in size, and built a few Tableau dashboards on top of this model.
Each dashboard is built by dragging multiple sheets into it. All the sheets in a given dashboard fetch data from one fact table in the model (which of course has relationships with Date and other related dimensions).
Now, when we interact with a Tableau dashboard, we see performance degradation. When we checked SQL Profiler, Tableau was generating a huge query for almost every interaction with the dashboard.
We examined the huge query and observed that it includes the DAX queries for almost all the measures in the fact tables, irrespective of whether a given fact table is used in the dashboard or not.
We have verified the filter settings in the dashboard; they apply only to the sheets dragged into our dashboard, so there is no question of visualizations changing in other dashboards.
Even so, Tableau still creates a huge query incorporating all the DAX queries, and this results in a performance impact.
Is there any way we can restrict this behavior?
In case anyone else is having this issue: this is tied to Tableau not actually supporting SSAS Tabular. The connector you are using is for SSAS Multidimensional, so Tableau generates MDX queries against the DAX-based Tabular model (illustrated below).
This is also evident from Tableau's own tech specs page:
https://www.tableau.com/products/techspecs
"Microsoft SQL Server Analysis Services 2008 SP4 or later, multi-dimensional mode only*"
Tableau's website at https://www.tableau.com/products/techspecs clearly states support for
"Microsoft SQL Server Analysis Services 2005 or later, non-tabular mode only*(incl. support for Kerberos)"
I am currently working with Tableau and SSAS Tabular. Can somebody please help shed some light on Tableau behaviour:
When selecting a measure from the model, some dimensions are greyed out. These dimensions are, however, connected to the measure's source table in the data model. Why does Tableau show them greyed out, and what can I do to correct this behaviour?
We have found that Tableau is not able to connect correctly to the SSAS tabular model when it is deployed to SSAS 2016 at compatibility level 1200. After converting the model to compatibility level 1103 (SSAS 2014), while still deploying it to SSAS 2016, Tableau is able to connect to the tabular model correctly.
I want to connect two databases and establish a relationship between them in Tableau: one from SQL Server and the other from a Microsoft Excel sheet. How do I do that?
I have googled a lot for this but could not find a suitable answer.
You are speaking about Data Blending.
For connecting cross-database data, Cross-Database Querying is a flagship upgrade in Tableau 10.0.
However, you cannot use cross-database joins with the connection types below:
Tableau Server
Firebird
Google Analytics
Microsoft Analysis Services
Microsoft PowerPivot
OData
Oracle Essbase
Salesforce
SAP BW
Splunk
Teradata OLAP Connector
You just need to connect to each data source separately and make sure they have the same column names. When creating a sheet, you will see a chain icon on the linked fields as you switch between data sources.
Do note that this is not a proper join but just blended data; it would be best to create another table in your SQL database for the Excel sheet.
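If you go that route, a minimal sketch in T-SQL (every table and column name here is made up for illustration): load the Excel sheet into a staging table, do a real join in SQL Server, and point Tableau at that single source.

-- Hypothetical staging table for the Excel data
CREATE TABLE dbo.ExcelSheet (
    CustomerID INT,
    Region     NVARCHAR(50)
);

-- After loading it (e.g. via the Import/Export wizard), join it to the SQL table:
SELECT s.*, e.Region
FROM dbo.SalesData AS s
INNER JOIN dbo.ExcelSheet AS e
    ON e.CustomerID = s.CustomerID;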