Dynamically Refresh Schema in SSAS tabular cube

Is there any command to refresh the cube schema (structure) every day before processing the SSAS tabular cube?
Requirement: a few column names in an object keep changing (being renamed), so if we rename those columns in the DB view, the change should automatically be reflected in the cube after overnight processing.
I tried adding ‘select * from an Object’ in the cube table properties and processed the cube.
Later, I renamed a column in the view and processed the cube again; it failed because of the different column name.
Is it possible to dynamically refresh the schema without manually changing the structure in the solution and redeploying?
Please suggest.

This answer is not going to solve your issue; however, best practice would be NOT to handle these changes in the SSAS data model. This should be a consistent layer for your users. I would handle these changes on the database or source-system end, maybe checking if a column exists and, if not, adding a dummy column or some other process.
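For illustration, here is a minimal T-SQL sketch of that idea: keep the view's column names stable for the cube even when the underlying table column is renamed. All object and column names below are hypothetical.
-- Hypothetical names: dbo.SourceTable is the table whose column gets renamed,
-- dbo.vw_CubeSource is the view the cube reads from.
IF EXISTS (SELECT 1
           FROM INFORMATION_SCHEMA.COLUMNS
           WHERE TABLE_NAME = 'SourceTable'
             AND COLUMN_NAME = 'NewColumnName')
BEGIN
    -- Re-point the view so the cube always sees the same column name.
    EXEC('ALTER VIEW dbo.vw_CubeSource AS
          SELECT NewColumnName AS StableColumnName
          FROM dbo.SourceTable');
END;
Run as part of the nightly job before processing, something like this keeps the cube's table schema unchanged even though the source column was renamed.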


How to configure a Synapse Mapping Data Flow for INSERT/UPDATE/DELETE when the destination table does not yet exist

I am trying to build a generic Mapping Data Flow to do some basic cleansing on tables in my Data Lake. It needs to work both on an ongoing basis, after data already exists in my cleansed tables, and when new tables are added (it would detect them automatically and create and populate the destination). Both the source and destination tables will be Delta tables.
The approach I have taken is to configure Sources for both my actual source and the target, and to use either JOIN transformations or EXISTS transformations to identify the new, updated and removed rows.
This works fine for INSERTs and UPDATEs; however, my issue is dealing with DELETEs when there is no data currently in the destination. Obviously there will be nothing to DELETE - that is as expected. However, because I reference the key column that will exist once data is loaded to the table, I get an error on an initial run that states:
ERROR Dataflow AppManager: name=BatchJobListener.failed, opId=xxx, message=Job 'xxx failed due to reason: DF-SINK-007 at Sink 'cleansedTableWithDeletes': Sink results in 0 output columns. Please ensure at least one column is mapped.
The overall process looks as follows:
Has anyone developed a pattern that works for a generic flow (this one is parameter-driven and ensures schema drift is accommodated), or a way to make the Data Flow think that there IS a column in the destination it can refer to, and so get past this issue?
In the Source options, check Allow no files found.
You can also provide the date dynamically in the Filter by last modified option.
Refer to https://learn.microsoft.com/en-us/azure/data-factory/data-flow-sink#sink-settings
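For clarity, the row-classification pattern described in the question can be expressed as plain T-SQL over hypothetical staging and cleansed tables (the Data Flow does the equivalent with JOIN/EXISTS transformations):
-- Inserts: source rows with no matching key in the destination.
SELECT s.* FROM stgTable s
WHERE NOT EXISTS (SELECT 1 FROM cleansedTable c WHERE c.KeyCol = s.KeyCol);
-- Updates: source rows whose key already exists in the destination.
SELECT s.* FROM stgTable s
WHERE EXISTS (SELECT 1 FROM cleansedTable c WHERE c.KeyCol = s.KeyCol);
-- Deletes: destination rows whose key no longer exists in the source.
SELECT c.* FROM cleansedTable c
WHERE NOT EXISTS (SELECT 1 FROM stgTable s WHERE s.KeyCol = c.KeyCol);
The error in the question arises because, on the first run, the destination side of this comparison does not yet have the key column to refer to.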

How to call a SAS dataset by its label, or where to check its name

I have a problem with SAS Enterprise Guide, which runs on my client's server.
I do not have access to the libraries, so in order to use the datasets, the only thing we can do is store them on the local C: disk of the computer and drag them into SAS.
We cannot create libraries because the server does not read local paths.
Once you drag a table, let's call it "mydata", into SAS, the table is automatically renamed to something like "mydata9865", with random numbers at the end, and "mydata" becomes its label.
If you right-click the table and go to properties, you can't find the name of the table, just the label.
The only way I found to check the real name of the dataset is to open the Query Builder and check the name in the code preview.
The problem is that I am dealing with tables of millions of records and the machine I am using is very slow, so whenever I want to open the Query Builder just to check the table's name, it sometimes takes as long as an hour.
I am not a SAS expert, so I am sure there is a smarter way to do this. Is it possible, for instance, to use the table by calling it with its label?
data mydata2;
set mydata;
run;
instead of
set mydata9865?
Or is there some place I can rapidly check the name of the table without going through the query builder?
I tried to Google it but couldn't find anything; I hope someone will be able to help me!
Thank you in advance
Hover the mouse pointer over a data node to see its attributes. The data set name is the File name: value.
For example:
In this example I had renamed the nodes created by two different queries to be the same (doable: yes, smart: maybe not). NOTE: A data node's Label: is not necessarily the same as its underlying data set's label metadata.
Regarding
use the table by calling it with its label?
Two nodes can have the same label, and that is a situation that defeats this approach.
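If hovering is inconvenient, a quick programmatic check is also possible. A minimal SAS sketch, assuming the dragged-in data ends up in an assigned library such as WORK (and keeping in mind that the node label may differ from the data set's label metadata):
/* List member names and labels across all assigned libraries,
   so the real data set name behind a label can be looked up. */
proc sql;
  select libname, memname, memlabel
  from dictionary.tables
  where memtype = 'DATA';
quit;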
Use the COPY task to upload your data explicitly. It sounds like you're not adding your data to the project properly, so SAS automatically assigns a name; that would not happen if you explicitly imported or loaded your data.
Problem solved! I should have simply uploaded the data to the server with Tasks->Data->Upload Data Sets to Server, but I didn't know about this task, so I didn't know it was possible to do at all!
https://communities.sas.com/t5/SAS-Enterprise-Guide/Importing-sas-data-sets-from-C-drive-into-SAS-EG-not-possible/td-p/135184
Thank you everybody for your help!

How to update a table using schema compare while data is in the table?

Using Visual Studio Enterprise 2015
I would like to use schema compare to update a bunch of table changes from a test environment to my local one.
I'm getting the error:
Rows were detected. The schema update is terminating because data loss
might occur.
So this is saying I have data in the tables I want to update, and I could lose data if I made the table changes. But I'm going to do a data compare afterward to get the updated data as well. How can I override the above error and force the changes? Or do I have to truncate the tables with data in them first?
Thank you in advance
I found the answer in the settings.
Click on the Options cog wheel that's next to the Compare and Update buttons.
Next, click on the General tab and then uncheck "Block on possible data loss".
Hope this helps someone in the future.
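If you ever script the same deployment outside Visual Studio, the equivalent switch is the BlockOnPossibleDataLoss publish property on the SqlPackage command line, for example (the dacpac name and connection string are placeholders):
SqlPackage.exe /Action:Publish /SourceFile:"MyDb.dacpac" /TargetConnectionString:"..." /p:BlockOnPossibleDataLoss=False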

Is it possible to prevent the SQL Producer from overwriting just one of the table's columns?

Scenario: A computed property needs to be available for RAW methods. The IsComputed property set in the model will not work, as its value will not be available to RAW methods.
Attempted Solution: Create a computed column directly on the SQL table, as opposed to setting the IsComputed property in the model, and specify that CodeFluent Entities not overwrite the computed column. I would then expect the BOM to read the computed SQL field no differently than if it were a normal database field.
Problem: I can't figure out how to prevent CodeFluent Entities from overwriting the computed column. I attempted to use the production flags, as well as setting produce="false" for the property in the .cfp. Neither worked.
Question: Is it possible to prevent CodeFluent Entities from overwriting my computed column, and if so, how?
The solution you're looking for is here.
You can execute whatever custom T-SQL scripts you like; the only requirement is to give the script a specific name so the Producer knows when to execute it.
i.e. if you want your custom script to execute after the tables are generated, name your script
after_[ProjectName]_tables.
Save your custom T-SQL file alongside the CodeFluent-generated files and build the project.
In my specific case, I had to enable a full-text index on one of my table columns. I wrote the SQL script for that functionality and saved it as
`after_[ProjectName]_relations_add`
Here's how they look in my file directory:
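As for the script itself, a hypothetical example of what its body might contain (table, column, key-index and catalog names are all placeholders):
-- Runs after the producer has generated the tables/relations.
IF NOT EXISTS (SELECT 1 FROM sys.fulltext_catalogs WHERE name = 'ftCatalog')
    CREATE FULLTEXT CATALOG ftCatalog AS DEFAULT;
GO
-- A full-text index requires a unique key index on the table (here PK_MyTable).
CREATE FULLTEXT INDEX ON dbo.MyTable (MyTextColumn)
    KEY INDEX PK_MyTable ON ftCatalog;
GO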
Alternate Solution: An alternate solution is to execute the following T-SQL script after the SQL Producer finishes generating.
-- Drop the producer-generated column and re-create it as a computed column.
ALTER TABLE PunchCard DROP COLUMN PunchCard_CompanyCodeCalculated
GO
-- The computed column falls back to the base company code when no adjusted value exists.
ALTER TABLE PunchCard
ADD PunchCard_CompanyCodeCalculated AS CASE
        WHEN PunchCard_CompanyCodeAdjusted IS NOT NULL THEN PunchCard_CompanyCodeAdjusted
        ELSE PunchCard_CompanyCode
    END
GO
Additional Configuration Needed to Make the Solution Work: In order for this solution to work, one must also configure the BOM so that it does not attempt to save the data associated with the computed columns. This can be done through the Model, using the advanced properties. In my case I selected the CompanyCodeCalculated property, went to Advanced Settings, and set the Save setting to False.
Question: Somewhere in the Knowledge Center there is a passing reference on how to automate the execution of SQL scripts after the SQL Producer finishes, but I cannot find it. Anybody know how this is done?
Post Usage Comments: Just wanted to let people know I implemented this approach and am so far happy with the results.

How to figure out which columns in the fact table are used for calculating measures in an OLAP cube?

I have to verify that the OLAP cube data and the data from the relational tables the cube is built from are correct.
I will do so by writing T-SQL queries and comparing the values with those of the cube.
But I got stuck when trying to determine which columns are used for the measures. How do I figure out which columns are used for measures?
Help appreciated!
You need to look at the cube metadata.
For SSAS 2005, take a look at the DSV (data source view) and the mappings to dim and fact table values behind the scenes. This should allow you to see what is going on. If you don't have a project, you can reverse-engineer one using the 'Import Analysis Services template' (or some such) option from the New Project dialog in BIDS.
Calculated measures are defined in the cube script. If you have a reverse-engineered cube or cube project, you can open the cube and see this on the 'Calculations' tab.
For AS2000 you can open the cubes on the server (assuming sufficient permissions) and look at the mappings there. There is a tool called OLAPScribe that will help you do this for AS2000. Alternatively, you can run a trace on the source database and capture the SQL generated by the cube as it is processed.
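On newer versions (SSAS 2008 and later), where DMV queries are available, a quick alternative to reverse-engineering the project is to list the measures directly from an MDX query window; the EXPRESSION column is populated for calculated measures:
-- Dynamic management view over the cube's measure metadata.
SELECT CUBE_NAME, MEASUREGROUP_NAME, MEASURE_NAME, EXPRESSION
FROM $SYSTEM.MDSCHEMA_MEASURES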