SSAS tabular model - processing history

I'm looking for a clever solution to store information about the tabular model processing history. I tried to use extended events tracking for that, but I don't know how to find the model name in those logs.
Any ideas would be really helpful.
Thanks in advance!

You can query the $SYSTEM.MDSCHEMA_CUBES DMV and check the LAST_DATA_UPDATE column to find when the Tabular model was last processed. This only returns results for the model that you're in, so filtering by model isn't necessary. If you're looking to use XMLA, you can execute the example request below as an XMLA query in SSMS. Like querying the previously mentioned DMV directly, this will run in the context of the model you're connected to.
<Discover xmlns="urn:schemas-microsoft-com:xml-analysis">
  <RequestType>MDSCHEMA_CUBES</RequestType>
  <Restrictions />
  <Properties>
    <PropertyList>
      <Catalog>YourTabularModelName</Catalog>
    </PropertyList>
  </Properties>
</Discover>
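
If you prefer to check this from code, below is a minimal C# sketch (not part of the original answer) that runs the same DMV over ADOMD.NET; the server name, model name, and connection details are assumptions.

using System;
using Microsoft.AnalysisServices.AdomdClient;

class LastProcessedCheck
{
    static void Main()
    {
        // Assumed server and model; replace with your own instance and catalog.
        var connectionString = "Data Source=localhost;Initial Catalog=YourTabularModelName";

        using (var conn = new AdomdConnection(connectionString))
        {
            conn.Open();
            using (var cmd = new AdomdCommand(
                "SELECT [CATALOG_NAME], [CUBE_NAME], [LAST_DATA_UPDATE] FROM $SYSTEM.MDSCHEMA_CUBES",
                conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // LAST_DATA_UPDATE shows when the model's data was last refreshed.
                    Console.WriteLine($"{reader["CUBE_NAME"]}: {reader["LAST_DATA_UPDATE"]}");
                }
            }
        }
    }
}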

Related

Automatically map contents of REST JSON body as flat table in Data Flow

With the Copy Data activity it is possible to retrieve data from a REST call (an array of flat JSON objects, similar to OData) and copy the contents to a flat table, keeping the data types from the source, without having to set the schema for that specific data.
When I try to recreate this with a Data Flow, I can't get it to work. When I check the Data Preview of my source, I get a hierarchy with a body (containing my OData-like data) and a header. And if I send that to my sink (Avro), it will be saved in this same hierarchical structure (including the header). I know I can fix this manually by using a Select transformation (body.column1, body.column2, etc.), but I want to make my Data Flow dynamic so I'm able to use it with multiple tables/endpoints.
So I receive it like this with my REST source:
link
And I want it to be like this at my Sink without hardcoding my schema:
link
The only workaround I can come up with is retrieving the data using Copy Data, putting it somewhere temporarily, and then using my data flow to further transform the data. Is there an easier way to do this? I can't imagine that I'm the only one who has this issue.
Hopefully it's clear and somebody is able to help. Thank you very much in advance.
The data flow projection will get its schema from the API, including the body and header. Hence, when you use auto mapping, everything is going to be saved.
Below are workarounds you can consider:
As you mentioned, use Copy Data first and then a data flow to further transform the data.
Use Select or Derived Column transformations to transform your data and get the column names you need, and then finally use the sink. For this you can use column pattern matching syntax, so that one condition can match multiple columns to transform.
Check the link below to learn about column pattern mappings.
https://learn.microsoft.com/en-us/azure/data-factory/concepts-data-flow-column-pattern

OData query in Power BI fails on computed column

I've been following the tutorial here:
Tutorial
I can get the OData URI put together just fine and get a JSON response from Azure DevOps that looks exactly like I expect. However, when I take that same URI and use it as the OData source in Power BI, I get the error:
Details: "OData: The property 'PartiallySuccessfulRate' does not exist on type 'Microsoft.VisualStudio.Services.Analytics.Model.PipelineRun'. Make sure to only use property names that are defined by the type or mark the type as open type."
If I remove the computed columns, the query works fine in Power BI.
Is there a way to make Power BI accept the computed columns? Or do I have to do the calculation in Power BI?
I would rather do these small calculations in Power BI. I mostly use OData queries for Dynamics as well. My main purpose for the OData query is to fetch only the required data, not millions of records.
Once that purpose is served, I let Power BI do some calculations for me.
That way it is also easier for my team to collaborate, since they can update or change things easily.

SuccessFactors status history of a Job Application

I'm extracting data from SAP Success Factors via OData API.
And I need the status history of a JobApplication entity, i.e. how and when the application's status has been changed.
Unfortunately I cannot find any documentation about it, and it looks like I cannot extract this data.
Do you know if there is such information and how I can extract it?
My second option is to extract the data from the Integration Centre. Does it provide such information?
Thanks
You could use the AuditTrail entities.
In your case, the entity is JobApplicationStatusAuditTrail.
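
For reference, here is a minimal C# sketch of reading that entity set over the SuccessFactors OData v2 API (not part of the original answer; the tenant host, credentials, and query options are assumptions):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class AuditTrailFetch
{
    static async Task Main()
    {
        // Assumed tenant API host and basic-auth credentials; replace with your own.
        var baseUrl = "https://<your-sf-api-host>/odata/v2";
        var credentials = Convert.ToBase64String(
            Encoding.ASCII.GetBytes("username@companyId:password"));

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", credentials);

            // $format=json avoids the default Atom/XML payload; $top just limits the sample.
            var response = await client.GetAsync(
                $"{baseUrl}/JobApplicationStatusAuditTrail?$format=json&$top=10");
            response.EnsureSuccessStatusCode();

            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}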

How to control data failures in Azure Data Factory Pipelines?

I receive an error from time to time due to incompatible data in my source dataset compared to my target dataset. I would like to control the action that the pipeline takes based on error type, maybe outputting or dropping those particular rows while completing everything else. Is that possible? Furthermore, is it possible to get hold of the actual failing line(s) from Data Factory in some simple way, without accessing and searching the actual source dataset?
Copy activity encountered a user error at Sink side: ErrorCode=UserErrorInvalidDataValue,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Column 'Timestamp' contains an invalid value '11667'. Cannot convert '11667' to type 'DateTimeOffset'.,Source=Microsoft.DataTransfer.Common,''Type=System.FormatException,Message=String was not recognized as a valid DateTime.,Source=mscorlib,'.
Thanks
I think you've hit a fairly common problem and limitation within ADF. Although the datasets you define with your JSON allow ADF to understand the structure of the data, that is all they provide: just the structure. The orchestration tool can't do anything to transform or manipulate the data as part of the activity processing.
To answer your question directly, it's certainly possible. But you need to break out the C# and use ADF's extensibility functionality to deal with your bad rows before passing the data to the final destination.
I suggest you expand your data factory to include a custom activity where you can build some lower-level cleaning processes to divert the bad rows as described.
This is an approach we often take, as not all data is perfect (I wish) and ETL or ELT alone doesn't work. I prefer the acronym ECLT, where the 'C' stands for clean (or cleanse, prepare, etc.). This certainly applies to ADF because the service doesn't have its own compute or an SSIS-style data flow engine.
So...
In terms of how to do this. First I recommend you check out this blog post on creating ADF custom activities. Link:
https://www.purplefrogsystems.com/paul/2016/11/creating-azure-data-factory-custom-activities/
Then, within your C# class that implements IDotNetActivity, do something like the below.
public IDictionary<string, string> Execute(
    IEnumerable<LinkedService> linkedServices,
    IEnumerable<Dataset> datasets,
    Activity activity,
    IActivityLogger logger)
{
    // Requires System.IO and System.Collections.Generic in addition to the ADF SDK types.
    // YourSource and YourDestination are placeholders for however you open the input/output streams.
    using (StreamReader vReader = new StreamReader(YourSource))
    {
        using (StreamWriter vWriter = new StreamWriter(YourDestination))
        {
            while (!vReader.EndOfStream)
            {
                string vLine = vReader.ReadLine();

                // Data transform logic: validate the row here, divert or log bad rows,
                // and only write the rows that pass your checks.
                vWriter.WriteLine(vLine);
            }
        }
    }

    // A custom activity must return a (possibly empty) name/value dictionary.
    return new Dictionary<string, string>();
}
You get the idea. Build your own SSIS data flow!
Then write out your clean row as an output dataset, which can be the input for your next ADF activity. Either with multiple pipelines, or as chained activities within a single pipeline.
This is the only way you will get ADF to deal with your bad data in the current service offerings.
Hope this helps

Breeze column based security

I have a "web forms", "database first enitity" project using Breeze. I have a "People" table that include sensitive data (e.g. SSN#). At the moment I have an IQueryable web api for GetPeople.
The current page I'm working on is a "Manage people" screen, but it is not meant for editing or viewing of SSN#'s. I think I know how to use the BeforeSaveEntity to make sure that the user won't be able to save SSN changes, but is there any way to not pass the SSN#s to the client?
Note: I'd prefer to use only one EDMX file. Right now the only way I can see to accomplish this is to have a "View" in the database for each set of data I want to pass to the client that is not an exact match of the table.
You can also use JSON.NET serialization attributes to suppress serialization of the SSN from the server to the client. See the JSON.NET documentation on serialization attributes.
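
For example, a minimal sketch, assuming the server-side model is a plain class serialized by Json.NET and a hypothetical Person type; marking the property with [JsonIgnore] keeps it out of the payload Breeze sends to the client:

using Newtonsoft.Json;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }

    // Hypothetical property name; never serialized to the client.
    [JsonIgnore]
    public string Ssn { get; set; }
}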
Separate your tables. (For now, this is the only solution that comes to mind.)
Put your SSN data in another table with a related key (1 to 1 relation) and the problem will be solved. (Just handle your save in case you need it.)
If you are using Breeze this will work, because you have almost no control over Breeze API interaction after the user logs in, so it is safer to separate your data. (Breeze is usually great, but in this case it's harmful.)
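
To illustrate the table split, here is a minimal sketch using EF code-first style classes (the class and property names are hypothetical; with a database-first EDMX you would model the same 1:1 split in the database itself):

using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }

    // No SSN here, so a GetPeople query never exposes it.
    public virtual PersonSensitive Sensitive { get; set; }
}

public class PersonSensitive
{
    // Shared primary key gives the 1:1 relation to Person.
    [Key, ForeignKey("Person")]
    public int PersonId { get; set; }

    public string Ssn { get; set; }
    public virtual Person Person { get; set; }
}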