Good Morning,
Objective: I am trying to add new columns to an SSAS Tabular Model table, with the long-term aim of programmatically making large-batch changes when needed.
Resources I've found:
https://learn.microsoft.com/en-us/sql/analysis-services/tabular-models-scripting-language-commands/create-command-tmsl
This one gives the template I've been following, but it doesn't seem to work.
What I have tried so far:
{
    "create": {
        "parentObject": {
            "database": "TabularModel_1_dev",
            "table": "TableABC"
        },
        "columns": [
            {
                "name": "New Column",
                "dataType": "string",
                "sourceColumn": "Column from SQL Source"
            }
        ]
    }
}
This first one is the truest to the example, but it returns the following error:
"The JSON DDL request failed with the following error: Unrecognized JSON property: columns. Check path 'create.columns', line 7, position 15.."
Attempt Two:
{
    "create": {
        "parentObject": {
            "database": "TabularModel_1_dev",
            "table": "TableABC"
        },
        "table": {
            "name": "Item Details by Branch",
            "columns": [
                {
                    "name": "New Column",
                    "dataType": "string",
                    "sourceColumn": "New Column"
                }
            ]
        }
    }
}
Adding the table within the child list returns an error too:
"...Cannot execute the Create command: the specified parent object cannot have a child object of type Table.."
Omitting the table within the parentObject is unsuccessful as well.
I know it's been three years since the post, but I was attempting the same thing and stumbled across this post in my quest. I ended up reaching out to Microsoft and was told that the Add Column example in their documentation was a "doc bug". In fact, you can't add just a column; you have to feed it an entire table definition via createOrReplace.
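For illustration, here is a minimal createOrReplace sketch along those lines (names taken from the question; note that the full table definition, including every existing column and typically the partitions, has to be restated, since whatever is omitted gets dropped):

{
    "createOrReplace": {
        "object": {
            "database": "TabularModel_1_dev",
            "table": "TableABC"
        },
        "table": {
            "name": "TableABC",
            "columns": [
                {
                    "name": "Existing Column",
                    "dataType": "string",
                    "sourceColumn": "Existing Column"
                },
                {
                    "name": "New Column",
                    "dataType": "string",
                    "sourceColumn": "Column from SQL Source"
                }
            ]
        }
    }
}

Here "Existing Column" stands in for whatever columns the table already has.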
I'm building a recommender system using AWS Personalize. The user-personalization recipe has three dataset inputs: interactions, user_metadata, and item_metadata. I am having trouble importing user metadata that contains a boolean field.
I created the following schema:
user_schema = {
    "type": "record",
    "name": "Users",
    "namespace": "com.amazonaws.personalize.schema",
    "fields": [
        {
            "name": "USER_ID",
            "type": "string"
        },
        {
            "name": "type",
            "type": ["null", "string"],
            "categorical": True
        },
        {
            "name": "lang",
            "type": ["null", "string"],
            "categorical": True
        },
        {
            "name": "is_active",
            "type": "boolean"
        }
    ],
    "version": "1.0"
}
The dataset CSV file content looks like:
USER_ID,type,lang,is_active
1234#gmail.com ,,geo,True
01027061015#mail.ru ,facebook,eng,True
03dadahda#gmail.com ,facebook,geo,True
040168fadw#gmail.com ,facebook,geo,False
I uploaded the CSV file to an S3 bucket.
When I try to create the dataset import job, it gives me the following exception:
InvalidInputException: An error occurred (InvalidInputException) when calling the CreateDatasetImportJob operation: Input csv has rows that do not conform to the dataset schema. Please ensure all required data fields are present and that they are of the type specified in the schema.
I tested it, and it works without the boolean field is_active. There are no NaN values in the column!
It'd be nice to have the ability to directly test whether your pandas DataFrame or CSV file conforms to a given schema, and possibly get a more detailed error message.
Does anybody know how to format the boolean field to fix this issue?
I found a solution through many trials. I checked the AWS Personalize documentation (https://docs.aws.amazon.com/personalize/latest/dg/how-it-works-dataset-schema.html#dataset-requirements), which says: boolean (values true and false must be lower case in your data).
I then tried several things, and one of them worked, but it was still the hard way to find a solution and it took hours.
Solution:
1. Convert the column in the pandas DataFrame into string (object) format.
2. Lowercase the True and False string values to get true and false.
3. Store the pandas DataFrame as a CSV file.

This results in lowercase values of true and false:
USER_ID,type,lang,is_active
1234#gmail.com ,,geo,true
01027061015#mail.ru ,facebook,eng,true
03dadahda#gmail.com ,facebook,geo,true
040168fadw#gmail.com ,facebook,geo,false
That's all! There is no need to change the "boolean" type in the schema to "string"!
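For reference, a minimal pandas sketch of those three steps (the file names are placeholders; the column name comes from the question):

import pandas as pd

df = pd.read_csv("users.csv")

# Cast the boolean column to string ("True"/"False"), then lowercase it so the
# exported CSV contains the "true"/"false" values Personalize expects.
df["is_active"] = df["is_active"].astype(str).str.lower()

df.to_csv("users_fixed.csv", index=False)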
Hopefully they'll solve this soon, since I have contacted AWS technical support about the same issue.
I am having an issue with the inline dataset for Common Data Model (CDM) in Azure Data Factory.
Simply put, everything in ADF appears to connect and read from my manifest file and entity definition, but when I click the "Data preview" button I always get "No output data". I find this bizarre, as the data can be read perfectly when using the CDM connector to the same files in Power BI. What am I doing wrong that means the data is not read into the data preview and the subsequent transformations in the mapping data flow?
My Manifest file looks as below (referring to an example entity):
{
    "$schema": "CdmManifest.cdm.json",
    "jsonSchemaSemanticVersion": "1.0.0",
    "imports": [
        {
            "corpusPath": "cdm:/foundations.cdm.json"
        }
    ],
    "manifestName": "manifestExample",
    "explanation": "example",
    "entities": [
        {
            "type": "LocalEntity",
            "entityName": "Entityname",
            "entityPath": "folder/EntityName.cdm.json/Entityname",
            "dataPartitions": [
                {
                    "location": "folder/data/Entityname/Entityname.csv",
                    "exhibitsTraits": [
                        {
                            "traitReference": "is.partition.format.CSV",
                            "arguments": [
                                {
                                    "name": "columnHeaders",
                                    "value": "true"
                                },
                                {
                                    "name": "delimiter",
                                    "value": ","
                                }
                            ]
                        }
                    ]
                }
            ]
        },
        ...
I am getting exactly the same "No output data" message. I am using the JSON option, not a manifest. If I sink the source, it moves no data, but without an error. My CDM originates from a Power BI dataflow. PowerApps works fine, but historization and privileges make it useless.
Edit:
In Microsoft's info on this preview feature we can find this screen. My guess is that the CDM the ADF source reads is not the same as the one that originates from Power BI.
I am accessing a RESTful API that pages results in groups of 50, using the HTTP connector. The REST connector doesn't seem to support client certificates, so I can't use its built-in pagination.
I have a pipeline variable called SkipIndex that defaults to 0. Inside the Until loop I have a Copy Data activity that works (HTTP source to blob sink), followed by a Set Variable activity that I am trying to use to increment this variable.
{
    "name": "Add 50 to SkipIndex",
    "type": "SetVariable",
    "dependsOn": [
        {
            "activity": "Copy next to temp",
            "dependencyConditions": [
                "Succeeded"
            ]
        }
    ],
    "userProperties": [],
    "typeProperties": {
        "variableName": "SkipIndex",
        "value": {
            "value": "50++",
            "type": "Expression"
        }
    }
}
Everything I have tried results in errors such as "The expression contains self referencing variable. A variable cannot reference itself in the expression." and the one above with 50++ causes a sink error during debug.
How can I get the Until loop to increment this variable after it retrieves data?
Agreed that the REST connector does support pagination, but not for the Client Certificate authentication type.
As for the idea behind your Until activity scenario: I am also tripped up by the limitation that a variable can't reference itself in an expression. Maybe you could work around it with a little trick: add one more variable to persist the index number.
For example, I have two variables: count and indexValue.
[Screenshots: the Until activity's condition, and the two Set Variable activities V1 and V2 inside it; see the sketch below.]
BTW, there is no 50++ usage in ADF expressions.
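For the sake of illustration, the two Set Variable activities inside the Until loop could look something like this (variable names from the answer; activity names are made up, and the Until condition would be something along the lines of @greaterOrEquals(variables('indexValue'), 500) for a hypothetical 500-row result set):

{
    "name": "V1 - add 50 to the index",
    "type": "SetVariable",
    "typeProperties": {
        "variableName": "count",
        "value": {
            "value": "@add(variables('indexValue'), 50)",
            "type": "Expression"
        }
    }
},
{
    "name": "V2 - copy count back to indexValue",
    "type": "SetVariable",
    "typeProperties": {
        "variableName": "indexValue",
        "value": {
            "value": "@variables('count')",
            "type": "Expression"
        }
    }
}

With V2 depending on V1 succeeding, each iteration advances the index by 50, and no expression ever references the variable it is setting.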
I have created a trivia game using the Actions SDK. It takes user input and then compares it to a value in my DB to see if it's correct.
At the moment I am just passing a raw input variable through my conversation. This means it regularly fails when the user is misheard, since the exact string that was picked up is rarely == to the value in the DB.
Specifically, I would like it to pick up only numbers and, for example, realise that it must extract '10' from a speech input of 'my answer is 10'.
{
    "actions": [
        {
            "description": "Default Welcome Intent",
            "name": "MAIN",
            "fulfillment": {
                "conversationName": "welcome"
            },
            "intent": {
                "name": "actions.intent.MAIN"
            }
        },
        {
            "description": "response",
            "name": "Raw input",
            "fulfillment": {
                "conversationName": "rawInput"
            },
            "intent": {
                "name": "raw.input",
                "parameters": [
                    {
                        "name": "number",
                        "type": "org.schema.type.Number"
                    }
                ],
                "trigger": {
                    "queryPatterns": [
                        "$org.schema.type.Number:number is the answer",
                        "$org.schema.type.Number:number",
                        "My answer is $org.schema.type.Number:number"
                    ]
                }
            }
        }
    ],
    "conversations": {
        "welcome": {
            "name": "welcome",
            "url": "https://us-central1-triviagame",
            "fulfillmentApiVersion": 2
        },
        "rawInput": {
            "name": "rawInput",
            "url": "https://us-central1-triviagame",
            "fulfillmentApiVersion": 2
        }
    }
}
app.intent('actions.intent.MAIN', (conv) => {
    conv.data.answers = answersArr;
    conv.data.questions = questionsArr;
    conv.data.counter = answersArr.length;
    var thisQuestion = conv.data.questions;
    conv.ask(conv.data.answers[0]);
});

app.intent('raw.input', (conv, input) => {
    if (input == conv.data.answers[0]) {
        conv.ask(nextQuestion());
    }
});

app.intent('actions.intent.TEXT', (conv, input) => {
    // verifying if input and db value are equal
    // at the moment input is equal to 'my number is 10' (for example) instead of '10'
    // therefore the string verification never works
    conv.ask(nextQuestion());
});
In a previous project I used the Dialogflow UI, where I used the system number entity (@sys.number) as a parameter, along with some training phrases, so that it understood different speech patterns.
The input parameter I am passing through my conv is only a raw string; I'd like it to be filtered using some sort of entity schema.
How do I create the same effect of training phrases/entities using the JSON file?
You can't do this using just the Actions SDK. You need a Natural Language Processing system (such as Dialogflow) to handle this as well. The Actions SDK, by itself, will do speech-to-text, and will use the actions.json configuration to help shape how to interpret the text. But it will only return the entire text from the user - it will not try to determine how it might match an Intent, nor what parameters may exist in it.
To do that, you need an NLP/NLU system. You don't need to use Dialogflow, but you will need something that does the parsing. Trying to do it with simple pattern matching or regular expressions will lead to nightmares - find a good system to do it.
If you want to stick to things you can edit yourself, Dialogflow does allow you to download its configuration files (they're just JSON), edit them, and update or replace the configuration through the UI or an API.
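If the game does move to Dialogflow, a rough sketch of the handler side with the actions-on-google library might look like the following (the intent name answer.intent and the reply text are made up for illustration; nextQuestion and conv.data.answers come from the question's code):

const { dialogflow } = require('actions-on-google');
const app = dialogflow();

// Assumes a Dialogflow intent named 'answer.intent', trained with phrases like
// "my answer is 10" and carrying a parameter "number" of type @sys.number.
// By the time this handler runs, the number has already been extracted.
app.intent('answer.intent', (conv, params) => {
    // params.number is just '10' for "my answer is 10" - no string surgery needed
    if (Number(params.number) === Number(conv.data.answers[0])) {
        conv.ask(nextQuestion());
    } else {
        conv.ask('Not quite - try again!');
    }
});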
We have created a complex type field "carriers", which is an array of Carrier objects. See the metadata below:
"dataProperties": [
{
"name": "carriers",
"complexTypeName":"Carrier#Test",
"isScalar":false
}]
The Carrier entity is defined as below:
{
    "shortName": "Carrier",
    "namespace": "Test",
    "isComplexType": true,
    "dataProperties": [
        {
            "name": "Testing",
            "isScalar": true,
            "dataType": "String"
        }
    ]
}
We are trying to return an array of complex types from a REST service call in Breeze. We get an error in breeze.debug.js in the method proto._updateTargetFromRaw; the error occurs because the dataType is null.
Any idea how to fix this issue?
I'm guessing the problem is in your "complexTypeName". You wrote "Carrier#Test" when I think you meant to write "Carrier:#Test". The ":#" combination separates the "short name" from the namespace; you omitted the colon.
Hope that's the explanation.
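In other words, the dataProperties entry from the question would become (only the complexTypeName value changes):

"dataProperties": [
    {
        "name": "carriers",
        "complexTypeName": "Carrier:#Test",
        "isScalar": false
    }
]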