Data factory lookup (dot) in the item() name - azure-data-factory

I have a Lookup activity that runs a Salesforce query, and I use its elements (item()) in subsequent activities. Until now I had item().name or item().email, but now I have item().NVMStatsSF__Related_Lead__r.FirstName, where the field name itself contains a dot.
How should I reference it in the request body so that it is read correctly?
So I have the following data in item():
{
"NVMStatsSF__Related_Lead__c": "00QE000egrtgrAK",
"NVMStatsSF__Agent__r.Name": "ABC",
"NVMStatsSF__Related_Lead__r.Email": "geggegg#gmail.com",
"NVMStatsSF__Related_Lead__r.FirstName": "ABC",
"NVMStatsSF__Related_Lead__r.OwnerId": "0025434535IIAW"
}
Now when I use item().NVMStatsSF__Agent__r.Name it will not parse because of the dot after NVMStatsSF__Agent__r, and it gives me the following error:
'item().NVMStatsSF__Related_Lead__r.Email' cannot be evaluated because property 'NVMStatsSF__Related_Lead__r' doesn't exist, available properties are 'NVMStatsSF__Related_Lead__c, NVMStatsSF__Agent__r.Name, NVMStatsSF__Related_Lead__r.Email, NVMStatsSF__Related_Lead__r.FirstName, NVMStatsSF__Related_Lead__r.OwnerId'.",
"failureType": "UserError",
"target": "WebActivityToAddPerson"

This is because ADF uses '.' for reading object properties.
Is there a way to rename the field name that contains '.'?

It seems like you need a built-in function that gets the value of an object by key, something like getValue(item(), 'key.nestkey'). Unfortunately, there doesn't appear to be such a function, so you may need to handle your key first.

Finally, it worked; I was being silly.
Instead of taking the value from the child table with the dot operator, I just used a subquery.
And it worked.
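Another option that may work without renaming anything: the ADF expression language also supports bracket syntax for property access, which treats the whole key as a literal string instead of splitting it on '.'. This is a suggestion based on the general expression syntax, not something confirmed in this thread:
@item()['NVMStatsSF__Related_Lead__r.Email']
Inside a string body you would wrap it as @{item()['NVMStatsSF__Related_Lead__r.Email']}.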

Related

How to extract the value from a json object in Azure Data Factory

I have an ADF pipeline where the final output from a Set variable activity is something like {name:test, value:1234}.
The input coming into this variable is:
{
"variableName": "test",
"value": "test:1234"
}
The expression provided in the Set variable Item column is @item().ColumnName, and the ColumnName in my JSON file is something like this: "ColumnName":"test:1234"
How can I change it so that I get only 1234? I am only interested in the value here.
It looks like you need to split the value on the colon, which you can do using Azure Data Factory (ADF) expressions and functions: the split function, which splits a string into an array, and the last function, which gets the last item from the array. This works quite neatly in this case:
@last(split(variables('varWorking'), ':'))
Change the variable name to suit your case. You can also use string methods like lastIndexOf to locate the colon, and grab the rest of the string from there. A sample expression would be something like this:
@substring(variables('varWorking'),add(indexof(variables('varWorking'), ':'),1),4)
It's a bit more complicated but may work for you, depending on the requirement.
It seems like you are using it inside an iterator, since you have item(); however, I tried it with a simple JSON lookup value:
@last(split(activity('Lookup').output.value[0].ColumnName,':'))
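If the value really is coming from inside a ForEach, the same split/last pattern should work directly against the current item. This is just the expressions above combined with the column name from the question, not something tested here:
@last(split(item().ColumnName, ':'))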

Add rows in smartsheets using python

How do I take a list of values, iterate through it to create the needed objects, and then pass that "list" of objects to the API to create multiple rows?
I have been successful in adding a new row with a value using the API example. In that example, two objects are created.
row_a = ss_client.models.Row()
row_b = ss_client.models.Row()
These two objects are passed to the add_rows function. (Forgive me if I use the wrong terms; I'm still new to this.)
response = ss_client.Sheets.add_rows(
    2331373580117892,   # sheet_id
    [row_a, row_b])
I have not been successful in passing an unknown number of objects with something like this:
newRowsToCreate = []
for row in new_rows:
    rowObject = ss.models.Row()
    rowObject.cells.append({
        'column_id': PM_columns['Row ID Master'],
        'value': row
    })
    newRowsToCreate.append(rowObject)

# Add rows to sheet
response = ss.Sheets.add_rows(
    OH_MkrSheetId,   # sheet_id
    newRowsToCreate)
This returns this error:
{"code": 1062, "errorCode": 1062, "message": "Invalid row location: You must
use at least 1 location specifier.",
Thank you for any help.
From the error message, it looks like you're missing the location specification for the new rows.
Each row object that you create needs to have a location value set. For example, if you want your new rows to be added to the bottom of your sheet, then you would add this attribute to your rowObject.
rowObject.toBottom=True
You can read about this location specific attribute and how it relates to the Python SDK here.
To be 100% precise here I had to set the attribute differently to make it work:
rowObject.to_bottom = True
I've found the name of the property below:
https://smartsheet-platform.github.io/smartsheet-python-sdk/smartsheet.models.html#module-smartsheet.models.row
To be 100% precise here I had to set the attribute differently to make it work:
Yep, the documentation isn't super clear about this outside of the examples, but the API uses camelCase in JavaScript, while the same terms are always snake_case in the Python SDK (which is, after all, the Pythonic way to do it!).
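Putting the two answers together, a corrected version of the loop from the question might look like the sketch below. The access token, column id, and row values are placeholders; PM_columns, OH_MkrSheetId, and new_rows stand in for the question's own variables, and to_bottom supplies the missing location specifier:
import smartsheet

ss = smartsheet.Smartsheet('YOUR_ACCESS_TOKEN')   # placeholder access token
OH_MkrSheetId = 2331373580117892                  # sheet id (value reused from the question's example)
PM_columns = {'Row ID Master': 1234567890123456}  # placeholder column-id mapping
new_rows = ['value 1', 'value 2', 'value 3']      # placeholder list of values to add

newRowsToCreate = []
for row in new_rows:
    rowObject = ss.models.Row()
    rowObject.to_bottom = True   # location specifier: append the row at the bottom of the sheet
    rowObject.cells.append({
        'column_id': PM_columns['Row ID Master'],
        'value': row
    })
    newRowsToCreate.append(rowObject)

# A single add_rows call creates all of the rows in one request
response = ss.Sheets.add_rows(OH_MkrSheetId, newRowsToCreate)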

Parse setting explicit type using REST

I know you can set a Date field explicitly like so:
"date_brewed":{
"__type":"Date",
"iso":"2009-10-15T00:00:00.000Z"
}
But is there any way to explicitly set the column type to 'Number' using REST? For instance, I'd like the column 'batch_size' to be a Number instead of a string, but when POSTing via REST it keeps getting created as a string-type column.
Meh, this was more of a Perl issue than a Parse issue.
What I had to do to tell Perl to treat the number like an actual number was to add a zero to the value. :/
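For what it's worth, the underlying REST behaviour is language-agnostic: Parse infers a new column's type from the JSON type of the first value it receives, so batch_size has to go over the wire as a JSON number rather than a string. A rough sketch in Python with requests against the classic api.parse.com endpoint, with a placeholder class name and keys (a self-hosted parse-server would use its own server URL and headers):
import requests

# Placeholder endpoint and credentials
url = 'https://api.parse.com/1/classes/Brew'
headers = {
    'X-Parse-Application-Id': 'YOUR_APP_ID',
    'X-Parse-REST-API-Key': 'YOUR_REST_API_KEY',
    'Content-Type': 'application/json',
}

# batch_size is sent as a JSON number (12), not the string "12",
# so the column should be created as Number rather than String.
payload = {
    'batch_size': 12,
    'date_brewed': {'__type': 'Date', 'iso': '2009-10-15T00:00:00.000Z'},
}

response = requests.post(url, headers=headers, json=payload)
print(response.status_code, response.json())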

Elasticsearch mongodb river script in index doesn't work

I'm trying to change a few string fields using JavaScript.
For example, I want to take only the last part of the URL that comes from Mongo through the river, so that in Elasticsearch I'll have only the end of it.
When creating the index (using curl) I added under "options" the following script:
"script": "ctx.document.shorturl = ctx.document.url.substr(-4);delete ctx.document.url;
I tried some manipulations such as adding \"...\" or using ctx['doc']['url'] and others, but nothing seems to work.
I always get only the url field with the full URL (shorturl is not created at all).
Can anyone suggest the right syntax to make it work?
Another thing I need to do is combine two fields, lat & long, into one "location" field in order to use it in Kibana. Can anyone suggest the right script for that? (Create a new field called "location" which contains both the "lat" & "long" fields with a comma between them.)
Thanks.
You did substring(-4), and a negative start index is treated as 0, so it returns the whole string. You should use substring(4) instead:
ctx.document.shorturl = ctx.document.url.substr(4);delete ctx.document.url;

Check for list of String in DataAnnotation

I need to check whether the property contains one (or all) of the following strings:
"C-I", "C-II", "C-III", "C-IV", "C-V"
If not, the error message must be:
"Invalid Property. Must be blank or C-I, C-II, C-III, C-IV, or C-V."
I don't know which DataAnnotation attribute to use, or how. If possible, please provide a sample.
You could use the RegularExpression data annotation. However, I would recommend implementing IValidatableObject on your data class. You can then write your custom logic within the Validate method. This way, if/when those valid options change, you would just be modifying a collection rather than trying to figure out a new regex statement.
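For instance, here is a sketch of the IValidatableObject approach; the class and property names are made up for illustration, while the allowed values and the error message come from the question:
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;

public class Substance : IValidatableObject
{
    // Hypothetical property name; substitute your own.
    public string Schedule { get; set; }

    private static readonly string[] ValidValues = { "C-I", "C-II", "C-III", "C-IV", "C-V" };

    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        // Blank is allowed; anything else must be one of the known codes.
        if (!string.IsNullOrEmpty(Schedule) && !ValidValues.Contains(Schedule))
        {
            yield return new ValidationResult(
                "Invalid Property. Must be blank or C-I, C-II, C-III, C-IV, or C-V.",
                new[] { nameof(Schedule) });
        }
    }
}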
It can be done using any one of the following attributes:
1. EnumDataTypeAttribute
2. CustomValidationAttribute
3. Creating a new custom attribute