Data Factory: write Get Metadata activity structure output to data lake - azure-data-factory

I am trying to write the output of my Get Metadata activity ('Generate Metadata') to a file in the data lake using a Copy activity, but I am getting a datatype issue when mapping the structure field to the output schema:
activity('Generate Metadata').output.structure
error:
Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Failed to convert the value in 'value' property to 'System.String' type. Please make sure the payload structure and value are correct.,Source=Microsoft.DataTransfer.DataContracts,''Type=System.InvalidCastException,Message=Object must implement IConvertible.,Source=mscorlib

As the error shows, it expects a String type, so you should cast activity('Generate Metadata').output.structure to a string. Try this expression: @string(activity('Generate Metadata').output.structure).
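For reference, a minimal sketch of how the cast might sit in the Copy activity source's additional-columns definition (the source type and the column name FileStructure are illustrative, not from the question):
"source": {
    "type": "JsonSource",
    "additionalColumns": [
        {
            "name": "FileStructure",
            "value": {
                "value": "@string(activity('Generate Metadata').output.structure)",
                "type": "Expression"
            }
        }
    ]
}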

Related

Azure Data Factory - Capture error details of a dataflow activity -> Store into a variable -> Assign this variable to a dataflow parameter

I have a data flow, and my requirement is to capture the error details into a variable when it fails and assign this variable to a parameter in the next data flow. I got as far as the second stage (with help) as shown below, but I am unable to get this variable assigned to a parameter in the next data flow. The error I get is "Expression cannot be parsed".
What do I do after that?
This parameter is assigned to a column in the data flow, and I use this column to update the table in the dedicated pool with the relevant error message.
I tried to reproduce the same in my environment and got the same error.
The above scenario fails because the data flow cannot parse the ' and \ characters in your error message.
To resolve the above error, please follow the steps below:
I created the Fail activity Fail1 with a message containing those characters.
Go to Set Variable: create a variable and add this dynamic content as the value (a fuller JSON sketch follows after the screenshots):
@replace(replace(string(activity('Fail1').output.message),pipeline().parameters.quote,'"'),'\','/')
(Screenshots of the output and the updated parameter omitted.)
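Putting this together, a hedged sketch of the relevant pipeline JSON, assuming a string parameter named quote holding a single-quote character (since it cannot be typed directly inside the expression) and a string variable named errorMessage; both names are illustrative:
"parameters": {
    "quote": { "type": "string", "defaultValue": "'" }
},
{
    "name": "Set error message",
    "type": "SetVariable",
    "typeProperties": {
        "variableName": "errorMessage",
        "value": {
            "value": "@replace(replace(string(activity('Fail1').output.message), pipeline().parameters.quote, '\"'), '\\', '/')",
            "type": "Expression"
        }
    }
}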

how to use json_extract_path_text?

I am facing an issue with JSON extraction using JSON_EXTRACT_PATH_TEXT in Redshift.
I have two separate JSON columns: one containing the modems the customer is using, and the other containing the recharge details.
{"Mnfcr": "Technicolor","Model_Name":"Technicolor ABC1243","Smart_Modem":"Y"}
For the above, I have no issue extracting Model_Name using JSON_EXTRACT_PATH_TEXT(COLUMN_NAME, 'Model_Name') AS model_name.
[{"Date":"2021-12-24 21:42:01","Amt":50.00}]
This one is causing me trouble. I used the same method as above and it did not work; it gave me the error below:
ERROR: JSON parsing error Detail: ----------------------------------------------- error: JSON parsing error code: 8001 context: invalid json object [{"Date":"2021-07-03 17:12:16","Amt":50.00
Can I please get assistance on how to extract this using json_extract_path_text?
One other method I found that worked was regexp_substr.
This second string is a JSON array (square brackets), not an object (curly braces). The array contains a single element, which is an object. So you need to extract the object from the array before using JSON_EXTRACT_PATH_TEXT().
The function for this is JSON_EXTRACT_ARRAY_ELEMENT_TEXT().
Putting this all together we get:
JSON_EXTRACT_PATH_TEXT(
    JSON_EXTRACT_ARRAY_ELEMENT_TEXT(<column>, 0),
    'Amt'
)
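For instance, as a full query against a hypothetical recharges table whose JSON sits in a column named recharge_json (both names are illustrative):
SELECT
    JSON_EXTRACT_PATH_TEXT(
        JSON_EXTRACT_ARRAY_ELEMENT_TEXT(recharge_json, 0),
        'Amt'
    ) AS recharge_amt
FROM recharges;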
You can also use json_extract_path_text in a comparison, as in the example below:
json_extract_path_text(json_columnName, json_keyName) = compareValue
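For example, filtering on the modem column from the question (the table and column names customers and modem_json are illustrative):
SELECT *
FROM customers
WHERE json_extract_path_text(modem_json, 'Smart_Modem') = 'Y';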
For more, you can refer to this article:
https://docs.aws.amazon.com/redshift/latest/dg/JSON_EXTRACT_PATH_TEXT.html

Dataset Empty parameter value

I have an XML dataset, and I want to parameterize the compression type so the same pipeline can handle both .xml and .xml.gz files:
When I put the value 'gzip' in the compression type, it reads the .xml.gz file. What value should I put to read an uncompressed .xml file? It does not accept an empty value, and it can read the .xml file only when I delete the compression_type parameter.
You should pass "None" and it should work.
I feel "None" is more of a workaround in this particular case. "None" is still a string value, not empty.
In my scenario right now, I have an Excel dataset. I want to make every parameter as generic as possible, including the file path/name, sheet name, and the range. The "Range" value under the Connection tab allows an empty value. However, if I specify it as @dataset().DataRange and leave my DataRange parameter empty, I cannot preview the data or submit the pipeline because it complains that the value cannot be empty.
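For the compression case, a hedged sketch of what the parameterized XML dataset JSON might look like, with "None" passed for uncompressed files (the dataset name is illustrative, location details are elided, and the exact compression property names can vary by dataset type):
{
    "name": "XmlFiles",
    "properties": {
        "type": "Xml",
        "parameters": {
            "compression_type": { "type": "string", "defaultValue": "None" }
        },
        "typeProperties": {
            "location": { "type": "AzureBlobFSLocation" },
            "compression": {
                "type": {
                    "value": "@dataset().compression_type",
                    "type": "Expression"
                }
            }
        }
    }
}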

Save the value of a 'Set Variable' activity in a JSON file

I have a scenario where I have a value in JSON format, and I need to save this value to a JSON file. I tried a Copy activity using an additional column, but I am getting this error: "Failed to convert the value in 'value' property to 'System.String' type". Is there any other way to save the value to a JSON file?
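This is the same conversion error as in the first question above, so the same fix should apply: wrap the value in string() before handing it to the additional column. A sketch, assuming the JSON sits in a variable named myJsonValue (the name is illustrative):
@string(variables('myJsonValue'))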

Using [array] in a class-based DSC resource

I have been creating a few class-based custom DSC resources, and I am running into the following issue:
Whenever I try to use an array as input for my resource, I get an error.
For example:
[DscProperty(Mandatory)]
[array]$Products
This results in the following error when I try to create a MOF file using my resource:
Write-NodeMOFFile : Invalid MOF definition for node 'nodename':
Exception calling "ValidateInstanceText" with "1" argument(s):
"Convert property 'Products' value from type 'STRING[]' to type 'INSTANCE' failed.
The input object for $Products would be (for example):
$Products = @("Windows server 2012", "Windows SQL Server", "Windows 8.1")
I honestly have no idea why the Write-NodeMOFFile function would try to convert the array (it should not need converting, right?), and even if it did need converting, why would it convert an array from STRING[] to INSTANCE?
Does anyone have a clue as to why this happens?
The only way I got it to work was by joining my array of strings into one long string and then splitting it apart again inside the resource.
Declare the array as a typed string array instead of [array], like this:
[string[]]$product = "ddd","dq","dqq"
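Applied to the resource in the question, a minimal sketch of a class-based DSC resource using a typed string array (the class name and the extra members are illustrative, not from the question):
[DscResource()]
class ProductInstaller {
    # Every class-based DSC resource needs at least one Key property.
    [DscProperty(Key)]
    [string]$Name

    # Typed as [string[]] rather than [array], so the MOF schema is STRING[]
    # and Write-NodeMOFFile does not attempt an INSTANCE conversion.
    [DscProperty(Mandatory)]
    [string[]]$Products

    [void] Set() { }
    [bool] Test() { return $true }
    [ProductInstaller] Get() { return $this }
}
In the configuration, the input can then be passed as a plain string array:
$Products = [string[]]@("Windows Server 2012", "Windows SQL Server", "Windows 8.1")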