Mapping Data Flow Common Data Model source connector: datetime/timestamp columns nullified?

We are using an Azure Data Factory Mapping Data Flow to read from Common Data Model (model.json).
We use a dynamic pattern where the Entity is parameterised; we do not project any columns, and we have selected Allow schema drift.
Problem: We are having an issue with the source in the mapping data flow (source type is Common Data Model): all datetime/timestamp columns are read as null in the source activity.
We also tried Infer drifted column types in the Projection tab, where we can provide a format for timestamp columns. However, that only satisfies certain timestamp columns, since in the source each datetime column has a different timestamp format, e.g.:
11/20/2020 12:45:01 PM
2020-11-20T03:18:45Z
2018-01-03T07:24:20.0000000+00:00
Question: How do we prevent the datetime columns from becoming null? Ideally, we do not want Mapping Data Flow to typecast any columns - is there a way to just read all columns as strings?
Some screenshots:
In the Projection tab we do not specify a schema, to allow schema drift and to dynamically load more than one entity.
In the Data Preview tab:
ModifiedOn, SinkCreatedOn, SinkModifiedOn are all system columns and will definitely have values in them.

This is now resolved via a separate conversation with the Azure Data Factory team.
Firstly, there is no way to 'stringify' all the columns in the source, because the CDM connector gets its metadata from model.json (if needed, this file can be manipulated, though that was not ideal for my scenario).
To stop datetime/timestamp columns becoming null, select Infer drifted column types under the Projection tab; you can then add multiple time formats that you expect to come from CDM. You can either pick them from the dropdown, or - if your particular datetime format is not listed there (which was my case) - edit the code behind the data flow (the data flow script) to add your format (see the second screenshot).
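For illustration only, the relevant source options in the data flow script end up looking roughly like this (a sketch: the option names are what my data flow generated, CDM-specific options are omitted, and the three format strings are my guesses at patterns matching the example values above):

    source(allowSchemaDrift: true,
        validateSchema: false,
        inferDriftedColumnTypes: true,
        timestampFormats: ['MM/dd/yyyy hh:mm:ss a',
            'yyyy-MM-dd\'T\'HH:mm:ss\'Z\'',
            'yyyy-MM-dd\'T\'HH:mm:ss.SSSSSSSXXX'],
        format: 'cdm') ~> CDMSource

Each format listed should then be tried when a drifted column is inferred as a timestamp.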

Related

How to map Data Flow parameters to Sink SQL Table

I need to store/map one or more data flow parameters to my Sink (Azure SQL Table).
I can fetch other data from a REST API and am able to map it to my sink columns (see below). I also need to generate some UUIDs as key fields and add these to the same table.
I would like my EmployeeId column to contain my data flow input parameter, e.g. one named param_test. In addition, I need to insert UUIDs into other columns which are not part of my REST input fields.
How do I accomplish that?
You need to use a derived column transformation, and edit the expressions there to include the parameters.
derived column transformation
expression builder
Adding to @Chen Hirsh's answer: use the same derived column to add uuid values to the columns after the REST API source.
They will come into sink mapping:
Output:
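In the data flow script, the derived column might look something like this (a sketch; param_test is the parameter from the question, while source1 and the RowKey column are hypothetical names):

    source1 derive(EmployeeId = $param_test,
        RowKey = uuid()) ~> AddKeys

Parameters are referenced with a $ prefix in data flow expressions, and uuid() generates a fresh identifier; both derived columns then appear in the sink mapping.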

How will I map data in Data Factory (source: SQL, destination: Blob)?

My source is a SQL DB.
Sink: Blob.
The SQL table has columns.
The target file I am creating in Blob initially has no header, so the customer has given some predefined names to which the data from the SQL columns should be mapped.
In the copy activity, under mapping, I need to map the columns with the proper data types and the names the customer has given.
By default a mapping comes up, but I need to map as stated above.
How will I resolve this? Can someone help me?
You can simply edit the sink header names, since it's a TSV anyway.
For addressing data type mapping, see Data type mapping:
Currently such data type conversion is supported when copying between tabular data. Hierarchical sources/sinks are not supported, which means there is no system-defined data type conversion between source and sink interim types.
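If you need explicit control over both names and types, the copy activity's translator (inside its typeProperties) can list each source-to-sink pair; a sketch with hypothetical column names:

    "translator": {
        "type": "TabularTranslator",
        "mappings": [
            { "source": { "name": "cust_id", "type": "Int32" },
              "sink": { "name": "CustomerId" } },
            { "source": { "name": "cust_name", "type": "String" },
              "sink": { "name": "CustomerName" } }
        ]
    }

The sink names are what end up as the header fields the customer asked for.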

Creating decision tables in Red Hat Decision Central not reflecting complex types / structures

I have a DMN decision created in Decision Manager 7.3. I have a few data types created, all of which are "structures" (i.e. complex types) with nested fields. I have created a decision table of which the condition column is bound to one of these structures (Customer) and the output column is bound to a Result structure.
However, I would expect the column headers to reflect the structure of the objects as per the example here (step 9 onwards): https://access.redhat.com/documentation/en-us/red_hat_decision_manager/7.3/html-single/designing_a_decision_service_using_dmn_models/index#dmn-data-types-defining-proc_dmn-models
In the documentation example, the Loan_Qualification type has nested fields and these are shown as sub-columns in the table header.
My data types are defined as follows:
I have a Customer input node and a decision node defined as follows:
Yet in my decision table, the columns map to the top-level object only, as follows:
So any ideas as to what I might be missing? Thanks in advance.
UPDATE
I have used the answer given below by @karreiro, which works for the outcome/action column, but inserting an input clause left or right adds a new top-level column, not a sub-column, which then looks like the following:
Is this something you expect the decision table editor to be able to do as well?
Your expectations are correct.
The DMN editor aims to support auto-creation of fields for structure data types (for output clauses https://issues.jboss.org/browse/DROOLS-3685, and for input clauses https://issues.jboss.org/browse/DROOLS-4491).
However, for the moment, users need to create these fields manually:
See how to create here :-)

Talend Data Integration: Avoid nulls coming out of tExtractXMLField?

I have this simple flow in Talend DI 6 (simplified for posting on SO):
The last step crashes with a NullPointerException, because missing XML attributes are returned as null.
Is there a way to get empty string values instead of nulls?
For now I'm using a tReplace step to remove nulls as a work-around, but it's tedious and adds to the cost of maintenance by creating one more place where the list of attributes needs to be maintained.
In Talend DI 5.6.2 it is possible to add default data values to the schema. The relevant column in the schema view is called "Default". If you expect strings, you can set an empty string, which is used whenever the column value is null:
Talend schema view with Default column
This also works for other data types. Talend DI 6 should still be able to do this, although the field might have been renamed.
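If the schema default is not available, the same null-to-empty conversion can be written per column in a tMap output expression instead of a tReplace; a minimal Java sketch, assuming an input flow row1 with a String column attr (both names hypothetical):

    // tMap output expression: emit "" instead of null
    row1.attr == null ? "" : row1.attr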

How to convert number to words (iReport)

I want to convert, for example, 1000 to one thousand (currency). How can I do it in Jasper?
See http://www.rgagnon.com/javadetails/java-0426.html
Create a class based on the given implementation.
Compile the class and put it in a directory where iReport can read the file.
Update the CLASSPATH in iReport to point to the directory containing the class (be aware of directory relationships to package namespaces).
Restart iReport.
Change the text field expression to: EnglishNumberToWords.convert( $F{field_name} )
You will have to change field_name and the data type of the convert method according to your implementation details.
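For reference, here is a minimal sketch of such a class - a stripped-down take on the linked implementation, handling non-negative values below one billion (extend the same pattern for billions, hyphenation, or currency wording):

    // Sketch only: not the exact class from the link above.
    public class EnglishNumberToWords {

        private static final String[] UNITS = {
            "", "one", "two", "three", "four", "five", "six", "seven",
            "eight", "nine", "ten", "eleven", "twelve", "thirteen",
            "fourteen", "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"
        };

        private static final String[] TENS = {
            "", "", "twenty", "thirty", "forty", "fifty",
            "sixty", "seventy", "eighty", "ninety"
        };

        // Converts 0..999 to words.
        private static String belowThousand(int n) {
            if (n < 20) return UNITS[n];
            if (n < 100) return (TENS[n / 10] + " " + UNITS[n % 10]).trim();
            return (UNITS[n / 100] + " hundred " + belowThousand(n % 100)).trim();
        }

        // Converts 0..999,999,999 to words.
        public static String convert(long number) {
            if (number == 0) return "zero";
            StringBuilder words = new StringBuilder();
            if (number >= 1_000_000) {
                words.append(belowThousand((int) (number / 1_000_000))).append(" million ");
                number %= 1_000_000;
            }
            if (number >= 1_000) {
                words.append(belowThousand((int) (number / 1_000))).append(" thousand ");
                number %= 1_000;
            }
            words.append(belowThousand((int) number));
            return words.toString().trim();
        }
    }

EnglishNumberToWords.convert(1000) then returns "one thousand", matching the expression above.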
An alternative to Dave's response:
1) If your RDBMS supports it (like HSQLDB, for example), you can create a user-defined, user-invoked function that takes the data-model representation of a field and converts it to a presentation-layer representation. For example, a database may store timestamps internally as Modified Julian Day numbers (doubles); a Java function can be written and stored with the database (SQL/JRT) to convert a UTC double to a localized date/time string.
2) Write an SQL query to produce a table containing the data you want in the report. The difference is that you use your user-invoked SQL/JRT function on the source column to convert it to the presentation-layer representation in the result table.
3) Use your SQL query (once you have it working) as the basis for a CREATE VIEW (DDL) statement.
4) Build your report using the newly defined view as the iReport datasource (see the sketch after this list).
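As a sketch of steps 1) to 3) in HSQLDB (all table, column, and function names are hypothetical, and the Java class must already be on the database's classpath):

    -- 1) Register the Java method as a user-defined function (SQL/JRT).
    CREATE FUNCTION number_to_words(n BIGINT)
      RETURNS VARCHAR(1024)
      LANGUAGE JAVA DETERMINISTIC NO SQL
      EXTERNAL NAME 'CLASSPATH:com.example.EnglishNumberToWords.convert';

    -- 2) and 3) Use the function in a query and freeze it as a view
    --          (invoices/amount are hypothetical).
    CREATE VIEW invoice_words AS
      SELECT invoice_id,
             amount,
             number_to_words(amount) AS amount_in_words
      FROM invoices;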
Advantages:
No customization of iReport is needed, and the view you create can serve as the basis for any reporting tool, not only iReport.
Disadvantages:
This creates a dependency between your database and a JRE, and (most likely) on your specific RDBMS. In order to access your user-invoked function, you'll need to store the function in the database, and the database will need access to a JRE in order to create the view. There is a SQL/JRT standard, so it is possible that your migration-target RDBMS might also support it, but this is certainly not guaranteed.