Azure Data Factory (Data Flow) - First row field value as custom field for remaining rows

I am creating a Data Flow in ADF. My requirement is to read one field value from the first row and use it as a session ID for the rest of the rows. I looked into the expressions but didn't find functions that would help with this.
Example source file in blob:
time,phone
2020-01-31 10:00:00,1234567890
2020-01-31 10:10:00,9876543219
Target should be:
SessionID,time,phone
20200131100000,2020-01-31 10:00:00,1234567890
20200131100000,2020-01-31 10:10:00,9876543219
SessionID is a derived column. I need to read the time from the first row, remember it, and apply it to all rows as the SessionID.
How can I read the first row's time value and keep it in a global variable?
Any input is appreciated.

You can use a Lookup activity in the pipeline (check the First row only option) and pass the time value to a Data Flow parameter. Then use a Derived Column transformation in the Data Flow to add the SessionID column.
Details:
1. Check the First row only option in the Lookup activity.
2. Use this expression to get your expected value:
@replace(replace(replace(activity('Lookup1').output.firstRow.time,'-',''),' ',''),':','')
3. Pass that value to a parameter in the Data Flow.
4. Add the SessionID column in the Data Flow (see the sketch below).
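A minimal sketch of the wiring (the parameter name sessionId is an assumption, not named in the answer):

In the Data Flow activity settings, map the Lookup output to the Data Flow parameter:
sessionId: @replace(replace(replace(activity('Lookup1').output.firstRow.time,'-',''),' ',''),':','')

In the Data Flow, the Derived Column expression is then simply:
SessionID = $sessionId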

Related

How to map the iterator value to sink in ADF

A question concerning Azure Data Factory.
I need to persist the iterator value from a lookup activity (an Id column from a sql table) to my sink together with other values.
How to do that?
I thought that I could just reference the iterator value as @{item().id} as the source, with a destination column name from my SQL table sink. That doesn't seem to work. The resulting value in the destination column is NULL.
I have used 2 Lookup activities, one for the id values and the other for the remaining values. Now, to combine and insert these values into the sink table, I have used the following.
The output of the ids Lookup activity is as follows:
I have one more column to combine with the above id values. The following is the Lookup output for that:
I have given the following dynamic content as the Items value of the ForEach activity:
@range(0, length(activity('ids').output.value))
Inside the ForEach activity, I have given the following Script activity query to insert the data as required into the sink table:
insert into t1 values(@{activity('ids').output.value[item()].id},'@{activity('remaining rows').output.value[item()].gname}')
The data is inserted successfully.
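For illustration only (the sample values are assumptions, since the original Lookup outputs were shown as screenshots): if activity('ids').output.value were [{"id":1},{"id":2}] and activity('remaining rows').output.value were [{"gname":"Ana"},{"gname":"Ben"}], the range expression would produce [0,1], and for item() = 0 the Script activity query would resolve to:

insert into t1 values(1,'Ana')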

How to format negative values in a Data Flow?

I have the below column in my table.
I need the output as below.
I am using a Data Flow in Azure Data Factory and am unable to get the above output. I used a derived column but had no success. I used the replace function, but the result is not coming out correctly. Can anyone advise how to format this in a Data Flow?
The source is taken in the Data Flow with the data as in the image below.
A Derived Column transformation is added next to the source.
A new column is added and the expression is given as:
iif(left(id,1)=='-', replace(replace(id,"USD",""),"-","-$"), concat("$", replace(id,"USD","")))
Output of the Derived Column transformation:
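As a quick check of the expression (the sample values are assumptions, since the actual data was only shown in images), it would map values like these:

-2000USD  ->  -$2000
3500USD   ->  $3500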

Throw error on invalid lookup in Talend job that populates an output table

I have a tMap component in a Talend job. The objective is to get a row from an input table, perform a column lookup in another input table, and write an output table populating one of the columns with the retrieved value (see screenshot below).
If the lookup is unsuccessful, I generate a row in an "invalid rows" table. This works fine; however, it is not the solution I'm looking for.
Instead, I want to stop the entire process and throw an error on the first unsuccessful lookup. Is this possible in Talend? The error that is thrown should contain the value that failed the lookup.
UPDATE
A tFileOutputDelimited component would do the job.
So the flow would be: tMap -> invalid_row -> tFileOutputDelimited -> tDie
Note: you have to go to the advanced settings of the tFileOutputDelimited component, tick the split output into multiple files option, and put 1 rather than 1000.
For more flexibility, simply use two tMaps rather than one tMap.
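If the thrown error itself should contain the failing value, one possible extension of the flow above (not part of the original answer; the column name id and the globalMap key are assumptions) is a tJavaRow on the invalid_row flow before the file output:

// tJavaRow on the invalid_row flow: pass the row through and remember the failing value
output_row.id = input_row.id;
globalMap.put("failedLookupValue", input_row.id);

// tDie "Die message" setting
"Lookup failed for value: " + globalMap.get("failedLookupValue")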

How to assign a CSV field value to a SQL query written inside a Table input step in Pentaho Spoon

I am pretty new to Pentaho, so my question might sound very basic.
I have written a transformation in which I am using a CSV file input step and a Table input step.
Steps I followed:
Initially, I created a parameter in the transformation properties. The parameter birthdate doesn't have any default value set.
I have used this parameter in the PostgreSQL query in the Table input step in the following manner:
select * from person where EXTRACT(YEAR FROM birthdate) > ${birthdate};
I am reading the CSV file using the CSV file input step. How do I assign the birthdate value, which is present in my CSV file, to the parameter which I created in the transformation?
(OR)
Could you guide me through the process of assigning the CSV field value directly to the SQL query used in the Table input step, without the use of a parameter?
TL;DR:
I recommend using a "database join" step like in my third suggestion below.
See the last image for reference
First idea - Using Table Input as originally asked
Well, you don't need any parameter for that, unless you are going to provide the value for that parameter when asking the transformation to run. If you need to read data from a CSV you can do that with this approach.
First, read your CSV and make sure your rows are ok.
After that, use a Select values step to keep only the columns to be used as parameters.
In the table input, use a placeholder (?) to determine where to place the data and ask it to run for each row that it receives from the source step.
Just keep in mind that the order of columns received by the Table input step (the columns out of the Select values step) is the same order in which they will be used for the placeholders (?). This should not be a problem for your question, which uses only one placeholder, but keep that in mind as you ramp up using Pentaho.
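As a sketch of this first idea using the asker's own query (only the placeholder form changes; the table and column names come from the question), the Table input SQL would be:

select * from person where EXTRACT(YEAR FROM birthdate) > ?

with "Insert data from step" pointing at the Select values step and "Execute for each row" checked, so the field coming from the CSV fills the ? placeholder on every row.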
Second idea, using a Database Lookup
This is another approach where you can't customize the query made to the database, but you may see better performance because you can set the "Enable cache" flag. If you don't need to use a function in your WHERE clause, this approach is really recommended.
Third idea, using a Database Join
That is my recommended approach if you need a function in your WHERE clause. It looks a lot like the Table input approach, but you can skip the Select values step, select which columns to use, repeat the same column a number of times, and enable an "outer join" flag that also returns the rows for which the query produced no result.
ProTip: If you feel the transformation is running too slowly, try using multiple copies of the step (documentation here) and obviously make sure the table has the appropriate indexes in place.
Yes, there's a way of assigning it directly without the use of a parameter. Do as follows.
Use a Block this step until steps finish step to halt the Table input step until the CSV file input step completes.
Following is how you configure each step.
Note:
The Postgres query should be: select * from person where EXTRACT(YEAR FROM birthdate) > ?::integer
Check Execute for each row and Replace variables in script in the Table input step.
Select only the birthdate column in the CSV file input step.

Passing a value from tPostgresqlInput to a context variable

I need to pass a value from tPostgresqlInput to a context variable, so that the context variable value can be used in other components.
The query used in the tPostgresqlInput is:
select max(started_on) started_on from etl_log
I have created a context variable started_on_date (date datatype)
In the tJavaRow:
context.started_on_date = row1.started_on
But it throws the error:
created_on variable cannot be resolved or is not a field
Have you defined the schema in the tPostgresqlInput component? If not, that needs to be done first. Afterwards, synchronize the schema of the tJavaRow. You can use the tJavaRow's code generation feature, if appropriate.
One question: if you want to do row-based processing in the same job, there is likely no need to put the started date in the context at all.
If you want to do non-row-based processing, you can use the tJavaRow component to put the value in the globalMap. This assumes there is only one row of data, or that you only care about the last row. Then you can use that value in other components which are not processing a flow (rows). tJava is an example of that.
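A minimal sketch of that globalMap approach (the key name maxStartedOn and the tJava placement are assumptions):

// tJavaRow right after the tPostgresqlInput (schema: a single Date column started_on)
output_row.started_on = input_row.started_on;
globalMap.put("maxStartedOn", input_row.started_on);

// later, e.g. in a tJava triggered OnSubjobOk
java.util.Date maxStartedOn = (java.util.Date) globalMap.get("maxStartedOn");
System.out.println("Last ETL run started on: " + maxStartedOn);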