AZURE DATA FACTORY - Can I set a variable from within a CopyData task or by using the output? - azure-data-factory

I have a simple pipeline that has a Copy activity to populate a table. That task is based on a query and will only ever return 1 row.
The problem I am having is that I want to reuse the value from one of the columns (the batch number) to set a variable so that, at the end of the pipeline, I can use a Stored Procedure to log that the batch was processed. I would rather avoid running the query a second time in a Lookup task, so can I make use of the data already being returned?
I have tried duplicating the column in the Copy activity and then mapping it to something like #BatchNo, but that fails. I have even tried to add a Set Variable task, but I can't figure out how to take a single column from it; #{activity('Populate Aleprstw').output} does not error, but I am not sure what that will actually do in this case.
Thanks, and sorry if it's a silly question.
Cheers
Mark

I always do it like this:
Generate a batch number (usually with a proc)
Use a lookup to grab it into a variable
Use the batch number in all activities (might be multiple copies, procs etc.)
Write the batch completion
From your description it seems you have the batch number embedded in the data copy from the start, which is not typical.
If you must do it this way, is there really an issue with running a lookup again?
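For illustration, here is a minimal sketch of the "lookup into a variable" part in pipeline JSON. The activity, variable and column names (LookupBatchNo, BatchNo) are placeholders, not anything from your actual pipeline:

```json
{
    "name": "Set BatchNo",
    "type": "SetVariable",
    "dependsOn": [
        { "activity": "LookupBatchNo", "dependencyConditions": [ "Succeeded" ] }
    ],
    "typeProperties": {
        "variableName": "BatchNo",
        "value": {
            "value": "@activity('LookupBatchNo').output.firstRow.BatchNo",
            "type": "Expression"
        }
    }
}
```

Downstream activities, including the Stored Procedure activity that writes the batch completion, can then reference @variables('BatchNo') in their parameters.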

Copy activity doesn't return data like that, so you won't be able to capture the results that way. With this design, running the query again in a Lookup is the best option.
Is the query in the Source running on the same Server as the Sink? If so, you could collapse the entire operation into a Stored Procedure that returns the data point you are trying to capture.
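If you do collapse it into a stored procedure, one way to capture the value it returns is to call the procedure from a Lookup activity rather than a Stored Procedure activity (which does not surface result sets). A rough sketch, assuming an Azure SQL dataset; the procedure and dataset names are placeholders:

```json
{
    "name": "LookupBatchNo",
    "type": "Lookup",
    "typeProperties": {
        "source": {
            "type": "AzureSqlSource",
            "sqlReaderStoredProcedureName": "dbo.usp_PopulateTableAndReturnBatchNo"
        },
        "dataset": { "referenceName": "AzureSqlTable1", "type": "DatasetReference" },
        "firstRowOnly": true
    }
}
```

The single row the procedure returns is then available as @activity('LookupBatchNo').output.firstRow.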

Related

Export database to multiple files in same job Spring Batch

I need to export a database of around 180k objects to JSON files so I can retain the data structure in a way that suits me for later import into another database. However, because of the amount of data, I want to separate and group the data based on some attribute value from the database records themselves. So all records that have attribute1=value1 should go to value1.json, those with value2 to value2.json, and so on.
However, I still haven't figured out how to do this kind of job. I am using RepositoryItemReader and JsonFileWriter.
I started by filtering the data on that attribute and running separate exports, just to verify that this works, but I need to automate the whole process and let it run on its own.
Can this be done?
There are several ways to do that. Here are a couple of options:
Option 1: parallel steps
You start by creating a tasklet that calculates the distinct values of the attribute you want to group items by, and you put this information in the job execution context.
After that, you create a flow with a chunk-oriented step for each value. Each chunk-oriented step would process a distinct value and generate an output file. The item reader and writer would be step-scoped beans, dynamically configured with the information from the job execution context.
Option 2: partitioned step
Here, you would implement a Partitioner that creates a partition for each distinct value. Each worker step would then process a distinct value and generate an output file.
Both options should perform equally well in your use case. However, option 2 is easier to implement and configure, in my opinion.
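A rough sketch of option 2, assuming Spring Batch 4+ with its JSON item writer support; the table/column names, the attributeValue context key and the MyRecord type are placeholders for your own domain model:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.core.partition.support.Partitioner;
import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.json.JacksonJsonObjectMarshaller;
import org.springframework.batch.item.json.JsonFileItemWriter;
import org.springframework.batch.item.json.builder.JsonFileItemWriterBuilder;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.FileSystemResource;
import org.springframework.jdbc.core.JdbcTemplate;

@Configuration
public class PartitionedExportConfig {

    /** Placeholder for the exported domain type. */
    public static class MyRecord { public String attribute1; }

    @Bean
    public Partitioner attributePartitioner(JdbcTemplate jdbcTemplate) {
        // One partition per distinct value of attribute1; each value is exposed to
        // its worker step through the step execution context.
        return gridSize -> {
            Map<String, ExecutionContext> partitions = new HashMap<>();
            List<String> values = jdbcTemplate.queryForList(
                    "select distinct attribute1 from my_table", String.class);
            for (String value : values) {
                ExecutionContext context = new ExecutionContext();
                context.putString("attributeValue", value);
                partitions.put("partition-" + value, context);
            }
            return partitions;
        };
    }

    @Bean
    @StepScope
    public JsonFileItemWriter<MyRecord> jsonWriter(
            @Value("#{stepExecutionContext['attributeValue']}") String value) {
        // Each worker writes to <value>.json, e.g. value1.json, value2.json, ...
        return new JsonFileItemWriterBuilder<MyRecord>()
                .name("jsonWriter-" + value)
                .jsonObjectMarshaller(new JacksonJsonObjectMarshaller<>())
                .resource(new FileSystemResource(value + ".json"))
                .build();
    }
}
```

The worker step's reader would be step-scoped in the same way and filtered on #{stepExecutionContext['attributeValue']}.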

Pentaho transformation Switch/Case doesn't work as expected

In Pentaho, I have a transformation with the following steps:
set_useful_life runs a select against the table and assigns the result to a variable.
Then a Switch/Case step checks the result and is supposed to run the SQL dwh_adhoc_useful_life step only if the variable is 2.
I am 100% sure that the variable is 1, but it still executes the SQL dwh_adhoc_useful_life step, and at this moment I don't understand how to work around it.
Why am I doing this check here and not in the job? Because this transformation receives variables from the previous step, and if I do the check in the job and execute the transformation step there, I lose all the variables and this step doesn't execute for each value passed from the previous transformation step.
I think your problem is with a concept of how PDI works.
In a transformation, every step is initialized at the beginning and waits to process the rows it receives from the stream, so the SQL dwh_adhoc_useful_life step is going to be initialized at the beginning even if it doesn't receive any rows.
Usually that's not a problem, because a step normally expects something from the stream of rows it receives, but in your case the step doesn't really need anything from the rows it receives as input, so it generates its own stream of rows and follows up on it.
One way to fix it would be to make the client_ssr_config column work as an argument to your SQL script, so it only produces results if the correct value is passed (something along the lines of adding AND 1=client_ssr_config to the filters in your SQL WHERE clause).

Data from multiple sources and deciding destination based on the Lookup SQL data

I am trying to solve the problem below, where I am getting data from different sources and trying to copy that data to a single destination based on the metadata stored in a SQL table. Below are the steps I followed:
I have 3 REST API calls, and the output of those calls goes as input to a Lookup activity.
The Lookup activity queries a SQL DB that has 3 records, pulling only 2 columns: file_name and table_name.
Then a ForEach activity iterates over the Lookup output array, and from each item I am getting item().file_name.
Now, for each item, I am trying to use a Switch activity to decide, based on the file name, what the destination of the data should be.
I am not sure how I can use the file_name coming from step 3 as a case in the Switch activity. Can anyone please guide me on that?
You need to create a variable and save the value of file_name. Then you can use that variable in the Switch activity. If you do this, please make sure the Sequential setting of the ForEach activity is checked.
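As a rough sketch, the two activities inside the ForEach's activities array could look like this in pipeline JSON (the activity names, the fileName variable and the case values are placeholders, and the per-case copy activities are omitted):

```json
{
    "name": "SetFileName",
    "type": "SetVariable",
    "typeProperties": {
        "variableName": "fileName",
        "value": { "value": "@item().file_name", "type": "Expression" }
    }
},
{
    "name": "SwitchOnFileName",
    "type": "Switch",
    "dependsOn": [
        { "activity": "SetFileName", "dependencyConditions": [ "Succeeded" ] }
    ],
    "typeProperties": {
        "on": { "value": "@variables('fileName')", "type": "Expression" },
        "cases": [
            { "value": "file_a.csv", "activities": [] },
            { "value": "file_b.csv", "activities": [] }
        ],
        "defaultActivities": []
    }
}
```

Sequential matters here because the variable is scoped to the pipeline, so parallel iterations would overwrite each other's value.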

How to increment a number from a CSV and write over it

I'm wondering how to increment a number "extracted" from a field in a CSV, and then rewrite the file with the incremented number.
I need this counter in a tMap.
Is the design below a good way to do it?
EDIT: I'm trying a new method. See the design of my subjob below, but I have an error when I link the tJavaRow to my main tMap in the main job:
Exception in component tMap_1
java.lang.NullPointerException
at mod_file_02.file_02_0_1.FILE_02.tFileList_1Process(FILE_02.java:9157)
at mod_file_02.file_02_0_1.FILE_02.tRowGenerator_5Process(FILE_02.java:8226)
at mod_file_02.file_02_0_1.FILE_02.tFileInputDelimited_2Process(FILE_02.java:7340)
at mod_file_02.file_02_0_1.FILE_02.runJobInTOS(FILE_02.java:12170)
at mod_file_02.file_02_0_1.FILE_02.main(FILE_02.java:11954)
2014-08-07 12:43:35|bm9aSI|bm9aSI|bm9aSI|MOD_FILE_02|FILE_02|Default|6|Java
Exception|tMap_1|java.lang.NullPointerException:null|1
[statistics] disconnected
You should be able to do this mid flow in a tMap or a tJavaRow.
Simply read the number in as an integer (or other numeric data type) and then add your increment to it.
A really simple example might look like this:
Here we have a tFixedFlowInput that has some hard coded values for the job:
And we run it through a tMap where we add 1 to the age column:
And finally, we output it to the console in a table:
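Inside a tJavaRow the increment is plain Java; a minimal sketch using the same columns as the example above (input_row and output_row are the structures Talend generates for the component):

```java
// tJavaRow body: pass the row through and bump the numeric column by 1.
output_row.name = input_row.name;
output_row.age = input_row.age + 1;
```

In the tMap variant, the equivalent is simply putting row1.age + 1 in the expression of the output age column (assuming the incoming flow is named row1).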
EDIT:
As Gabriele B has pointed out, this doesn't exactly work when reading and writing to the same flat file, because Talend claims an exclusive read-write lock on the file when reading and keeps it open throughout the job.
Instead you would have to write the incremented data to some other place such as a temporary file, a database or even just to the buffer and then read that data in to a separate job which would then output the file you want and clean up anything temporary.
The problem with that is you can't do the output in the same process. I've just tried testing this: reading in the file in one child job, passing the data back to a parent job using a tBufferOutput, then passing that data to another child job as a context variable and trying to output to the file. Unfortunately the file lock remains on it, so you can't do this all in one self-contained job (even using a parent job and several child jobs).
If this sounds horrible to you (it is) and you absolutely need this to happen (I'd suggest a database table sounds like a better match for this functionality than a flat file) then you could raise a feature request on the Talend Jira for the tFileInputDelimited to not hold the file open or to not insist on an exclusive read-write lock on the file.
Once again, I strongly recommend that you move to using a database table for this because even without the file lock issue, this is definitely not the right use of a flat file and this use case perfectly fits a database, even something as lightweight as an embedded H2 database.

Spring-batch Sharing data between executions

I'm new to Spring Batch. I have a simple, proof-of-concept job which does something dummy.
ItemReader: Get a number of records between dateA and max date (last record at the moment)
ItemProcessor: aggregates the rows per city
ItemWriter: writes the result to the database
My question is that when the job finishes, I want to keep the last date so that the next job execution can start from that date as dateA. I know that I can use the JobExecutionContext to share data, but on the second run, when I tried to get dateA, the value was null.
Is there a way to get the dateA value recorded by the previous job execution, and how?
Thanks in advance
The following may be an alternative.
Store the date in a bean holder using JobExecutionListener#afterJob. The following link covers passing a collection at the Step level:
What's the best way to pass a huge collection to a Spring Batch Step?
There are multiple ways of implementing this. The simplest way would be to just write it to a file and read that file on the next start, but this is somewhat dirty. You can also write it to the DB, or just run an SQL query that returns the max date from the data you already have in the DB. Basically, I would implement a simple tasklet (similar to Tasklet to delete a table in spring batch) that runs the SQL and puts the max date on the chunk context.
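A rough sketch of that tasklet approach; the table/column names and the dateA key are placeholders, so adapt the query to wherever your aggregated data actually lives:

```java
import java.time.LocalDateTime;

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.jdbc.core.JdbcTemplate;

public class FetchLastDateTasklet implements Tasklet {

    private final JdbcTemplate jdbcTemplate;

    public FetchLastDateTasklet(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) {
        // Max date already present in the target table; this becomes dateA for the run.
        LocalDateTime lastDate = jdbcTemplate.queryForObject(
                "select max(record_date) from aggregated_city_data", LocalDateTime.class);

        // Put it in the job execution context so step-scoped beans can pick it up,
        // e.g. with @Value("#{jobExecutionContext['dateA']}") on the reader.
        chunkContext.getStepContext().getStepExecution()
                .getJobExecution().getExecutionContext()
                .put("dateA", lastDate);

        return RepeatStatus.FINISHED;
    }
}
```

Running this tasklet as the first step of the job means each execution derives its own dateA from the data, so nothing has to survive between executions.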