I have a scenario: in one of my master jobs, 3 child jobs run. After each child job, OnSubjobOk triggers a tOracleRow that runs an update to set the status in the database for that particular table, based on max(Update_Date). In the next tOracleInput I select the status of the table processed by the first child job and pass it to a tJavaRow, where I assign the status value to a context variable.
From the tJavaRow I trigger the 2nd child job using a RunIf condition: context.status.equals("Y"). Now I want to do some negative testing, where I loop for a number of iterations to check for a while whether the status value has changed from N to Y.
My Job Design is like this now:
ChildJob1---OnComponentOk---tOracleRow (update to change status from N to Y)---OnComponentOk---tOracleInput (select the status of the table related to ChildJob1)---->tJavaRow (pass the STATUS value to a context variable)--------RunIf (context value equals "Y")----->ChildJob2
I now want to loop while the status is still N, checking whether it has changed to Y or not. A zero-byte-file-based option can't be used.
I'm not sure this is what you want, but if you want to loop on a child job until a flag is set by it, you can do this:
tLoop (while context.status.equals("N"))-- Iterate -- tRunJob (childjob) -- row -- tJavaRow (context.status = row.status)
Inside the tLoop, set the loop type to While, and in Condition, type context.status.equals("N") (clearing the Declaration and Iteration boxes).
Your child job passes the status to the calling job using a tBufferOutput which has a single column status. Then the tRunJob needs to have the same schema as the tBufferOutput, and the checkbox "Propagate the child result to the output schema" checked. The status returned by the child job is then assigned to context.status. On the next iteration, tLoop either continues looping if context.status is equal to "N" or stops if it's "Y".
To ensure your child job runs at least once, context.status needs to be set to "N" first.
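The loop-until-flag logic above can be sketched outside Talend as well. This is a minimal Python illustration of the polling pattern, not Talend code: check_status stands in for the tOracleInput/tRunJob step that fetches the table's status, and max_attempts/delay_seconds are hypothetical knobs for bounding the negative test rather than anything the components expose.

```python
import time

def wait_for_status(check_status, max_attempts=10, delay_seconds=5):
    """Poll check_status() until it returns "Y" (like tLoop's While
    condition on context.status); give up after max_attempts tries.
    Returns True if the status flipped, False if we timed out."""
    for _ in range(max_attempts):
        if check_status() == "Y":
            return True
        time.sleep(delay_seconds)  # wait before re-querying the table
    return False
```

Bounding the loop with max_attempts is the piece the plain While condition lacks: without it, a status stuck at "N" would loop forever, which is exactly the negative-test case the question asks about.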
Related
In my batch process, data has to be selected from a SQL database and exported as an XML file. To be able to export a parent node and all its child nodes as XML, I have to select all data for one parent element.
I have a table like the following example:
|key|parent|child|
------------------
|yxc|par001|chi01|
|xcv|par001|chi02|
|cvb|par002|chi03|
|vbn|par003|chi04|
|bnm|par003|chi05|
Now I want to select every parent and its child elements, and process the parents one after another. For the example table above, that would be: par001 -> par002 -> par003. The exported XML should look like the following:
<par001>
<chi01></chi01>
<chi02></chi02>
</par001>
<par002>
<chi03></chi03>
</par002>
...
How can I select the data so that I can process the parent elements one after another? Is this possible with a JpaItemReader?
I would break the problem down into two steps:
step 1 does a select distinct(parent) from your_table and stores the result in the job execution context (the result is a list of Strings or IDs, not entire items, so it's fine to store them in the execution context in order to share them with the next step)
step 2 reads the parent IDs from the execution context and iterates over them using an item reader. An item processor would enrich each item with its children before passing the enriched items to a StaxEventItemWriter
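The two steps can be sketched in plain Python to show the shape of the data flow (this is an illustration of the grouping logic, not Spring Batch code; the table rows are taken from the example above):

```python
from xml.etree import ElementTree as ET

# (key, parent, child) rows, as in the example table
rows = [
    ("yxc", "par001", "chi01"),
    ("xcv", "par001", "chi02"),
    ("cvb", "par002", "chi03"),
    ("vbn", "par003", "chi04"),
    ("bnm", "par003", "chi05"),
]

# Step 1: distinct parents, preserving first-seen order
# (the equivalent of select distinct(parent))
parents = list(dict.fromkeys(parent for _, parent, _ in rows))

def parent_to_xml(parent, rows):
    """Step 2: enrich one parent with its children and serialize it,
    the role the item processor + StaxEventItemWriter play."""
    node = ET.Element(parent)
    for _, p, child in rows:
        if p == parent:
            ET.SubElement(node, child)
    return ET.tostring(node, encoding="unicode")
```

The key design point carried over from the answer: only the small list of parent IDs crosses the step boundary; the child rows are fetched (here, filtered) per parent during step 2.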
I have used a Lookup activity to pass values to a ForEach activity; the Lookup output is generated from a SQL table. Once the iteration starts, if one of the activities inside the ForEach fails, the ForEach still runs for as many iterations as there are Lookup output values. How do I break out of the loop? I removed the records from the SQL table to get out of the loop, but the loop continues to run. How can I clear the ForEach items set when an inner activity fails?
Regards,
Sandeep
How can I clear the For Each Items set when an inner activity fails?
No, we can't. The ForEach activity doesn't support break for now, even if an internal activity fails.
Many users have posted the same question on Stack Overflow and Data Factory feedback:
It's been voted up 31 times but still has no response from the Data Factory product team.
Ref: https://feedback.azure.com/forums/270578-data-factory/suggestions/39673909-foreach-activity-allow-break
Update:
Congratulations on finding a solution for your scenario:
"Now used an until activity by comparing the variable values and count of files out put from a lookup activity to resolve the issue."
I'm posting it as an answer so it can benefit other community members.
Hope this helps.
I have replaced the ForEach loop with an Until activity. The input for the Until activity was a SQL query returning the count of records from the table the file names are copied into, plus a variable value. I used the @greater expression with the variable value and the Lookup activity's count. Inside the loop, I increment the variable using a temp variable and an add expression. If an activity fails, I set the variable to a value greater than the Lookup output count.
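The Until-with-counter workaround can be sketched as ordinary code. This is a Python illustration of the control flow only, not Data Factory JSON; items plays the role of the Lookup output, and the "set the variable greater than the count" trick becomes an assignment that makes the loop condition fail:

```python
def run_until(items, process_item):
    """Simulate the Until activity: loop while the counter is still
    within the Lookup output count; on failure, push the counter past
    the count so the @greater-style condition ends the loop."""
    counter = 0
    processed = []
    while counter < len(items):
        try:
            process_item(items[counter])
            processed.append(items[counter])
            counter += 1          # temp variable + add() increment
        except Exception:
            counter = len(items)  # "mark the variable greater than the count"
    return processed
```

This is what ForEach cannot do natively: the failure branch manipulates the loop variable, giving an early exit.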
When I try to update a database with some modified values in a DataSet, a concurrency exception isn't raised if I manually change some values in the database after the Fill method on the DataSet has run. (I only get the exception if I manually delete a row and then call the data adapter's Update method.)
How should I check whether I have a "dirty read" in my DataSet?
You have several options:
Keep a copy of the original entity set to compare against, and make the is-dirty determination at the point where you need to know whether it's dirty.
Check the DataRow's RowState property and, if it is Modified, compare the values in the Current and Original DataRowVersions. If a second change makes a value the same as the original, you could call RejectChanges, but that would reject all changes on the row. You will have to track each field manually, since the DataSet only keeps per-row or per-table changes.
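The field-by-field comparison in the second option can be sketched language-neutrally. This is a Python illustration of the idea, not ADO.NET: the two dicts stand in for a row's Original and Current DataRowVersions, and the function names are my own:

```python
def dirty_fields(original, current):
    """Return the fields whose Current value differs from the Original
    version. A row flagged Modified may have been edited back to its
    original values, in which case this list is empty."""
    return [k for k in original if original[k] != current.get(k)]

def is_really_dirty(original, current):
    """True only if at least one field genuinely changed."""
    return bool(dirty_fields(original, current))
```

Comparing versions field by field is what distinguishes "RowState says Modified" from "a value actually differs", which is the gap the answer describes.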
I need to validate values within a field against a value list, retaining the value if it is on the list but substituting a specific value if not. How can I do this?
I am afraid you are mixing two separate things:
Validation checks if some conditions have been satisfied; if not, it throws an error. It will not correct the entry.
If you want user entry to be corrected, you need to either:
define the field to auto-enter a calculated value; or
attach a script trigger to it, and have the script modify the value entered by the user.
In this case, you could auto-enter a calculated value (replacing the existing value) =
If ( IsEmpty ( FilterValues ( Self ; ValueListItems ( Get (FileName) ; "YourValueList" ) ) ) ; "Specific Value" ; Self )
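For readers unfamiliar with FileMaker's calculation syntax, the logic of that If/FilterValues expression boils down to a simple membership check. A Python sketch of the same rule (the function and parameter names are mine, not FileMaker's):

```python
def corrected_value(value, value_list, fallback="Specific Value"):
    """Keep the entry if it appears in the value list; otherwise
    substitute the fallback. Mirrors the FilterValues/If calculation:
    FilterValues returning empty means the entry is not on the list."""
    return value if value in value_list else fallback
```

As with the auto-enter calculation, this replaces the entry at write time rather than rejecting it, which is the distinction drawn above between validation and correction.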
--- Added in response to your clarification ---
Technically, you could run a script to find the records you want to verify and do Replace Field Contents (using the same calculation) on that field. You could run the script after changing the value list, as part of the weekly routine.
However, there are two major problems with this approach:
some records could be locked by another user;
you have no history of what happened, and no way to go back in case of making a mistake.
I also don't think it's good practice to have users modify a value list routinely. If you need a weekly list of values, you should store them in records, not in a value list. That way at least that part of the value list would have a history.
Another option you may consider is using an unstored calculation field with a similar formula. This would change dynamically with the value list, and leave the original field unmodified. This would be a good arrangement if, for example, you need to export the corrected values every week.
Ok, I have a question relating to an issue I've previously had. I know how to fix it, but we are having problems trying to reproduce the error.
We have a series of procedures that create records based on other records. The records are linked to the primary record by way of a link_id. In a procedure that grabs this link_id, the query is
select #p_link_id = id --of the parent
from table
where thingy_id = (blah)
Now, there are multiple rows in the table for the activity, and some can be cancelled. My code doesn't exclude cancelled rows in the select statement, so if there are previously cancelled rows, those IDs will appear in the select. If I do exclude cancelled rows (append where status != 'C'), there is always exactly one 'open' record selected.
That solves the issue. However, I need to be able to reproduce the issue in our development environment.
I've gone through a process of entering a whole heap of data (opening, cancelling, etc.) to try to get this select statement to return an invalid ID. However, whenever I run the select, the IDs come back in order (sequence-generated), while in the case where this error occurred, the select statement returned what seems to be the first value into the variable.
For example.
ID Status
1 Cancelled
2 Cancelled
3 Cancelled
4 Open
Given the above, if I do a select for the ID I want, I want to get '4'. In the error, the result is 1. However, even if I enter in 10 cancelled records, I still get the last one in the select.
In Oracle, I know that if you select into a variable and more than one record is returned, you get an error (I think). Sybase, apparently, can assign multiple values to a variable without erroring.
I'm thinking that either it's something to do with how the data is selected from the table, where IDs without a sort order don't come back in ascending order, or there's a dboption where a select into a variable saves the first or last value queried.
Edit: it looks like we can reproduce this error by rolling back stored procedure changes. However, the procs don't go anywhere near this link_id column. Is it possible that changes to the database architecture could break an index or something?
If more than one row is returned, the value that is stored will be the last value in the list, according to this.
If you haven't specified an order for retrieval via ORDER BY, then the order returned will be at the convenience of the database engine. It may very well vary by the database instance. It may be in the order created, or even appear "random" because of where the data is placed within the database block structure.
The moral of the story:
1. Always make singleton SELECTs return a single row
2. When #1 can't be done, use an ORDER BY to make sure the one you care about comes last
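The "last value wins" behaviour is easy to demonstrate outside the database. A Python sketch of what `select @var = id from table where ...` effectively does (the function name is mine; each matching row simply overwrites the variable):

```python
def select_into_variable(rows, predicate):
    """Mimic Sybase assigning a select result to a variable: every
    matching row overwrites it, so whichever row the engine scans
    last determines the final value."""
    var = None
    for row in rows:          # scan order = whatever the engine chooses
        if predicate(row):
            var = row["id"]
    return var
```

With no ORDER BY, the scan order is the engine's choice, so the same query can yield 4 one day and 1 the next; filtering to the single open row (status != 'C') removes the ambiguity entirely.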