In my batch process, data from a SQL database has to be selected and exported as an XML file. To do that, I have to select all data for one parent element, so that I can export the parent node and all of its child nodes as XML.
I have a table like the following example:
|key|parent|child|
|---|------|-----|
|yxc|par001|chi01|
|xcv|par001|chi02|
|cvb|par002|chi03|
|vbn|par003|chi04|
|bnm|par003|chi05|
Now I want to select every parent and its child elements, and process the parents one after the other. For the above example table the order should be: par001 -> par002 -> par003. The XML that is exported should look like the following:
<par001>
  <chi01></chi01>
  <chi02></chi02>
</par001>
<par002>
  <chi03></chi03>
</par002>
...
How can I select the data so that I can process the parent elements one after the other? Is this possible with a JpaItemReader?
I would break the problem down into two steps:
step 1 does a select distinct parent from your_table and stores the result in the job execution context (the result is a list of strings or IDs, not entire items, so it is fine to store it in the execution context in order to share it with the next step)
step 2 reads the parent IDs from the execution context and iterates over them using an item reader. An item processor then enriches each item with its children before passing the enriched items to a StaxEventItemWriter, as sketched below
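A minimal sketch of step 2 under those assumptions (ParentNode, ChildDao and the wiring are hypothetical names, not Spring Batch API; only ListItemReader, ItemProcessor and StaxEventItemWriter are real classes):

import java.util.List;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.support.ListItemReader;

// Hypothetical domain type: one parent ID plus the child rows the
// processor attaches to it before writing.
class ParentNode {
    final String parentId;
    List<String> children;
    ParentNode(String parentId) { this.parentId = parentId; }
}

// Hypothetical data access: e.g. "select child from your_table where parent = ?".
interface ChildDao {
    List<String> findChildren(String parentId);
}

public class ParentStepConfig {

    // Step 2 reader: iterate over the parent IDs that step 1 stored in the
    // job execution context (they could be injected with
    // @Value("#{jobExecutionContext['parentIds']}")).
    public ListItemReader<ParentNode> reader(List<String> parentIds) {
        return new ListItemReader<>(
                parentIds.stream().map(ParentNode::new).toList());
    }

    // Enrich each parent with its children; the StaxEventItemWriter then
    // serializes one ParentNode at a time as an XML fragment.
    public ItemProcessor<ParentNode, ParentNode> processor(ChildDao dao) {
        return parent -> {
            parent.children = dao.findChildren(parent.parentId);
            return parent;
        };
    }
}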
Related
I have a PostgreSQL table in an application that holds both parent and child records. There is a column in the table to reference the parent ID, where applicable, for each child record. The problem is that I am trying to import data from an external source where the child record is made up of a sub-number of the parent, e.g. parent_reference_id = 123456000000, and a child reference record for this could be 123456000001, 123456000002 and so on.

The application itself generates a unique ID for each record when I import the data, so it is possible to import the child and parent records simultaneously. However, the difficulty I'm facing is linking the application-generated ID for the parent record to the parent_reference_id for the corresponding child records. The only hook I have is that the first six digits of the child_value_reference match the first six digits of the parent_value_reference, and I've tried something like foo = bar(left(value, 6) || '000000') to create a match. However, I don't know how to use this to return the unique_id in a meaningful way and update the matching records.

I've tried temporary tables and CTEs, but my knowledge of Postgres is limited and I can't seem to find a solution that fits my problem. Another thing to mention is that these groups can change with updates within the external data, so I'd also need a solution to make those updates too.

Thanks in advance, Crispian
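A sketch of the kind of update described above, assuming a single table named records where reference_id is the imported external reference, app_id is the application-generated unique ID, and parent_app_id is the column that should point at the parent (all of these names are assumptions):

-- Link each child row to the application-generated ID of its parent:
-- a child belongs to the row whose reference equals the child's first
-- six digits with the remaining digits zeroed out.
update records child
   set parent_app_id = parent.app_id
  from records parent
 where parent.reference_id = left(child.reference_id, 6) || '000000'
   and child.reference_id <> parent.reference_id;

Because the statement is driven purely by the reference values, re-running it after the external data changes re-links the groups as well.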
In Talend Open Studio, I have to load different source files into one output table. How can I fetch the last ID in that output table, generate the very next ID, and continue the insertion with the different sources?
Add a subjob prior to your current subjob.
In a tDBInput component, select your last ID through a query (SELECT MAX(id), SELECT TOP 1, etc.).
Put the result in a variable (global variable or context variable), in a tJavaRow for example: context.lastID = input_row.id
Use this variable in a tMap to generate the next ID, through the Numeric.sequence function.
In the output mapping of your tMap, you should add something like context.lastID + Numeric.sequence("s1", 1, 1) (Numeric.sequence takes a sequence name, a start value and a step, so adding its result to the stored last ID continues the numbering).
I think there are plenty of solutions to get the last ID and generate a sequence from there; you can also check the advanced output parameters on your tDBOutput.
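Putting the two pieces together (these are component expressions inside the Talend job, not standalone Java; "s1" is just an arbitrary sequence name):

// tJavaRow in the first subjob: stash the max ID from tDBInput in a
// context variable (lastID must be defined in the job context as an int).
context.lastID = input_row.id;

// tMap output expression in the second subjob: continue numbering from
// lastID. Numeric.sequence("s1", 1, 1) returns 1, 2, 3, ... per row.
context.lastID + Numeric.sequence("s1", 1, 1)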
I have a simple ADF pipeline that contains one Lookup (which loads the names of the tables to be migrated) and a ForEach activity (which contains a Copy activity and a Function App that loads data into BQ). I want to get the iteration ID and send it to the Azure Function App.
Let's say the Lookup returns JSON with three tables in it (A, B, C). I want to get the iteration ID inside the ForEach loop: for example, 1 for A, 2 for B and 3 for C.
Any help on this will be highly appreciated.
I agree this is a common requirement, but there seems to be no direct way to get the array index inside the ForEach activity. However, you could try my little trick with an Azure Function activity.
Step 1: Create a text file (named index.txt) in some blob storage path and store the value 1 in it (to use it as the array index).
Step 2: Inside the ForEach activity, use a Lookup activity to read the value of index.txt. The first time, it is 1.
Step 3: After that, execute an Azure Function activity to increment the value by 1, so that next time it is 2.
Step 4: When you finish the ForEach activity, reset the value back to 1 (its initial state from step 1) with an Azure Function activity.
No need to create two Azure Functions, just one: you could pass a boolean parameter to distinguish whether the invocation is a reset or an increment.
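A minimal sketch of what that single function could look like (Java for illustration; the app setting, container and blob names are assumptions, and error handling is omitted):

import java.util.Optional;
import com.azure.core.util.BinaryData;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobClientBuilder;
import com.microsoft.azure.functions.*;
import com.microsoft.azure.functions.annotation.*;

public class CounterFunction {

    // HTTP-triggered function: ?reset=true writes 1 back to the blob,
    // anything else increments the stored value by 1.
    @FunctionName("counter")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req", methods = {HttpMethod.GET},
                         authLevel = AuthorizationLevel.FUNCTION)
            HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {

        BlobClient blob = new BlobClientBuilder()
                .connectionString(System.getenv("STORAGE_CONNECTION")) // assumed app setting
                .containerName("pipeline-state")                       // assumed container
                .blobName("index.txt")
                .buildClient();

        boolean reset = "true".equals(request.getQueryParameters().get("reset"));
        int value = reset
                ? 1
                : Integer.parseInt(blob.downloadContent().toString().trim()) + 1;

        blob.upload(BinaryData.fromString(Integer.toString(value)), true); // overwrite
        return request.createResponseBuilder(HttpStatus.OK)
                      .body(Integer.toString(value))
                      .build();
    }
}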
In the lookup table from which I was going to pick the source and destination tables/databases, I added another column with an iterator number (1, 2, 3, 4, ...) for each row that the Lookup activity retrieves.
Then, inside Azure Data Factory, I read that column inside the ForEach loop. For each of the source and destination tables I have a self-made iterator, and I used that for my purpose. It worked perfectly fine for me.
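If the lookup source is a table, the iterator column does not even have to be stored; it could be generated on the fly in the Lookup query, for example (the table and column names here are assumptions):

-- Number the rows so each ForEach item carries its own iteration ID
-- alongside the table names.
SELECT
    ROW_NUMBER() OVER (ORDER BY source_table) AS iterator,
    source_table,
    destination_table
FROM migration_tables;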
I'm new to APEX. I am sure a solution for this might be available, but I was unable to find a proper answer anywhere.
Here is the scenario: provide the user with the capability to update a particular report row.
Page 1 has report A, in which a particular column (say column B) has a link on every row; by clicking a particular row, the user navigates to a new page (page 2).
Page 2 has a single-record form in region 1, which has a list of items (around 10), and region 2 of the same page has a tabular form.
Some of the items in region 1 of page 2 are populated based on the report details from page 1. Some items have LOVs, and in some items the user can enter information.
The tabular form in region 2 of page 2 is generated based on a line item ID and can be edited by the user. The tabular form is associated with one table only.
There are two buttons on the page: the cancel button takes the user back to the report page, whereas the save button saves the data to the database tables. The single-record form items will update two tables, whereas the tabular form will update one table.
How does the process need to be established for updating the underlying tables through APEX?
Right now the tabular form has the built-in MRU update process (but I am not sure whether I can use this process in coordination with the single-record form, or whether it is better to create a separate process that handles both updates).
Can anybody give me an idea how this can be accomplished, or a link where such a process has been explained?
You will need to manually create PL/SQL processes to process the submitted values and apply them to your tables. You cannot use the built-in row processing to do this: you cannot define two per page. (That makes sense, because you cannot indicate which column maps to which table: you can only define "Database Column" as the source for an item. This means that even if you were to have two processes, those columns would be processed by both, which would lead to errors.)
Take a look at this post for some ideas on how to set the processes up: https://stackoverflow.com/a/7877933/814048
if :P42_ORDER_STATUS in ('IP','OW') then
  -- g_f01 holds the submitted values of the tabular form's first column
  -- (typically the row-selector checkbox), one entry per checked row
  for i in 1 .. apex_application.g_f01.count loop
    update sales_mst
       set order_status = 'DR'
     where id = to_number(apex_application.g_f01(i));
  end loop;
end if;
I have two tables in APEX that are linked by their primary key. One table (APEX_MAIN) holds the basic metadata of a document in our system and the other (APEX_DATES) holds important dates related to that document's processing.
For my team I have created a control panel where they can interact with all of this data. The issue is that right now they alter the information in APEX_MAIN on one page and then alter APEX_DATES on another. I would really like to have these forms on the same page and submit updates to their respective tables and rows with a single submit button.

I have currently set this up using two different regions on the same page, but I am getting errors both with the initial fetching of the rows (whichever row is fetched second seems to work, but the page items in the form that was fetched first come up empty) and with submitting (it gives an error about information in the database having been altered since the update request was sent). Can anyone help me?
It is a limitation of the built-in APEX forms that you can only have one Automated Row Fetch process per page, unfortunately. You can have more than one form region per page, but then you have to code all the fetch and submit processing yourself (not that difficult really, but you need to take care of optimistic locking etc. yourself too).
Splitting one table's form over several regions is perfectly possible, even using the built-in form functionality, because the region itself is just a layout object; it has no functionality associated with it.
Building forms manually is quite straightforward, just a bit more work.
Items
These should have their source set to "Static Text" rather than "Database Column".
Buttons
You will need buttons like Create, Apply Changes and Delete that submit the page. These need unique request values so that you know which table is being processed, e.g. CREATE_EMP. You can make the buttons display conditionally, e.g. show Create only when the PK item is null.
Row Fetch Process
This will be a simple PL/SQL process like:
select ename, job, sal
into :p1_ename, :p1_job, :p1_sal
from emp
where empno = :p1_empno;
It will need to be conditional so that it only fires on entry to the form and not after every page load; otherwise, if there are validation errors, any edits will be lost. This can be controlled by a hidden item that is initially null but is set to a non-null value on page load: only fetch the row if the hidden item is null.
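A sketch of that pattern, with the process condition set to "Value of Item P1_PAGE_INIT is NULL" (P1_PAGE_INIT being a hypothetical hidden item):

begin
  select ename, job, sal
    into :p1_ename, :p1_job, :p1_sal
    from emp
   where empno = :p1_empno;
  -- Mark the page as initialized, so that a re-render after a
  -- validation error does not overwrite the user's edits.
  :p1_page_init := 'Y';
end;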
Submit Process(es)
You could have three separate processes for insert, update and delete associated with the buttons, or a single process that looks at the :REQUEST value to see what needs doing. Either way, the processes will contain simple DML like:
insert into emp (empno, ename, job, sal)
values (:p1_empno, :p1_ename, :p1_job, :p1_sal);
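If you go the single-process route, it could branch on the request value like this (CREATE_EMP matches the button example above; SAVE_EMP and DELETE_EMP are assumed request names):

begin
  if :REQUEST = 'CREATE_EMP' then
    insert into emp (empno, ename, job, sal)
    values (:p1_empno, :p1_ename, :p1_job, :p1_sal);
  elsif :REQUEST = 'SAVE_EMP' then
    update emp
       set ename = :p1_ename, job = :p1_job, sal = :p1_sal
     where empno = :p1_empno;
  elsif :REQUEST = 'DELETE_EMP' then
    delete from emp
     where empno = :p1_empno;
  end if;
end;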
Optimistic Locking
I omitted this above for simplicity, but one thing the built-in forms do for you is handle "optimistic locking" to prevent two users updating the same record simultaneously, with one user's update overwriting the other's. There are various methods you can use to do this. A common one is to use OWA_OPT_LOCK.CHECKSUM to compare the record as it was when selected with the record as it is at the point of committing the update.
In fetch process:
select ename, job, sal, owa_opt_lock.checksum('SCOTT','EMP',ROWID)
into :p1_ename, :p1_job, :p1_sal, :p1_checksum
from emp
where empno = :p1_empno;
In submit process for update:
update emp
set job = :p1_job, sal = :p1_sal
where empno = :p1_empno
and owa_opt_lock.checksum('SCOTT','EMP',ROWID) = :p1_checksum;
if sql%rowcount = 0 then
  -- handle the fact that the update failed, e.g. with raise_application_error
end if;
Another, easier solution for the fetching part is to create a view with all the fields that you need.
The weak point is that you later need to alter the "submit" code to write to the tables that are the source of the view's data.
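A sketch of such a view over the two tables from the question above (the non-key column names are assumptions):

-- One row per document: the form fetches from this view, but the
-- submit processes still write to APEX_MAIN and APEX_DATES directly.
create or replace view apex_main_dates_v as
select m.id,
       m.title,            -- assumed metadata columns from APEX_MAIN
       m.doc_type,
       d.received_date,    -- assumed date columns from APEX_DATES
       d.processed_date
  from apex_main  m
  join apex_dates d on d.id = m.id;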