Problems with insertion of data in an F-02 batch input (ABAP)

I have the following Batch Input (BI) that posts payments (F-02).
In the first position, the header data, the bank account, and the amount are recorded.
In the first condition, a customer payment is recorded.
In the last condition, another customer payment is recorded.
For that last condition nothing is executed. Could someone help me, please?
Points 1 and 2 perform the same functionality, only with different values.
PERFORM OPEN_GROUP.

* Document header data (screen SAPMF05A 0100)
PERFORM BDC_DYNPRO USING 'SAPMF05A' '0100'.
PERFORM BDC_FIELD  USING 'BKPF-BLDAT' p_bldat.
PERFORM BDC_FIELD  USING 'BKPF-BUDAT' p_budat.
PERFORM BDC_FIELD  USING 'BKPF-XBLNR' p_xblnr.
PERFORM BDC_FIELD  USING 'BKPF-BKTXT' p_bktxt.
PERFORM BDC_FIELD  USING 'BKPF-BLART' p_blart.
PERFORM BDC_FIELD  USING 'BKPF-MONAT' p_monat.
PERFORM BDC_FIELD  USING 'BKPF-BUKRS' p_bukrs.
PERFORM BDC_FIELD  USING 'BKPF-WAERS' p_waers.

* First line item: bank account, posting key 40
PERFORM BDC_FIELD  USING 'RF05A-NEWBS' '40'.
PERFORM BDC_FIELD  USING 'RF05A-NEWKO' p_newko.
PERFORM BDC_FIELD  USING 'BDC_OKCODE' '/00'.

* Amount and text of the bank item (screen SAPMF05A 0300)
PERFORM BDC_DYNPRO USING 'SAPMF05A' '0300'.
WRITE p_wrbtr TO v_importe.
PERFORM BDC_FIELD  USING 'BSEG-WRBTR' v_importe.
PERFORM BDC_FIELD  USING 'BSEG-SGTXT' p_sgtxt.

* Point 1: first customer payment, posting key 15
IF NOT p_cwrbtr IS INITIAL.
  PERFORM BDC_FIELD  USING 'RF05A-NEWBS' '15'.
  PERFORM BDC_FIELD  USING 'RF05A-NEWKO' p_cnewko.
  PERFORM BDC_FIELD  USING 'BDC_OKCODE' '/00'.
* Coding block popup: business area
  PERFORM BDC_DYNPRO USING 'SAPLKACB' '0002'.
  PERFORM BDC_FIELD  USING 'COBL-GSBER' p_gsber.
  PERFORM BDC_FIELD  USING 'BDC_OKCODE' '=ENTE'.
* Customer item details (screen SAPMF05A 0301)
  PERFORM BDC_DYNPRO USING 'SAPMF05A' '0301'.
  WRITE p_cwrbtr TO v_cimporte.
  PERFORM BDC_FIELD  USING 'BSEG-WRBTR' v_cimporte.
  PERFORM BDC_FIELD  USING 'BSEG-GSBER' p_gsber.
  PERFORM BDC_FIELD  USING 'BSEG-ZFBDT' p_budat.
  PERFORM BDC_FIELD  USING 'BSEG-SGTXT' p_csgtxt.
  PERFORM BDC_FIELD  USING 'BDC_OKCODE' '/00'.
ENDIF.

* Point 2: second customer payment, posting key 15 (same steps as point 1, different values)
IF NOT p_owrbtr IS INITIAL.
  PERFORM BDC_FIELD  USING 'RF05A-NEWBS' '15'.
  PERFORM BDC_FIELD  USING 'RF05A-NEWKO' '511'.
  PERFORM BDC_FIELD  USING 'BDC_OKCODE' '/00'.
  PERFORM BDC_DYNPRO USING 'SAPLKACB' '0002'.
  PERFORM BDC_FIELD  USING 'COBL-GSBER' p_gsber.
  PERFORM BDC_FIELD  USING 'BDC_OKCODE' '=ENTE'.
  PERFORM BDC_DYNPRO USING 'SAPMF05A' '0301'.
  WRITE p_owrbtr TO v_oimporte.
  PERFORM BDC_FIELD  USING 'BSEG-WRBTR' v_oimporte.
  PERFORM BDC_FIELD  USING 'BSEG-GSBER' p_gsber.
  PERFORM BDC_FIELD  USING 'BSEG-ZFBDT' p_budat.
  PERFORM BDC_FIELD  USING 'BSEG-SGTXT' p_osgtxt.
  PERFORM BDC_FIELD  USING 'BDC_OKCODE' '=BU'.   "=BU posts (saves) the document
ENDIF.

PERFORM BDC_TRANSACTION USING 'F-02'.

Related

ADF - what's the best way to execute one from a list of Data Flow activities based on a condition

I have 20 file formats and 1 Data Flow activity that maps to each one of them. Based on the file name, I know which Data Flow activity to execute. Is the only way to handle this through a "Switch" activity? Is there another way? For example, can I parameterize which data flow to execute with a variable?
Unfortunately, there is no option to run one out of a list of data flows based on an input condition.
To perform data migration and transformation for multiple tables, you can use the same data flow and parameterize it by providing the table names either at runtime or via a control table that holds all the table names; inside a ForEach, call the Data Flow activity. In the sink settings, use the merge schema option.
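As a rough illustration of the "provide the table name at runtime" option, here is a hedged sketch that triggers one run of a parameterized pipeline per table via the azure-mgmt-datafactory Python SDK; the subscription, resource group, factory, pipeline, and parameter names are placeholders, not from the original post, and the table list could equally come from a control table read by a Lookup activity inside ADF itself.

# Sketch only: trigger a parameterized ADF pipeline once per table name.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

for table_name in ["Customers", "Orders", "Invoices"]:
    run = adf.pipelines.create_run(
        resource_group_name="<resource-group>",
        factory_name="<factory-name>",
        pipeline_name="LoadTableDataFlow",      # pipeline wrapping the parameterized data flow
        parameters={"tableName": table_name},   # forwarded to the data flow parameter
    )
    print(table_name, run.run_id)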

Running Goal Seek Function in Excel using Powershell

I need to use the Goal Seek function in Excel to process several calculations in a background process.
I am already using an approach with a while loop that searches for the value itself by changing the input cell and checking the output value.
I wonder if there is any other option to calculate the Goal Seek value in a background process, for example triggering Goal Seek using PowerShell or another tool.
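For what it's worth, a minimal sketch of one such option: driving Excel's built-in Goal Seek through COM automation from Python with pywin32 (the same Range.GoalSeek COM call is available from PowerShell via New-Object -ComObject Excel.Application). The workbook path, sheet name, cell addresses, and target value below are illustrative assumptions.

# Sketch: run Goal Seek headlessly via Excel COM automation (requires Excel + pywin32).
import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
excel.Visible = False
excel.DisplayAlerts = False
try:
    wb = excel.Workbooks.Open(r"C:\temp\model.xlsx")
    ws = wb.Worksheets("Sheet1")
    # Make formula cell B2 reach 100 by changing input cell B1
    # (same as Data > What-If Analysis > Goal Seek in the UI).
    ws.Range("B2").GoalSeek(100, ws.Range("B1"))
    print("Solved input value:", ws.Range("B1").Value)
    wb.Save()
finally:
    excel.Quit()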

How to implement conditional branches in Azure Data Factory pipelines

I am implementing a pipeline to insert data updates from CSV files into a SQL DB. The plan is to first insert the data into a temporary SQL table for validation and transformation, and then move the processed data to the actual SQL table. I would like to branch the pipeline execution depending on the validation result: if the data is OK, it will be inserted into the target SQL table; if there are fatal failures, the insertion activity should be skipped.
I tried to find instructions/guidance but with no success so far. Any ideas whether a pipeline activity supports conditional execution, e.g. based on some properties of the input dataset?
It is possible now with Azure Data Factory version 2.
After execution, downstream activities can now depend on four possible outcomes as standard:
- On success
- On failure
- On completion
- On skip
Also, custom 'if' conditions are available for branching based on expressions.
Refer to the links below for more detail:
https://www.purplefrogsystems.com/paul/2017/09/whats-new-in-azure-data-factory-version-2-adfv2/
https://learn.microsoft.com/en-us/azure/data-factory/tutorial-control-flow
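As a concrete illustration of both mechanisms, here is a hedged sketch using the azure-mgmt-datafactory Python SDK: one activity that only runs on success of another, plus an If Condition activity branching on an expression. The Wait activities stand in for real validation/load steps, and the subscription, resource group, factory, pipeline, and parameter names are assumptions.

# Sketch only: dependency conditions ("on success") and an If Condition branch in ADF v2.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    ActivityDependency, Expression, IfConditionActivity,
    ParameterSpecification, PipelineResource, WaitActivity,
)

validate = WaitActivity(name="ValidateStaging", wait_time_in_seconds=1)

# Runs only "on success" of ValidateStaging; other conditions are Failed, Completed, Skipped.
load = WaitActivity(
    name="LoadTargetTable",
    wait_time_in_seconds=1,
    depends_on=[ActivityDependency(activity="ValidateStaging",
                                   dependency_conditions=["Succeeded"])],
)

# Custom branching on an expression, e.g. a validation flag passed as a pipeline parameter.
branch = IfConditionActivity(
    name="CheckValidation",
    expression=Expression(value="@bool(pipeline().parameters.validationPassed)"),
    if_true_activities=[WaitActivity(name="InsertToTarget", wait_time_in_seconds=1)],
    if_false_activities=[WaitActivity(name="SkipInsert", wait_time_in_seconds=1)],
)

pipeline = PipelineResource(
    parameters={"validationPassed": ParameterSpecification(type="Bool", default_value=True)},
    activities=[validate, load, branch],
)

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
adf.pipelines.create_or_update("<resource-group>", "<factory-name>", "ConditionalLoad", pipeline)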
The short answer is no.
I think it's worth pointing out that ADF is just an orchestration tool that invokes other services. The current version can't do what you want because it does not have any compute of its own. It's not an SSIS data flow engine.
If you want this behaviour, you'll need to code it into the SQL DB stored procedures, with flags etc. on the processed datasets.
Then maybe have some boilerplate code with parameters passed from ADF to perform either the insert, update, or divert operation.
Handy link for calling a stored procedure with parameters from ADF: https://learn.microsoft.com/en-us/azure/data-factory/data-factory-stored-proc-activity
Hope this helps.

Execute database operations inside a chunk-oriented step

I have a chunk-oriented step in the form "reader / processor / writer" called Job1. I have to execute database EJB operations after this job ends, if possible in the same transaction. I have other jobs (implemented by tasklets) where I can do this in a simple manner: in those jobs I simply call these operations in the tasklet, before the execute method finishes. But in this case I don't know the right way to do it. On a first try I implemented it with a step listener (outside the transaction), but I cannot do that, because there is an architecture rule in my company against calling database operations in listeners. I could execute the operations in another tasklet step after this step, and I will go that way if I don't find a better one, but if possible I would like to execute these operations in the same transaction as Job1.
A couple of notes:
1. In a chunk-based step (reader/processor/writer), you will typically have multiple transactions, one for each chunk.
2. Because of 1, you typically can't make a DB call at the end of the step in the same transaction the items were processed in; they were processed in multiple transactions.
That being said, from what it sounds like, the best option would be to put your call in another step after the chunk-based one.

SSIS Access DB data into TSQL with a stored procedure

I want to use a Data Flow Task that has two sources: one from an Access DB and one from a SQL Server DB. I then want to manipulate the data and finally call a stored procedure on the same SQL Server that was my second source.
So I can't use an Execute SQL Task, since I want to manipulate the data and call the stored procedure at the end.
Which toolbox component would I use, and what format would the stored procedure call take?
I tried to use an OLE DB Destination with a stored procedure call, something like this:
Exec sppUpdateAATable1 ?,?
SSIS uses the concept of a pipeline to organise the Data Flow task. Data flows from source to destination, and processing of this data happens in between. Since you want to use the result of the processing as parameters to your stored procedure, it cannot be done under the pipeline concept. The SP is not really a destination for your data, as the SP will do something with it too.
You can populate an in-memory recordset (Recordset Destination) and use a Foreach Loop Container to execute your SP for each row of the recordset.
Update
Your package should look something like this:
- Data Flow task:
  - OLE DB connection to Access
  - OLE DB connection to SQL Server
  - To combine the 2 data streams, use a UNION task
  - Recordset Destination; in its properties, name a variable of type Object (MyRecordsetVar). It will hold the recordset data.
- Foreach Loop Container. In its properties, select the loop container type ADO Recordset and specify the MyRecordsetVar variable in the loop properties.
- Assign two more (or as many as needed) variables to hold the data from each column of the recordset. Data from each row of the recordset will be passed to these variables, one row at a time.
- Inside the loop, put an Execute SQL Task. In the Input section of the task, specify your input variables - those that hold the data from the columns of the recordset. I assume you know how to do that.
- Put your query into the task as execute sp_MyProc ?,?.
This should be it.
You can save yourself the trouble of the recordset destination and foreach loop route and instead use an OLE DB Command in your Data Flow. It's going to fire that stored proc for each row that flows through it.