Dynamic file creation in COBOL/JCL - db2

I have a requirement where I have to read a DB2 table and create multiple output files, one for each program name in the table. We don't know how many unique program names are in the table. My job will run every 4 hours, so, for example, my first run may have 10 program names and I will have to create 10 output files, and the second run may have 20 program names and 20 output files.
I'm looking for a dynamic way to create the DD names and file names in the JCL as well as in my COBOL program, so that I don't have to define 20 (or some maximum number of) DD statements in my JCL, because that 20 could just as easily become 50, 60, and so on.
Please help me with the possibilities.

One option is BPXWDYN, which is callable from COBOL, among other languages. A COBOL example is here.
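As a rough illustration, here is a minimal COBOL sketch of that kind of BPXWDYN call. The DD name, dataset name, and allocation attributes are placeholders; in your job you would build one request string per program name read from the DB2 table.

       WORKING-STORAGE SECTION.
       01  WS-ALLOC-REQUEST.
      *    BPXWDYN expects a halfword length followed by the text
           05  WS-ALLOC-LEN     PIC S9(4) COMP.
           05  WS-ALLOC-TEXT    PIC X(200).

       PROCEDURE DIVISION.
      *    Build an ALLOC request for one output file
      *    (all names and attributes here are placeholders)
           MOVE SPACES TO WS-ALLOC-TEXT
           STRING 'ALLOC FI(OUTFILE1) '
                  'DA(MY.OUTPUT.PGM0001) '
                  'NEW CATALOG SPACE(1,1) CYL '
                  'LRECL(80) RECFM(F,B)'
                  DELIMITED BY SIZE INTO WS-ALLOC-TEXT
           MOVE LENGTH OF WS-ALLOC-TEXT TO WS-ALLOC-LEN
           CALL 'BPXWDYN' USING WS-ALLOC-REQUEST
           IF RETURN-CODE NOT = ZERO
              DISPLAY 'DYNAMIC ALLOCATION FAILED RC=' RETURN-CODE
           END-IF

Once the DD is allocated you can OPEN it as usual; Enterprise COBOL also lets the SELECT clause take its DD name from a data item (ASSIGN USING data-name), which is handy when the DD names themselves vary from run to run.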

I'm not aware of a canned service that will do what you want. You can explore how to call a program that would do the dynamic allocation for you. Here is a starting point in the z/OS documentation.
This example is a 'C' program that will execute a dynamic allocation based on values from the caller.
It's not turnkey but is worth exploring.

Logging a counter value to a batch name in Siemens TIA Portal

I need to create a program for a 1214 PLC in TIA Portal and a Comfort HMI that counts several products using a count-up counter and stores that value against a specific batch name.
For every new batch, the operator would enter a new batch name, and the counter will count the products for that specific batch.
The count needs to be displayed on the HMI screen along with the history of batches and the associated final count number.
So basically, I need a way to attach a name (batch_id) to a final count and log that pair for later reference.
Can someone give me some advice as to how I would do that?
To clarify, I need help with storing and displaying the counter value and batch names, not with the counting itself.
I appreciate any help you can provide.
There are a few ways to do this (yes, you can use PLC data logs, and no, they don't have to create a separate file for each batch), but I am posting here what I would do, because it's convenient for data backups; I have taken this approach before and know it works.
Write the count value (generated in the PLC), the batch value and the timestamp to a CSV file on a USB drive inserted into the Comfort HMI, using VBScripts on the HMI.
Split the files regularly - e.g. daily, weekly or monthly, to minimize the risk of any single file becoming corrupt and you losing the data. More detail follows.
Data Storage:
Count is calculated in the PLC. Batch ID and timestamp can be stored in the PLC (if you want them to be retentive after a power cut) or in the HMI.
You will have Comfort HMI tags representing each of these three values. Once a batch is complete, call a VB script that writes these values to the CSV file. There are application examples and forum entries on SIOS about this.
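As a rough sketch only: assuming HMI tags named Batch_ID and Batch_Count, and assuming your Comfort panel's script environment provides the FileCtl.File object used in the SIOS file-handling examples (both that object name and the USB path below are assumptions to verify against the application example for your panel), the script body could look something like this:

Dim f, line
' Build one CSV line: timestamp;batch id;count
line = CStr(Now) & ";" & SmartTags("Batch_ID") & ";" & SmartTags("Batch_Count")
' Append the line to the log file on the USB stick (path is a placeholder)
Set f = CreateObject("FileCtl.File")
f.Open "\Storage Card USB\batchlog.csv", 8   ' 8 = append mode
f.LinePrint line
f.Close
Set f = Nothing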
Data display as a table:
Read the CSV file values according to your filter criteria (day, time range, batch ID, batch ID range, etc) using a VB script. Write to internal HMI tags.
Display these internal HMI tags as IO fields on a Comfort panel screen. This is your custom-built table and yes it's the only way to do it unless you want to create a custom control and install it on the panel.
Backing up:
Disable logging and check that the USB drive is not in use, using a script, e.g. this one: https://support.industry.siemens.com/cs/document/89855157
Remove the USB, copy the files, re-insert it and activate logging again.
(you implement the 'disable' and 'activate' logging features, e.g. using an internal BOOL tag that prevents a script from executing).
There is a lot of info on SIOS about these topics, as Application Examples, FAQs and forum entries.
The PLC log method works, but data backup and especially display can become a pain.

Deploying DB2 user-defined functions in order of dependency

We have about 200 user-defined functions in DB2. These UDFs are generated by Data Studio into a single script file.
When we create a new DB, we need to run the script file several times because some UDFs depend on other UDFs and cannot be created until the functions they depend on have been created first.
Is there a way to generate the script file so that the order in which the functions are deployed takes this dependency into account? Or is there some other technique to arrange the order efficiently?
Many thanks in advance.
That problem should only happen if the setting of auto_reval is not correct. See "Creating and maintaining database objects" for details.
Db2 allows objects to be created in an "unsorted" order. Only when an object is used (accessed) are the object and the objects it depends on checked. This behavior was introduced a long time ago; only some old, migrated databases keep auto_reval=disabled, and some environments might set it that way in their configuration scripts.
If you still run into issues, try setting auto_reval=DEFERRED_FORCE.
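For example, from the Db2 command line (the database name SAMPLE is just a placeholder):

db2 update db cfg for SAMPLE using auto_reval DEFERRED_FORCE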
The db2look system command can generate DDL ordered by object creation time with the -ct option, so that can help if you don't want to use the auto_reval method.
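For example (the database name and output file are placeholders):

db2look -d SAMPLE -e -ct -o create_objects.sql

Here -e extracts the DDL for the database objects, -ct orders the statements by object creation time, and -o writes them to the named file.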

Using Talend Open Studio DI to extract a value from a unique first row before continuing to process columns

I have a number of Excel files where there is a line of text (and a blank row) above the header row for the table.
What would be the best way to process the file so I can extract the text from that row AND include it as a column when appending multiple files? Is it possible without having to process each file twice?
Example
This file was created on machine A on 01/02/2013
Task|Quantity|ErrorRate
0102|4550|6 per minute
0103|4004|5 per minute
And end up with the data from multiple similar files
Task|Quantity|ErrorRate|Machine|Date
0102|4550|6 per minute|machine A|01/02/2013
0103|4004|5 per minute|machine A|01/02/2013
0467|1264|2 per minute|machine D|02/02/2013
I put together a small, crude sample of how it can be done. I call it crude because (a) it is not dynamic - you can add more files to process, but you need to know how many files there are before building your job - and (b) it shows the basic concept but would require more work to suit your needs. For example, in my test files I simply have "MachineA" or "MachineB" in the first line; you will need to parse that data out to obtain the machine name and the date (see the sketch at the end of this answer).
But here is how my sample works. Each Excel file is set up as two inputs. For the header, the tFileInputExcel is configured to read only the first line, while the body tFileInputExcel is configured to start reading at line 4.
In the tMap they are combined (not joined) into the output schema. This is done for the Machine A and Machine B Excel files, then those tMaps are combined with a tUnite for the final output.
As you can see in the tLogRow output, the data is combined and includes the header info.
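For the parsing step mentioned above, Talend expressions are plain Java, so in the tMap you could derive the two extra columns from the header text with something along these lines (headerRow.line is an assumed name for the single header column; adjust the pattern to your real wording):

// Machine column: capture "machine X" from the header line
headerRow.line.replaceAll("^.*created on (machine \\S+) on (\\S+)\\s*$", "$1")
// Date column: capture the trailing date
headerRow.line.replaceAll("^.*created on (machine \\S+) on (\\S+)\\s*$", "$2")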

How to send an email from IBM iSeries DB2 v7.1

I am trying to create a trigger that sends an email based on a database event: specifically, when a record is INSERTed into a certain table, I want an email stating that fact to go to the SysAdmin.
I can successfully do the following from a SQL window in iSeries Navigator:
CL:SNDDST TYPE(*LMSG)
TOINTNET(('sysadmin@mycompany.com'))
DSTD('this is the Subject Line')
LONGMSG('This is an Email sent from iSeries box via Navigator')
...and an email gets sent, which means that the necessary SMTP stuff is there and working.
So all I'm trying to do is encapsulate this code, perhaps with some data changes (e.g. "A record has been added to the XYZ table on whatever-the-sysdate-is"). Navigator has some tantalizing examples that call CL to do some plain-vanilla things, but no clue as to how to make it work in a trigger. I know how to write triggers that do "database stuff", but not this CL stuff. And this is iSeries DB2, so I don't have access to UTL_MAIL.
I know next to nothing about CL, DDS or other iSeries internals... I would prefer not to have to create an external Java program, but will do that as a last resort...but even then, I'm having a hard time finding straightforward examples.
thanks in advance.
First off, note that SNDDST isn't the best choice for internet mail from the IBM i. Basically, SNDDST is a relic from the SNADS networking days that IBM hacked into supporting SMTP emails. There are free alternatives, or if you're reasonably current on fixes for 7.1 then you should have the Send SMTP E-mail (SNDSMTPEMM) command available.
The Run SQL Scripts window of iNav does indeed support CL commands using the CL: prefix. But that's not the same thing as having the query engine itself understand CL.
The CL: prefix isn't going to work inside an SQL trigger.
You could, however, use the QCMDEXC stored procedure to call a CL command, but I wouldn't necessarily call that the best option.
The IBM i supports using "external" stored procedures and triggers. Theoretically, you could use a CL program that invokes the SNDSMTPEMM command directly. But given your desire to include data from the table, I wouldn't recommend that approach, as you'd be tied to the table structure.
Instead, create your own UTLMAILSND CL program that invokes SNDSMTPEMM. Then define the UTLMAILSND program as an external stored procedure (you can even give it a longer SQL name of UTIL_MAIL_SEND).
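A minimal ILE CL sketch of such a program (the recipient address and parameter lengths are placeholders; check the SNDSMTPEMM command help for the actual limits, and compile it with CRTBNDCL, i.e. as a CLLE program, which matters per the follow-up below):

PGM        PARM(&SUBJECT &BODY)
  DCL        VAR(&SUBJECT) TYPE(*CHAR) LEN(60)
  DCL        VAR(&BODY)    TYPE(*CHAR) LEN(400)
  /* Send the mail; add RCP entries or CONTENT(*HTML) as needed */
  SNDSMTPEMM RCP(('sysadmin@mycompany.com')) +
               SUBJECT(&SUBJECT) NOTE(&BODY)
ENDPGM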
Now you can call your UTIL_MAIL_SEND() procedure from your SQL trigger.
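Registering it and wiring up the trigger could look roughly like this on DB2 for i (library, table, and trigger names are placeholders; REFERENCING NEW AS N is there so you can include column values from the inserted row in the message):

CREATE PROCEDURE UTIL_MAIL_SEND (IN P_SUBJECT CHAR(60), IN P_BODY CHAR(400))
  LANGUAGE CL
  PARAMETER STYLE GENERAL
  EXTERNAL NAME 'MYLIB/UTLMAILSND';

CREATE TRIGGER XYZ_INSERT_NOTIFY
  AFTER INSERT ON XYZ
  REFERENCING NEW AS N
  FOR EACH ROW
BEGIN
  -- build the body from N.<column> values as needed
  CALL UTIL_MAIL_SEND('Record added to XYZ',
       'A record was added to the XYZ table at ' || CHAR(CURRENT TIMESTAMP));
END;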
You need to try the SNDSMTPEMM command. It's like sliced bread compared to SNDDST TYPE(*LMSG). It supports HTML too, which makes for a lot of fun.
Yes, I used SNDSMTPEMM (skipping the HTML for now...).
One big note, however: using this command in a CL program doesn't work when being called from SQL. I had to change it to a CLLE program.
So the final answer is as follows: a) an INSERT trigger on the table in question, which calls: b) an (external) PROCEDURE created in the database, which in turn calls: c) the compiled CLLE program object. Works like a charm.
p.s. I create the whole body of the email in the INSERT trigger, and pass it along, eventually to the CLLE program. This allows me to have just this one CLLE program to report on any INSERT/UPDATE/DELETE anywhere in the database.

Explain the steps of the execution process when both main and subprograms are DB2 COBOL programs

How do I run two subprograms from a main program if both are DB2 COBOL programs?
My main program is named 'Mainpgm1'; it calls my subprograms 'subpgm1' and 'subpgm2', which are called programs, and I prefer static calls only.
Actually, I am now using a package instead of a plan, and one member, both in 'db2bind' (the bind program), along with one DBRMLIB which has a DSN name.
The main problem is: what changes are needed in 'db2bind' when I bind both DB2 COBOL programs?
And similarly, what changes are needed in 'DB2RUN' (the run program)?
Each program (or subprogram) that contains SQL needs to be pre-processed to create a DBRM. The DBRM is then bound into a PLAN that is accessed by a LOAD module at run time to obtain the correct DB2 access paths for the SQL statements it contains.
You have gone from having all of your SQL in one program to several sub-programs. The basic process remains the same - you need a PLAN to run the program.
DBAs often suggest that if you have several sub-programs containing SQL, you create PACKAGES for them and then bind the PACKAGES into a PLAN. What was once a one-step process is now two:
Bind DBRM into a PACKAGE
Bind PACKAGES into a PLAN
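For illustration, a bind job for the two subprograms might look roughly like this (the subsystem name, collection, plan, library names, and bind options are placeholders to adapt to your shop's standards):

//BINDPKG  EXEC PGM=IKJEFT01
//STEPLIB  DD DSN=DSN.SDSNLOAD,DISP=SHR
//DBRMLIB  DD DSN=MY.DBRMLIB,DISP=SHR
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  DSN SYSTEM(DB2A)
  BIND PACKAGE(MYCOLL) MEMBER(SUBPGM1) ACTION(REPLACE) ISOLATION(CS)
  BIND PACKAGE(MYCOLL) MEMBER(SUBPGM2) ACTION(REPLACE) ISOLATION(CS)
  BIND PLAN(MYPLAN) PKLIST(MYCOLL.*) ACTION(REPLACE)
  END
/*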
What is the big deal with PACKAGES?
Suppose you have 50 sub-programs containing SQL. If you create a DBRM for each of them and then bind all 50 into a PLAN as a single operation, it is going to take a lot of resources to build the PLAN, because every SQL statement in every program needs to be analyzed and access paths created for it. This isn't so bad when all 50 sub-programs are new or have been changed. However, if you have a relatively stable system and want to change 1 sub-program, you end up re-BINDing all 50 DBRMs to create the PLAN - even though 49 of the 50 have not changed and will end up using exactly the same access paths. This isn't a very good approach. It is analogous to compiling all 50 sub-programs every time you make a change to any one of them.
However, if you create a PACKAGE for each sub-program, the PACKAGE is what takes the real work to build: it contains all the access paths for its associated DBRM. Now if you change just 1 sub-program, you only have to rebuild its PACKAGE by rebinding a single DBRM into the PACKAGE collection and then re-BIND the PLAN. Binding a set of PACKAGES (a collection) into a PLAN is a whole lot less resource intensive than binding all the DBRMs in the system.
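In terms of the sample bind job above, changing only SUBPGM1 would then come down to just (same placeholder names):

  BIND PACKAGE(MYCOLL) MEMBER(SUBPGM1) ACTION(REPLACE) ISOLATION(CS)
  REBIND PLAN(MYPLAN)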
Once you have a PLAN containing all of the access paths used in your program, just use it. It doesn't matter whether the SQL being executed is from subprogram1 or subprogram2. As long as you have associated the PLAN with the LOAD module that is being run, it should all work out.
Every installation has its own naming conventions and standards for setting up PACKAGES, COLLECTIONS and PLANS. You should review these with your Database Administrator before going much further.
Here is some background information concerning program preparation in a DB2 environment:
Developing your Application