COBOL -> COBOL/DB2 -> COBOL -> COBOL/DB2 pgm call - db2

Let's say PGM1 (COBOL) calls PGM2 (COBOL-DB2), which calls PGM3 (COBOL), which calls PGM4 (COBOL-DB2).
1Q. PGM3, which is a pure COBOL program, is modified. Do we compile only PGM3 and promote it to production, or should we do a BIND again, since it is called by and calls COBOL-DB2 programs?
2Q. If PGM4 is modified, what has to be done? (I'm using the PACKAGE -> PLAN concept.)
Also, can anyone please explain the bind-with-package concept when we have a COBOL to COBOL/DB2 call.

Ashok,
It's definitely a question of how you are making the calls. A call can be static or dynamic.
With a dynamic call you do not need to compile the main program if the subprogram changes.
But with a static call you need to compile the main program too.
Ans 1: If static calls are used throughout - yes, you must compile all the programs.
If dynamic calls are used - just compile the subprogram.
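For illustration, a minimal COBOL sketch of the difference (assuming the NODYNAM compiler option, under which a call with a literal name is resolved statically; WS-PARM and WS-PGM-NAME are placeholder data items):
*> Static call: the literal name is resolved at link-edit time, so the
*> caller must be recompiled/relinked when the subprogram changes.
    CALL 'PGM3' USING WS-PARM.
*> Dynamic call: the name is resolved at run time from a data item, so
*> only the subprogram itself needs to be recompiled.
    MOVE 'PGM3' TO WS-PGM-NAME.
    CALL WS-PGM-NAME USING WS-PARM.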
Ans 2: See the full details below for the package and plan concept.
If you bound the old versions of the DBRMs directly into your plan:
· Identify all the DBRMs that are bound directly into that plan, for both the changed programs and any unchanged programs, and bind them all into the plan again.
· While you are binding the DBRMs into the plan, applications cannot use the plan to access DB2.
If you bound the old versions of the DBRMs for the changed application programs into packages:
· You do not need to bind any other packages or directly-bound DBRMs into the plan again.
· You simply bind the new versions of the DBRMs for the changed application programs into packages with the same names as the old versions.
· You do not need to bind the plan again - it locates the new versions of the packages.
· While you are changing the packages, application programs can still use the other packages and directly-bound DBRMs in the plan.
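For illustration, the package route for the example above might look like this as DSN BIND subcommands (the collection name COLL1 and plan name PLANA are made up):
BIND PACKAGE(COLL1) MEMBER(PGM2) ACTION(REPLACE)
BIND PACKAGE(COLL1) MEMBER(PGM4) ACTION(REPLACE)
BIND PLAN(PLANA) PKLIST(COLL1.*) ACTION(REPLACE)
The plan bind with PKLIST(COLL1.*) is done once; afterwards, when PGM4 changes, only the BIND PACKAGE for PGM4 has to be rerun, and the plan picks up the new package automatically.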
Hope this helps!

As a rule of thumb, if the "consistency token" changes, you should rebind - that is to say, if a new DBRM is produced. Draw a picture; it will help. Linking is really a red herring here. If you don't know what a consistency token is, you will after your -805. Ask a peer for help (in the first instance).
Also ask your peers about impact analysis. (What else am I not recompiling that I should be?)

If the subroutine contains static SQL statements then it will produce a DBRM when compiled. This changes the consistency token and thus requires the module to be rebound to the database to avoid a -818 consistency token error. If the subroutine contains no SQL then it never needs to be bound to the database, because no DBRM is ever created for it.
Even a program that contains only dynamic SQL will still create a DBRM that must be bound to the database. The DBRM itself will be pretty much empty apart from the consistency token.
This holds true regardless of whether it is mainframe COBOL or distributed COBOL using DB2 for LUW.

It's been a while since I had to write any COBOL, but we always had two relevant rules of thumb.
Only use static calls - your code should be performance tested, and unless there is a need for a dynamic call for a very specific purpose, avoid it at all costs.
Rebind everything when something is changed, and check the access paths created PRIOR to putting it live.
If you need to wait for an outage window to complete the task and flip the updated code into production, I would be patient and plan one in and complete the bind then... or get a DBA to do it and have them confirm it was successful in your outage window, or roll back immediately.
If your development environment is sufficiently sophisticated, complete the bind in a lower pre-production environment using the stats for the DB2 tables from production (copy the data in if you can - or get a DBA to do it), and check that none of the access paths for any of the DB2 calls have changed.
Hope this helps

First use this DB2 SQL to get the CONTOKENs
SELECT SUBSTR(COLLID,1,12) AS COLLID ,
SUBSTR(NAME,1,8) AS NAME ,
HEX(CONTOKEN) AS CONTOKEN ,
SUBSTR(OWNER,1,8) AS OWNER ,
SUBSTR(CREATOR,1,8) AS CREATOR ,
PDSNAME ,BINDTIME
FROM SYSIBM.SYSPACKAGE
WHERE NAME= 'program name';
Get the DB2 CONTOKEN (example below):
1ADB70E30768F694
0768F6941ADB70E3 (the reversed CONTOKEN: the two 4-byte halves swapped)
Check #1 - search the load library (e.g. CONTROL.???????.CICSLIB) using the reversed token 0768F6941ADB70E3. It should be found.
Check #2 - search the DBRM library (e.g. CONTROL.????????.CIC.DBRMLIB) using the non-reversed token 1ADB70E30768F694. It should be found.
If both are found, then your bind is good.


ILE RPG Bind by reference using CRTSQLRPGI

I've been trying to find a solution for this, but I cannot find it.
What I'm trying to do is use the "bind by reference" ability, but with ILE RPG written with embedded SQL.
I can use the BNDDIR ctl-opt in my source, and everything works correctly.
But that means a "bind by copy" method. I checked by deleting the SRVPGM and even the BNDDIR, and the caller program still works.
So, is there any way to use "bind by reference" in an ILE RPG SQL program?
After my question, an example:
Program SNILOG is a module that contains several procedures, some of them exported.
In QSRVSRC I set the exported procedures, in a source member with the same name: SNILOG. Something like this:
STRPGMEXP PGMLVL(*CURRENT)
/*********************************************************************/
/* *MODULE SNILOG INIGREDI 04/10/21 15:25:30 */
/*********************************************************************/
EXPORT SYMBOL("GETDIAG_TOSTRING")
EXPORT SYMBOL("GETDIAGNOSTICS")
EXPORT SYMBOL("GRABAR_LOG")
EXPORT SYMBOL("SNILOG")
ENDPGMEXP
As some of the procedures are programmed with embedded SQL, the compilation must be done with CRTSQLRPGI, using the parameter OBJTYPE(*SRVPGM).
So, I finally get a SRVPGM called SNILOG, with those 4 procedures exported.
Once I've got the SRVPGM, I add it to a BNDDIR called SNI_BNDDIR.
Ok, let's go to the caller program: SNI600V.
Defined with dftactgrp(*no), of course! And compiled with CRTSQLRPGI and parameter OBJTYPE(*PGM).
Here, if I use the control spec bnddir('SNI_BNDDIR'), it works fine.
But not fine enough, as this is a "bind by copy" method (I can delete the SRVPGM or the BNDDIR, and it is still working fine).
When I'm not working with SQL, I can use the CRTPGM command and set the BNDSRVPGM parameter to specify the SRVPGM whose procedures the program is going to call.
But I cannot find any similar option in the CRTSQLRPGI command.
Nor in the keywords of the ctl-opt statement (we have BNDDIR, but no BNDSRVPGM option).
Any idea?
I'm running V7R3M0 with TR level: 6
Thanks in advance!
The use of bnddir('SNI_BNDDIR') is the way to bind by reference OR bind by copy.
The key is what does your BNDDIR look like?
If you want to bind by reference, then it should include *SRVPGM objects.
If you want to bind by copy, then it should include *MODULE objects.
Generally, you want a *BNDDIR for every *SRVPGM that includes the modules (and maybe a utility *SRVPGM or two) needed for building a specific *SRVPGM.
Then one or more *BNDDIR that includes just *SRVPGM objects that are used to build the programs that use those *SRVPGMs.
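A minimal CL sketch of that setup, using the object names from the question (the library MYLIB and the binding directory SNILOG_BD are assumptions):
/* Directory used to build the *SRVPGM itself: *MODULE entries = bind by copy */
CRTBNDDIR  BNDDIR(MYLIB/SNILOG_BD)
ADDBNDDIRE BNDDIR(MYLIB/SNILOG_BD) OBJ((MYLIB/SNILOG *MODULE))
/* Directory used when creating the calling programs with CRTSQLRPGI:         */
/* *SRVPGM entries = bind by reference                                         */
CRTBNDDIR  BNDDIR(MYLIB/SNI_BNDDIR)
ADDBNDDIRE BNDDIR(MYLIB/SNI_BNDDIR) OBJ((MYLIB/SNILOG *SRVPGM))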

DB2 external stored procedure fails with CPF9810 when called from client

In a green screen session, calling a program MYLIB/TESTPRG works when my library list is set to QGPL, QTEMP, VENDRLIB1, VENDRLIB2, VENDRLIB3. I can execute call MYLIB/TESTPRG on a green screen command line.
I want to be able to run this command from my Windows client. I created an external stored procedure MYLIB/TESTPROC with external name MYLIB/TESTPRG as I have seen in various articles. My original question stated that I could execute this procedure successfully in STRSQL in a green screen session with my library list as above, but that is false. It does not work. It simply says 'Trigger program or external routine detected an error.' Sorry for the wrong information.
When MYLIB/TESTPROC is called from the client (CALL MYLIB/TESTPROC), it fails with CPF9810 (Library &1 not found). I connected to the database via i Navigator -> Run SQL Scripts. In Connection -> JDBC Settings I had Default SQL schema = 'Use library list of server job' and set Schema list=QGPL,QTEMP,VENDRLIB1,VENDRLIB2,VENDRLIB3. I then executed CALL MYLIB/TESTPROC and got the message as stated above.
What works is when I run the program, i.e. CALL MYLIB/TESTPRG on a green screen command line.
TESTPRG is a C program that takes no arguments. The stored procedure was defined like this:
CREATE PROCEDURE MYLIB/TESTPROC
LANGUAGE C
SPECIFIC MYLIB/TESTPROC
NOT DETERMINISTIC
NO SQL
CALLED ON NULL INPUT
EXTERNAL NAME 'MYLIB/TESTPRG'
PARAMETER STYLE GENERAL ;
CPF9810 - Library &1 not found means that something is trying to access Library &1 (whatever that is, you didn't tell us) and the library as typed is not on the system anywhere. &1 is not the name of the library, it is a substitution variable which will display the library name in the job log. Look at the real library spelling in the job log. Check your spelling. Check the connection to make sure all the libraries are specified correctly. The message will tell you exactly which library is causing the problem.
If indeed the program works in green screen when the library list is set properly, then I would expect the problem to be in your connection where it is trying to set a library list. You cannot add a non-existent library to the library list. That is why it works in green screen: your library is necessarily typed correctly there, or it wouldn't be in the library list. You would get a similar error (same text, different error code) if you tried to add a library with a spelling error to the library list in green screen.
Figure out the full text of the message (look in the job log), and you will see just what is throwing the error, and what the library is. Hint: it is not likely SQL throwing the error, as those errors all look like SQL#### or SQ#####. More likely it is a CL command or its processing program, being called by an IBM server, that is sending a CPF message.
As you have discovered, you can directly call simple programs without defining an external SQL procedure based on this documentation from IBM:
https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_73/db2/rbafzcallsta.htm
I believe the recommendation to create your own external procedure definition for simple programs is primarily to reduce ambiguity. If you have programs and procedures that happen to have matching names, you need to know the resolution rules to figure out which one is being called, for instance.
Also, the rules for external functions are different from those for external stored procedures, and those get confused as well.
Per my comment, I usually make my procedure calls with the library within the call command.
In a terminal session using CALL PGM(MYLIB/TESTPROC). Or in a SQL session using CALL MYLIB.TESTPROC.
This can prevent someone from inadvertently putting your procedure in a personal library or the like. I usually do not specify a session library list on my SQL clients, accepting the system library list.
I had promised to accept Douglas Korinke's comment as an answer. However, I was experimenting a lot and I am no longer sure of what I knew and when I knew it. My problem had something to do with parameter passing to the C program. If I can reproduce it with a simple case I will ask another question.
In a Java program it is possible to set the libraries by using the following method:
ds.setLibraries("list of libraries");
Example :
ds.setServerName("server1");
ds.setPortNumber(1780);
ds.setDatabaseName("DBTEST");
ds.setLibraries("*LIBL,DAT452BS,DAT452BP");

More than one V4L-DVB driver on the same host machine

I have a question related to V4L-DVB drivers. Following the Building/Compiling the Latest V4L-DVB Source Code link, there are 3 ways to compile. I am curious about the last approach (the More "Manually Intensive" Approach). It allows me to choose the components that I wish to build and install using "make menuconfig". Some of these components (e.g. "CONFIG_MEDIA_ATTACH") are used in pre-processor directives that define a function in one shape if defined, and in another shape if not defined (e.g. dvb_attach, dvb_detach), in the resulting modules (e.g. dvb_core.ko) that will be loaded by most of the DVB drivers. What happens if there are two drivers (*.ko modules) on the same host machine, one that needs dvb_core.ko with CONFIG_MEDIA_ATTACH defined and another that needs dvb_core.ko with CONFIG_MEDIA_ATTACH undefined? Is there a clean way to handle this?
What is also not clear to me is: since the V4L compilation environment seems very customizable (by setting the .config file), if I develop a driver using V4L-DVB structures, there is a big chance that it conflicts with other drivers, since each driver has its own custom settings. Is my understanding correct?
Thanks!
Dave

How to replace a shared file when deploying code with Capistrano?

Update: TL;DR there seems to be no built-in way to achieve this, so a custom task is an easy solution.
Capistrano provides facilities to share files and directories over all releases. This is convenient and provides even some safety on files that should not be easily changed (or must remain the same across releases), e.g. a database configuration file.
But when it comes to replace or just update one of these shared files, I end up doing it manually, directly on the target machine. I would like to improve on that, for instance by asking Capistrano to overwrite some or all shared files when deploying. A kind of --force flag with some granularity.
I am not aware of any such facility, and I am failing so far in my search. Any pointers?
Thinking about it
One of the reasons why this facility does not exist (unless I simply did not find it!) is that it may be harder than it looks. For example, let's assume we have a shared database configuration file, and we exclude it from version control for security reasons (common practice). The current release relies on version 1 of the DB configuration. The next release requires version 2 of the DB configuration. If the deployment goes well, everything's good. It gets harder when rolling back after some error with the new release (e.g. a regression), as version 1 must then be available.
Such automation would be cool and convenient, but dangerous as well. Yet I have practical use cases at hand.
I created a template method to do this. For example, I could have a task like this:
task :create_database_yml do
  on roles(:app, :db) do
    within(shared_path) do
      template "local/path/to/database.yml.erb",
               "config/database.yml",
               :mode => "600"
    end
  end
end
And then I have a database.yml.erb template that uses things like fetch(:database_password) to fill in appropriate values. You can use the ask method in Capistrano to prompt for these values so they are never committed.
The implementation of template can be very simple: you just need to read the file, pass it through ERB, and then use Capistrano's upload! to place the results on the server.
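Not the author's actual implementation (that is in the gem linked below), but a minimal sketch of the idea, assuming Capistrano 3's upload! and execute helpers; note that upload! does not honor within(), so the shared path is joined explicitly:
require "erb"
require "stringio"
def template(local_template, remote_relative_path, mode: "600")
  # Render the local ERB template; helpers such as fetch(:database_password)
  # are resolved here, in the Capistrano task's context.
  rendered = ERB.new(File.read(local_template)).result(binding)
  remote_path = shared_path.join(remote_relative_path)
  # Upload the rendered content to the server and tighten its permissions.
  upload! StringIO.new(rendered), remote_path
  execute :chmod, mode, remote_path
end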
My version is a little more complicated than yours probably needs to be, but in case you are curious:
https://github.com/mattbrictson/capistrano-mb/blob/7600440ecd3331945d03e059368b75849857f1fb/lib/capistrano/mb/dsl.rb#L104
One approach is to use a system configuration tool like Chef or Puppet to deploy the configuration files distinctly from Capistrano.
Another approach is to create a custom task to do this: https://coderwall.com/p/wgs6gw/copy-local-files-to-remote-server-using-capistrano-3
I personally don't change on-server configs often enough or on enough servers yet to have tried to automate it. Crafting an scp command which copies the desired config file to all of the required servers has sufficed in the past.
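For reference, a minimal custom task along the lines of that linked approach might look like this (the task name and file path are made up for illustration):
namespace :config do
  desc "Force-overwrite the shared database.yml with the local copy"
  task :force_upload do
    on roles(:app) do
      # upload! deliberately overwrites the existing shared file.
      upload! "config/database.yml", shared_path.join("config/database.yml")
    end
  end
end
It would be invoked as, e.g., cap production config:force_upload.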

How to handle environment-specific application configuration organization-wide?

Problem
Your organization has many separate applications, some of which interact with each other (to form "systems"). You need to deploy these applications to separate environments to facilitate staged testing (for example, DEV, QA, UAT, PROD). A given application needs to be configured slightly differently in each environment (each environment has a separate database, for example). You want this re-configuration to be handled by some sort of automated mechanism so that your release managers don't have to manually configure each application every time it is deployed to a different environment.
Desired Features
I would like to design an organization-wide configuration solution with the following properties (ideally):
Supports "one click" deployments (only the environment needs to be specified, and no manual re-configuration during/after deployment should be necessary).
There should be a single "system of record" where a shared environment-dependent property is specified (such as a database connection string that is shared by many applications).
Supports re-configuration of deployed applications (in the event that an environment-specific property needs to change), ideally without requiring a re-deployment of the application.
Allows an application to be run on the same machine, but in different environments (run a PROD instance and a DEV instance simultaneously).
Possible Solutions
I see two basic directions in which a solution could go:
Make all applications "environment aware". You would pass the environment name (DEV, QA, etc) at the command line to the app, and then the app is "smart" enough to figure out the environment-specific configuration values at run-time. The app could fetch the values from flat files deployed along with the app, or from a central configuration service.
Applications are not "smart" as they are in #1, and simply fetch configuration by property name from config files deployed with the app. The values of these properties are injected into the config files at deploy-time by the install program/script. That install script takes the environment name and fetches all relevant configuration values from a central configuration service.
Question
How would/have you achieved a configuration solution that solves these problems and supports these desired features? Am I on target with the two possible solutions? Do you have a preference between those solutions? Also, please feel free to tell me that I'm thinking about the problem all wrong. Any feedback would be greatly appreciated.
We've all run into these kinds of things, particularly in large organizations. I think it's most important to manage your own expectations first, and also ask whether it's really necessary to tell every system and subsystem on a given box to "change to DEV mode" or "change to PROD mode". My personal recommendation is as follows:
Make individual boxes responsible for a different stage - i.e. "this is a DEV box", and "this is a PROD box".
Collect as much of the configuration that differs from box to box in one location, even if it requires soft links or scripts that collect the information to then print out.
A. This way, you can easily "dump this box's configuration" in two places and see what differs, for example after a new deployment.
B. You can also make configuration changes separate from software changes, at least to some degree, which is a good way to root out bugs that happen at release time.
Then have everything base its configuration on something/somewhere that is not baked-in or hard-coded - just make sure to collect and document it in that one location. It almost doesn't matter what the mechanism is, which is a good thing, because some systems just don't want to be forced to use some mechanisms or others.
Sorry if this is too general an answer - the question was very general. I've worked in several large software-based organizations before, and this seemed to be the best approach. Using a standalone server as "one unit of deployment" is the most realistic scenario (though sometimes it's expensive), since applications affect each other, and no matter how careful you are, you destabilize a whole system when you move any given gear or cog.
The alternative gets very complex very quickly. You need to start rewriting the applications that you have control over in order to have them accept a "DEV" switch, and you end up adding layers of kludge to the ones you don't have control over. Usually, the ones you don't have control over at least base their properties on something defined on a system-wide level, unless they are "calling the mothership for instructions".
It's easier to redirect people to a remote location and have them "use DEV" vs "use PROD" than it is to "make this machine run like DEV" vs "make this machine run like PROD". And if you're mixing things up, like having a DEV task run together on the same box as a PROD task, then that's not a realistic scenario anyways: I guarantee that eventually you will be granting illegal DEV-only access to somebody on PROD, and you'll have a DEV task wipe out a PROD database.
Hope this helps. Let me know if you'd like to discuss more specifics involved.
I personally prefer solution 2 (the app should know itself, by its configuration, what environment it is running in). With solution 1 (pass the environment name as a startup parameter) the danger of using the wrong environment specifier is much too high. Accessing the TEST database from PROD code and vice versa may cause mayhem, if the two installed code bases are not of the same version, as is often the case.
My current project uses solution 1, but I don't like that. A previous project I worked on used a variation of solution 2: the build process generated one setup file for every environment, making sure that they contained the same code base but the appropriate configuration parameters. That worked like a charm, but I know it contradicts the paradigm that the "exact same build files must be deployed everywhere".
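A rough sketch of how such a per-environment build step might look, using an ERB template (the file names, environments, and hard-coded values are purely illustrative; a real build would pull them from whatever central store the organization uses):
require "erb"
require "fileutils"
SETTINGS = {
  "DEV"  => { "db_url" => "jdbc:db2://devhost/DEVDB" },
  "QA"   => { "db_url" => "jdbc:db2://qahost/QADB" },
  "PROD" => { "db_url" => "jdbc:db2://prodhost/PRODDB" },
}
template = ERB.new(File.read("config/app.properties.erb"))
SETTINGS.each do |env, values|
  db_url = values["db_url"]   # referenced as <%= db_url %> inside the template
  FileUtils.mkdir_p("build/#{env}")
  File.write("build/#{env}/app.properties", template.result(binding))
end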
I think I had asked a related, self-answered question before I read this one: How to organize code so that we can move and update it without having to edit the location of the configuration file? So, on that basis, I provide an answer here. I don't like the idea of a "smart" application (solution 1 here) for such a simple task as finding environment settings. It seems a complicated framework for something that should be simple. The idea of an install script (solution 2 here) is powerful, but while it allows the user to change the content of the config file, would it allow changing the location of that config file? What is this "central configuration service", and where is it located? My answer is that I would go with option 2 if the goal is to set the content of the configuration file, but I feel that the issue of the location of this configuration file remains unanswered here.
If you're using JSON to store/transmit configuration (or can use JSON in your pre-deploy process to output to some other format) you can annotate key/property names for environment/context-specific values with arbitrary or environment-specific suffixes, and then dynamically prefer/discriminate them at build/deploy/run/render -time, while leaving un-annotated properties alone.
We have used this to avoid duplicating entire configuration files (with the associated problems well known) AND to reduce repetition. The technique is also perfect for internationalization (i18n) -- even within the same file, if desired.
Example, snippet of pre-processed JSON config:
var config = {
  'ver': '1.0',
  'help': {
    'BLURB': 'This pre-production environment is not supported. Contact Development Team with questions.',
    'PHONE': '808-867-5309',
    'EMAIL': 'coder.jen#lostnumber.com'
  },
  'help#www.productionwebsite.com': {
    'BLURB': 'Please contact Customer Service Center',
    'BLURB#fr': 'S\'il vous plaît communiquer avec notre Centre de service à la clientèle',
    'BLURB#de': 'Bitte kontaktieren Sie unseren Kundendienst!!1!',
    'PHONE': '1-800-CUS-TOMR',
    'EMAIL': 'customer.service#productionwebsite.com'
  }
};
... and post-processed (in this case, at render time) given the dynamic, browser-environment-known location.hostname = 'www.productionwebsite.com' and navigator.language of 'de':
prefer(config, ['www.productionwebsite.com', 'de']);  // prefer(obj, string|Array<string>)
JSON.stringify(config);
// {
//   'ver': '1.0',
//   'help': {
//     'BLURB': 'Bitte kontaktieren Sie unseren Kundendienst!!1!',
//     'PHONE': '1-800-CUS-TOMR',
//     'EMAIL': 'customer.service#productionwebsite.com'
//   }
// }
If a non-annotated ('base') property has no competing annotated property, it is left alone (presumably global across environments) otherwise its value is replaced by an annotated value, if the suffix matches one of the inputs to the preference/discrimination function. Annotated properties that do not match are dropped entirely.
You can mix and match this behaviour to annotate configuration to achieve distinctions of global, default, specific that are (assuming you're sensible) readable with zero/minimal duplication.
The single, recursive prefer() function (as we're calling it, lacking the need or desire to make an entire project/framework out of it) we've developed so far (see jsFiddle, with inline docs) goes a bit further than this simple example, and (explained in greater detail here) handles deeply-nested configuration objects, as well as preferential ordering and (if you need to stay flat) combination of suffixes.
The function relies on JS's ability to reference object properties as strings, dynamically, and tolerates # and & delimiters in property names, which are not valid in dot-notation syntax but consequently (helpfully) prevent developers from breaking this technique by accidentally referring to pre-processed/annotated attributes in code (unless they unconventionally avoid dot-notation).
We have yet to have this break anything for us, nor have we been schooled on any fundamental flaws of this technique, beyond irresponsible/unintended usage or investment/fondness for existing frameworks/techniques that pre-exist. We have also not profiled it for performance (we only tend to run this once per build/session, etc.) so in your own usage, YMMV.
Most configurations transmitted client-side of course would not want to contain sensitive pre-production values, so one could (should!) use the same function to generate a production-only version (with no annotations) in pre-deploy, while still enjoying a SINGLE configuration file upstream in your process.
Further, if you're doing this for i18n, you may not want the entire wad going over the wire, so could process it server-side (cached or live, etc.) or pre-process it in build/deploy by splitting into separate files, but STILL enjoying a single source of truth as early in your workflow as possible.
We have not explored implementing the same function in Java (or C#, PERL, etc.) assuming it's even possible (with some exotic reflection maybe?) but a build environment that includes NodeJS could farm that step out easily.
Well, if it suits your needs and you have no problem storing the connection strings in the source control repository, you could create files like:
appsettings.dev.json
appsettings.qa.json
appsettings.staging.json
And choose the right one in the deployment script and rename it to the actual appsettings.json, which is then read by your app.
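A minimal sketch of that rename step in a deploy script (the DEPLOY_ENV variable is an assumption; any equivalent mechanism for passing the environment name works):
require "fileutils"
env = ENV.fetch("DEPLOY_ENV")   # e.g. "dev", "qa" or "staging"
FileUtils.cp("appsettings.#{env}.json", "appsettings.json")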