Multi-line replace - JCL

I'm trying to perform a search-and-replace in several JCL members, but I need multi-line capability: I need to replace one line with several.
Example:
//STEP1 EXEC PGM=DUMY,PARAM=XPTO
transform into
//STEP1 EXEC PGM=WORKS,PARAM=THAT
//SOMEDD DD DSN=DSN.WITH.SOMETHING
//SYSTIN
SOME MORE PARAMETERS
I looked into File-AID batch processing, but it seems to support only STRING replacement, with no multi-line support.
I think REXX might do it, but I have no knowledge of it.
Any ideas?

There are commercial products that understand JCL syntax and can do this sort of thing. JOB/SCAN is one, I'm sure others in this product space can do it too.
Which is of no help if you don't have such a product, so we're back to your Rexx comment. Yes, you can do this with Rexx, but you're going to be parsing JCL. This can be non-trivial depending on your requirements. Rexx doesn't have regular expression matching, which is what one normally uses when parsing. It can be done, and if you aren't seeking to do anything much more complicated than what you've indicated, then it's probably not too difficult for a Rexx programmer - perhaps this is an opportunity to become one. Rexx had, as one of its design goals, to make programming easier.
An alternative would be to use Perl, copying the PDS members to the Unix file system so you can process them, then copying them back when you're done. This presumes you're running a relatively current release of z/OS and that your Systems Programmer(s) have installed the z/OS port of Perl, which is a no-cost item.
If you're willing to copy the affected members to the Unix file system, you may be able to do this with awk. I've only dabbled with awk, but it has the advantage of being there by default; no one would have to install anything (such as Perl) that isn't already present.
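If you go the awk route, a minimal sketch of the one-line-to-many replacement might look like the following. The matched line and the replacement lines are taken from the question's example; reading from stdin via a here-document is just for illustration - in practice you'd redirect each copied member through the script.

```shell
# Sketch: when the target EXEC line is seen, emit the replacement lines
# instead of it; every other line passes through unchanged.
awk '
$0 == "//STEP1 EXEC PGM=DUMY,PARAM=XPTO" {
    print "//STEP1 EXEC PGM=WORKS,PARAM=THAT"
    print "//SOMEDD DD DSN=DSN.WITH.SOMETHING,DISP=SHR"
    print "//SYSTSIN DD *"
    print "SOME MORE PARAMETERS"
    next
}
{ print }
' <<'EOF'
//MYJOB JOB (ACCT),'EXAMPLE'
//STEP1 EXEC PGM=DUMY,PARAM=XPTO
//SYSOUT DD SYSOUT=*
EOF
```

An exact-match comparison is deliberately crude; as noted later in this thread, parameters in a different order or continued across lines would defeat it.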

Here are the possibilities I have in mind:
You can write a simple COBOL program which would search for the required STRING and replace it with whatever you want/need to add.
You can also write a REXX EXEC to perform this, which may not need to parse the line of code that gets read. A simple IF condition would do, I suppose.
But here are some challenges you would face, which are of course avoidable.
What if other parameters exist along with the one you search for? Like:
//STEP1 EXEC PGM=DUMY,PARAM=XPTO,PARM1='X'
What if the search string spans more than one line? Like:
//STEP1 EXEC PGM=DUMY,
// PARAM=XPTO

Here is a simple TSO/ISPF edit macro that will implement your example. Of course this is very crude but serves as an example of how JCL might be edited.
ISREDIT MACRO ()
CONTROL NOFLUSH NOPROMPT LIST CONLIST SYMLIST MSG
ISREDIT CHANGE ' PGM=DUMY' ' PGM=WORKS'
ISREDIT CHANGE 'XPTO' 'THAT'
ISREDIT (ROW1,COL1) = CURSOR
ISREDIT LINE_AFTER &ROW1 = "//SOMEDD DD DSN=DSN.WITH.SOMETHING,DISP=SHR"
SET &ROW1 = &ROW1 + 1
ISREDIT LINE_AFTER &ROW1 = "//SYSTSIN DD *"
SET &ROW1 = &ROW1 + 1
ISREDIT LINE_AFTER &ROW1 = "SOME MORE PARAMETERS"
EXIT CODE(0)

Invoking Rexx from JCL

To invoke a Rexx program and pass parameters, IKJEFT01 can be used:
// SET PARM1=
// SET PARM2=
//AUDITDS EXEC PGM=IKJEFT01,
// PARM='RXPGM &PARM1 &PARM2'
But PARM supports only a limited number of characters. Is there any way to invoke a REXX exec using JCL and pass a parameter containing more characters?
Using SYSTSIN would be a solution, but I want to use symbolic parameters as in the PARM parameter.
For historic reasons, the PARM field is limited to 100 bytes; however, this limit is increased to 32K for LE (Language Environment) enabled applications that are willing to call the CEE3PR2 LE callable service. LE languages would be Assembler (certain caveats apply) and modern versions of COBOL and PL/I. As far as I know, Rexx is not an LE-enabled language.
One place I worked had a generic program that would write whatever was passed in PARM value to a flat file. Ours happened to be Assembler, but it could have been COBOL, PL/I, or Rexx.
See this answer for an example of how it was used.
I suggest you create such a program, if your shop does not already have one (and please do check before writing your own). Syncsort (and perhaps DFSORT) have the capability to write a parm to an output file, so you could also go that route.
Presupposing the capability of writing a parm to a flat file, you could invoke it once for each of your parameters, appending (DISP=MOD) the results to a flat file. Then read the flat file into your Rexx program, each record representing one of your parameters.
Update: As @BillWoodger points out in a comment, the PARMDD parameter can be used...
Use PARMDD specifying the ddname of a data set containing the command parmstring to be executed if the command parmstring is more than 100 characters in length.
...which obviates the need to read in the parameters one record at a time.
Also, apparently as of z/OS 2.1 you no longer need a program to place your parms into a dataset, you can have them resolved in-stream when the JCL is processed.
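A sketch of what that might look like follows. The step name, ddname, and exec name here are invented for illustration; PARMDD requires z/OS 2.1 or later, and whether a given target program accepts a parm longer than 100 bytes is up to that program.

```jcl
//* Hypothetical sketch: PARMDD names a DD whose contents become the
//* parm string, avoiding the 100-byte PARM limit on the EXEC statement.
//RUNREXX  EXEC PGM=IKJEFT01,PARMDD=LONGPARM
//LONGPARM DD *
RXPGM A-PARAMETER-STRING-FAR-LONGER-THAN-PARM-WOULD-ALLOW
/*
//SYSTSPRT DD SYSOUT=*
```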
There are two methods of invoking a REXX script using IKJEFT01. One is to use PARM as you are currently doing; the other is to use the SYSTSIN data set. That's my preferred method, and you can continue long parameters onto the next line using the + continuation character. For example, below is an example of calling BPXBATCH with a long zFS file name as a parameter, using continuation.
//FORWARD EXEC PGM=IKJEFT01,REGION=0M
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//STDOUT DD SYSOUT=*
//STDERR DD SYSOUT=*
//SYSTSIN DD *
BPXBATCH SH sftp -b /u/bigdata/doc/hadoop.sftp -oPort=8022 biadmin@biad+
min
There are two options:
If you want the REXX to be able to execute TSO commands, use IKJEFT01.
You cannot call it from external COBOL/PL/I programs.
You can use PGM=IRXJCL to execute a REXX program.
Under IRXJCL you cannot activate ADDRESS TSO and use TSO commands.
But you can call it from other high-level languages.
The problem is that you cannot return an answer from the REXX to the calling program.
Another problem is that you can call the REXX with only one string parameter.
As a solution to this problem, I called REXX from COBOL, and part of the parameter was an address; in the REXX I used the STORAGE function to put the output at that address.

Prolog read input without full stop

I am currently programming a small text-based adventure game in SWI-Prolog. Hence, the user will have to give commands like "goto(room)" or "goto room".
However, the problem is that you always have to finish the command with a full stop, i.e.
"goto(room)." instead of "goto(room)". This is not very user-friendly.
I have a predicate that reads a command and then executes the input. How can I automatically add the full stop if there is none (if there already is one the input should just be executed)?
Thanks in advance!
Regards,
Volker
Obviously you are using read/1 or some variation; this is meant to read valid Prolog terms (and that's why you need a full-stop).
The solution would be to parse the input on your own (check primitive character I/O, the read utilities, and I/O in general; you will probably need just the read utilities) and then convert it to a term.
Additionally, you could create a small natural language with DCGs and use that; for example, the user could just write goto room instead of goto(room).
On the other hand, I personally don't think that being able to skip the full-stop will make it a lot more user-friendly if they have to type Prolog terms anyway.

Removing comments using Perl

Something I keep doing is removing comments from a file as I process it. I was wondering if there is a module to do this.
The sort of code I keep writing time and again is:
while (<>) {
    s/#.*//;
    next if /^ \s+ $/x;
    # ... do something useful here ...
}
Edit: Just to clarify, the input is not Perl. It is a text file of my own making that might have data I want to process in some way or other. I want to be able to place comments in it that are ignored by my programs.
Unless this is a learning experience I suggest you use Regexp::Common::comment instead of writing your own regular expressions.
It supports quite a few languages.
The question does not make clear what type of file it is. Are we dealing with Perl source files? If so, your approach is not entirely correct - see gbacon's comment. Perl source files are notoriously difficult (impossible?) to parse with regexes. In that case, or if you need to deal with several types of files, use Regexp::Common::comment as suggested by Niffle. Otherwise, if you think your regex logic is correct for your scenario, then I personally prefer to write it explicitly; it's just a pair of straightforward lines, and there is little to be gained by using a module (while introducing a dependency).
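For quick, ad-hoc filtering, the loop from the question can also be condensed into a one-liner. This is just a sketch of the same naive regex, and it shares its main weakness: a '#' inside quoted data would be stripped too, which is exactly where Regexp::Common::comment earns its keep.

```shell
# One-liner form of the question's loop: strip '#' comments, skip blank lines.
printf 'data1 # trailing comment\n# full-line comment\n\ndata2\n' |
  perl -ne 's/#.*//; print unless /^\s*$/'
```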

Why are Perl source filters bad and when is it OK to use them?

It is "common knowledge" that source filters are bad and should not be used in production code.
When answering a similar, but more specific, question I couldn't find any good references that explain clearly why filters are bad and when they can be safely used. I think now is the time to create one.
Why are source filters bad?
When is it OK to use a source filter?
Why source filters are bad:
Nothing but perl can parse Perl. (Source filters are fragile.)
When a source filter breaks pretty much anything can happen. (They can introduce subtle and very hard to find bugs.)
Source filters can break tools that work with source code. (PPI, refactoring, static analysis, etc.)
Source filters are mutually exclusive. (You can't use more than one at a time -- unless you're psychotic).
When they're okay:
You're experimenting.
You're writing throw-away code.
Your name is Damian and you must be allowed to program in Latin.
You're programming in Perl 6.
Only perl can parse Perl (see this example):
@result = (dothis $foo, $bar);
# Which of the following is it equivalent to?
@result = (dothis($foo), $bar);
@result = dothis($foo, $bar);
This kind of ambiguity makes it very hard to write source filters that always succeed and do the right thing. When things go wrong, debugging is awkward.
After crashing and burning a few times, I have developed the superstitious approach of never trying to write another source filter.
I do occasionally use Smart::Comments for debugging, though. When I do, I load the module on the command line:
$ perl -MSmart::Comments test.pl
so as to avoid any chance that it might remain enabled in production code.
See also: Perl Cannot Be Parsed: A Formal Proof
I don't like source filters because you can't tell what code is going to do just by reading it. Additionally, things that look like they aren't executable, such as comments, might magically be executable with the filter. You (or more likely your coworkers) could delete what you think isn't important and break things.
Having said that, if you are implementing your own little language that you want to turn into Perl, source filters might be the right tool. However, just don't call it Perl. :)
It's worth mentioning that Devel::Declare keywords (and starting with Perl 5.11.2, pluggable keywords) aren't source filters, and don't run afoul of the "only perl can parse Perl" problem. This is because they're run by the perl parser itself, they take what they need from the input, and then they return control to the very same parser.
For example, when you declare a method in MooseX::Declare like this:
method frob ($bubble, $bobble does coerce) {
... # complicated code
}
The word "method" invokes the method keyword parser, which uses its own grammar to get the method name and parse the method signature (which isn't Perl, but it doesn't need to be -- it just needs to be well-defined). Then it leaves perl to parse the method body as the body of a sub. Anything anywhere in your code that isn't between the word "method" and the end of a method signature doesn't get seen by the method parser at all, so it can't break your code, no matter how tricky you get.
The problem I see is the same problem you encounter with any C/C++ macro more complex than defining a constant: It degrades your ability to understand what the code is doing by looking at it, because you're not looking at the code that actually executes.
In theory, a source filter is no more dangerous than any other module, since you could easily write a module that redefines builtins or other constructs in "unexpected" ways. In practice, however, it is quite hard to write a source filter in a way where you can prove that it's not going to make a mistake. I tried my hand at writing a source filter that implements the Perl 6 feed operators in Perl 5 (Perl6::Feeds on CPAN). You can take a look at the regular expressions to see the acrobatics required simply to figure out the boundaries of expression scope. While the filter works, and provides a test bed to experiment with feeds, I wouldn't consider using it in a production environment without many, many more hours of testing.
Filter::Simple certainly comes in handy by dealing with 'the gory details of parsing quoted constructs', so I would be wary of any source filter that doesn't start there.
In all, it really depends on the filter you are using and how broad a scope it tries to match against. If it is something simple like a C macro, then it's "probably" OK, but if it's something complicated then it's a judgement call. I personally can't wait to play around with Perl 6's macro system. Finally Lisp won't have anything on Perl :-)
There is a nice example here that shows what trouble you can get into with source filters.
http://shadow.cat/blog/matt-s-trout/show-us-the-whole-code/
They used a module called Switch, which is based on source filters. And because of that, they were unable to find the source of an error message for days.

What is a good method for inventing a command name?

We're struggling to come up with a command name for our all purpose "developer helper" tool, which we are using on our project. It's like a wrapper for our existing tools like cmake and hg. The purpose of the command is really just to make our lives easier by combining multiple commands into one (for example, publishing packages). For example, we have commands like:
do conf
do build
do install
do publish
We've considered a few ambiguous names like do (as above) and run, but obviously do is a bash keyword and run is pretty ambiguous.
We'd like our command to be 2 chars short, preferably - but who thinks we're asking the impossible? Is there a practical way to check the availability of command names (other than just typing them into your terminal), or is it just a case of choose one and hope nobody else will use it? Are we worrying about nothing?
Since it's a "developer helper" tool, why not use hm [run|build|port|deploy|test] - as in "Help Me ..."?
Give it a verbose name, then let everyone alias it to whatever they want. Make sure you use the verbose name in other scripts so that it removes ambiguity.
This way, each user gets to use whatever makes sense to him/her, and the scripts are more readable and more easily searchable (for example, grepping for "our_cool_tool" will usually yield better results than grepping for "run").
How many 2-character names are useful in this context? I think you need four characters. With that in mind, here are some suggestions.
omni
torq
fluf
mega
spif
crnk
splt
argh
quat
drul
scud
prun
sqat
zoom
sizl
I have more if you need them.
Pick one: http://en.wikipedia.org/wiki/List_of_all_two-letter_combinations
To check the availability of command names, I suggest looking for all two-letter filenames that are in the directories in your path. You can use a script like this
for item in `echo $PATH | sed 's/:/ /g'` ; do
    ls -1d $item/??
done
It won't show builtins in your shell (like "do" as you mentioned) but it's a good start.
Change ?? to ??? for three-letter files, etc.
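To also catch builtins, keywords, aliases, and shell functions (which the PATH scan above misses), you could check candidates with type; the candidate names below are just examples.

```shell
# 'type' knows about keywords, builtins, aliases, and functions as well
# as files on PATH, so it flags names like 'do' that ls would not find.
for name in do run qp dh; do
  if type "$name" >/dev/null 2>&1; then
    echo "$name: already taken"
  else
    echo "$name: looks free"
  fi
done
```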
I'm going to vote for qp (quick package?) since it's easy to pronounce, easy to type, and easy to remember where the keys are on the keyboard.
I use "asd". It's short, and most developers can type it without thinking.
(oh, and you can always claim later that it stands for some "Advanced Script for Developers" if you need to justify yourself a few years from now)
How about fu? As in Kung Fu. It's a special purpose tool. And it's really easy to type.
I think that run is a good name; at least, anybody who downloads your project will know what to do. Calling it without parameters should reveal your options.
Even 'do' will do; in bash scripts you can quote it (e.g. \do) so it isn't taken as the shell keyword.
Also remember that running the tools without parameters will tell you what options you have.
Use makefiles to do everything for you.
How about calling it something descriptive, like 'build_runner', and then just aliasing it to 'br' (or preferred acronym) in your .bashrc?
There is a really crappy tool called cleartool (part of ClearCase), and people alias it on their machines to "ct". Perhaps you can have a longer command and suggest users alias it.
It would probably be best to do something like ire_and_curses suggested, name it descriptively then alias it to a 2 letter command. If I was choosing, I would name it dev_help and alias it to dh.
I think you're worrying about nothing. Install the program as 'the-command-to-do-everything-and-if-you-dont-make-your-own-alias-for-it-you-should'. I don't think that will be too long for any modern filesystems, but you might need to shorten it to 'tctdeaiydmyoafiys'. See what common aliases are used, and then change the program's name to that. In other words: don't decide, let natural selection decide for you. If you are working with a team of < 10, this should not even remotely cause any problems.
Call it devtool and alias it to dt.
Custom tools like that I like to start with the prefix 'jj-'. I can type 'jj' (with big index-finger power) and see all my personal commands. Also, they group together in alphabetical lists. 'J' is not a very common character for built-in commands, but you can pick your own.
Since you want two characters, you can use just 'zz', or something starting with 'z'.
Are you sure you want to put all your functionality in one command? That might be simultaneously over-constraining and over-loading the interface a little.