What's the difference between SYSOUT and SYSPRINT of a job? - jcl

While coding JCL, we give SYSOUT and SYSPRINT DDs. Which type of output goes to SYSOUT, and which to SYSPRINT?

SYSOUT is always allocated and receives, among other things, all the output from the system-level process (including any messages about the JCL itself, performance statistics, error messages, etc.).
SYSPRINT is just another DD which, by convention, is used by utility programs for their output.

SYSOUT: a system-defined DD name used to print the program's output to the spool.
SYSPRINT: used to print messages about the program's execution; for a compile step it contains the source listing with line numbers and offsets.

Historically, IBM utility programs used SYSOUT for status messages, and used SYSPRINT for the utility program reports.
In COBOL programs, the output of DISPLAY statements goes to SYSOUT.
JCL-related messages from the JES system are written to the JES message data sets (JESMSGLG and JESYSMSG).
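Putting that together, a COBOL batch step typically allocates both DDs along these lines (a minimal sketch; the program name is a placeholder):

//RUNCOB   EXEC PGM=MYCOBOL
//SYSOUT   DD SYSOUT=*          output of DISPLAY statements
//SYSPRINT DD SYSOUT=*          runtime messages/reports

Note that SYSOUT=* on the right is the SYSOUT parameter, which routes a DD to the spool; it is distinct from the DD that happens to be named SYSOUT.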

SYSOUT is a system-defined DD name used for file status codes and system abend code information; output from DISPLAY statements can also be viewed there. The SYSOUT parameter, by contrast, is used to direct output generated during execution of the job to an output device.
SYSPRINT contains the compiled source listing; for each line in the source listing a line number and offset can be generated.
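To make the utility convention concrete, here is a minimal sketch of an IEBGENER copy step (the input data set name is a placeholder). The utility writes its own status report to SYSPRINT; SYSUT1 and SYSUT2 are its input and output:

//COPYSTEP EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*                    utility messages/report
//SYSUT1   DD DSN=MY.INPUT.DATA,DISP=SHR  input to copy
//SYSUT2   DD SYSOUT=*                    copied records, to the spool
//SYSIN    DD DUMMY                       no control statements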

Related

Is redirected stdout/stderr buffered in install4j?

I'm using install4j to generate Windows executables.
The launcher is configured to redirect stderr and stdout to log\error.log and log\output.log, respectively.
This all works as intended, the log files are written in the expected location and with the expected content.
However, I do not know whether output is flushed or buffered.
I.e. if I kill the program via the Task Manager, can I expect to see the last line that was printed to stderr, or can I expect to lose some output?
(Both outcomes would be fine, I just need to know what will happen so I know how to interpret the log files I'm getting, and what to ask of customers to make sure that I get complete logs.)
The redirection files are flushed for every newline, but not for every character.

Running MATLAB system command in background with stdout

I'm using MATLAB and calling an .exe via the system command.
[status,cmdout] = system(command_s);
where command_s is a command string that is formatted earlier in my script to pass all the desired options to the .exe. The .exe would normally write to a .csv file via the > redirection operator in Windows/DOS. Instead, this output is going to cmdout where I use it later in the MATLAB script. It is working correctly and as expected. I'm doing it this way so that the process just uses memory and does not write a very large file to the disk, which would then have to be read from the disk and then deleted after I'm done with it. In the end, it saves a .mat file that's usually in hundreds of KB instead of 10s/100s of MBs as the .csv file would be (some unneeded data is thrown out in the end).
The issue I'm having is that since I'm dealing with large files, the executable can take a significant amount of time. I typically have to wait about 2 minutes after executing this command. In the meantime, I have no feedback to know it is progressing and that my system hasn't frozen. I know I could add the & symbol to the end of my string, command_s, and run MATLAB code while this runs in the background (or asynchronously, as some would say), but that brings up an external window AND makes cmdout empty, so I cannot use the output, forcing me to sit there for 2 minutes wondering each time it executes.
Is there any way to run in the background AND get the stdout from the command?
Maybe you could try system(command_s,'-echo')?
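For what it's worth, a minimal sketch of that suggestion (the command string is a placeholder): the '-echo' option makes system display the program's output in the Command Window while still capturing it in cmdout.

% Hypothetical command string; substitute the real .exe and options.
command_s = 'myprog.exe --input big.bin --dump';
% '-echo' shows stdout in the Command Window AND captures it.
[status, cmdout] = system(command_s, '-echo');
if status ~= 0
    error('External program failed with exit code %d.', status);
end
% cmdout still holds the complete output for later parsing.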

Error when using an Abaqus subroutine to read a file with multiple processors (CPUs)

I get an error when I use an Abaqus subroutine to read a file with multiple processors (CPUs); could you help me deal with it? Thanks a lot.
I want to read variables from a file. When one CPU is used, everything is OK,
but when more than one CPU is used there is an error; it seems that every CPU repeats the same commands.
For example, the following is the contents of the file to read from; the file name is data.dat:
*matID ,2,1
131000.000, 8880.000, 8180.000
0.324, 0.324, 0.300
3990.000, 5320.000, 5320.000
1871.000, 59.700, 59.700
1291.000, 215.000, 215.000
90.000, 102.000, 102.000
My subroutine is shown as follows:
character*12 check1
integer check2, error, nm
! Every CPU/process executes this code, so each opens the file itself.
OPEN(10, file='data.dat', status='old', iostat=error)
if (error.EQ.0) then
  ! Header line: a label and the material ID/count.
  read(10,*,iostat=error) check1, nm
end if
close(10)
print *, 'Nm=', nm, error
print *, '**'
When I use 2 CPUs, the printed results are:
Nm= 2 0
Nm= 8880 0
**
**
Depending on the reason for reading in data from a file, there are a couple of ways to avoid this problem:
If you only need to access the data once:
Read in the data in a subroutine that is always called in serial. UEXTERNALDB is a good example and can be set up so that the file open only happens at the beginning of the analysis or the beginning of an increment, as needed (see the sketch after this answer). You can then carefully store the information in common blocks. Reading from a common block in parallel should work fine, but do not write to one from the parallel subroutines.
Another way to get in a smaller amount of data is to define solution variables in your input file instead.
If you really need to open this file locally within each parallel thread (I can't see why, but I'm open to correction), you can use GETNUMCPUS and GETRANK to open a different copy of the file within each thread. GETRANK returns an integer giving you the rank/ID of the process. I would advise against this method, though: if your problem is large enough to warrant running in parallel, you should avoid slowing it down with file reads.
For more info see sections 1.1.31 and 2.1.4 of the Abaqus 6.14 docs.
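A minimal sketch of the UEXTERNALDB approach, assuming the data.dat layout shown in the question (the common block and variable names are illustrative, not part of the Abaqus API):

      subroutine uexternaldb(lop, lrestart, time, dtime, kstep, kinc)
      include 'aba_param.inc'
      dimension time(2)
!     Illustrative common block; parallel user subroutines may read
!     from it, but must not write to it.
      common /matdata/ props(6,3), nm
      character*12 check1
      integer error
!     lop = 0: called once at the very start of the analysis,
!     before any parallel user-subroutine calls.
      if (lop .eq. 0) then
         open(10, file='data.dat', status='old', iostat=error)
         if (error .eq. 0) then
!           Header line: label and material ID, then 6 rows of 3 values.
            read(10, *, iostat=error) check1, nm
            do k = 1, 6
               read(10, *, iostat=error) (props(k, j), j = 1, 3)
            end do
         end if
         close(10)
      end if
      return
      end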

printing text into a file in Matlab

I want to log the running of my program, specifically the running time of each part. At this moment I print to the screen using disp. Is there a way so that some of the things I print would also be printed into a text file?
You can use the DIARY command, that captures everything from the command window.
There are other solutions to this problem where you write to one or more logfiles opened while your program is running. This provides a permanent record without polluting your workspace or diary. It also works well if you compile your MATLAB application.
Jan Simon has a nice solution at MATLAB Central which uses a persistent file id, so the log-to-file mechanism can be used again and again throughout an application with many functions without passing the file id around.
Others at MATLAB Central (here and here) have developed class-based solutions with more features.
Also, fprintf.
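A minimal sketch of both suggestions (file names and messages are illustrative):

% Option 1: DIARY captures everything echoed to the command window.
diary('run.log');                  % start capturing
disp('starting section A');
diary off;                         % stop capturing

% Option 2: fprintf the same text to the screen and to a log file.
tic;                               % start timing a section
% ... code being timed ...
fid = fopen('timing.log', 'a');    % open log file in append mode
msg = sprintf('Section A took %.2f seconds\n', toc);
fprintf('%s', msg);                % to the command window
fprintf(fid, '%s', msg);           % to the log file
fclose(fid);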

How can I make log4perl output easier to read?

When using log4perl, the debug log layout that I'm using is:
log4perl.appender.D10.layout=PatternLayout
log4perl.appender.D10.layout.ConversionPattern=%d [pid=%P] %p %F{1} (%L) %M %m%n
log4perl.appender.D10.Filter = DebugAndUp
This produces very verbose debug logs, for example:
2008/11/26 11:57:28 [pid=25485] DEBUG SomeModule.pm (331) functions::SomeModule::Test Test XXX was successful
2008/11/26 11:57:29 [pid=25485] ERROR SomeOtherUnrelatedModule.pm (99999) functions::SomeModule::AnotherTest AnotherTest YYY has failed
This works great, and provides excellent debugging data.
However, each line of the debug log contains different function names, pid length, etc. This makes each line layout differently, and makes reading debug logs much harder than it needs to be.
Is there a way in log4perl to format the line so that the debugging metadata (everything up until the actual log message) be padded at the end with spaces/tabs, and have the actual message start at the same column of text?
You can pad the single fields that make up your entries. For example [pid=%5P] will always give you at least 5 characters for the PID.
The "Quantify Placeholders" section in the docs for Log::Log4perl::Layout gives more details.
There are a couple of ways to go with this, although you have to figure out which one works better for your situation:
Use a different appender if you are working live. Have that appender use a pattern that shows only the information you want. If you're working in a single process, for instance, your alternate appender might leave off the PID and the timestamp. You might only need the file name and line number.
Use %n to put newlines in the right place. That makes it multi-line output that is slightly harder to parse later, but you can choose another sequence for the input record separator (say, a literal "[EOL]") to make it easy to read entry-by-entry (see the sketch after this list).
Log to a database instead of a file. For your reports, select just the columns you want to inspect.
Log everything, but write a filter to go through the log file ad-hoc to display just the parts that you want to see, such as only the debugging messages, the entries between certain times, only the entries involving a file, and so on.
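A minimal sketch of the multi-line idea from the second suggestion (the exact layout and the literal "[EOL]" record separator are illustrative):

log4perl.appender.D10.layout=PatternLayout
log4perl.appender.D10.layout.ConversionPattern=%d [pid=%P] %p%n    %F{1} (%L) %M%n    %m[EOL]%n

Each entry then spans three lines, the message always starts at the same column, and a later pass can split records on "[EOL]".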