IBM DB2 Obscure Error Code - db2

When I try to precompile my COBOL application by running SUB on a JCL file, I get this error:
19.30.05 JOB08639 $HASP165 ZUSER13A ENDED AT SVSCJES2 - JCL ERROR CN(INTERNAL)
I've tried looking online with no success. Does anyone know what this is referring to?
Here is my JCL file
//ZUSER13A JOB NOTIFY=&SYSUID
//*--------------------------------------------------------------------*
//* PRECOMP - PRECOMPILE THE COBOL PROGRAM *
//* YOU SHOULD CHANGE ZUSER26 TO YOUR OWN TSO USERID *
//* YOU SHOULD CUSTOMIZE THE FOLLOWING LIBRARIES WITH HELP OF TEACHER *
//*--------------------------------------------------------------------*
//*--------------------------------------------------------------------*
//* THE FOLLOWING 8 SYMBOLIC PARAMETERS SHOULD BE SET BY YOURSELF *
//* ? (1) DB2LOAD - THE DB2 LOAD LIBRARY *
//* ? (2) WSPC - THE SIZE FOR TEMPARARY DATA SET *
//* ? (3) DASD - THE UNIT VALUE FOR DASD *
//* ? (4) SRC - THE COBOL SOURCE PROGRAM LIBRARY *
//* ? (5) CPY - THE COBOL COPYBOOK LIBRARY *
//* ? (6) DBRM - THE DBRM LIBRARY FOR DB2 BIND PROCESS *
//* ? (7) MID - THE MODIFIED COBOL SOURCE CODE LIBRARY *
//* ? (8) TRAN - THE TRANSACTION/FUNCTION MODULE NAME *
//*--------------------------------------------------------------------*
// SET DB2LOAD=ZUSER13.DB2.LOAD
// SET WSPC=500
// SET DASD=SYSDA
// SET SRC=ZUSER13.DB2.SRC
// SET CPY=ZUSER13.DB2.CPY
// SET DBRM=ZUSER13.DB2.DBRM
// SET MID=ZUSER13.DB2.MID
// SET TRAN=OPACCT
//*------------------------------------------------------------------*
//* PRECOMPILE THE COBOL PROGRAM *
//* RETURN CODE SHOULD BE 4 OR LESS *
//*------------------------------------------------------------------*
//PC EXEC PGM=DSNHPC,REGION=4096K,
// PARM=('HOST(IBMCOB)',APOST,APOSTSQL,SOURCE,XREF,'STDSQL(NO)')
//STEPLIB DD DISP=SHR,DSN=&DB2LOAD
//SYSCIN DD DISP=SHR,DSN=&MID(&TRAN)
//SYSPRINT DD SYSOUT=*
//SYSTERM DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSUT1 DD SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=&DASD
//SYSUT2 DD SPACE=(800,(&WSPC,&WSPC),,,ROUND),UNIT=&DASD
//SYSIN DD DISP=SHR,DSN=&SRC(&TRAN)
//SYSLIB DD DISP=SHR,DSN=&CPY
//DBRMLIB DD DISP=SHR,DSN=&DBRM(&TRAN)
//

I am wondering if your JOB card is valid. You have:
//ZUSER13A JOB NOTIFY=&SYSUID
the JCL Job card format is:
//jobname JOB (accounting-info),name,keyword-parameters
The jobname is required, you have that: ZUSER13A
The keyword JOB is where it should be. So far so good...
You do not have any accounting-info. Depending on your installation this may or may not be required (it often is). The format for accounting-info is installation-defined, so you will have to ask someone about it. Note the parentheses are optional only if the accounting-info does not contain an embedded comma or other special characters.
Next there must be a comma if there is anything else specified on the job card. This is not optional and may be the cause of your problem.
Following the comma should be some sort of name, often enclosed in quotes, for example 'PRECOMP'. There may be installation-specific rules for this too.
Next there must be another comma if any keyword-parameters are to be included on the job card.
Finally, you may specify keyword parameters such as NOTIFY=. I am unsure whether substitution parameters such as &SYSUID would be valid here unless the job were submitted under a started task. Since you are using SUB to submit the job (under TSO?), the &SYSUID may not work for you either. Try hardcoding your user ID.
Often the quickest way to work out what a job card must contain is to look at a piece of JCL that actually did work when submitted under TSO - then copy the job card!
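As a rough sketch only, under these assumptions: the accounting field ACCT# and the programmer-name 'PRECOMP' are placeholders for whatever your installation requires, MSGCLASS=X is purely illustrative, and ZUSER13 is guessed to be your TSO user ID from the dataset names in your SET statements. A fuller job card might look like:
//ZUSER13A JOB (ACCT#),'PRECOMP',NOTIFY=ZUSER13,
// MSGCLASS=X,MSGLEVEL=(1,1)
If a card along those lines converts cleanly, you can then try putting NOTIFY=&SYSUID back in place of the hardcoded user ID.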

Related

How to use DISP=SHR on an fopen

With code like:
fopen("DD:LOGLIBY(L1234567)", "w");
and JCL like:
//LOGTEST EXEC PGM=LOGTEST
//LOGLIBY DD DSN=MYUSER.LOG.LIBY,DISP=SHR
I can create PDS(E) members while at the same time browsing the PDS(E) to look at existing members, as expected with DISP=SHR.
If instead I code:
fopen("//'MYUSER.LOG.LIBY(L1234567)'", "w");
The fopen fails if I am browsing the PDS(E) at the time, or the browse of the PDS(E) fails while I have the file open. In other words there is no DISP=SHR. According to the fopen() documentation, DISP=SHR is the default when using file modes of "r" etc, but not "w".
How can I provide DISP=SHR in the second example?
There are two possibilities...
For anyone not familiar with the internal structure of partitioned datasets (that is, PDS or PDS/E), these datasets are logically divided into two parts: a "directory" having pointers to all the individual members, and a "data" area containing the actual records for the individual members:
PDS: <DIRECTORY BLOCKS>
<MEMBER1>: ADDRESS OF DATA FOR MEMBER1 (xxx)
<MEMBER2>: ADDRESS OF DATA FOR MEMBER2 (yyy)
...
<DIRECTORY FREESPACE>
...
<EOF - END OF THE PDS DIRECTORY>
<DATA PORTION>
+xxx = DATA FOR MEMBER1
...
<EOF - END OF MEMBER1>
+yyy = DATA FOR MEMBER2
...
<EOF - END OF MEMBER2>
...
FREE SPACE (ALLOCATED, BUT UNUSED)
...
END OF PDS
Throughout the next few paragraphs, keep in mind that you can either open the entire PDS/PDSE, which enables you to read/write whatever members you like, or you can allocate and open a single member, which gets processed like any other sequential file.
First, if you actually have a DD statement coded as you show in the question, then you may simply need to change your open from fopen(dsn,...) to fopen("DD:ddname",...). If you're running under the UNIX Shell or you do something that results in your process running in a different address space (such as fork()), then this might not work, but it may be worth a try. If you do this with the JCL you show, the challenge would be managing the PDS/E directory - you'd need to issue your own STOW when you create/update a member, since the JCL allocates the entire dataset, not just a single member. The sequence would be:
Open the DD for output.
Write your data.
Update the PDS or PDS/E directory with new member information (this is where the STOW function comes in - it updates the directory of the PDS/PDSE to reflect the member you created or updated).
Close the file
If you also need to read members, you'd need to issue FIND (or BLDL/POINT - which can be fseek() in C) to point to the correct member, then read the member. I'm sure it sounds like a hassle, but the advantage of this approach is that you can allocate/open the file once, and process as many individual members as you like.
A second workaround might be to dynamically allocate the file yourself, and then open it using DD:ddname syntax...if you only infrequently access the file, this is probably easier to code. The gory details of dynamic allocation are fully described here: https://www.ibm.com/support/knowledgecenter/SSLTBW_2.4.0/com.ibm.zos.v2r4.ieaa800/reqsvc.htm.
There are several ways to invoke dynamic allocation: you can write a small assembler program, you can use the z/OS UNIX Services BPXWDYN callable service, or you can use the C runtime "dynalloc()" or "svc99()" functions. The dynalloc() function is easy to use, but it only exposes a subset of what dynamic allocation can do...svc99() is more cumbersome to use, but it exposes more functionality.
However you do it, dynamic allocation takes "text units" that roughly correspond to the parameters you find on JCL DD statements. What you're describing sounds like you'd just need to pass DSN and DISP text units, and maybe DDNAME (you can either pass your own DDNAME, or let the system generate one for you).
The C runtime functions make all this easy, but be aware that there are a few oddities, such as the need to pad the parameters to their maximum length. For example, a DSN needs to be 44 characters and padded on the right with blanks - not a C-style null-terminated string.
Here's a small code snippet as an example:
#include <stdio.h>
#include <dynit.h>

#define TRUE  1
#define FALSE 0
. . .
int allocate(char *ddn, char *dsn, char *mem)
{
  __dyn_t ip;                      // Parameters to dynalloc()
  . . .
  // Prepare the parameters to dynalloc()
  dyninit(&ip);                    // Initialize the parameters
  ip.__ddname = ddn;               // 8-char blank-padded
  ip.__dsname = dsn;               // 44-char blank-padded
  ip.__status = __DISP_SHR;        // DISP=(SHR)
  ip.__normdisp = __DISP_KEEP;     // DISP=(...,KEEP)
  ip.__misc_flags = __CLOSE;       // FREE=CLOSE
  if (*mem)                        // Optional PDS, PDS/E member
    ip.__member = mem;             // 8-char blank-padded
  // Now we can call dynalloc()...
  if (dynalloc(&ip))               // 0: Success, else error
  {
    // On error, the errcode/infocode explain why - values
    // are detailed in z/OS Authorized Services Reference
    printf("SVC99: Can't allocate %s - RC 0x%x, Info 0x%x\n",
           dsn, ip.__errcode, ip.__infocode);
    return FALSE;
  }
  // If dynalloc works, you can open the file with fopen("DD:ddname",...)
  return TRUE;
}
Don't forget that when you're done with the file, you generally need to deallocate it.
The code snippet above uses "FREE=CLOSE" - this means that when the file is closed, z/OS will automatically free the allocation...if you only open and process the dataset once, this is a convenient approach. If you need to repeatedly open and close the file, then you wouldn't use FREE=CLOSE, but instead call dynamic allocation a second time after you're done with your processing and want to free the file.
If you need to concurrently access multiple files, beware that you'll need to generate multiple unique DDNAMEs. You can either do this in your own code, or you can use the form of dynamic allocation that automatically builds and returns a usable DDNAME (of the form "SYSnnnnn").
Also, don't forget that updating a dataset under DISP=SHR can be dangerous in some situations, especially if the dataset involved can be a conventional PDS as well as a PDS/E. The big danger is that two applications open the dataset for output concurrently...both will write data into the same place, and the result will likely be a damaged PDS directory.
There are some other oddities in the UNIX Services environment, particularly if you use fork() or exec() and expect file handles to work in subprocesses since allocations are generally tied to a particular z/OS address space. Services like spawn() can let the child process run in the same address space, so this is one possibility.

Is there a way to use User Activity Variables to store SQL in Datastage

I am considering using RCP to run a generic DataStage job, but the initial SQL changes each time it's called. Is there a process in which I can use a User Activity Variable to inject SQL from a text file or something so I can reuse the same DataStage job?
I know this Routine can read a file to look up parameters:
Routine = 'ReadFile'
vFileName = Arg1
vArray = ""
vCounter = 0
OPENSEQ vFileName to vFileHandle
Else Call DSLogFatal("Error opening file list: ":vFileName, Routine)
Loop
While READSEQ vLine FROM vFileHandle
vCounter = vCounter + 1
vArray = Fields(vLine,',',1)
vArray = Fields(vLine,',',2)
vArray = Fields(vLine,',',3)
Repeat
CLOSESEQ vFileHandle
Ans = vArray
Return Ans
But does that mean I just store the SQL in one single line, even if it's long?
Thanks.
Why not just have the SQL within the routine itself and propagate parameters?
I have multiple queries within a single routine that does just that (one for source and one for AfterSQL statement)
This is an example and apologies I'm answering this on my mobile!
InputCol=Trim(pTableName)
If InputCol='Table1' then column='Day'
If InputCol='Table2' then column='Quarter, Day'
SQLCode = ' Select Year, Month, '
SQLCode := column:", Time, "
SQLCode := " to_date(current_timestamp, 'YYYY-MM-DD HH24:MI:SS'), "
SQLCode := \ "This is example text as output" \
SQLCode := "From DATE_TABLE"
crt SQLCode
I've used multiple encapsulations in the example above. When passing the result out to a parameter, make sure the ' and " characters have either been escaped or are displaying correctly.
Again, apologies for the quality but I hope it gives you some ideas!
You can give this a try.
As you mentioned, maintain the SQL in a file (again, if the SQL keeps changing, you need to build logic to automate populating the new SQL).
In the DataStage Sequencer, use an Execute Command activity to read the SQL file,
e.g.: cat /home/bk/query.sql
In the Job Activity which calls your generic job, you should map the command output of your Execute Command activity to a job parameter,
so if the Execute Command activity's name is exec_query, then the job parameter will be
exec_query.$CommandOutput
When you run the sequence, your query will flow from
SQL file --> Execute Command activity --> parameter in Job Activity --> DB stage (query parameterised)
Have you thought about invoking a shell script that connects to the database and executes the SQL script from the sequence job? You could use sqlplus to connect in the shell script, read the file with the SQL, and use it. To execute the shell script from the sequence job, use an ExecCommand stage (sh, ./, ...); it depends on the interpreter.
Another way to solve this, depending on how much your SQL changes, is to invoke a BASIC routine that handles the parameters and invokes your parallel job.
The main problem I think you could have is the length limit of the variable where you store the parameter.
Tell me which option you choose and I can help you further.

creating a vsam file using jcl

I am trying to create a VSAM file using the IDCAMS utility in JCL. The MAXCC code that it returns is 0000.
But the newly created VSAM file is not displayed in the list when I try to list it using ISPF 3.4.
Can anyone help me with this?
The code that I have used is:
//VSAM0001 JOB (ACCT),CLASS=A,MSGLEVEL=(1,1),
// NOTIFY=&SYSUID,MSGCLASS=A
//STEP0001 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DEFINE CLUSTER -
(NAME(DOMAIN.MYFILE.MYVSAM) -
VOL(AGH419) -
KEYS(16 0) -
RECORDSIZE(120 120) -
INDEXED -
REUSE ) -
DATA -
(NAME(DOMAIN.MYFILE.MYVSAM.DATA) -
CISZ(8192) -
RECORDSIZE(120 120) -
FSPC(0 0) ) -
INDEX -
(NAME(DOMAIN.MYFILE.MYVSAM.INDEX) )
/*
While creating a VSAM file using the IDCAMS utility, you need to specify the required space parameters, such as CYL or TRK. If you miss any of these parameters, the system won't know how much space to allocate for the newly created VSAM dataset. So, in the DEFINE portion of your JCL, supply all the necessary space parameters and you are good to go. :) Hope this helps!
As the author wrote in the comments:
Thank you all for the response. IBM's LookAt utility helped. I have not specified the CYL parameter which is required because of which I have got the INCORRECT SPECIFICATION OF SPACE ALLOCATION. Now it is working. – Kinjal Shah
All messages will have an id (e.g., IEA1235) that can be used when searching for what generated the message.
You need to specify space in your IDCAMS 'DEFINE'. Look in your manual (or online) for specifying space for VSAM clusters.
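For example, here is a sketch of the same DEFINE with a space allocation added. The CYLINDERS(5 1) values are placeholders only, so size them for your own data:
 DEFINE CLUSTER -
 (NAME(DOMAIN.MYFILE.MYVSAM) -
 VOL(AGH419) -
 CYLINDERS(5 1) -
 KEYS(16 0) -
 RECORDSIZE(120 120) -
 INDEXED -
 REUSE) -
 DATA -
 (NAME(DOMAIN.MYFILE.MYVSAM.DATA) -
 CISZ(8192) -
 RECORDSIZE(120 120) -
 FSPC(0 0)) -
 INDEX -
 (NAME(DOMAIN.MYFILE.MYVSAM.INDEX))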

IDCAMS LISTCAT deleting VSAM file when next step is IEFBR14

I have a requirement wherein I need to check if a VSAM file exists or not. If it is not present then I need to create it like TEST.FILE2. My JCL is as follows:
//STEP01 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
LISTCAT ENTRIES('BRTEST.FILE1')
/*
//STEP02 EXEC PGM=IEFBR14,COND=(4,GT)
//DD01 DD DSN=BRTEST.FILE1,
// DISP=(,CATLG,DELETE),
// LIKE=BRTEST.FILE2
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
But a strange thing is happening. Whenever I execute this JCL, STEP01 returns a return code of 004 even though the file is already present, and a new file is created in STEP02. So if I submit this JCL twice, a new file is created both times. I am not able to understand how the file is getting deleted. And the strange thing is, if I run the JCL without STEP02, then it gives MAXCC as 0, saying that the file was found in the catalog.
I was able to achieve my requirement by following code, but would still like to understand why and how my VSAM file gets deleted for LISTCAT.
//STEP02 EXEC PGM=IEFBR14,COND=(4,GT)
//DD01 DD DSN=BRTEST.FILE1,
// DISP=(MOD,CATLG,CATLG),
// LIKE=BRTEST.FILE2
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
Here is the SYSPRINT when only STEP01 is executed:
IDCAMS SYSTEM SERVICES TIME: 03:47:44
LISTCAT ENTRIES('BRTEST.FILE1')
CLUSTER ------- BRTEST.FILE1
IN-CAT --- CATALOG.TEST03
DATA ------- BRTEST.FILE1.DATA
IN-CAT --- CATALOG.TEST03
INDEX ------ BRTEST.FILE1.INDEX
IN-CAT --- CATALOG.TEST03
IDCAMS SYSTEM SERVICES TIME: 03:47:44
THE NUMBER OF ENTRIES PROCESSED WAS:
AIX -------------------0
ALIAS -----------------0
CLUSTER ---------------1
DATA ------------------1
GDG -------------------0
INDEX -----------------1
NONVSAM ---------------0
PAGESPACE -------------0
PATH ------------------0
SPACE -----------------0
USERCATALOG -----------0
TAPELIBRARY -----------0
TAPEVOLUME ------------0
TOTAL -----------------3
THE NUMBER OF PROTECTED ENTRIES SUPPRESSED WAS 0
IDC0001I FUNCTION COMPLETED, HIGHEST CONDITION CODE WAS 0
IDC0002I IDCAMS PROCESSING COMPLETE. MAXIMUM CONDITION CODE WAS 0
And when both steps are executed:
IDCAMS SYSTEM SERVICES TIME: 03:48:35
LISTCAT ENTRIES('BRTEST.FILE1')
IDC3012I ENTRY BRTEST.FILE1 NOT FOUND
IDC3009I ** VSAM CATALOG RETURN CODE IS 8 - REASON CODE IS IGG0CLEG-42
IDC1566I ** BRTEST.FILE1 NOT LISTED
IDCAMS SYSTEM SERVICES TIME: 03:48:35
THE NUMBER OF ENTRIES PROCESSED WAS:
AIX -------------------0
ALIAS -----------------0
CLUSTER ---------------0
DATA ------------------0
GDG -------------------0
INDEX -----------------0
NONVSAM ---------------0
PAGESPACE -------------0
PATH ------------------0
SPACE -----------------0
USERCATALOG -----------0
TAPELIBRARY -----------0
TAPEVOLUME ------------0
TOTAL -----------------0
THE NUMBER OF PROTECTED ENTRIES SUPPRESSED WAS 0
IDC0001I FUNCTION COMPLETED, HIGHEST CONDITION CODE WAS 4
IDC0002I IDCAMS PROCESSING COMPLETE. MAXIMUM CONDITION CODE WAS 4
The value of the ZOS390RL variable is z/OS 02.01.00 and ZENVIR is ISPF 7.1 MVS TSO.
May have an answer for you. Didn't think of it because it is a VSAM dataset, and the way you are trying to do it is unusual (to me).
There is/was a product called UCC11. I think it is now marketed by Computer Associates, and called CA-11 (or somesuch). I think you are using this product or something similar at your site.
If executed at the beginning of a JOB it would look for files specified as NEW and CATLG, and look to see if there was an existing file of the same name in the catalog. If there was, the existing file would be deleted.
This would obviate the need for an initial IEFBR14 step to delete such files.
I think that you are using this product, or something similar. Your file is being automatically deleted when it exists: your IDCAMS step references the file only through its control statements (read from SYSIN DD *), not through a DD statement, so that use is not known to the product, and your VSAM file is deleted before your IDCAMS step is run.
Changing the file to MOD as the initial disposition (MOD will add to an existing file and create a new file if none exists) will not cause such a product to delete the file.
Using LIKE for a VSAM file will not obtain the CA size and CI size from the model dataset. You will get default values for those, which may well impact the performance of your programs. You cannot specify these values when defining a VSAM file in JCL. You also won't get buffer values from the model dataset, but you can specify those separately in the JCL (which you haven't).
Here is a description of what LIKE does for you: http://publibfp.dhe.ibm.com/cgi-bin/bookmgr/BOOKS/iea2b680/12.40?DT=20080604022956
The following attributes are copied from the model data set to the
new data set:
Data set organization
Record organization (RECORG) or
Record format (RECFM)
Record length (LRECL)
Key length (KEYLEN)
Key offset (KEYOFF)
Type, PDS, PDSE, basic format, extended format, large format, or HFS (DSNTYPE)
Space allocation (AVGREC and SPACE)
Unless you explicitly code the SPACE parameter for the new data set,
the system determines the space to be allocated for the new data
set by adding up the space allocated in the first three extents of the
model data set. Therefore, the space allocated for the new data set
will generally not match the space that was specified for the model
data set. Note that regardless of the units in which the model data
set was allocated, the new data set will be allocated in tracks. This
assumes that space was not specified on the JCL and is being picked up
from the model data set.
There are some other little "gotchas", like in the last paragraph, detailed in the link as well.
Unless you have strong reasons otherwise, I'd strongly suggest doing the whole thing in one IDCAMS step (as below).
I suspected it was going to be 1.12, 1.13 or 2.1 (2.01). IEFBR14 is, subtly, part of the OS now.
Exactly why you get this effect, I don't know. I don't have access to 2.1, so can't investigate myself.
IEFBR14 has changed, and LIKE is not really intended for VSAM datasets (you'll get a lot of default values for things you may or may not want); it's not really a "usual" way to do this. See Suggestions below.
Try adding a DDname to your IDCAMS step which just references the VSAM dataset. See if that changes anything. Use that DDname in an IDCAMS statement. See if that changes anything.
Take all your results to your Sysprogs, and see if they can spot anything.
If not, it'll be PMR-time: http://www-01.ibm.com/support/docview.wss?uid=swg21507639
If you do raise a PMR, please update by adding an Answer with the resolution once you receive it.
Suggestions.
Find out how this task is done by other people at your site.
Have you tried using the VSAM file you have defined in that way? You should LISTCAT TEST.FILE1 and TEST.FILE2 and compare. If you look up LIKE in the JCL Reference, you will see that there are things which a VSAM DEFINE can do which you can't do for a VSAM file defined in the JCL using LIKE.
Unless there is some reason otherwise, I'd suggest you do the whole thing in one step with IDCAMS. See if the file exists, use the IDCAMS IF to test the CC from that, and only DEFINE if the file does not exist. You can use a MODEL (for instance on your TEST.FILE2) to get everything which is similar to another file, and just override anything different that you need.
If you have a look here, http://pic.dhe.ibm.com/infocenter/zos/v1r13/index.jsp?topic=%2Fcom.ibm.zos.r13.idai200%2Fdefclu.htm, you will find a number of Modal Commands for IDCAMS which will give you everything you need to define if it is not there, and do something different (set the Condition Code, for instance) if it is.
Please still supply the requested information. It is an interesting question on the face of it, which may have a simple solution. But even with a solution, I don't think it is what you want.
What you want to do can be done entirely in the IDCAMS step. You can inspect the return code from a previous operation (i.e., the LISTCAT) and do something (like define a new cluster) if the code is greater than 0. If that second operation works, then set the MAXCC to 0 to tell your JCL that this step completed OK (and to let your Ops folks know this too).
Look for the IDCAMS 'IF'.
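As a rough sketch only - the dataset names come from your JCL, MODEL(BRTEST.FILE2) is assumed from your LIKE=BRTEST.FILE2, and any attributes you need to override would be added to the DEFINE:
//STEP01 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
 LISTCAT ENTRIES('BRTEST.FILE1')
 IF LASTCC > 0 THEN DO
 DEFINE CLUSTER (NAME(BRTEST.FILE1) -
 MODEL(BRTEST.FILE2))
 IF LASTCC = 0 THEN SET MAXCC = 0
 END
/*
This keeps the existence test and the conditional DEFINE in a single IDCAMS step, so nothing outside that step (a job-rerun product scanning the JCL, for instance) ever sees a NEW,CATLG disposition for the cluster.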

Resize table unload output field in jcl

I have performed a BMCUNLD in JCL to direct the output to a dataset.
The problem is that the field had maximum size and I can't read the dataset after it is created because it issues the following error message:
"Invalid Record Length"
This is a sample of my unload:
//A00BMC EXEC PROC=BMCUNLD,UTILID=%%JOBNAME,PARAM='NEW',COND=(0,NE),
// SUBSYS=subsys
//SYSREC DD DSN=datasetname,
// DISP=(NEW,CATLG),
// SPACE=(CYL,(10,10),RLSE),
// DCB=(RECFM=FB,LRECL=1000,BLKSIZE=0)
//SYSIN DD *
UNLOAD
DIRECT NO
SELECT a.data, a.codent, b.text
FROM owner.table_view A,owner.table2_view B
WHERE a.cmarca='S' AND a.cestado='P' AND A.codrc='OK'
AND DATE(A.data) > CURRENT DATE - 2 DAYS
AND B.cmarca = A.cmarca
AND B.chave = A.data
WITH UR;
Can this problem only be solved by using this dataset as an input to a SORT with an OUTREC PARSE, or can I solve the problem directly in the query?
Your BLKSIZE looks odd - you have RECFM=FB,LRECL=1000,BLKSIZE=0
At a minimum I would expect RECFM=FB,LRECL=1000,BLKSIZE=1000
You might get away with not specifying BLKSIZE at all, but generally BLKSIZE needs to be some non-zero multiple of LRECL. Typically, especially if your shop has System Managed Storage (SMS), the system itself will assign an optimal BLKSIZE suitable for the file you are creating. Your example, however, codes BLKSIZE=0; unless the system steps in and calculates a block size for you, that doesn't even permit one record to be stored in a block.
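As a sketch, carrying over the rest of the DD from your example and assuming your site lets the system pick the block size when none is coded (the alternative being an explicit BLKSIZE that is a multiple of 1000):
//SYSREC DD DSN=datasetname,
// DISP=(NEW,CATLG),
// SPACE=(CYL,(10,10),RLSE),
// DCB=(RECFM=FB,LRECL=1000)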