Loading packed decimals into DB2 for IBM i - db2

I am having trouble loading packed data exported from one IBM i DB2 database into a second IBM i DB2 database. The file comes from a vendor, and other methods, such as a direct connection, are not available.
Here are the contents of a small test file. Note that I've added some spaces before the last two bytes, which represent the packed data (404F):
dspf /home/person/test_file.ebcdic
************Beginning of data**************
11111111111111111 |
************End of Data********************
Viewing the hex shows:
F1F1F1F1 F1F1F1F1 F1F1F1F1 F1F1F1F1 F10000 404F 11111111111111111
I create the table:
DROP TABLE IF EXISTS TESTSCHEMA.TESTTABLE ;
CREATE TABLE TESTSCHEMA.TESTTABLE (
CHARFLD1 CHAR(3),
CHARFLD2 CHAR(12),
CHARFLD3 CHAR(1),
CHARFLD4 CHAR(1),
BINFIELD1 BINARY(1),
BINFIELD2 BINARY(1),
PACKFIELD DECIMAL(3, 0)
) ORGANIZE BY ROW;
I create a field definition file:
*COL 1 3 0
*COL 4 15 0
*COL 16 16 0
*COL 17 17 0
*COL 18 18 0
*COL 19 19 0
*COL 20 21 0
*END
Try to load the data:
CL: CHGATR OBJ('/home/person/test_file.ebcdic') ATR(*CCSID) VALUE(37);
CL: CPYFRMIMPF FROMSTMF('/home/person/test_file.ebcdic')
TOFILE(TESTSCHEMA/TESTTABLE)
FROMCCSID(37) TOCCSID(37)
DTAFMT(*FIXED)
STMFLEN(21)
MBROPT(*REPLACE)
FROMRCD(*FIRST 1)
FLDDFNFILE(TESTLIB/TESTFILE TMEMBER)
;
But I just get an error:
Member TESTTABLE file TESTTABLE in TESTSCHEMA cleared.
The copy did not complete for reason code 7.
0 records copied to member TESTTABLE.
Copy command ended because of error.
7 - The FROMFILE numeric field PACKFIELD contains blank characters, or
other characters that are not valid for a numeric field.
It seems like the system is ignoring the actual packed bytes and trying to load the characters, which in fact display as blank. If I change the CPYFRMIMPF slightly to ignore the packed decimal by using STMFLEN(19), the character and binary fields load without issue and as expected. To load packed decimals into a different DB2 system on Linux, I need to use the packeddecimal option in the db2 load command, but CPYFRMIMPF doesn't seem to provide such an option. Is there another way to go about this?

"Import Files" aren't design to contain packed data; they are supposed to be simple text files to allow data to be transferred between IBM i and non-IBM i systems.
CPYTOIMPF will unpack the data in a table.
CPYFRMIMPF will pack the data as it's loaded to a table.
If it were me, I'd kick the file back to the vendor and tell them to give you something useable. I'd also seriously reconsider working with the vendor in the first place.
But if you're stuck, try using FTP in binary mode to transfer the data directly into the table.
The other option is to use the older Copy From Stream File (CPYFRMSTMF) command to move the data into a temporary PF created with CRTPF RCDLEN(120), then use the Copy File (CPYF) command with FMTOPT(*NOCHK) to move the data into your actual table.
If none of that works, you'd need to write an RPG/COBOL program that uses the IFS APIs to read the stream file and move the data to your table.
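For what it's worth, here is a minimal sketch of the decoding such a program would have to do for a packed-decimal field (written in Python purely as an illustration; the function name is mine, and on IBM i the real implementation would be the RPG/COBOL program mentioned above). In packed decimal every nibble except the last is a digit, and the final nibble is the sign (C or F for positive, D for negative), so the two bytes x'404F' from the test record decode to 404.

def unpack_packed_decimal(raw: bytes, scale: int = 0):
    """Decode IBM packed-decimal (BCD) bytes into a Python number."""
    nibbles = []
    for b in raw:
        nibbles.append(b >> 4)       # high nibble
        nibbles.append(b & 0x0F)     # low nibble
    sign = nibbles.pop()             # the last nibble is the sign
    if any(d > 9 for d in nibbles):
        raise ValueError("invalid packed-decimal digit")
    value = int("".join(str(d) for d in nibbles) or "0")
    if sign == 0x0D:                 # D = negative; C/F = positive
        value = -value
    return value / (10 ** scale) if scale else value

# The DECIMAL(3, 0) field from the sample record:
print(unpack_packed_decimal(b"\x40\x4F"))   # -> 404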

Import CSV file to mariadb

Lately, I am facing problems importing from CSV files. I am using
MariaDB : 10.3.32-MariaDB-0ubuntu0.20.04.1
On Ubuntu : Ubuntu 20.04.3 LTS
I am using this command
LOAD DATA LOCAL INFILE '/path_to_file/data.csv' INTO TABLE tab
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
After searching and trying, I found that I can only load a file from the /tmp folder, i.e.
SELECT load_file('/tmp/data.csv');
But it didn't work on other paths.
Secondly, I found that even if the CSV file is present in the /tmp folder, MariaDB will still fail to load it if it contains a lot of fields. The main problem is that the LOAD DATA command does not give any kind of error or even a warning, except when the file does not exist. Other than that, nothing is shown and nothing is imported.
I have only succeeded in importing a very simple CSV from the /tmp folder.
What I suspect is that:
MariaDB has been updated, and in this new version there are some flags or configuration options that prevent MariaDB from importing CSV files from anywhere other than the /tmp folder, and
MariaDB fails to load the CSV because of some unknown problem, maybe some special character (though I have made sure there is none in the file).
There must be some option that makes MariaDB produce a verbose error and warning log, but I don't know of one apart from the /var/log/mysql/error.log file, which does not contain any information about the failed CSV load.
Any help would be appreciated.
Below is the first record of the CSV. The actual CSV contains 49 fields and 1862 records (but the sample below contains only one record):
"S.No","Training Code","Intervention Type (NRM/Emp. Skill
Training)","Training Title/Activity","Start Month","Ending
Month","No. of Days Trainings","Start Date Training","End Date
Training","Name of Person","Father
Name","CNIC","Gender","Age","Education","Skill Level","CO Ref
#","COName","Village Name","Tehsil Name","District","Type of Farm
production","Total Land (if applicable)","Total Trees (if
applicable)","Sheeps/goats","Buffalo/Cows","Profession","Person's
Income Emp. Skill (Pre-Intervention)","Income from NRM (Pre-
Intervention)","HH Other Sources of Income","Total HH Income","Type
of Support provided","Tool Kit/Inputs Received or Not","Date of
Tool Kit receiving","Other intervention , like exposure market
trial, followup support, Advance Training etc","Production (Pre-
Intervention)","Production (Post-Intervention)","Change in
Production","Unit (kg, Maund,Liter, etc)","Income gain from
production (Post-Intervention)","Change in Income (NRM)","Income
gain by Employment -Emp.Skill (Post Intervention)","Change in
Income (Emp. Skill)","Outcome Trend","Employment/Self-
Employment/Other","Outcome Result","Remarks","Beneficiaries Contact
No.","Activity Location"
1,"AUP-0001","NRM","Dates Processing &
Packaging","Sep/2018","Sep/2018",2,"25/Sep/2018","26/Sep/2018",
"Some name","Barkat Gul",1234567891234,"Male",34,"Primary","Semi-
Skilled","AUP-NWD-073","MCO Haider Khel Welfare Committee","Haider
Khel","Mir Ali","North
Waziristan","Dates",,20,,,"Farming",,5000,"Farming",5000,"Training,
Packaging Boxes","Yes","10/10/2018",,180,320,140,"Kg",8000,3000,,,
"Positive","Self Employed","Value addition to the end product
(Packaging increase the Price per KG to 25%)",,,"Field NW"
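(An added note, not from the original post.) Because LOAD DATA fails silently, it is worth validating the file outside the database first. Here is a small Python sketch, using only the standard library and assuming the file sits at /tmp/data.csv, that checks every record has the expected 49 fields:

import csv
from collections import Counter

EXPECTED_FIELDS = 49            # from the header shown above
PATH = "/tmp/data.csv"          # assumed location of the file

counts = Counter()
with open(PATH, newline="", encoding="utf-8") as f:
    reader = csv.reader(f)      # handles quoted fields with embedded commas and newlines
    header = next(reader)
    print("header has", len(header), "fields")
    for recno, row in enumerate(reader, start=2):
        counts[len(row)] += 1
        if len(row) != EXPECTED_FIELDS:
            print("record", recno, "has", len(row), "fields")
print("field-count distribution:", dict(counts))

If every record comes back with 49 fields, the file itself is probably fine, and server-side settings that govern LOAD DATA (such as local_infile, and secure_file_priv, which restricts file access to a specific directory) are the more likely culprits.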
BTW, I am non-technical :-O
While using MariaDB version 10.5.13-3.12.1 I am able to import CSV files into the tables I have set up.
Except with dates:
https://dba.stackexchange.com/questions/283966/tradedate-import-tinytext-how-to-show-date-format-yyyymmdd-of-20210111-or-2021?noredirect=1#comment555600_283966
I am still struggling to import text-format dates and to convert text dates into the (YYYY-MM-DD) date format.
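(Another added note.) For the text-date struggle mentioned just above: the dates in this CSV are plain text like 25/Sep/2018, and converting them to YYYY-MM-DD is a one-liner. A Python sketch, assuming that day/abbreviated-month/year pattern:

from datetime import datetime

def to_iso(text_date: str) -> str:
    """Convert a date like '25/Sep/2018' to '2018-09-25'."""
    return datetime.strptime(text_date, "%d/%b/%Y").strftime("%Y-%m-%d")

print(to_iso("25/Sep/2018"))   # -> 2018-09-25

The same conversion can be done inside MariaDB with STR_TO_DATE(col, '%d/%b/%Y'), typically via a user variable and a SET clause in the LOAD DATA statement, or in an UPDATE after loading.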

Splayed table upsert leading to error: `cast

I built a data loader prototype that saves CSV into splayed tables. The workflow is as follows:
Create the schema the first time, e.g. the volatilitysurface table:
volatilitysurface::([date:`datetime$(); ccypair:`symbol$()] atm_convention:`symbol$(); premium_included:`boolean$(); smile_type:`symbol$(); vs_type:`symbol$(); delta_ratio:`float$(); delta_setting:`float$(); wing_extrapolation:`float$(); spread_type:`symbol$());
For every file in the rawdata folder import it:
myfiles:#[system;"dir /b /o:gn ",string `$getenv[`KDBRAWDATA],"*.volatilitysurface.csv 2> nul";()];
if[myfiles~();.lg.o[`load;"no volatilitysurface files found!"];:0N];
.lg.o[`load;"loading data files ..."];
/ load each file
{
mypath:"" sv (string `$getenv[`KDBRAWDATA];x);
.lg.o[`load;"loading file name '",mypath,"' ..."];
myfile:hsym`$mypath;
tmp1:select date,ccypair,atm_convention,premium_included,smile_type,vs_type,delta_ratio,delta_setting,wing_extrapolation,spread_type from update date:x, premium_included:?[premium_included = `$"true";1b;0b] from ("ZSSSSSFFFS";enlist ",")0:myfile;
`volatilitysurface upsert tmp1;
} #/: myfiles;
delete tmp1 from `.;
.Q.gc[];
.lg.o[`done;"loading volatilitysurface data done"];
.lg.o[`save;"saving volatilitysurface schema to ",string afolder];
volatilitysurface::0!volatilitysurface;
.Q.dpft[afolder;`;`ccypair;`volatilitysurface];
.lg.o[`cleanup;"removing volatilitysurface from memory"];
delete volatilitysurface from `.;
.Q.gc[];
.lg.o[`done;"saving volatilitysurface schema done"];
This works perfectly. I use .Q.gc[] frequently to avoid hitting wsfull. When new CSV files are available, I open the existing schema, upsert into it, and save it again, effectively overwriting the existing HDB file system.
Open schema:
.lg.o[`open;"tables already exists, opening the schema ..."];
#[system;"l ",(string afolder) _ 0;{.lg.e[`open;"failed to load hdb directory: ", x]; 'x}];
/ Re-create table index
volatilitysurface::`date`ccypair xkey select from volatilitysurface;
Re-running step #2 to append new CSV files into the existing volatilitysurface table upserts the first CSV perfectly, but the second CSV fails with:
error: `cast
I debugged to the point of the error and, to double-check, I see that the metadata of tmp1 and volatilitysurface are exactly the same. Any ideas why this is happening? I get the same issue with any other table. I have tried clearing the keys from the table after every upsert, but it doesn't help, i.e.
volatilitysurface::0!volatilitysurface;
volatilitysurface::`date`ccypair xkey volatilitysurface;
And the metadata comparison at the point of the cast error:
meta tmp1
c | t f a
------------------| -----
date | z
ccypair | s
atm_convention | s
premium_included | b
smile_type | s
vs_type | s
delta_ratio | f
delta_setting | f
wing_extrapolation| f
spread_type | s
meta volatilitysurface
c | t f a
------------------| -----
date | z
ccypair | s p
atm_convention | s
premium_included | b
smile_type | s
vs_type | s
delta_ratio | f
delta_setting | f
wing_extrapolation| f
spread_type | s
UPDATE: Using the input from the answer below, I tried TorQ's .loader.loadallfiles function like this (it doesn't fail, but nothing happens either; the table is not created in memory and the data is not written to the database):
.loader.loadallfiles[`headers`types`separator`tablename`dbdir`dataprocessfunc!(`x`ccypair`atm_convention`premium_included`smile_type`vs_type`delta_ratio`delta_setting`wing_extrapolation`spread_type;"ZSSSSSFFFS";enlist ",";`volatilitysurface;`:hdb; {[p;t] select date,ccypair,atm_convention,premium_included,smile_type,vs_type,delta_ratio,delta_setting,wing_extrapolation,spread_type from update date:x, premium_included:?[premium_included = `$"true";1b;0b] from t}); `:rawdata]
UPDATE2: This is the output I get from TorQ:
2017.11.20D08:46:12.550618000|wsp18497wn|dataloader|dataloader1|INF|dataloader|**** LOADING :rawdata/20171102_113420.disccurve.csv ****
2017.11.20D08:46:12.550618000|wsp18497wn|dataloader|dataloader1|INF|dataloader|reading in data chunk
2017.11.20D08:46:12.566218000|wsp18497wn|dataloader|dataloader1|INF|dataloader|Read 10000 rows
2017.11.20D08:46:12.566218000|wsp18497wn|dataloader|dataloader1|INF|dataloader|processing data
2017.11.20D08:46:12.566218000|wsp18497wn|dataloader|dataloader1|INF|dataloader|Enumerating
2017.11.20D08:46:12.566218000|wsp18497wn|dataloader|dataloader1|INF|dataloader|writing 4525 rows to :hdb/2017.09.12/volatilitysurface/
2017.11.20D08:46:12.581819000|wsp18497wn|dataloader|dataloader1|INF|dataloader|writing 4744 rows to :hdb/2017.09.13/volatilitysurface/
2017.11.20D08:46:12.659823000|wsp18497wn|dataloader|dataloader1|INF|dataloader|writing 731 rows to :hdb/2017.09.14/volatilitysurface/
2017.11.20D08:46:12.737827000|wsp18497wn|dataloader|dataloader1|INF|init|retrieving sort settings from :C:/Dev/torq//config/sort.csv
2017.11.20D08:46:12.737827000|wsp18497wn|dataloader|dataloader1|INF|sort|sorting the volatilitysurface table
2017.11.20D08:46:12.737827000|wsp18497wn|dataloader|dataloader1|INF|sorttab|No sort parameters have been specified for : volatilitysurface. Using default parameters
2017.11.20D08:46:12.737827000|wsp18497wn|dataloader|dataloader1|INF|sortfunction|sorting :hdb/2017.09.05/volatilitysurface/ by these columns : sym, time
2017.11.20D08:46:12.753428000|wsp18497wn|dataloader|dataloader1|ERR|sortfunction|failed to sort :hdb/2017.09.05/volatilitysurface/ by these columns : sym, time. The error was: hdb/2017.09.
I get the following message: sorttab|No sort parameters have been specified for : volatilitysurface. Using default parameters. Where is this sorttab behaviour documented? Does it use the table's primary key by default?
UPDATE3: OK, I fixed the issue from UPDATE2 by providing a non-default sort.csv under my config folder:
tabname,att,column,sort
default,p,sym,1
default,,time,1
volatilitysurface,,date,1
volatilitysurface,,ccypair,1
But now I see that if I call the function multiple times on the same files, it simply appends duplicated data instead of upserting it.
UPDATE4: Still not there yet... Assuming I can check to make sure that no duplicate file is used, when I load and then start the database I get back some structure that resembles some sort of dictionary and not a table.
2017.10.31| (,`volatilitysurface)!,+`date`ccypair`atm_convention`premium_incl..
2017.11.01| (,`volatilitysurface)!,+`date`ccypair`atm_convention`premium_incl..
2017.11.02| (,`volatilitysurface)!,+`date`ccypair`atm_convention`premium_incl..
2017.11.03| (,`volatilitysurface)!,+`date`ccypair`atm_convention`premium_incl..
sym | `AUDNOK`AUDCNH`AUDJPY`AUDHKD`AUDCHF`AUDSGD`AUDCAD`AUDDKK`CADSGD`C..
Note that date is actually datetime Z and not just date. My full and latest version of the function invocation is:
target:hsym `$("" sv ("./";getenv[`KDBHDB];"/volatilitysurface"));
rawdatadir:hsym `$getenv[`KDBRAWDATA];
.loader.loadallfiles[`headers`types`separator`tablename`dbdir`partitioncol`dataprocessfunc!(`x`ccypair`atm_convention`premium_included`smile_type`vs_type`delta_ratio`delta_setting`wing_extrapolation`spread_type;"ZSSSSSFFFS";enlist ",";`volatilitysurface;target;`date;{[p;t] select date,ccypair,atm_convention,premium_included,smile_type,vs_type,delta_ratio,delta_setting,wing_extrapolation,spread_type from update date:x, premium_included:?[premium_included = `$"true";1b;0b] from t}); rawdatadir];
I'm going to add a second answer here to try and tackle the question about using TorQ's data loader.
Could you clarify what output you are getting after running this function? There should be some logging messages output; can you post these? For example, when I run the function:
jmcmurray@homer ~/deploy/TorQ (master) $ q torq.q -procname loader -proctype loader -debug
<torq startup messages removed>
q).loader.loadallfiles[`headers`types`separator`tablename`dbdir`partitioncol`dataprocessfunc!(c;"TSSFJFFJJBS";enlist",";`quotes;`:testdb;`date;{[p;t] select date:.z.d,time:TIME,sym:INSTRUMENT,BID,ASK from t});`:csvtest]
2017.11.17D15:03:20.312336000|homer.aquaq.co.uk|loader|loader|INF|dataloader|**** LOADING :csvtest/tradesandquotes20140421.csv ****
2017.11.17D15:03:20.319110000|homer.aquaq.co.uk|loader|loader|INF|dataloader|reading in data chunk
2017.11.17D15:03:20.339414000|homer.aquaq.co.uk|loader|loader|INF|dataloader|Read 11000 rows
2017.11.17D15:03:20.339463000|homer.aquaq.co.uk|loader|loader|INF|dataloader|processing data
2017.11.17D15:03:20.339519000|homer.aquaq.co.uk|loader|loader|INF|dataloader|Enumerating
2017.11.17D15:03:20.340061000|homer.aquaq.co.uk|loader|loader|INF|dataloader|writing 11000 rows to :testdb/2017.11.17/quotes/
2017.11.17D15:03:20.341669000|homer.aquaq.co.uk|loader|loader|INF|dataloader|**** LOADING :csvtest/tradesandquotes20140422.csv ****
2017.11.17D15:03:20.349606000|homer.aquaq.co.uk|loader|loader|INF|dataloader|reading in data chunk
2017.11.17D15:03:20.370793000|homer.aquaq.co.uk|loader|loader|INF|dataloader|Read 11000 rows
2017.11.17D15:03:20.370858000|homer.aquaq.co.uk|loader|loader|INF|dataloader|processing data
2017.11.17D15:03:20.370911000|homer.aquaq.co.uk|loader|loader|INF|dataloader|Enumerating
2017.11.17D15:03:20.371441000|homer.aquaq.co.uk|loader|loader|INF|dataloader|writing 11000 rows to :testdb/2017.11.17/quotes/
2017.11.17D15:03:20.460118000|homer.aquaq.co.uk|loader|loader|INF|init|retrieving sort settings from :/home/jmcmurray/deploy/TorQ/config/sort.csv
2017.11.17D15:03:20.466690000|homer.aquaq.co.uk|loader|loader|INF|sort|sorting the quotes table
2017.11.17D15:03:20.466763000|homer.aquaq.co.uk|loader|loader|INF|sorttab|No sort parameters have been specified for : quotes. Using default parameters
2017.11.17D15:03:20.466820000|homer.aquaq.co.uk|loader|loader|INF|sortfunction|sorting :testdb/2017.11.17/quotes/ by these columns : sym, time
2017.11.17D15:03:20.527216000|homer.aquaq.co.uk|loader|loader|INF|applyattr|applying p attr to the sym column in :testdb/2017.11.17/quotes/
2017.11.17D15:03:20.535095000|homer.aquaq.co.uk|loader|loader|INF|sort|finished sorting the quotes table
After all this, I can run \l testdb and there is a table called "quotes" containing my loaded data
If you can post logging messages like these, it could be helpful to see what's going on.
UPDATE
"But now I see that if I call the function multiple times on the same files, it simply appends duplicated data instead of upserting it."
If I'm understanding the problem correctly, it sounds like you likely shouldn't call the function multiple times on the same files. Another process within TorQ could be useful here, the "file alerter". This process will monitor a directory for new & updated files, and can call a function on any that appear (so you can have it call the loader function with every new file automatically). It has a number of options such as moving files after processing (so you can "archive" loaded CSVs)
Note that the file alerter requires that a function take exactly two parameters - the directory & the file name. This effectively means you will need a "wrapper" function around the loader function, which takes a dictionary & a directory. I don't think TorQ includes a function similar to .loader.loadallfiles for a single file, so it might be necessary to copy the target file to a temporary directory, run loadallfiles on that directory and then delete the file from there before loading the next.
`cast error refers to a value not being enumerated
I can't see any enumeration going on here; splayed tables on disk need to have their symbol columns enumerated. For example, this can be done with the following line before calling .Q.dpft:
volatilitysurface:.Q.en[afolder;volatilitysurface];
You may like to consider using an example CSV loader for loading your data. One such example is included in TorQ, the kdb+ framework developed by AquaQ Analytics (as a disclaimer, I work for AquaQ).
The framework is available (free of charge) here: https://github.com/AquaQAnalytics/TorQ
The specific component you will likely be interested in is dataloader.q and is documented here: http://aquaqanalytics.github.io/TorQ/utilities/#dataloaderq
This script will handle everything necessary: loading all the files, enumerating, sorting on disk, applying attributes, etc., as well as using .Q.fsn to prevent running out of memory.

Why does Open XML API Import Text Formatted Column Cell Rows Differently For Every Row

I am working on an ingestion feature that will take a strongly formatted .xlsx file and import the records to a temp storage table and then process the rows to create db records.
One of the columns is strictly formatted as "Text", but it seems like the Open XML API handles the column's cells differently on a row-by-row basis. Some of the values, while appearing to be numeric, are truly not (which is why we format the column as Text);
some examples are "211377", "211727.01", "209395.388", "209395.435".
What these values represent is not important, but what happens is that some values (using the Open XML API v2.5 library) are read in properly as text, whether retrieved from the Shared Strings collection or simply from the InnerXML property, while others get pulled in as numbers with what appears to be appended rounding or precision.
For example, "211377", "211727.01" and "209395.435" all come in exactly as they are in the spreadsheet, but the "209395.388" value is being pulled in as "209395.38800000001" (there are others this happens to as well).
There seems to be no rhyme or reason to which values get messed up and which ones import fine. What is really frustrating is that if I use the native Import feature in SQL Server Management Studio and ingest the same spreadsheet into a temp table this does not happen, so how is it that the SSMS import can handle these values as purely text for all rows but the Open XML API cannot?
To begin the answer, your main problem seems to be this value:
"209395.388" value is being pulled in as "209395.38800000001"
Yes, in the .xlsx file the value is stored as 209395.38800000001 instead of 209395.388, and that is simply how floating-point numbers are stored; there is nothing wrong with it. You can confirm it with the following code snippet:
string val = "209395.38800000001"; // <= what we extract from Open XML
Console.WriteLine(double.Parse(val)); // <= simply parse it as a double and print
The output is:
209395.388 // <= the expected value
So there is nothing wrong with the value you extract from the .xlsx using the Open XML SDK.
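To make that concrete, here is a short Python sketch (added for illustration; it is not part of the original answer) showing that 209395.388 has no exact binary double representation, and that the nearest double, written out to 17 significant digits, is the same 209395.38800000001 that the answer says ends up in the .xlsx:

from decimal import Decimal

x = 209395.388
print(repr(x))       # shortest round-trip form: 209395.388
print(f"{x:.17g}")   # 17 significant digits: 209395.38800000001
print(Decimal(x))    # the exact binary value held by the double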
Now to cells: yes, a cell can have a variety of formats: numbers, text, booleans, or shared-string text. And you can apply styles to a cell that format your value into the desired output in Excel (e.g. date/time formats, forced strings, etc.). This is the way Excel handles the vast variety of data; it needs this kind of formatting, and the .xlsx file format had to be a little complex to support it all.
My advice is to use a proper set of parse methods on the extracted values to identify what format each represents (for example, to determine whether it is a number or text) and to apply the appropriate parse.
For example:
string val = "209395.38800000001";
Console.WriteLine(float.Parse(val)); // <= float.Parse will produce a different value: 209395.4
Update:
Here is how the value is saved in the internal XML. Try it for yourself: make an .xlsx file with the value 209395.388, change the extension to .zip, unzip it, go to the worksheet folder, and open Sheet1.
You will notice that the value is stored as 209395.38800000001, as seen in the attached image. So there is nothing wrong with the API extracting the stored number; it is your duty to decide what format to apply.
But if you make the whole column Text before adding the data, you will see that the .xlsx holds the data as it is; simply put, as a string.

IDCAMS LISTCAT deleting VSAM file when next step is IEFBR14

I have a requirement wherein I need to check whether a VSAM file exists or not. If it is not present, then I need to create it like TEST.FILE2. My JCL is as follows:
//STEP01 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
LISTCAT ENTRIES('BRTEST.FILE1')
/*
//STEP02 EXEC PGM=IEFBR14,COND=(4,GT)
//DD01 DD DSN=BRTEST.FILE1,
// DISP=(,CATLG,DELETE),
// LIKE=BRTEST.FILE2
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
But a strange thing is happening. Whenever I execute this JCL, STEP01 returns a return code of 004 even though the file is already present, and a new file is created in STEP02. So if I submit this JCL twice, a new file is created both times. I am not able to understand how the file is getting deleted. And the strange thing is that if I run the JCL without STEP02, it gives MAXCC of 0, saying that the file was found in the catalog.
I was able to achieve my requirement with the following code, but I would still like to understand why and how my VSAM file gets deleted by the LISTCAT.
//STEP02 EXEC PGM=IEFBR14,COND=(4,GT)
//DD01 DD DSN=BRTEST.FILE1,
// DISP=(MOD,CATLG,CATLG),
// LIKE=BRTEST.FILE2
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
Here is the SYSPRINT when only STEP01 is executed:
IDCAMS SYSTEM SERVICES TIME: 03:47:44
LISTCAT ENTRIES('BRTEST.FILE1')
CLUSTER ------- BRTEST.FILE1
IN-CAT --- CATALOG.TEST03
DATA ------- BRTEST.FILE1.DATA
IN-CAT --- CATALOG.TEST03
INDEX ------ BRTEST.FILE1.INDEX
IN-CAT --- CATALOG.TEST03
IDCAMS SYSTEM SERVICES TIME: 03:47:44
THE NUMBER OF ENTRIES PROCESSED WAS:
AIX -------------------0
ALIAS -----------------0
CLUSTER ---------------1
DATA ------------------1
GDG -------------------0
INDEX -----------------1
NONVSAM ---------------0
PAGESPACE -------------0
PATH ------------------0
SPACE -----------------0
USERCATALOG -----------0
TAPELIBRARY -----------0
TAPEVOLUME ------------0
TOTAL -----------------3
THE NUMBER OF PROTECTED ENTRIES SUPPRESSED WAS 0
IDC0001I FUNCTION COMPLETED, HIGHEST CONDITION CODE WAS 0
IDC0002I IDCAMS PROCESSING COMPLETE. MAXIMUM CONDITION CODE WAS 0
And when both steps are executed:
IDCAMS SYSTEM SERVICES TIME: 03:48:35
LISTCAT ENTRIES('BRTEST.FILE1')
IDC3012I ENTRY BRTEST.FILE1 NOT FOUND
IDC3009I ** VSAM CATALOG RETURN CODE IS 8 - REASON CODE IS IGG0CLEG-42
IDC1566I ** BRTEST.FILE1 NOT LISTED
IDCAMS SYSTEM SERVICES TIME: 03:48:35
THE NUMBER OF ENTRIES PROCESSED WAS:
AIX -------------------0
ALIAS -----------------0
CLUSTER ---------------0
DATA ------------------0
GDG -------------------0
INDEX -----------------0
NONVSAM ---------------0
PAGESPACE -------------0
PATH ------------------0
SPACE -----------------0
USERCATALOG -----------0
TAPELIBRARY -----------0
TAPEVOLUME ------------0
TOTAL -----------------0
THE NUMBER OF PROTECTED ENTRIES SUPPRESSED WAS 0
IDC0001I FUNCTION COMPLETED, HIGHEST CONDITION CODE WAS 4
IDC0002I IDCAMS PROCESSING COMPLETE. MAXIMUM CONDITION CODE WAS 4
The value for ZOS390RL variable is z/OS 02.01.00 and ZENVIR is ISPF 7.1MVS TSO.
May have an answer for you. Didn't think of it because it is a VSAM dataset, and the way you are trying to do it is unusual (to me).
There is/was a product called UCC11. I think it is now marketed by Computer Associates, and called CA-11 (or somesuch). I think you are using this product or something similar at your site.
If executed at the beginning of a JOB it would look for files specified as NEW and CATLG, and look to see if there was an existing file of the same name in the catalog. If there was, the existing file would be deleted.
This would obviate the need for an initial IEFBR14 step to delete such files.
I think that you are using this product, or something similar. Your file is being automatically deleted because it already exists: your IDCAMS step, which is only reading data from a file (even if it is just SYSIN and DD *), is not known to the product, so your VSAM file is deleted before your IDCAMS step is run.
Changing the file to MOD as the initial disposition (MOD will add to an existing file and create a new file if none exists) will not cause such a product to delete the file.
Using the LIKE for a VSAM file will not obtain the CA-size and CI-SIZE from the model dataset. You will get default values for those, which may well impact on the performance of your programs. You cannot specify these values when defining a VSAM file in JCL. You also won't get buffer values from the model dataset, but you can specify those separately in the JCL (which you haven't).
Here is a description of what LIKE does for you: http://publibfp.dhe.ibm.com/cgi-bin/bookmgr/BOOKS/iea2b680/12.40?DT=20080604022956
The following attributes are copied from the model data set to the
new data set:
Data set organization
Record organization (RECORG) or
Record format (RECFM)
Record length (LRECL)
Key length (KEYLEN)
Key offset (KEYOFF)
Type, PDS, PDSE, basic format, extended format, large format, or HFS (DSNTYPE)
Space allocation (AVGREC and SPACE)
Unless you explicitly code the SPACE parameter for the new data set,
the system determines the space to be allocated for the new data
set by adding up the space allocated in the first three extents of the
model data set. Therefore, the space allocated for the new data set
will generally not match the space that was specified for the model
data set. Note that regardless of the units in which the model data
set was allocated, the new data set will be allocated in tracks. This
assumes that space was not specified on the JCL and is being picked up
from the model data set.
There are some other little "gotchas", like in the last paragraph, detailed in the link as well.
Unless you have strong reasons otherwise, I'd strongly suggest doing the whole thing in one IDCAMS step (as below).
I suspected it was going to be 1.12, 1.13 or 2.1 (2.01). IEFBR14 is, subtly, part of the OS now.
Exactly why you get this effect, I don't know. I don't have access to 2.1, so can't investigate myself.
IEFBR14 has changed; LIKE is not really intended for VSAM datasets (you'll get a lot of default values for things you may or may not want), and it's not really a "usual" way to do this. See the Suggestions below.
Try adding a DDname to your IDCAMS step which just references the VSAM dataset. See if that changes anything. Use that DDname in an IDCAMS statement. See if that changes anything.
Take all your results to your Sysprogs, and see if they can spot anything.
If not, it'll be PMR-time: http://www-01.ibm.com/support/docview.wss?uid=swg21507639
If you do raise a PMR, please update by adding an Answer with the resolution once you receive it.
Suggestions.
Find out how this task is done by other people at your site.
Have you tried using the VSAM file you have defined in that way? You should LISTCAT TEST.FILE1 and TEST.FILE2 and compare. If you look up LIKE in the JCL Reference, you will see that there are things which a VSAM DEFINE can do which you can't do for a VSAM file defined in the JCL using LIKE.
Unless there is some reason otherwise, I'd suggest you do the whole thing in one step with IDCAMS. See if the file exists, use IDCAMS's IF to test the CC from that, and only DEFINE if the file does not exist. You can use a MODEL (for instance, on your TEST.FILE2) to get everything which is similar to another file, and just override anything different that you need.
If you have a look here, http://pic.dhe.ibm.com/infocenter/zos/v1r13/index.jsp?topic=%2Fcom.ibm.zos.r13.idai200%2Fdefclu.htm, you will find a number of Modal Commands for IDCAMS which will give you everything you need to define if it is not there, and do something different (set the Condition Code, for instance) if it is.
Please still supply the requested information. It is an interesting question on the face of it, which may have a simple solution. But even with a solution, I don't think it is what you want.
What you want to do can be done entirely in the IDCAMS step. You can inspect the return code from a previous operation (ie, the LISTCAT) and do something (like define a new cluster) if the code is greater than 0. If the 2nd step works, then set the MAXCC to return 0 to tell your JCL that this step completed OK (and to let your Ops folks know this too).
Look for the IDCAMS 'IF'.

How to insert unicode string in PowerBuilder when using datastore/datawindow?

The SQL INSERT statement below is used to insert a Unicode string, and it works successfully when executed in SQL Server Management Studio or Query Analyzer.
Column Specs:
SONUM VARCHAR(50)
CONTRACTNUM NVARCHAR(150)
FNAME VARCHAR(70)
INSERT INTO SCH_EDI_3B12RHDR ( SONUM, CONTRACTNUM, FNAME )
VALUES ( 'DPH11309160073CC' , N'Globe MUX Project(客户合同号:NA)' , 'TEST' )
Is it possible to apply the N prefix when using a datastore/datawindow for the insert operation? If yes, how? Below is the current script in PB, which successfully inserts the data, but the Chinese characters are replaced by '?' (question marks).
ls_sonum = String(dw_1.Object.shipmentOrderNum[1]) //This holds the value : DPH11309160073CC
ls_chinesechar = String(dw_1.Object.contractnum[1]) // This holds the value : Globe MUX Project(客户合同号:NA)
dw_1.SetItem(1,'sonum',ls_sonum)
dw_1.SetItem(1,'contractnum',ls_chinesechar)
dw_1.SetItem(1,'fname','TEST')
dw.AcceptText( )
IF dw.Update( ) = 1 THEN
Commit Using SQLCA ;
END IF
You need to hook into the SQLPreview event of the dw/ds and add the N prefix yourself. Or, if you use static (or disabled?) bind, you don't need to do anything at all.
It's all explained in the online docs.
I verified that the OLE DB driver for PB version 10 supports Unicode. You can find the information in this Sybase document on page nine.
ODBC/ JDBC /OLEDB/ ADO.NET DATABASE SUPPORT
Client/Server PowerBuilder 10 applications can exchange Unicode and ANSI data with databases
supported by the ODBC, JDBC, OleDB, and ADO.NET database interfaces without requiring
special settings.
My suggestion would be to try forcing a different encoding when using the String function; there are four choices (see the code example below).
The encoding parameter is one of the following:
■ EncodingANSI!
■ EncodingUTF8!
■ EncodingUTF16LE! – UTF-16 Little Endian encoding (PowerBuilder 10 default)
■ EncodingUTF16BE! – UTF-16 Big Endian encoding
// use the correct encoding for your actual data
ls_sonum = String(dw_1.Object.shipmentOrderNum[1], EncodingUTF16BE! ) //holds DPH11309160073CC
ls_chinesechar = String(dw_1.Object.contractnum[1], EncodingANSI!) // holds Globe MUX Project(客户合同号:NA)
It is possible that I'm leading you down the wrong path but I think this would be worth trying out and may solve the problem.