How to use DISP=SHR on an fopen

With code like:
fopen("DD:LOGLIBY(L1234567)", "w");
and JCL like:
//LOGTEST EXEC PGM=LOGTEST
//LOGLIBY DD DSN=MYUSER.LOG.LIBY,DISP=SHR
I can create PDS(E) members while at the same time browsing the PDS(E) to look at existing members, as expected with DISP=SHR.
If instead I code:
fopen("//'MYUSER.LOG.LIBY(L1234567)'", "w");
The fopen fails if I am browsing the PDS(E) at the time, or the browse of the PDS(E) fails while I have the file open. In other words, there is no DISP=SHR. According to the fopen() documentation, DISP=SHR is the default for file modes like "r", but not for "w".
How can I provide DISP=SHR in the second example?

There are two possibilities...
For anyone not familiar with the internal structure of partitioned datasets (that is, PDS or PDS/E), these datasets are logically divided into two parts: a "directory" having pointers to all the individual members, and a "data" area containing the actual records for the individual members:
PDS: <DIRECTORY BLOCKS>
<MEMBER1>: ADDRESS OF DATA FOR MEMBER1 (xxx)
<MEMBER2>: ADDRESS OF DATA FOR MEMBER2 (yyy)
...
<DIRECTORY FREESPACE>
...
<EOF - END OF THE PDS DIRECTORY>
<DATA PORTION>
+xxx = DATA FOR MEMBER1
...
<EOF - END OF MEMBER1>
+yyy = DATA FOR MEMBER2
...
<EOF - END OF MEMBER2>
...
FREE SPACE (ALLOCATED, BUT UNUSED)
...
END OF PDS
Throughout the next few paragraphs, keep in mind that you can either open the entire PDS/PDSE, which enables you to read/write whatever members you like, or you can allocate and open a single member, which is then processed like any other sequential file.
First, if you actually do have a DD statement coded as you show in the question, then you may simply need to change your open from fopen(dsn,...) to fopen("DD:ddname",...). If you're running under the UNIX shell, or you do something that results in your process running in a different address space (such as fork()), this might not work, but it may be worth a try. If you do this with the JCL you show, the challenge would be managing the PDS/E directory: you'd need to issue your own "STOW" when you create/update a member, since the JCL allocates the entire dataset, not just a single member. The sequence would be:
Open the DD for output.
Write your data.
Update the PDS or PDS/E directory with new member information (this is where the STOW function comes in - it updates the directory of the PDS/PDSE to reflect the member you created or updated).
Close the file.
If you also need to read members, you'd need to issue FIND (or BLDL/POINT, which can be fseek() in C) to position to the correct member, then read it. It sounds like a hassle, but the advantage of this approach is that you allocate/open the file once and can then process as many individual members as you like.
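For the common single-member case, note that the member-suffixed form from your question avoids the manual STOW entirely: when the member name appears on the fopen() call, the C runtime updates the PDS/E directory for you at fclose() time, as your first example demonstrates. A minimal sketch against the DD from your JCL:
#include <stdio.h>

/* Member-level open through the existing LOGLIBY DD (as in the
   question). The C runtime issues the directory update (STOW)
   when the member is named on the fopen() call and then closed. */
int write_member(void)
{
   FILE *fp = fopen("DD:LOGLIBY(L1234567)", "w");
   if (fp == NULL)
      return 1;
   fputs("log record\n", fp);
   return fclose(fp);        /* directory entry written here */
}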
A second workaround might be to dynamically allocate the file yourself, and then open it using DD:ddname syntax...if you only infrequently access the file, this is probably easier to code. The gory details of dynamic allocation are fully described here: https://www.ibm.com/support/knowledgecenter/SSLTBW_2.4.0/com.ibm.zos.v2r4.ieaa800/reqsvc.htm.
There are several ways to invoke dynamic allocation: you can write a small assembler program, you can use the z/OS UNIX Services BPXWDYN callable service, or you can use the C runtime "dynalloc()" or "svc99()" functions. The dynalloc() function is easy to use, but it only exposes a subset of what dynamic allocation can do...svc99() is more cumbersome to use, but it exposes more functionality.
However you do it, dynamic allocation takes "text units" that roughly correspond to the parameters you find on JCL DD statements. What you're describing sounds like you'd just need to pass DSN and DISP text units, and maybe DDNAME (you can either pass your own DDNAME, or let the system generate one for you).
The C runtime functions make all this easy, but be aware that there are a few oddities, such as the need to pad the parameters to their maximum length. For example, a DSN needs to be 44 characters and padded on the right with blanks - not a C-style null-terminated string.
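If it helps, here's a hypothetical little helper for that padding convention (the name pad() and its usage are mine, not part of the C runtime):
#include <string.h>

/* Hypothetical helper: copy a C string into a fixed-width,
   blank-padded buffer (no null terminator), per the convention
   described above.
   e.g. char dsn[44]; pad(dsn, "MYUSER.LOG.LIBY", sizeof(dsn)); */
static void pad(char *dst, const char *src, size_t width)
{
   size_t len = strlen(src);
   if (len > width)
      len = width;
   memset(dst, ' ', width);
   memcpy(dst, src, len);
}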
Here's a small code snippet as an example:
#include <stdio.h>
#include <dynit.h>

#define TRUE  1
#define FALSE 0

// ddn: 8-char blank-padded DDNAME
// dsn: 44-char blank-padded dataset name
// mem: 8-char blank-padded member name, or an empty string
int allocate(char *ddn, char *dsn, char *mem)
{
  __dyn_t ip;                    // Parameters to dynalloc()

  // Prepare the parameters to dynalloc()
  dyninit(&ip);                  // Initialize the parameters
  ip.__ddname = ddn;             // 8-char blank-padded
  ip.__dsname = dsn;             // 44-char blank-padded
  ip.__status = __DISP_SHR;      // DISP=(SHR)
  ip.__normdisp = __DISP_KEEP;   // DISP=(...,KEEP)
  ip.__misc_flags = __CLOSE;     // FREE=CLOSE
  if (*mem)                      // Optional PDS, PDS/E member
    ip.__member = mem;           // 8-char blank-padded
  // Now we can call dynalloc()...
  if (dynalloc(&ip))             // 0: Success, else error
  {
    // On error, the errcode/infocode explain why - values
    // are detailed in z/OS Authorized Services Reference
    printf("SVC99: Can't allocate %s - RC 0x%x, Info 0x%x\n",
           dsn, ip.__errcode, ip.__infocode);
    return FALSE;
  }
  // If dynalloc works, you can open the file with fopen("DD:ddname",...)
  return TRUE;
}
Don't forget that when you're done with the file, you generally need to deallocate it.
The code snippet above uses "FREE=CLOSE" - this means that when the file is closed, z/OS will automatically free the allocation...if you only open and process the dataset once, this is a convenient approach. If you need to repeatedly open and close the file, then you wouldn't use FREE=CLOSE, but instead call dynamic allocation a second time after you're done with your processing and want to free the file.
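If you don't use FREE=CLOSE, the C runtime's dynfree() function (also declared in <dynit.h>) is the usual way to release the allocation by DDNAME; a minimal sketch, assuming the same 8-character blank-padded DDNAME convention as above:
#include <dynit.h>

// Unallocate a previously allocated DDNAME once processing is
// complete (only needed when FREE=CLOSE was not specified).
int unallocate(char *ddn)        // 8-char blank-padded
{
  __dyn_t ip;

  dyninit(&ip);                  // Initialize the parameters
  ip.__ddname = ddn;             // Free by DDNAME
  return dynfree(&ip) == 0;      // 0: Success, else error
}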
If you need to concurrently access multiple files, beware that you'll need to generate multiple unique DDNAMEs. You can either do this in your own code, or you can use the form of dynamic allocation that automatically builds and returns a usable DDNAME (of the form "SYSnnnnn").
Also, don't forget that updating a dataset under DISP=SHR can be dangerous in some situations, especially if the dataset involved can be a conventional PDS as well as a PDS/E. The big danger is that two applications open the dataset for output concurrently...both will write data into the same place, and the result will likely be a damaged PDS directory.
There are some other oddities in the UNIX Services environment, particularly if you use fork() or exec() and expect file handles to work in subprocesses, since allocations are generally tied to a particular z/OS address space. Services like spawn() can let the child process run in the same address space, so that's one possibility.


Is there a difference in performance between set/save when saving columns to tables?

I have a small utility that checks for new columns for an intraday hdb and adds new columns.
At the moment I am using :
.[set;(pth;?[data;();();cls]);{[p;e] .log.error[.z.h;"Failed to save to path [",string[p],"] with error :",e]}[pth;]]
where path is :
`:path_to_hdb/2022.03.31/table01/newDummyThree
and
?[data;();();cls] // just an exec statement
Would it make any difference to use save instead:
.[save;(pth;?[data;();();cls]);{[p;e] .log.error[.z.h;"Failed to save to path [",string[p],"] with error :",e]}[pth;]]
Yes. If you are adding entire columns to a table then you might want to store it splayed, i.e. as a directory of column files rather than as a single table file. This means using set rather than save.
https://code.kx.com/q/kb/splayed-tables/
But test with your actual example updates.
As mentioned in the documentation for save:
Use set instead to save
- a variable to a file of a different name
- local data
So set has the advantage of not requiring a global, and you can give the file a different name from that of your in-memory global variable.
There is no difference in how they serialise/write the data. In fact, save uses set under the covers anyway:
q)save
k){$[1=#p:`\:*|`\:x:-1!x;set[x;. *p]; x 0:.h.tx[p 1]#.*p]}'
By the way - you can't use save in the way that you've suggested in your post. save takes a symbol as input, and this symbol is the name of the global variable containing the data you want to write.
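To make the difference concrete, here's a minimal sketch (the paths are illustrative; note the trailing slash that makes set write the table splayed):
q)t:([] a:1 2 3; b:1.1 2.2 3.3)
q)`:/tmp/hdb/2022.03.31/t/ set t   / splayed: one file per column, any path
`:/tmp/hdb/2022.03.31/t/
q)`:/tmp/t1 set t                  / single file, name need not match variable
`:/tmp/t1
q)save `:/tmp/t                    / works only because a global named t exists
`:/tmp/t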

How to read info on voltage/beam energy, imaging mode, acquisition date/timestamp, etc. from image meta-data? (Tags)

DM scripting beginner here, almost no programming skills.
I would like to know the commands to access all the metadata of DM images/spectra.
I realized that all my STEM images at 80 kV taken between 2 dates (let's say 02.11.2017-05.04.2019) have the scale calibration wrong by the same factor (scale of all such images needs to be multiplied by 1.21).
I would like to write a script which multiplies the scale value by a factor only for images in scanning mode at 80 kV taken during a period for all images in a folder with subfolders or for all images opened in DM and save the new scale value.
I checked this website http://digitalmicrograph-scripting.tavernmaker.de/other%20resources/Old-DMHelp/AllFunctions.html but only found how to call the scale value (ImageGetDimensionCalibration). I have a general idea how to write the script based on other scripts if I find out how to call the metadata.
If anyone can write the whole script for me I would greatly appreciate your effort.
All general meta-data is organized in the image tag-structure
You can see this if you open the Image Display Info of an image (via the menu, or by pressing CTRL + D) and then browse to the "Tags" section:
All the info on the right consists of image tags, and they are organized in a hierarchical tree.
What this tree looks like, and what information is written where, is completely open and will depend on what GMS version you are using, how the hardware is configured, etc. Custom scripts might also alter this information.
So for a scripting start, open the data you want to modify and have a look in this tree.
Hint: The following mini-script can be useful. It opens a tag-browsing window for the front-most image, but as a modeless dialog (i.e. you can keep it open and still interact with other parts):
GetFrontImage().ImageGetTagGroup().TagGroupOpenBrowserWindow(0)
The information you need to check against is most probably found in the Microscope Info sub-tree. Here, usually all information gathered from the microscope during acquisition is stored. What is there, will depend on your system and how it is set up.
The information of the STEM image acquisition - as far as the scanning engine and detector is concerned - is most probably in the DigiScan sub-tree.
The Data Bar sub-tree usually contains date and time of creation etc.
Calibration values are not stored in the image tag-structure
What you will not find in this tag-structure is the image calibration, i.e. the values actually used by DM to display calibrated values. These values are "one level up", so to speak.
This is important to know for your script, because you will need different commands for the "meta-data" from the tags and for the "calibration" you want to change.
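Once you've decided (from the tags) that an image qualifies, the calibration itself can be read and re-written with the dimension-calibration commands. A minimal sketch for the front-most image, using the correction factor from your description (check the exact parameter meanings against the F1 help):
image img := GetFrontImage()
number origin, scale
string unit
// Read the current calibration of dimension 0 (the x-axis)
img.ImageGetDimensionCalibration( 0, origin, scale, unit, 0 )
// Write back the corrected scale (factor 1.21 from the question)
img.ImageSetDimensionCalibration( 0, origin, scale * 1.21, unit, 0 )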
Accessing meta-data by script
The script-commands you need to read from the tags are all described in the F1 help documentation.
Essentially, you need a command to get the "root" TagGroup of an image, which is ImageGetTagGroup(), and then you traverse within this tree.
This might seem confusing - because there are a lot of slightly different commands for the different types of stored tags - but the essential bits are easy:
All "Paths" through the tree are just the individual names (typed exactly)
For each "branch" you have to use a single colon :
The commands to set/get a tag-value all require the "root" tagGroup object and the "path" as a string as input. The get commands additionally require a variable of matching type to store the value in; the set commands need the value which should be written.
The get commands themselves return true or false depending on whether or not the tag-path could be found and the value could be read.
So the following script would read the "Imaging Mode" from the tags of the image shown as example above:
string mode
GetFrontImage().ImageGetTagGroup().TagGroupGetTagAsString( "Microscope Info:Imaging Mode", mode )
OKDialog( "Mode: " + mode )
and in a little more verbose form:
string mode // variable to hold the value
image img // variable for the image
string path // variable to hold the tag-path string
TagGroup tg // variable to hold the "tagGroup" object
img := GetFrontImage() // Use the selected image
tg = img.ImageGetTagGroup() // From the image get the tags (root)
path = "Microscope Info:Imaging Mode" // specify the path
if ( tg.TagGroupGetTagAsString( path, mode ) )
OKDialog( "Mode: " + mode )
else
Throw( "Tag not found" )
If the tag is not a string but a number, you will need the corresponding commands, e.g. TagGroupGetTagAsNumber().
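For example, reading the high-tension value as a number might look like this (the tag path "Microscope Info:Voltage" is an assumption and will differ between systems; check the tag-browser from the hint above for yours):
number voltage // variable to hold the numeric value
if ( GetFrontImage().ImageGetTagGroup().TagGroupGetTagAsNumber( "Microscope Info:Voltage", voltage ) )
OKDialog( "Voltage: " + voltage )
else
Throw( "Tag not found" )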

Matlab loop: ignores variable definition when loading something from directories

I am new to MATLAB and try to run a loop within a loop. I define a variable ID beforehand, for example ID={'100'}. In my loop, I then want to go to the ID's directory and then load the matfile in there. However, whenever I load the matfile, suddenly the ID definition gets overridden by all possible IDs (all folders in the directory where also the ID 100 is). Here is my code - I also tried fullfile, but no luck so far:
ID={'100'}
for subno=1:length(ID) % first loop
try
for sessno=1:length(Session) % second loop, for each ID there are two sessions
subj_vec_name = [ID{subno} '_' Session{sessno} '_vectors_SC.mat'];
cd(['C:\' ID{subno} '\' Session{sessno}]);
load(subj_vec_name) % the problem occurs here, when loading, not before
end
end
end
When I then check the length of the IDs, it is now not 1 (one ID, namely 100), but there are all possible IDs within the directory where 100 also lies, and the loop iterates again for all other possible IDs (although it should stop after ID 100).
You should always specify an output when calling load, to prevent over-writing variables in your workspace and polluting the workspace with all of the contents of the file. There is an extensive discussion of some of the strange potential side-effects of not doing this here.
Chances are, you have a variable named ID inside of the .mat file and it over-writes the ID variable in your workspace. This can be avoided using the output of load.
The output of load is a struct that can be used to access your data.
data = load(subj_vec_name);
%// Access variables from file
id_from_file = data.ID;
%// Can still access ID from your workspace!
ID
Side Note
It is generally not ideal to change directories to access data. This is because if a user runs your script, they may start in a directory they want to be in, but when your program returns, it dumps them somewhere unexpected.
Instead, you can use fullfile to construct a path to the file and not have to change folders. This also allows your code to work on both *nix and Windows systems.
subj_name = ([ID{subno} '_' Session{sessno} '_vectors_SC.mat']);
%// Construct the file path
filepath = fullfile('C:', ID{subno}, Session{sessno}, subj_name);
%// Load the data without changing directories
data = load(filepath);
With the command load(subj_vec_name) you are loading the complete mat file located there. If this mat file contains the variable "ID" it will overwrite your initial ID.
This should not cause your outer for-loop to execute more than once, though. The vector 1:length(ID) is evaluated once, when the loop starts, and does not get re-evaluated when ID changes later.
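You can verify that the loop bound is fixed at loop entry with a tiny illustrative snippet:
ID = {'100'};
for subno = 1:length(ID)
    ID = {'100','101','102'}; % growing ID mid-loop...
    disp(subno)               % ...still prints only 1
end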
If you insert disp(ID) before and after the load command and post the output it might be easier to help.

IDCAMS LISTCAT deleting VSAM file when next step is IEFBR14

I have a requirement wherein I need to check if a VSAM file exists or not. If it is not present, then I need to create it like TEST.FILE2. My JCL is:
//STEP01 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
LISTCAT ENTRIES('BRTEST.FILE1')
/*
//STEP02 EXEC PGM=IEFBR14,COND=(4,GT)
//DD01 DD DSN=BRTEST.FILE1,
// DISP=(,CATLG,DELETE),
// LIKE=BRTEST.FILE2
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
But a strange thing is happening. Whenever I execute this JCL, STEP01 returns a condition code of 004 even though the file is already present, and a new file is created in STEP02. So if I submit this JCL twice, a new file is created both times. I am not able to understand how the file is getting deleted. And the strange thing is, if I run the JCL without STEP02, it gives MAXCC as 0, saying that the file was found in the catalog.
I was able to achieve my requirement by following code, but would still like to understand why and how my VSAM file gets deleted for LISTCAT.
//STEP02 EXEC PGM=IEFBR14,COND=(4,GT)
//DD01 DD DSN=BRTEST.FILE1,
// DISP=(MOD,CATLG,CATLG),
// LIKE=BRTEST.FILE2
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
Here is the SYSPRINT when only STEP01 is executed:
IDCAMS SYSTEM SERVICES TIME: 03:47:44
LISTCAT ENTRIES('BRTEST.FILE1')
CLUSTER ------- BRTEST.FILE1
IN-CAT --- CATALOG.TEST03
DATA ------- BRTEST.FILE1.DATA
IN-CAT --- CATALOG.TEST03
INDEX ------ BRTEST.FILE1.INDEX
IN-CAT --- CATALOG.TEST03
IDCAMS SYSTEM SERVICES TIME: 03:47:44
THE NUMBER OF ENTRIES PROCESSED WAS:
AIX -------------------0
ALIAS -----------------0
CLUSTER ---------------1
DATA ------------------1
GDG -------------------0
INDEX -----------------1
NONVSAM ---------------0
PAGESPACE -------------0
PATH ------------------0
SPACE -----------------0
USERCATALOG -----------0
TAPELIBRARY -----------0
TAPEVOLUME ------------0
TOTAL -----------------3
THE NUMBER OF PROTECTED ENTRIES SUPPRESSED WAS 0
IDC0001I FUNCTION COMPLETED, HIGHEST CONDITION CODE WAS 0
IDC0002I IDCAMS PROCESSING COMPLETE. MAXIMUM CONDITION CODE WAS 0
And when both steps are executed:
IDCAMS SYSTEM SERVICES TIME: 03:48:35
LISTCAT ENTRIES('BRTEST.FILE1')
IDC3012I ENTRY BRTEST.FILE1 NOT FOUND
IDC3009I ** VSAM CATALOG RETURN CODE IS 8 - REASON CODE IS IGG0CLEG-42
IDC1566I ** BRTEST.FILE1 NOT LISTED
IDCAMS SYSTEM SERVICES TIME: 03:48:35
THE NUMBER OF ENTRIES PROCESSED WAS:
AIX -------------------0
ALIAS -----------------0
CLUSTER ---------------0
DATA ------------------0
GDG -------------------0
INDEX -----------------0
NONVSAM ---------------0
PAGESPACE -------------0
PATH ------------------0
SPACE -----------------0
USERCATALOG -----------0
TAPELIBRARY -----------0
TAPEVOLUME ------------0
TOTAL -----------------0
THE NUMBER OF PROTECTED ENTRIES SUPPRESSED WAS 0
IDC0001I FUNCTION COMPLETED, HIGHEST CONDITION CODE WAS 4
IDC0002I IDCAMS PROCESSING COMPLETE. MAXIMUM CONDITION CODE WAS 4
The value for ZOS390RL variable is z/OS 02.01.00 and ZENVIR is ISPF 7.1MVS TSO.
May have an answer for you. Didn't think of it because it is a VSAM dataset, and the way you are trying to do it is unusual (to me).
There is/was a product called UCC11. I think it is now marketed by Computer Associates, and called CA-11 (or somesuch). I think you are using this product or something similar at your site.
If executed at the beginning of a JOB it would look for files specified as NEW and CATLG, and look to see if there was an existing file of the same name in the catalog. If there was, the existing file would be deleted.
This would obviate the need for an initial IEFBR14 step to delete such files.
I think that you are using this product, or something similar. Your file is being automatically deleted because STEP02 specifies it as NEW and CATLG. The product can't know that your IDCAMS step references the file, because the dataset name only appears in the SYSIN data (not on a DD statement), so your VSAM file is deleted before your IDCAMS step runs.
Changing the file to MOD as the initial disposition (MOD will add to an existing file and create a new file if none exists) will not cause such a product to delete the file.
Using the LIKE for a VSAM file will not obtain the CA-size and CI-SIZE from the model dataset. You will get default values for those, which may well impact on the performance of your programs. You cannot specify these values when defining a VSAM file in JCL. You also won't get buffer values from the model dataset, but you can specify those separately in the JCL (which you haven't).
Here is a description of what LIKE does for you: http://publibfp.dhe.ibm.com/cgi-bin/bookmgr/BOOKS/iea2b680/12.40?DT=20080604022956
The following attributes are copied from the model data set to the
new data set:
Data set organization
Record organization (RECORG) or
Record format (RECFM)
Record length (LRECL)
Key length (KEYLEN)
Key offset (KEYOFF)
Type, PDS, PDSE, basic format, extended format, large format, or HFS (DSNTYPE)
Space allocation (AVGREC and SPACE)
Unless you explicitly code the SPACE parameter for the new data set,
the system determines the space to be allocated for the new data
set by adding up the space allocated in the first three extents of the
model data set. Therefore, the space allocated for the new data set
will generally not match the space that was specified for the model
data set. Note that regardless of the units in which the model data
set was allocated, the new data set will be allocated in tracks. This
assumes that space was not specified on the JCL and is being picked up
from the model data set.
There are some other little "gotchas", like in the last paragraph, detailed in the link as well.
Unless you have strong reasons otherwise, I'd strongly suggest doing the whole thing in one IDCAMS step (as below).
I suspected it was going to be 1.12, 1.13 or 2.1 (2.01). IEFBR14 is, subtly, part of the OS now.
Exactly why you get this effect, I don't know. I don't have access to 2.1, so can't investigate myself.
IEFBR14 has changed, LIKE is not really intended for VSAM datasets (you'll get a lot of default values for things you may or may not want), and it's not really a "usual" way to do this. See Suggestions below.
Try adding a DDname to your IDCAMS step which just references the VSAM dataset, and see if that changes anything. Then use that DDname in an IDCAMS statement, and see if that changes anything.
Take all your results to your Sysprogs, and see if they can spot anything.
If not, it'll be PMR-time: http://www-01.ibm.com/support/docview.wss?uid=swg21507639
If you do raise a PMR, please update by adding an Answer with the resolution once you receive it.
Suggestions.
Find out how this task is done by other people at your site.
Have you tried using the VSAM file you have defined in that way? You should LISTCAT TEST.FILE1 and TEST.FILE2 and compare. If you look up LIKE in the JCL Reference, you will see that there are things which a VSAM DEFINE can do which you can't do for a VSAM file defined in the JCL using LIKE.
Unless there is some reason otherwise, I'd suggest you do the whole thing in one step with IDCAMS. See if the file exists, use IDCAMS's IF to test the CC from that, and only DEFINE if the file does not exist. You can use a MODEL (for instance, on your TEST.FILE2) to get everything which is similar to another file, and just override anything different that you need.
If you have a look here, http://pic.dhe.ibm.com/infocenter/zos/v1r13/index.jsp?topic=%2Fcom.ibm.zos.r13.idai200%2Fdefclu.htm, you will find a number of Modal Commands for IDCAMS which will give you everything you need to define if it is not there, and do something different (set the Condition Code, for instance) if it is.
Please still supply the requested information. It is an interesting question on the face of it, which may have a simple solution. But even with a solution, I don't think it is what you want.
What you want to do can be done entirely in the IDCAMS step. You can inspect the return code from a previous operation (i.e., the LISTCAT) and do something (like define a new cluster) if the code is greater than 0. If the second operation works, then set the MAXCC to 0 to tell your JCL that this step completed OK (and to let your Ops folks know this too).
Look for the IDCAMS 'IF'.
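A sketch of that single-step approach, using the dataset names from the question (hedged: the MODEL subparameter and the MAXCC handling are illustrative; adjust for your site's standards):
//STEP01   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  LISTCAT ENTRIES('BRTEST.FILE1')
  IF LASTCC > 0 THEN -
    DEFINE CLUSTER (NAME(BRTEST.FILE1) -
      MODEL(BRTEST.FILE2))
  IF MAXCC = 4 THEN SET MAXCC = 0
/*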

Intersystems Cache - Maintaining Object Code to ensure Data is Compliant with Object Definition

I am new to using InterSystems Cache and face an issue where I am querying data stored in Cache, exposed by classes which do not seem to accurately represent the data in the underlying system. The data stored in the globals is almost always larger than what is defined in the object code.
As such I get errors like the one below very frequently.
Msg 7347, Level 16, State 1, Line 2
OLE DB provider 'MSDASQL' for linked server 'cache' returned data that does not match expected data length for column '[cache]..[namespace].[tablename].columname'. The (maximum) expected data length is 5, while the returned data length is 6.
Does anyone have any experience with implementing some type of quality process to ensure that the object definitions (sql mappings) are maintained in such a way that they can accommodate the data which is being persisted in the globals?
Property columname As %String(MAXLEN = 5, TRUNCATE = 1) [ Required, SqlColumnNumber = 2, SqlFieldName = columname ];
In this particular example the system has the column defined with a max len of 5, however the data stored in the system is 6 characters long.
How can I proactively monitor and repair such situations.
/*
I did not create these object definitions in cache
*/
It's not completely clear what "monitor and repair" would mean for you, but:
How much control do you have over the database side? Cache runs code for a data-type on converting from a global to ODBC using the LogicalToODBC method of the data-type class. If you change the property types from %String to your own class, AppropriatelyNamedString, then you can override that method to automatically truncate, if that's what you want to do. It is possible to change all the %String property types programmatically using the %Library.CompiledClass class.
It is also possible to run code within Cache to find records with properties that are above the (somewhat theoretical) maximum length. This obviously would require full table scans. It is even possible to expose that code as a stored procedure.
Again, I don't know what exactly you are trying to do, but those are some options. They probably do require getting deeper into the Cache side than you would prefer.
As far as preventing the bad data in the first place, there is no general answer. Cache allows programmers to directly write to the globals, bypassing any object or table definitions. If that is happening, the code doing so must be fixed directly.
Edit: Here is code that might work in detecting bad data. It might not work if you are doing certain funny stuff, but it worked for me. It's kind of ugly because I didn't want to break it up into methods or tags. This is meant to run from a command prompt, so it would probably have to be modified for your purposes.
{
S ClassQuery=##CLASS(%ResultSet).%New("%Dictionary.ClassDefinition:SubclassOf")
I 'ClassQuery.Execute("%Library.Persistent") b q
While ClassQuery.Next(.sc) {
If $$$ISERR(sc) b Quit
S ClassName=ClassQuery.Data("Name")
I $E(ClassName)="%" continue
S OneClassQuery=##CLASS(%ResultSet).%New(ClassName_":Extent")
I '$IsObject(OneClassQuery) continue //may not exist
try {
I 'OneClassQuery.Execute() D OneClassQuery.Close() continue
}
catch
{
D OneClassQuery.Close()
continue
}
S PropertyQuery=##CLASS(%ResultSet).%New("%Dictionary.PropertyDefinition:Summary")
K Properties
s sc=PropertyQuery.Execute(ClassName) I 'sc D PropertyQuery.Close() continue
While PropertyQuery.Next()
{
s PropertyName=$G(PropertyQuery.Data("Name"))
S PropertyDefinition=""
S PropertyDefinition=##CLASS(%Dictionary.PropertyDefinition).%OpenId(ClassName_"||"_PropertyName)
I '$IsObject(PropertyDefinition) continue
I PropertyDefinition.Private continue
I PropertyDefinition.SqlFieldName=""
{
S Properties(PropertyName)=PropertyName
}
else
{
I PropertyName'="" S Properties(PropertyDefinition.SqlFieldName)=PropertyName
}
}
D PropertyQuery.Close()
I '$D(Properties) continue
While OneClassQuery.Next(.sc2) {
B:'sc2
S ID=OneClassQuery.Data("ID")
Set OneRowQuery=##class(%ResultSet).%New("%DynamicQuery:SQL")
S sc=OneRowQuery.Prepare("Select * FROM "_ClassName_" WHERE ID=?") continue:'sc
S sc=OneRowQuery.Execute(ID) continue:'sc
I 'OneRowQuery.Next() D OneRowQuery.Close() continue
S PropertyName=""
F S PropertyName=$O(Properties(PropertyName)) Q:PropertyName="" d
. S PropertyValue=$G(OneRowQuery.Data(PropertyName))
. I PropertyValue'="" D
.. S PropertyIsValid=$ZOBJClassMETHOD(ClassName,Properties(PropertyName)_"IsValid",PropertyValue)
.. I 'PropertyIsValid W !,ClassName,":",ID,":",PropertyName," has invalid value of "_PropertyValue
.. //I PropertyIsValid W !,ClassName,":",ID,":",PropertyName," has VALID value of "_PropertyValue
D OneRowQuery.Close()
}
D OneClassQuery.Close()
}
D ClassQuery.Close()
}
The simplest solution is to increase the MAXLEN parameter to 6 or larger. Caché only enforces MAXLEN and TRUNCATE when saving. Within other Caché code this is usually fine, but unfortunately ODBC clients tend to expect this to be enforced more strictly. The other option is to write your SQL like SELECT LEFT(columnname, 5)...
The simplest solution, which I use for all Integration Services packages, for example, is to create a query that casts all nvarchar or char data to the correct length. In this way, my data never fails for truncation.
Optional:
First run a query like: SELECT MAX(DATALENGTH(mycolumnname)) FROM cachenamespace.tablename
Your new query: SELECT CAST(mycolumnname AS VARCHAR(6)) AS mycolumnname,
CONVERT(VARCHAR(8000), memo_field) AS memo_field
FROM cachenamespace.tablename
Your pain of getting the data will be lessened but not eliminated.
If you use any type of OLE DB provider, or if you use an OPENQUERY in SQL Server, the casts must occur in the query sent to the InterSystems CACHE db, not in the outer query that retrieves data from the inner OPENQUERY.
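For example (a hedged sketch; the linked-server name cache is from the error message above, and the object names are placeholders):
SELECT *
FROM OPENQUERY(cache,
    'SELECT CAST(mycolumnname AS VARCHAR(6)) AS mycolumnname
       FROM namespace.tablename')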