Assigning a whole DataStructure its nullind array - db2

Some context before the question.
Imagine file FileA having around 50 fields of different types. Instead of all programs using the file directly, I tried having a service program, so the file could only be accessed through that service program. The programs calling the service would then receive a DataStructure based on the file structure, as an ExtName. I use SQL to retrieve the information, so, basically, the procedure goes like this:
Data structure shared by the service program:
D FileADS E DS ExtName(FileA) Qualified
Procedure called by programs:
P getFileADS B Export
D PI N
D PI_IDKey 9B 0 Const
D PO_DS LikeDS(FileADS)
D LocalDS E DS ExtName(FileA) Qualified
D NullInd S 5i 0 Dim(50) <-- Since 50 fields in fileA
//Code
Clear LocalDS;
Clear PO_DS;
exec sql
SELECT *
INTO :LocalDS :nullind
FROM FileA
WHERE FileA.ID = :PI_IDKey;
If SqlCod <> 0;
Return *Off;
EndIf;
PO_DS = LocalDS;
Return *On;
P getFileADS E
So, that procedure will return a datastructure filled with a record from FileA if it finds it.
Now my question: is there any way I can assign %nullind(field) = *On without specifying each of the 50 fields of my file?
Something like a loop
i = 1;
DoW (i <= 50);
if nullind(i) = -1;
%nullind(datastructure.field) = *On;
endif;
i++;
EndDo;
Cause let's face it, it'd be a pain to list each field of each file every time.
I know a simple chain(n) could do the trick
chain(n) PI_IDKey FileA FileADS;
but I really was looking to do it with SQL.
Thank you for your advice!
OS Version : 7.1

First, you'll be better off in the long run by eliminating SELECT * and supplying a SELECT list of the 50 field names.
Next, consider these two web pages -- Meaningful Names for Null Indicators and Embedded SQL and null indicators. The first shows an example of assigning names to each null indicator to match the associated field names. It's just a matter of declaring a based DS with names, based on the address of your null indicator array. The second points out how a null indicator array can be larger than needed, so future database changes won't affect results. (Bear in mind that the page shows a null array of 1000 elements, and the memory is actually relatively tiny even at that size. You can declare it smaller if you think it's necessary for some reason.)
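For illustration, here's a rough sketch of the technique the first page describes; the *_Null subfield names are hypothetical, one 5i 0 subfield per field of FileA, declared in the same order as the fields:
D NullInds S 5i 0 Dim(50)
D pNullInds S * Inz(%Addr(NullInds))
D NamedNulls DS Based(pNullInds) Qualified
D IDKey_Null 5i 0
D Name_Null 5i 0
D Address_Null 5i 0
// ...and so on, one subfield per field
Tests then read naturally, e.g. If NamedNulls.Name_Null = -1;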
You're creating a proc that you'll only write once. It's not worth saving the effort of listing the 50 fields. Maybe if you had many programs using this proc and you had to create the list each time it'd be a slight help to use SELECT *, but even then it's not a great idea.
A matching template DS for the 50 data fields can be defined in the /COPY member that will hold the proc prototype. The template DS will be available in any program that brings the proc prototype in. Any program that needs to call the proc can simply specify LIKEDS referencing the template to define its version in memory. The template DS should probably include the QUALIFIED keyword, and programs would then use their own DS names as the qualifying prefix. The null indicator array can be handled similarly.
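A hedged sketch of such a /COPY member (names are illustrative; the TEMPLATE keyword is available at 7.1, and a plain DS referenced only through LIKEDS works as well):
// In the /COPY member:
D FileADS_T E DS ExtName(FileA) Qualified Template
D getFileADS PR N
D PI_IDKey 9B 0 Const
D PO_DS LikeDS(FileADS_T)
// In any calling program:
D myRec DS LikeDS(FileADS_T)
If getFileADS(idKey : myRec);
// use myRec.SomeField here
EndIf;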
However, it's not completely clear what your actual question is. You show an example loop and ask if it'll work, but you don't say if you had a problem with it. It's an array, so a loop can be used much like you show. But it depends on what you're actually trying to accomplish with it.

For old school RPG, just include null flags in the data structure populated with the select statement (one flag column per data column):
select col1, case when col1 is null then 1 else 0 end, col2, case when col2 is null then 1 else 0 end, etc. into :dsfilewithnull from f where f.id = :id;
For old school RPG that can't handle nulls, remove them with the select statement:
select coalesce(col1,0), coalesce(col2,' '), coalesce(col3, :lowdate) into :dsfile from f where f.id = :id;
The second method would be easier to use in a legacy environment.
Pass the key by value to the procedure so you can use it like a built-in function.
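That is, define the parameter with VALUE in the prototype (a sketch):
D getFileADS PR N
D PI_IDKey 9B 0 Value
so a caller can pass a literal and use the return value directly in an expression, e.g. If getFileADS(12345 : myDS);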

One answer to your question would be to make the array part of a data structure, and assign *all'0' to the data structure.
dcl-ds nullIndDs;
nullInd Ind Dim(50);
end-ds;
nullIndDs = *all'0';

The answer by jmarkmurphy is an example of assigning all zeros to an array of indicators. For the example that you show in your question, you can do it this way:
D NullInd S 5i 0 dim(50)
/free
NullInd(*) = 1 ;
Nullind(*) = 0 ;
*inlr = *on ;
return ;
/end-free
That's a complete program that you can compile and test. Run it in debug and stop at the first statement. Display NullInd to see the initial value of its elements. Step through the first statement and display it again to see how the elements changed. Step through the next statement to see how things changed again.
As for "how to do it in SQL", that part doesn't make sense. SQL sets the values automatically when you FETCH a row. Other than that, the array is used by the host language (RPG in this case) to communicate values back to SQL. When a SQL statement runs, it again automatically uses whatever values were set. So, it either is used automatically by SQL for input or output, or is set by your host language statements. There is nothing useful that you can do 'in SQL' with that array.


Access locally scoped variables from within a string using parse or value (KDB / Q)

The following lines of Q code all throw an error, because when the statement "local" is parsed, the local variable is not in the correct scope.
{local:1; value "local"}[]
{[local]; value "local"}[1]
{local:1; eval parse "local"}[]
{[local]; eval parse "local"}[1]
Is there a way to reach the local variable from inside the parsed string?
Note: This is a simplification of the actual problem I'm grappling with, which is to write a function that executes a query, accepting a list of columns which it should return. I imagine the finished product looking something like this:
getData:{[requiredColumns;condition]
value "select ",(", " sv string[requiredColumns])," from myTable where someCol=condition"
}
The condition parameter in this query is the one that isn't recognised, and I do realise I could append its value rather than reference it inside a string, but the real query uses lots of local variables, including tables etc., so it's not as easy as just pulling all the variables out of the string before calling value on it.
I'm new to KDB and Q, so if anyone has a better way to achieve the same effect I'm happy to be schooled on the proper way to achieve this outcome in Q. I'd still be interested to know whether the variable access thing is possible, though.
In the first example, you are right that local is not within the correct scope, as value is looking for the global variable local.
One way to get around this is to use a namespace, which will define the variable globally, but can only be accessed by calling that namespace. In the modified example below I have defined local in the .ns namespace
{.ns.local:1; value ".ns.local"}[]
For the problem you are facing with selecting, if requiredColumns is a symbol list of columns you can just use the take operator # to select them.
getData:{[requiredColumns] requiredColumns#myTable}
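For instance, with a small hypothetical table:
q)myTable:([]id:`a`b; someCol:1 2; val:10 20)
q)getData:{[requiredColumns] requiredColumns#myTable}
q)getData[`id`val]
id val
------
a  10
b  20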
For more advanced queries using variables you may have to use the functional select form, explained here. This will allow you to include variables in the where and by clauses of the select statement.
The same example in functional form would be (no by clause, only select and where):
getData:{[requiredColumns;condition] requiredColumns:(), requiredColumns;
?[myTable;enlist (=;`someCol;condition);0b;requiredColumns!requiredColumns]}
The first line ensures that requiredColumns is a list even if the user enters a single column name.
value looks for variables in the global scope; that's why you are getting an error. Inside a function you can use local variables directly, as you are already doing.
Your function is mostly correct; it just needs a slight correction to append the condition (shown below). However, a better approach would be to use functional select in this case.
Using functional select:
q) t:([]id:`a`b; val:3 4)
q) gd: {?[`t;enlist (=;`val;y);0b;((),x)!(),x]}
q) gd[`id;3] / for single column
Output:
id
--
a
q) gd[`id`val;3] / for multiple columns
In case your condition column is of type symbol, then enlist your condition value like:
q) gd: {?[`t;enlist (=;`id;y);0b;((),x)!(),x]}
q) gd[`id;enlist `a]
You can use parse to get a functional form of qsql queries:
q) parse " select id,val from t where id=`a"
?
`t
,,(=;`id;,`a)
0b
`id`val!`id`val
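And since parse returns a parse tree, eval runs it, which is a handy way to check the functional form against the qsql original (same table t as above):
q)eval parse "select id,val from t where id=`a"
id val
------
a  3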
Using string concat (your function):
q)getData:{[requiredColumns;condition] value "select ",(", " sv string[requiredColumns])," from t where id=", .Q.s1 condition}
q) getData[enlist `id;`a] / for single column
q) getData[`id`val;`a] / for multi columns

Erlang mnesia equivalent of "select * from Tb"

I'm a total erlang noob and I just want to see what's in a particular table I have. I want to just "select *" from a particular table to start with. The examples I'm seeing, such as those in the official documentation, all have column restrictions, which I don't really want. I don't really know how to form the MatchHead or Guard to match anything (aka "*").
A very simple primer on how to just get everything out of a table would be very appreciated!
For example, you can use qlc (note: in a compiled module, qlc:q requires -include_lib("stdlib/include/qlc.hrl") for its parse transform; the shell understands it directly):
F = fun() ->
        Q = qlc:q([R || R <- mnesia:table(foo)]),
        qlc:e(Q)
    end,
mnesia:transaction(F).
The simplest way to do it is probably mnesia:dirty_match_object:
mnesia:dirty_match_object(foo, #foo{_ = '_'}).
That is, match everything in the table foo that is a foo record, regardless of the values of the fields (every field is '_', i.e. wildcard). Note that since it uses record construction syntax, it will only work in a module where you have included the record definition, or in the shell after evaluating rr(my_module) to make the record definition available.
(I expected mnesia:dirty_match_object(foo, '_') to work, but that fails with a bad_type error.)
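Put together as a tiny module, assuming foo was created with a record like this (the field names here are hypothetical):
-module(foo_dump).
-export([dump_foo/0]).

%% must match the attributes the mnesia table foo was created with
-record(foo, {id, name, value}).

dump_foo() ->
    %% every field defaults to the '_' wildcard, so everything matches
    mnesia:dirty_match_object(foo, #foo{_ = '_'}).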
To do it with select, call it like this:
mnesia:dirty_select(foo, [{'_', [], ['$_']}]).
Here, MatchHead is '_', i.e. match anything. The guards are [], an empty list, i.e. no extra limitations. The result spec is ['$_'], i.e. return the entire record. For more information about match specs, see the match specifications chapter of the ERTS user guide.
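If you'd rather stay inside a transaction than use the dirty variant, the same match spec works with mnesia:select/2:
mnesia:transaction(fun() ->
    mnesia:select(foo, [{'_', [], ['$_']}])
end).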
If an expression is too deep and gets printed with ... in the shell, you can ask the shell to print the entire thing by evaluating rp(EXPRESSION). EXPRESSION can either be the function call once again, or v(-1) for the value returned by the previous expression, or v(42) for the value returned by the expression preceded by the shell prompt 42>.

How can I do breaks and subtotals in a report?

I need to generate a business report using perl + Template Toolkit and LaTeX.
Things are working really well, but I am struggling with the problem of having breaks (for example page breaks, or special headers) and subtotals whenever a field changes.
So, for example, every time the field "category" changes, I'd need to have a total of sales for that category, and a header showing that another category listing is starting; and then do the same when the field "group" changes, with the added wrinkle that "group" is made up of categories, so the two things should nest.
I guess anyone that has built reports with Microsoft Access (or probably any other business reporting application) should be familiar with the problem.
Ideally this would be solved at a meta-level, so I don't have to rebuild the code every time, but only specify which fields should generate breaks or subtotals.
I am (voluntarily) constrained to LaTeX and TT: LaTeX because of the control it gives over typography, and the possibility of generating custom graphics, and TT (or anything else that works in perl) because of learning curves.
There's no built-in subtotaling feature in TT, but you could possibly put your data into a Data::Table object, which would give you some ability to handle subtotaling at the 'meta' level, as you say.
Depending on the number of columns involved, though, it might be just as simple to create local hashes to maintain running totals (NB: untested, example code only):
[%-
   MACRO printrow(rowtype, line) BLOCK;
      # however you print the row as LaTeX
      # rowtype is 'row', 'subtotal' or 'grandtotal' for formatting purposes
   END;

   SET sumcols = [ 'col3', 'col4', 'col5' ];  # cols to be accumulated
   SET s_tot = {}; SET g_tot = {};
   FOREACH i IN sumcols;
      SET s_tot.$i = 0;  # initialise
      SET g_tot.$i = 0;
   END;

   FOREACH row IN data;
      IF s_tot.col2 AND s_tot.col2 != row.col2;  # start of new group
         printrow('subtotal', s_tot);
         FOREACH i IN sumcols;
            SET s_tot.$i = 0;  # re-init
         END;
      END;
      printrow('row', row);
      SET s_tot.col2 = row.col2;  # keep track of group level
      FOREACH i IN sumcols;
         SET s_tot.$i = s_tot.$i + row.$i;
         SET g_tot.$i = g_tot.$i + row.$i;
      END;
   END;
   printrow('subtotal', s_tot);  # flush the last group's subtotal
   printrow('grandtotal', g_tot);
-%]
Of course, if you have more than a couple of grouping levels, this can get quite messy. You could make s_tot an array of hashes to manage each level, to avoid hard-coding the levels. That's left as an exercise for the reader, as they say.
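For what it's worth, the skeleton of that generalisation might start like this (untested, and the grouping column names are hypothetical):
[%-
   # one running-total hash per grouping level, outermost first
   SET grpcols = [ 'group', 'category' ];
   SET tots = [];
   FOREACH g IN grpcols; CALL tots.push({}); END;
   # per row: find the outermost level whose key changed, print the
   # subtotal rows from the innermost level up to it, then re-init.
-%]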

SQL -302 on OPEN cursor in SQLRPGLE

The problem
I've got a SQLRPGLE program that executes queries that look like this:
SELECT orapdt, oraptm, orodr#, c.ccctls, orbill, b.cuslmn, b.cusvrp, orocty, orost, o.cubzip, o.cucnty, ordcty, ordst, d.cubzip, d.cucnty
FROM order
LEFT JOIN cmtctlf c ON orbill = c.cccode
LEFT JOIN custmast b ON orbill = b.cucode
LEFT JOIN custmast o ON orldat = o.cucode
LEFT JOIN custmast d ON orcons = d.cucode
WHERE
orstat != 'C' AND
orbill IN ('ABCDE', 'VWXYZ', 'JKFRTE') AND
orapdt BETWEEN 2012365 AND 2013362 AND
o.cucnty = 'USA' AND
(o.cubzip LIKE '760%' OR o.cubzip LIKE '761%' OR o.cubzip LIKE '762%') AND
d.cubzip = '38652' AND
ordcty = 'NA' AND
ordst = 'MS' AND
d.cucnty = 'USA'
ORDER BY orapdt, oraptm, orodr#
Field definitions:
orapdt 7 0
oraptm 4a
orodr# 7a
c.ccctls 6a
orbill 6a
b.cuslmn 2a
b.cusvrp 3a
orocty 4a
orost 2a
o.cubzip 5a
o.cucnty 3a
ordcty 4a
ordst 2a
d.cubzip 5a
d.cucnty 3a
c.cccode 6a
b.cucode 6a
o.cucode 6a
d.cucode 6a
I see the following errors in my job log:
Field HVR0001 and value 1 not compatible. Reason 7.
Conversion error on host variable or parameter *N.
When I prompt for additional message information I'm told:
The attributes of variable field HVR0001 in query record format FORMAT0001 are not compatible with the attributes of value number 1. The value is *N. The reason code is 7.
7 -- Value contains numeric data that is not valid
and
Host variable or parameter *N or entry 1 in a descriptor area contains a value that cannot be converted to the attributes required by the statement. Error type 6 occurred.
6 -- Numeric data that is not valid.
These errors are triggered by opening the cursor:
...
exec sql PREPARE S1 FROM :sql_stmt;
exec sql DECLARE C1 SCROLL CURSOR FOR S1;
exec sql OPEN C1;
...
I also have QSQSVCDMP files in my outq filled with dump information. The only useful thing I see in there is a reference to CPF4278 and CPD4374
CPF4278 means Query definition template &1 not valid.
CPD4374 means Field &1 and value &3 not compatible. Reason &5.
Unfortunately the error message itself isn't there, only the strings "CPF4278" and "CPD4374".
In the program I monitor for SQL error codes and they are all the same:
SQLSTATE: 22023
SQLCODE: -302
SQLERRMC: <non-displayable character>*N
The error state/code means "A parameter or variable value is invalid."
What I've tried...
After much Googling I've tried:
removing the ORDER BY clause (on OPEN, data is fetched and ordered when there is an ORDER BY clause)
changing all LEFT JOINs to INNER JOINs (did this to make sure there were no NULLs in the result records from the right side)
adding " AND orapdt IS NOT NULL" to the WHERE clause
many more things that I've forgotten
What I'm asking...
How do I find out which field has bad data in it? I know that HVR0001 is invalid but which field is represented by HVR0001? I tried SELECTing fields in a different order but it's always HVR0001 that has an invalid value.
Ideally I'd like to be able to print out all HVR* fields/values so I can inspect them.
When I look at the compile listing there are no HVR* fields listed. There are some SQL_* fields listed and I can see that SQL_00011 is used to temporarily hold data that gets put into orapdt. SQL_00011 is defined exactly like orapdt (7,0 packed). That's the only numeric field in my query...
I feel like my problem is being caused by how the files are being joined, that somehow an invalid value (probably NULL) is being placed into my orapdt field.
I also think my problem has something to do with executing many of these queries one after the other (some of the WHERE specifics change for each query), because I can take one of the queries that fails, put it into its own program, run it, and it works fine.
This is on DB2 for i (V6R1) and all files involved were created using DDS.
Edit:
Here is the host variable (data structure) and the two external data structures needed for the LIKE statements:
d eds_custmast e ds extname('CUSTMAST') inz
d eds_order e ds extname('ORDER') inz
d o ds
d orapdt like(ORAPDT)
d oraptm like(ORAPTM)
d orodr# like(ORODR#)
d orctls like(CUCODE)
d orbill like(ORBILL)
d orslmn like(CUSLMN)
d orcsr like(CUSVRP)
d orocty like(OROCTY)
d orost like(OROST)
d orozip like(CUBZIP)
d orocntry like(CUCNTY)
d ordcty like(ORDCTY)
d ordst like(ORDST)
d ordzip like(CUBZIP)
d ordcntry like(CUCNTY)
// Define an array to indicate nulls...
d o1nv s 3i 0 dim(15)
And here's the fetch statement that actually gets the data:
dow sqlcode = *zeros;
exec sql FETCH NEXT FROM C1 INTO :o :o1nv;
if sqlcode = *zeros;
// process the data.
endif;
enddo;
exec sql CLOSE C1;
I didn't include this before simply because the error occurs when I'm OPENing the cursor, not FETCHing a row. The OPEN statement shouldn't know anything about the o data structure.
As for what changes in the WHERE clause - all of it is dynamically built (and thus can change) other than:
orstat != 'C' AND orapdt BETWEEN 2012365 AND 2013362
It's not at all easy to find out what the actual error is. I tend to copy statements like these into IBM i Navigator and use Visual Explain to try to get a grasp of what decisions the optimiser is making. Another way to do this is to do a STRDBG and look at the job log. When STRDBG is in effect, the optimiser puts informational messages into the job log. But even then, it can be tough to puzzle out.
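For reference, the STRDBG route looks roughly like this, run from the job that executes the query (a sketch):
STRDBG UPDPROD(*YES)
/* call the SQLRPGLE program; the optimiser now writes */
/* informational messages into this job's log          */
DSPJOBLOG
ENDDBG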
In this case, there's only one numeric column, orapdt. Try the query without that column and see if that's the culprit.
Since ORAPDT is your only numeric column, the problem must lie there.
The issue is in the way DDS defined files work. The validity of values is not checked when being written into DDS defined files, so it appears you have non-numeric data in ORAPDT on one or more records. SQL does not like this, and throws an error.
SQL (DDL) defined tables validate the values before they are written, thus protecting the integrity of your database better.
To solve your problem, find the offending record(s) and fix them or delete them.
Assuming the error comes from orapdt, you could flush it out by selecting a substitute value in its place, replacing null or garbage values with sentinel numbers, e.g. null = 9999999, non-numeric = 8888888:
SELECT case when orapdt is null
then 9999999
when TRANSLATE(SUBSTR(orapdt,1,LENGTH(orapdt)-1),' ','0123456789',' ') <>' '
then 8888888
else orapdt
end
, oraptm,
Or check through STRSQL or Run SQL Scripts for the offending records:
SELECT orapdt, oraptm, orodr#,
...
WHERE ( orapdt is null or TRANSLATE(SUBSTR(orapdt,1,LENGTH(orapdt)-1),' ','0123456789',' ') <>' ' ) AND
orstat != 'C' AND
......
What seems to be the issue...
The code I posted in my question is in program A. Program A calls (via CALLP) program B. Nothing out of the ordinary there.
Program A uses embedded SQL declaring a prepared statement called S1 and a scrollable cursor called C1. Program B also happens to declare a prepared statement called S1 and a scrollable cursor called C1.
What appears to be happening is the cursors are interfering with each other because they have the same name. My belief is that the query being executed in program B is fetching data that is valid for itself, but is invalid for the query defined in program A. So when program A scrolls through the results of its query and calls program B, the query executed by program B attempts to put invalid values in fields associated with program A, and this only happens when the cursor names are the same in both programs.
All I did was give the cursors in both programs unique names (PGMA_C1 and PGMB_C1 for instance) and the errors stopped happening. Nothing else changed, just the cursor names. This goes against the information I found here (http://pic.dhe.ibm.com/infocenter/iseries/v6r1m0/index.jsp?topic=/rzala/rzalaccl.htm)
“Scope of a cursor: The scope of cursor-name is the source program in which it is defined; that is, the program submitted to the precompiler. Thus, a cursor can only be referenced by statements that are precompiled with the cursor declaration. For example, a program called from another separately compiled program cannot use a cursor that was opened by the calling program.”
Of course that statement seems to be contradicted by this one:
A cursor can only be referred to in the same instance of the program in the program stack unless CLOSQLCSR(*ENDJOB), CLOSQLCSR(*ENDSQL), or CLOSQLCSR(*ENDACTGRP) is specified on the CRTSQLxxx commands.
If CLOSQLCSR(*ENDJOB) is specified, the cursor can be referred to by any instance of the program on the program stack.
If CLOSQLCSR(*ENDSQL) is specified, the cursor can be referred to by any instance of the program on the program stack until the last SQL program on the program stack ends.
If CLOSQLCSR(*ENDACTGRP) is specified, the cursor can be referred to by all instances of the module in the activation group until the activation group ends.
But in our case both program A and program B have CLOSQLCSR(*ENDMOD), so the two cursors shouldn't be aware of each other.
Unfortunately I don't have the time to dig into this any deeper. I have confirmed that simply giving each program a unique cursor name solves our problem.
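For concreteness, the fix in program A was just the rename (program B got PGMB_C1 the same way):
exec sql DECLARE PGMA_C1 SCROLL CURSOR FOR S1;
exec sql OPEN PGMA_C1;
...
exec sql FETCH NEXT FROM PGMA_C1 INTO :o :o1nv;
...
exec sql CLOSE PGMA_C1;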
Before I figured out that using unique cursor names would fix our problem I did comprehensive testing of all our data. Every field in every record in every file used by these two programs contains valid data. Based on the error message I was expecting there to be a NULL or some other invalid character somewhere but that wasn't the case.
I appreciate your replies and suggestions, +1 all around :-)

Intersystems Cache - Maintaining Object Code to ensure Data is Compliant with Object Definition

I am new to using InterSystems Caché and face an issue where I am querying data stored in Caché, exposed by classes which do not seem to accurately represent the data in the underlying system. The data stored in the globals is almost always larger than what is defined in the object code.
As such I get errors like the one below very frequently.
Msg 7347, Level 16, State 1, Line 2
OLE DB provider 'MSDASQL' for linked server 'cache' returned data that does not match expected data length for column '[cache]..[namespace].[tablename].columname'. The (maximum) expected data length is 5, while the returned data length is 6.
Does anyone have any experience with implementing some type of quality process to ensure that the object definitions (SQL mappings) are maintained in such a way that they can accommodate the data which is being persisted in the globals?
Property columname As %String(MAXLEN = 5, TRUNCATE = 1) [ Required, SqlColumnNumber = 2, SqlFieldName = columname ];
In this particular example the system has the column defined with a max len of 5, however the data stored in the system is 6 characters long.
How can I proactively monitor and repair such situations?
/*
I did not create these object definitions in cache
*/
It's not completely clear what "monitor and repair" would mean for you, but:
How much control do you have over the database side? Cache runs code for a data-type on converting from a global to ODBC using the LogicalToODBC method of the data-type class. If you change the property types from %String to your own class, AppropriatelyNamedString, then you can override that method to automatically truncate. If that's what you want to do. It is possible to change all the %String property types programmatically using the %Library.CompiledClass class.
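A deliberately simplified sketch of that idea (package name hypothetical; a production version would use a method generator so each property's own MAXLEN parameter is read instead of a hard-coded 5):
/// Hypothetical %String subclass that truncates on its way out to ODBC.
Class App.AppropriatelyNamedString Extends %String
{

/// Simplified: hard-codes the length; a real version would be a method
/// generator reading the MAXLEN parameter of the property using this type.
ClassMethod LogicalToODBC(%val As %String = "") As %String
{
    Quit $Extract(%val, 1, 5)
}

}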
It is also possible to run code within Cache to find records with properties that are above the (somewhat theoretical) maximum length. This obviously would require full table scans. It is even possible to expose that code as a stored procedure.
Again, I don't know what exactly you are trying to do, but those are some options. They probably do require getting deeper into the Cache side than you would prefer.
As far as preventing the bad data in the first place, there is no general answer. Cache allows programmers to directly write to the globals, bypassing any object or table definitions. If that is happening, the code doing so must be fixed directly.
Edit: Here is code that might work in detecting bad data. It might not work if you are doing certain funny stuff, but it worked for me. It's kind of ugly because I didn't want to break it up into methods or tags. This is meant to run from a command prompt, so it would probably have to be modified for your purposes.
{
    // iterate over every persistent class (skip system % classes)
    S ClassQuery=##CLASS(%ResultSet).%New("%Dictionary.ClassDefinition:SubclassOf")
    I 'ClassQuery.Execute("%Library.Persistent") b  q
    While ClassQuery.Next(.sc) {
        If $$$ISERR(sc) b  Quit
        S ClassName=ClassQuery.Data("Name")
        I $E(ClassName)="%" continue
        S OneClassQuery=##CLASS(%ResultSet).%New(ClassName_":Extent")
        I '$IsObject(OneClassQuery) continue  //may not exist
        try {
            I 'OneClassQuery.Execute() D OneClassQuery.Close() continue
        }
        catch
        {
            D OneClassQuery.Close()
            continue
        }
        // build a map of SQL field name -> property name for this class
        S PropertyQuery=##CLASS(%ResultSet).%New("%Dictionary.PropertyDefinition:Summary")
        K Properties
        s sc=PropertyQuery.Execute(ClassName) I 'sc D PropertyQuery.Close() continue
        While PropertyQuery.Next()
        {
            s PropertyName=$G(PropertyQuery.Data("Name"))
            S PropertyDefinition=""
            S PropertyDefinition=##CLASS(%Dictionary.PropertyDefinition).%OpenId(ClassName_"||"_PropertyName)
            I '$IsObject(PropertyDefinition) continue
            I PropertyDefinition.Private continue
            I PropertyDefinition.SqlFieldName=""
            {
                S Properties(PropertyName)=PropertyName
            }
            else
            {
                I PropertyName'="" S Properties(PropertyDefinition.SqlFieldName)=PropertyName
            }
        }
        D PropertyQuery.Close()
        I '$D(Properties) continue
        // walk the extent; re-read each row via SQL and run every value
        // through the property's generated <Property>IsValid method
        While OneClassQuery.Next(.sc2) {
            B:'sc2
            S ID=OneClassQuery.Data("ID")
            Set OneRowQuery=##class(%ResultSet).%New("%DynamicQuery:SQL")
            S sc=OneRowQuery.Prepare("Select * FROM "_ClassName_" WHERE ID=?") continue:'sc
            S sc=OneRowQuery.Execute(ID) continue:'sc
            I 'OneRowQuery.Next() D OneRowQuery.Close() continue
            S PropertyName=""
            F  S PropertyName=$O(Properties(PropertyName)) Q:PropertyName=""  d
            . S PropertyValue=$G(OneRowQuery.Data(PropertyName))
            . I PropertyValue'="" D
            .. S PropertyIsValid=$ZOBJClassMETHOD(ClassName,Properties(PropertyName)_"IsValid",PropertyValue)
            .. I 'PropertyIsValid W !,ClassName,":",ID,":",PropertyName," has invalid value of "_PropertyValue
            .. //I PropertyIsValid W !,ClassName,":",ID,":",PropertyName," has VALID value of "_PropertyValue
            D OneRowQuery.Close()
        }
        D OneClassQuery.Close()
    }
    D ClassQuery.Close()
}
The simplest solution is to increase the MAXLEN parameter to 6 or larger. Caché only enforces MAXLEN and TRUNCATE when saving. Within other Caché code this is usually fine, but unfortunately ODBC clients tend to expect this to be enforced more strictly. The other option is to write your SQL like SELECT LEFT(columnname, 5)...
The simplest solution, which I use for all Integration Services packages for example, is to create a query that casts all nvarchar or char data to the correct length. That way, my data never fails due to truncation.
Optional:
First run a query like: SELECT Max(datalength(mycolumnName)) from cachenamespace.tablename
Your new query: SELECT cast(mycolumnname as varchar(6)) as mycolumnname,
convert(varchar(8000), memo_field) AS memo_field
from cachenamespace.tablename
Your pain of getting the data will be lessened but not eliminated.
If you use any type of OLE DB provider, or if you use an OPENQUERY in SQL Server, the casts must occur in the query sent to the InterSystems Caché db, not in the outer query that retrieves data from the inner OPENQUERY.