Very large fields in AS400 iSeries database - DB2

I would like to save a large XML string (possibly longer than 32K or 64K) into an AS400 file field. Either DDS or SQL files would be OK. Example of SQL file below.
CREATE TABLE MYLIB/PRODUCT
(PRODCODE DEC (5 ) NOT NULL WITH DEFAULT,
PRODDESC CHAR (30 ) NOT NULL WITH DEFAULT,
LONGDESC CLOB (70K ) ALLOCATE(1000) NOT NULL WITH DEFAULT)
We would use RPGLE to read and write to fields.
The goal is to then pull out data via ODBC connection on a client side.
AS400 character fields seem to have a 32K limit, so this is not a great option.
What options do I have? I have been reading up on CLOBs, but there appear to be restrictions on writing large strings to CLOBs and on reading CLOB fields remotely. Note that the client is (still) on V5R4 of the AS400 OS.
thanks!
Charles' answer below shows how to extract data. I would like to insert data. This code runs, but throws a '22501' SQL error ('the length control field of a variable length string is negative or greater than the maximum'): longdesc_len is never set before the INSERT, which the working example further down corrects.
D wLongDesc s 65531a varying
D longdesc s sqltype(CLOB:65531)
 /free
   //eval longdesc = *ALL'123';
   eval wLongDesc = '123';
   exec SQL
     INSERT INTO PRODUCT (PRODCODE, PRODDESC, LONGDESC)
     VALUES (123, 'Product Description', :LongDesc);
   if %subst(sqlstt:1:2) <> '00';
     // an error occurred.
   endif;
   // get length explicitly; these variables are set up by the pre-processor
   longdesc_len = %len(%trim(longdesc_data));
   wLongDesc = %subst(longdesc_data:1:longdesc_len);
 /end-free
C Eval *INLR = *on
C Return
Additional question: Is this technique suitable for storing data which I want to extract via ODBC connection later? Does ODBC read CLOB as pointer or can it pull out text?

At v5r4, RPGLE actually supports 64K character variables.
However, the DB is limited to 32K for regular char/varchar fields.
You'd need to use a CLOB for anything bigger than 32K.
If you can live with 64K (or so):
CREATE TABLE MYLIB/PRODUCT
(PRODCODE DEC (5 ) NOT NULL WITH DEFAULT,
PRODDESC CHAR (30 ) NOT NULL WITH DEFAULT,
LONGDESC CLOB (65531) ALLOCATE(1000) NOT NULL WITH DEFAULT)
You can use RPGLE SQLTYPE support
D code S 5s 0
d wLongDesc s 65531a varying
D longdesc s sqltype(CLOB:65531)
 /free
   exec SQL
     select prodcode, longdesc
       into :code, :longdesc
       from mylib/product
       where prodcode = :mykey;
   wLongDesc = %subst(longdesc_data:1:longdesc_len);
   DoSomething(wLongDesc);
The pre-compiler will replace longdesc with a DS defined like so:
D longdesc ds
D longdesc_len 10u 0
D longdesc_data 65531a
You could simply use it directly, making sure to only use up to longdesc_len, or convert it to a VARYING as I've done above.
If you absolutely must handle larger than 64K:
1. Upgrade to a supported version of the OS (16MB variables supported)
2. Access the CLOB contents via an IFS file using a file reference
Option 2 is one I've never seen used, and I can't find any examples; I've just seen it mentioned in this old article:
http://www.ibmsystemsmag.com/ibmi/developer/general/BLOBs,-CLOBs-and-RPG/?page=2

This example shows how to write to a CLOB field in a DB2 database, with help from Charles' answer and Mr Murphy's feedback.
* ----------------------------------------------------------------------
* Create table with CLOB:
* CREATE TABLE MYLIB/PRODUCT
* (MYDEC DEC (5 ) NOT NULL WITH DEFAULT,
* MYCHAR CHAR (30 ) NOT NULL WITH DEFAULT,
* MYCLOB CLOB (65531) ALLOCATE(1000) NOT NULL WITH DEFAULT)
* ----------------------------------------------------------------------
D PRODCODE S 5i 0
D PRODDESC S 30a
D i S 10i 0
D wLongDesc s 65531a varying
D longdesc s sqltype(CLOB:65531)
D* Note that the variables longdesc_data and longdesc_len
D* get created automatically by the SQL pre-processor.
 /free
   eval wLongDesc = '123';
   longdesc_data = wLongDesc;
   longdesc_len = %len(%trim(wLongDesc));
   exec SQL set option commit = *none;
   exec SQL
     INSERT INTO PRODUCT (MYDEC, MYCHAR, MYCLOB)
     VALUES (123, 'Product Description', :longdesc);
   if %subst(sqlstt:1:2) <> '00';
     // an error occurred.
   endif;
   Eval *INLR = *on;
   Return;
 /end-free
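As for the additional ODBC question: in my experience, ODBC drivers generally return CLOB columns as long character data rather than a raw pointer, but older clients can be fussy about LOB types. A hedged workaround, assuming the text fits under the 32K VARCHAR ceiling, is to cast the LOB inside the query so the client only ever sees plain character data (column names taken from the example above):
SELECT MYDEC,
       MYCHAR,
       CAST(MYCLOB AS VARCHAR(32000)) AS MYCLOB_TXT
FROM MYLIB/PRODUCT
Anything longer than the cast length would be truncated, so this is only a stopgap for older clients.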

Related

How to use IIf function inside IN Clause?

I have a stored procedure as below:
ALTER PROCEDURE [dbo].[usp_testProc]
    (@Product NVARCHAR(200) = '',
     @BOMBucket NVARCHAR(100) = '')
--exec usp_testProc '','1'
AS
BEGIN
    SELECT *
    FROM Mytbl x
    WHERE 1 = 1
      AND (@Product IS NULL OR @Product = '' OR x.PRODUCT = @Product)
      AND (@BOMBucket IS NULL OR @BOMBucket = '' OR CAST(x.BOMBucket AS NVARCHAR(100)) IN (IIF(@BOMBucket != '12+', @BOMBucket, '13,14')))
END
Everything else is working fine except when I pass the bucket value 12+. It should ideally show the results for buckets 13 and 14, but the result set is blank.
I know IN expects values like ('13','14'), but somehow I am not able to fit that into the procedure.
You can express that logic in a Boolean expression.
...
(@BOMBucket <> '12+'
 AND cast(x.BOMBucket AS nvarchar(100)) = @BOMBucket
 OR @BOMBucket = '12+'
 AND cast(x.BOMBucket AS nvarchar(100)) IN ('13', '14'))
...
But casting the column prevents indexes from being used. You should rather cast the other operand, as in:
x.BOMBucket = cast(@BOMBucket AS integer)
But then you have the problem that the input might not be a string representing an integer; it can be any string, and that would cause an error when casting. You can circumvent that with try_cast(), which was introduced with SQL Server 2012, so it should be available to you. Alternatively, rethink your approach overall and pass a table variable with the wanted BOMBucket values as integers instead, as sketched below.
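A minimal sketch of that table-variable idea, reusing the names from the question (the ISNUMERIC guard is my assumption, and it has well-known quirks; try_cast() would be cleaner):
DECLARE @Buckets TABLE (BOMBucket int PRIMARY KEY);

IF @BOMBucket = '12+'
    INSERT INTO @Buckets VALUES (13), (14);
ELSE IF ISNUMERIC(@BOMBucket) = 1
    INSERT INTO @Buckets VALUES (CAST(@BOMBucket AS int));

SELECT *
FROM Mytbl x
WHERE NOT EXISTS (SELECT 1 FROM @Buckets)              -- empty filter: keep all rows
   OR x.BOMBucket IN (SELECT BOMBucket FROM @Buckets); -- integer compare, index friendly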

How to use CASE clause (DB2) to display values from a different table?

I'm working at a bank, so I had to adjust the column names and information in the query before posting them externally; if there are any weird artifacts, that is why.
I'm trying to use a CASE expression to display data from a different table. I know this is a workaround, but due to certain circumstances I'm obligated to use it; plus, it has become interesting to figure out whether there's an actual solution.
The error I'm receiving for the following query is:
"ERROR [21000] [IBM][CLI Driver][DB2] SQL0811N The result of a scalar
fullselect, SELECT INTO statement, or VALUES INTO statement is more
than one row."
select bank_num, branch_num, account_num, client_id,
       CASE
           WHEN exists(select *
                       from bank.services BS
                       where ACCS.client_id = BS.sifrur_lakoach)
               THEN (select username
                     from bank.services BS
                     where BS.client_id = ACCS.client_id)
           ELSE 'NONE'
       END username_new
from bank.accounts accs
where bank_num = 431 and branch_num = 170
EDIT:
AFAIK we're using DB2 v9.7:
DSN11015 - DB21085I Instance "DB2" uses "64" bits and DB2 code release "SQL09075" with
level identifier "08060107".
Informational tokens are "DB2 v9.7.500.702", "s111017", "IP23287", and Fix Pack "5".
Use the LISTAGG function to aggregate all matching usernames into a single value, so the scalar subquery returns only one row:
select bank_num, branch_num, account_num, client_id,
       CASE
           WHEN exists(select *
                       from bank.services BS
                       where ACCS.client_id = BS.sifrur_lakoach)
               THEN (select LISTAGG(username, ', ')
                     from bank.services BS
                     where BS.client_id = ACCS.client_id)
           ELSE 'NONE'
       END username_new
from bank.accounts accs
where bank_num = 431 and branch_num = 170

like with nvarchar is not working correctly?

Using Microsoft SQL Server 2012 - 11.0.5058.0 (X64) Express Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1) (Hypervisor)
with this creation/fill script:
CREATE TABLE tbl (col CHAR (32) )
insert into tbl values ('test')
These kinds of statements:
declare @var varchar(32) = 'test'
delete from tbl where col like @var
or
delete from tbl where col like 'test'
actually delete the row, but why does this one:
declare @nvar nvarchar(32) = 'test'
delete from tbl where col like @nvar
not delete the row?
According to this page... https://msdn.microsoft.com/en-us/library/ms179859.aspx
When you use Unicode data (nchar or nvarchar data types) with LIKE,
trailing blanks are significant;
Since you are using CHAR data type with length 32, the actual data stored is "test" + 28 spaces.
In your comparison, you are mixing CHAR and nvarchar. Because of the differing data types, SQL Server converts the CHAR data type to NCHAR to perform the comparison.
If you change the data type of the column to VARCHAR, your code works. You could also change your code to:
delete from tbl where col like @nvar + '%'
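A quick demonstration of the padding at work, assuming the tbl/col definitions from the question:
DECLARE @nvar nvarchar(32) = 'test';

SELECT DATALENGTH(col) AS bytes_stored FROM tbl;      -- 32: 'test' plus 28 trailing blanks
SELECT COUNT(*) FROM tbl WHERE col LIKE @nvar;        -- 0: trailing blanks are significant with Unicode LIKE
SELECT COUNT(*) FROM tbl WHERE col LIKE @nvar + '%';  -- 1: the wildcard absorbs the padding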

Fast Array Inserts with Postgres

In Oracle OCI and OCCI there are API facilities to perform array inserts: you build up an array of values in the client and send this array along with a prepared statement to the server, inserting thousands of entries into a table in a single shot, which results in huge performance improvements in some scenarios. Is there anything similar in PostgreSQL?
I am using the stock PostgreSQL C API.
Some pseudo-code to illustrate what I have in mind:
stmt = con->prepare("INSERT INTO mytable VALUES ($1, $2, $3)");
pg_c_api_array arr(stmt);
for triplet(a, b, c) in mylongarray:
    pg_c_api_variant var = arr.add();
    var.bind(1, a);
    var.bind(2, b);
    var.bind(3, c);
stmt->bindarray(arr);
stmt->exec()
PostgreSQL has similar functionality: the COPY statement and the COPY API. It is very fast.
libpq documentation
/* two rows of three tab-separated columns, each row terminated by a newline */
char *data = "10\t20\t40\n20\t30\t40\n";
pres = PQexec(pconn, "COPY mytable FROM stdin");
/* can be called repeatedly */
copy_result = PQputCopyData(pconn, data, strlen(data));
if (copy_result != 1)
{
fprintf(stderr, "Copy to target table failed: %s\n",
PQerrorMessage(pconn));
EXIT;
}
if (PQputCopyEnd(pconn, NULL) == -1)
{
fprintf(stderr, "Copy to target table failed: %s\n",
PQerrorMessage(pconn));
EXIT;
}
pres = PQgetResult(pconn);
if (PQresultStatus(pres) != PGRES_COMMAND_OK)
{
fprintf(stderr, "Copy to target table failed:%s\n",
PQerrorMessage(pconn));
EXIT;
}
PQclear(pres);
As Pavel Stehule points out, there is the COPY command and, when using libpq in C, associated functions for transmitting the copy data. I haven't used these. I mostly program against PostgreSQL in Python, and have used similar functionality from psycopg2. It's extremely simple:
conn = psycopg2.connect(CONN_STR)
cursor = conn.cursor()
f = open('data.tsv')
cursor.copy_from(f, 'incoming')
f.close()
In fact I've often replaced open with a file-like wrapper object that performs some basic data cleaning first. It's pretty seamless.
I like this way of creating thousands of rows in a single command:
INSERT INTO mytable SELECT UNNEST($1), UNNEST($2), UNNEST($3);
Bind an array of the values of column 1 to $1, an array of the values of column 2 to $2, etc. Providing the values as columns may seem a bit strange at first when you are used to thinking in rows.
You need PostgreSQL >= 8.4 for UNNEST or your own function to convert arrays into sets.
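A minimal literal-value sketch of the same idea, assuming a three-column table like the pseudo-code in the question (from C you would bind the three arrays as $1 to $3, e.g. via PQexecParams):
-- each array supplies one column; unnest() expands them in lockstep
INSERT INTO mytable
SELECT unnest(ARRAY[1, 2, 3]),
       unnest(ARRAY['a', 'b', 'c']),
       unnest(ARRAY[10, 20, 30]);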

Oracle error ORA-01722 while updating DECIMAL value

I'm using ODP to update an Oracle 10g DB with no success updating decimal values.
Ex:
UPDATE usertable.fiche SET DT_MAJ = '20110627',var = 60.4 WHERE NB = '2143'
Result: 604 in the var column ('.' disappears)
UPDATE usertable.fiche SET DT_MAJ = '20110627',var = 60,4 WHERE NB = '2143'
Result: INVALID NUMBER
UPDATE usertable.fiche SET DT_MAJ = '20110627',var = '60,4' WHERE NB = '2143'
Result: INVALID NUMBER
I also tried to use TO_NUMBER function without any success.
Any idea on the correct format I should use?
Thanks.
You didn't give us much to go on (only the update statements, not the casting of types or whatnot), but here is a test case that shows how to do it.
create table numTest(numA number(3) ,
numB number(10,8) ,
numC number(10,2) )
/
--test insert
insert into numTest(numA, numB, numC) values (123, 12.1241, 12.12)
/
select * from numTest
/
/*
NUMA NUMB NUMC
---------------------- ---------------------- ----------------------
123 12.1241 12.12
*/
--delete to start clean
rollback
/
/*by marking these table.col%type we can change the table type and not have to worry about changing these in the future!*/
create or replace procedure odpTestNumberInsert(
numA_in IN numTest.numA%type ,
numB_in IN numTest.numB%type ,
numC_in IN numTest.numC%type)
AS
BEGIN
insert into numTest(numA, numB, numC) values (numA_in, numB_in, numC_in) ;
END odpTestNumberInsert ;
/
begin
odpTestNumberInsert(numA_in => 10
,numB_in => 12.55678
,numC_in => 13.13);
odpTestNumberInsert(numA_in => 20
,numB_in => 30.667788
,numC_in => 40.55);
end ;
/
select *
from numTest
/
/*
NUMA NUMB NUMC
---------------------- ---------------------- ----------------------
10 12.55678 13.13
20 30.667788 40.55
*/
rollback
/
Okay, so we have created a table, got data into it (and removed it), and created a procedure to verify it works (then rolled back the changes), and all looks good. So let's go to the .NET side (I'll assume C#):
OracleCommand cmd = new OracleCommand("odpTestNumberInsert", con);
cmd.CommandType = CommandType.StoredProcedure;
cmd.BindByName = true;
OracleParameter oparam0 = cmd.Parameters.Add("numA_in", OracleDbType.Int64);
oparam0.Value = 5 ;
oparam0.Direction = ParameterDirection.Input;
decimal deciVal = (decimal)55.556677;
OracleParameter oparam1 = cmd.Parameters.Add("numB_in", OracleDbType.Decimal);
oparam1.Value = deciVal ;
oparam1.Direction = ParameterDirection.Input;
OracleParameter oparam2 = cmd.Parameters.Add("numC_in", OracleDbType.Decimal);
oparam2.Value = 55.66 ;
oparam2.Direction = ParameterDirection.Input;
cmd.ExecuteNonQuery ();
con.Close();
con.Dispose();
And then to finish things off:
select *
from numTest
/
NUMA NUMB NUMC
---------------------- ---------------------- ----------------------
5 55.556677 55.66
all of our data was inserted.
Without more code on your part, I would recommend that you verify that the correct parameters are being passed in and associated with the insert; the above proves it works.
You should not re-cast your variables via TO_NUMBER when you can set the correct types while creating the parameters.
I found the problem just after posting my question! I was not looking at the right place: the Oracle update was not involved at all. The problem was in the Decimal.Parse method I was using to convert my input string (containing a comma as the decimal separator) into the decimal number (with a dot as the decimal separator) that I wanted to write to the DB. The system culture is not the same on my own development computer as on the client computer, even though they both run in the same country. So the parse worked perfectly on my computer but was dropping the decimal character in the client production environment. I finally just put in place a replace of comma by dot, and everything works now. Thanks again for your time.
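For the record, the separator ambiguity can also be pinned down on the database side, which may be why the earlier TO_NUMBER attempts failed: TO_NUMBER accepts an explicit NLS clause. A minimal illustration using the value from the question:
-- ',' is declared as the decimal separator, so session NLS settings no longer matter
SELECT TO_NUMBER('60,4', '999D9', 'NLS_NUMERIC_CHARACTERS='',.''') AS val
FROM dual;   -- returns 60.4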