Query to check whether a table is journaled in DB2

I am new to DB2.
Is there a query to check whether a table is journaled in DB2 or not? And if it is journaled, what is the name of the journal?
I found this query, which finds all journals in library MJATST:
SELECT * FROM TABLE (QSYS2.OBJECT_STATISTICS('MJATST ','JRN') ) AS X
but I couldn't find something similar for tables in a schema.

I'm not aware of any and a quick search of the likely catalogs didn't turn up a way.
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/db2/rbafzcatalog.htm
The journal information is available from the Retrieve Object Description (QUSROBJD) API:
http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_71/apis/qusrobjd.htm
You could wrap that API in a UDF.
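For what it's worth, on more recent IBM i releases QSYS2.OBJECT_STATISTICS itself reports journaling details for *FILE objects. This is only a hedged sketch - it assumes your release returns the JOURNALED, JOURNAL_LIBRARY and JOURNAL_NAME result columns:
SELECT OBJNAME, JOURNALED, JOURNAL_LIBRARY, JOURNAL_NAME
  FROM TABLE (QSYS2.OBJECT_STATISTICS('MJATST', '*FILE')) AS X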

The following is an example of CLLE source that can be used at least as far back as V5R3 [no advantage is taken of newer CL support to make the code more succinct] to create a bound ILE CL *PGM object that is invoked as a scalar function, defined to SQL by the CREATE FUNCTION shown in the block comments preceding the CL source. A very simple test suite verified the functionality:
/* create function jrnOfDBF */
/* ( table_name varchar(128) */
/* , table_libr varchar( 10) */
/* ) returns char(20) */
/* language PLI -- a lie to allow VARCHAR inputs */
/* specific jrnOfDBF */
/* not deterministic */
/* no sql returns null on null input */
/* disallow parallel not fenced no external action */
/* parameter style SQL */
/* external name jrnOfDBF */
/* */
/* CRTBNDCL PGM(JRNOFDBF) SRCMBR(..following_source..) */
/* */
pgm (&tblnam &tbllib +
&rtnval &rtnind &sqlste &udfnam &specnm &diagmg)
dcl &tblnam *char 130
dcl &tbllib *char 12
dcl &rtnval *char 20
dcl &rtnind *int 2
dcl &sqlste *char 5
dcl &udfnam *char 141
dcl &specnm *char 130
dcl &diagmg *char 72
/* Pgm Vars */
dcl &lngnam *char 128
dcl &lnglib *char 10
dcl &dbflib *char 10
dcl &dbfobj *char 10
dcl &jrnsts *char 1
dcl &jrnlib *char 10
dcl &jrnobj *char 10
dcl &strlen *int 4
dcl &qualnm *char 20
monmsg cpf0000 exec(goto badthing)
main:
chgvar &strlen (%bin(&tbllib 1 2))
chgvar &lnglib (%sst(&tbllib 3 &strlen))
chgvar &strlen (%bin(&tblnam 1 2))
chgvar &lngnam (%sst(&tblnam 3 &strlen))
call qdbrtvsn (&qualnm &lngnam &strlen &lnglib x'0000000000000000')
/* 1 Qualified object name Output Char( 20) */
/* 2 Long object name Input Char(128) */
/* 3 Length of long object name Input Binary(4) */
/* 4 Library name Input Char( 10) */
/* 5 Error code I/O Char( * ) */
chgvar &dbflib (%sst(&qualnm 11 10))
chgvar &dbfobj (%sst(&qualnm 01 10))
rtvobjd &dbflib/&dbfobj *file aspdev(*) +
jrnsts(&jrnsts) jrn(&jrnobj) jrnlib(&jrnlib)
if (&jrnsts *eq '1') then(do)
chgvar &rtnval (&jrnobj *cat &jrnlib) /* qualified name of jrn */
enddo
/* else &rtnval is already blanks */
chgvar &rtnind 0
mainend:
return
badthing:
chgvar &rtnind -1
chgvar &sqlste 'JRN99'
chgvar &diagmg 'Unable to retrieve Obj Info; see joblog'
sndpgmmsg *n cpf9898 qcpfmsg &diagmg tomsgq(*topgmq) topgmq(*prv) +
msgtype(*diag)
goto mainend
endpgm
An example invocation of the function:
select jrnOfDBF('SYSROUTINES', 'QSYS2') from qsys2.qsqptabl
What the interactive Start SQL (STRSQL) display report would show:
....+....1....+....2
JRNOFDBF
QSQJRN QSYS2
******** End of data ********
Note: A returned value of all blanks indicates the file is either not currently journaled or has never been journaled, whereas any non-blank value is the qualified name of the journal in the standard form '10bytObjNm10bytLibNm' (10-byte object name followed by 10-byte library name).
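With the UDF in place, a schema-wide check becomes a plain catalog query. A rough sketch, assuming the QSYS2.SYSTABLES catalog columns TABLE_NAME and SYSTEM_TABLE_SCHEMA and that blanks mean "not journaled":
SELECT TABLE_NAME, jrnOfDBF(TABLE_NAME, SYSTEM_TABLE_SCHEMA) AS JOURNAL
  FROM QSYS2.SYSTABLES
 WHERE TABLE_SCHEMA = 'MJATST'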

I used the DSPFD command for the file (table). It gives a bunch of information about the file, including:
MBJRNL - (N/Y) whether the table has a journal associated with it
If it is Y, you can get the journal information from the following columns; otherwise they will be null:
MBJRNM - Journal Name
MBJRLB - Journal Schema (library)
Hope this helps anyone looking for something similar!
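For example, a rough sketch of that approach (the QTEMP outfile name is arbitrary, and the member-level outfile layout may vary by release): dump the description to an outfile with DSPFD, then query the journaling columns:
DSPFD FILE(MYLIB/MYTABLE) TYPE(*MBR) OUTPUT(*OUTFILE) OUTFILE(QTEMP/FDMBR)
SELECT MBJRNL, MBJRNM, MBJRLB FROM QTEMP/FDMBR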

Related

IBExpert 2021.7.8.1 - Firebird 2.5 database compare stored procedure domain names commented out

I have IBExpert 2021.7.8.1 and Firebird 2.5
When I perform a database compare on stored procedures, the domain names of my input and output parameters are removed and replaced with the domain's base type. On most occasions the domain is only left as a comment.
Here's a snippet of what the update script looks like:
ALTER PROCEDURE WIP_CURRENT_MONTH_PROCEDURE(
IP_SELECTEDYEAR /* SMALLINT_DOMAIN */ SMALLINT,
IP_SELECTEDMONTH /* SMALLINT_DOMAIN */ SMALLINT,
IP_MONTHSDIFFERENCE /* INTEGER_DOMAIN */ INTEGER,
IP_MAX_MONTHS_REMOVED /* INTEGER_DOMAIN */ INTEGER,
IP_EOM_RUN SMALLINT)
RETURNS (
OP_CONTRACTVALUE /* MONETRY_DOMAIN */ NUMERIC(15,2),
OP_ESTIMATED_COSTS /* MONETRY_DOMAIN */ NUMERIC(15,2),
OP_MECHANICAL_VX_2 /* MONETRY_DOMAIN */ NUMERIC(15,2),
Here's a snippet of what the update script should look like:
ALTER PROCEDURE WIP_CURRENT_MONTH_PROCEDURE(
IP_SELECTEDYEAR SMALLINT_DOMAIN,
IP_SELECTEDMONTH SMALLINT_DOMAIN,
IP_MONTHSDIFFERENCE INTEGER_DOMAIN,
IP_MAX_MONTHS_REMOVED INTEGER_DOMAIN,
IP_EOM_RUN SMALLINT_DOMAIN)
RETURNS (
OP_CONTRACTVALUE MONETRY_DOMAIN,
OP_ESTIMATED_COSTS MONETRY_DOMAIN,
OP_MECHANICAL_VX_2 MONETRY_DOMAIN
Any ideas why this occurs for stored procedures?

How to join on JDV and not to push down join to data source

Problem: I am trying to create a wide view (~5000 columns), which works fine across data sources in JDV. However, when I try to create the view with a join on two or more tables from the same data source, the optimizer pushes the join down to the source. The source cannot handle more than 1600 columns.
Example: When trying to join Member_DX1 and Member_DX2 at the client, JDV pushes the combined join down to Postgres as one query, hitting the maximum-column error.
/* TABLE 1 */
CREATE VIEW Member_DX1 (
MEMB_BID Integer
, DX130402000000 Integer
, DX180608000000 Integer
, DX20401070000 Integer
.... /* 1000 more */
) as
SELECT dx.memb_bid
, case dx.EPI_1_DX4 when 130402000000 then 1 else 0 END as DX130402000000
, case dx.EPI_1_DX4 when 180608000000 then 1 else 0 END as DX180608000000
, case dx.EPI_1_DX4 when 20401070000 then 1 else 0 END as DX20401070000
...
FROM BDR.ENH_EPI_DETAIL dx
/* TABLE 2 */
CREATE VIEW Member_DX2 (
MEMB_BID Integer
, DX200102010000 Integer
, DX90125000000 Integer
, DX160603070000 Integer
... /* 1000 more */
) as
SELECT dx.memb_bid /* FOREIGN TABLE */
, case dx.EPI_1_DX4 when 200102010000 then 1 else 0 END as DX200102010000
, case dx.EPI_1_DX4 when 90125000000 then 1 else 0 END as DX90125000000
, case dx.EPI_1_DX4 when 160603070000 then 1 else 0 END as DX160603070000
...
FROM BDR.ENH_EPI_DETAIL dx
Then my query (in e.g. DBeaver) looks like this:
SELECT * from Member_DX1 dx1
join Member_DX2 dx2
on dx1.MEMB_BID = dx2.MEMB_BID
The current source cannot handle more than 1600 columns.
Can you capture that as an issue for Teiid? Then we can take appropriate compensating action automatically.
If you see this issue affecting all of your user queries, then you can turn join support off at the translator level via translator overrides - SupportsInnerJoin, SupportsOuterJoins, etc. If there is a pk/fk relationship and you can modify the metadata, you can add the extension property allow-join set to false to prevent the pushdown - see Join Compensation: http://teiid.github.io/teiid-documents/master/content/reference/Federated_Optimizations.html
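A rough sketch of the translator-override route in a vdb.xml, purely illustrative - the VDB, model and JNDI names are placeholders, and the exact capability property names should be checked against your Teiid/JDV version:
<vdb name="wide_vdb" version="1">
  <model name="claims">
    <source name="pg" translator-name="postgresql-no-join" connection-jndi-name="java:/postgresDS"/>
  </model>
  <!-- Override the base postgresql translator and disable join pushdown -->
  <translator name="postgresql-no-join" type="postgresql">
    <property name="SupportsInnerJoins" value="false"/>
    <property name="SupportsOuterJoins" value="false"/>
  </translator>
</vdb>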

Very large fields in AS400 iSeries database

I would like to save a large XML string (possibly longer than 32K or 64K) into an AS400 file field. Either DDS or SQL files would be OK. Example of SQL file below.
CREATE TABLE MYLIB/PRODUCT
(PRODCODE DEC (5 ) NOT NULL WITH DEFAULT,
PRODDESC CHAR (30 ) NOT NULL WITH DEFAULT,
LONGDESC CLOB (70K ) ALLOCATE(1000) NOT NULL WITH DEFAULT)
We would use RPGLE to read and write to fields.
The goal is to then pull out data via ODBC connection on a client side.
AS400 character fields seem to have a 32K limit, so that is not a great option.
What options do I have? I have been reading up on CLOBs, but there appear to be restrictions on writing large strings to CLOBs and on reading CLOB fields remotely. Note that the client is (still) on V5R4 of the AS400 OS.
thanks!
Charles' answer below shows how to extract data. I would like to insert data. This code runs, but throws a '22501' SQL error.
D wLongDesc s 65531a varying
D longdesc s sqltype(CLOB:65531)
/free
//eval longdesc = *ALL'123';
eval Wlongdesc = '123';
exec SQL
INSERT INTO PRODUCT (PRODCODE, PRODDESC, LONGDESC)
VALUES (123, 'Product Description', :LongDesc );
if %subst(sqlstt:1:2) <> '00';
// an error occurred.
endif;
// get length explicitly, variables are setup by pre-processor
longdesc_len = %len(%trim(longdesc_data));
wLongDesc = %subst(longdesc_data:1:longdesc_len);
/end-free
C Eval *INLR = *on
C Return
Additional question: Is this technique suitable for storing data that I want to extract via an ODBC connection later? Does ODBC read the CLOB as a pointer, or can it pull out the text?
At v5r4, RPGLE actually supports 64K character variables.
However, the DB is limited to 32K for regular char/varchar fields.
You'd need to use a CLOB for anything bigger than 32K.
If you can live with 64K (or so):
CREATE TABLE MYLIB/PRODUCT
(PRODCODE DEC (5 ) NOT NULL WITH DEFAULT,
PRODDESC CHAR (30 ) NOT NULL WITH DEFAULT,
LONGDESC CLOB (65531) ALLOCATE(1000) NOT NULL WITH DEFAULT)
You can use RPGLE SQLTYPE support
D code S 5s 0
d wLongDesc s 65531a varying
D longdesc s sqltype(CLOB:65531)
/free
exec SQL
select prodcode, longdesc
into :code, :longdesc
from mylib/product
where prodcode = :mykey;
wLongDesc = %subst(longdesc_data:1:longdesc_len);
DoSomething(wLongDesc);
The pre-compiler will replace longdesc with a DS defined like so:
D longdesc ds
D longdesc_len 10u 0
D longdesc_data 65531a
You could simply use it directly, making sure to only use up to longdesc_len, or convert it to a VARYING as I've done above.
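For instance, using the generated subfields directly (a hypothetical sketch; DoSomething is just a placeholder procedure):
 /free
   // pass only the first longdesc_len bytes of the generated buffer
   DoSomething(%subst(longdesc_data : 1 : longdesc_len));
 /end-free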
If you absolutely must handle larger than 64K...
Upgrade to a supported version of the OS (16MB variables supported)
Access the CLOB contents via an IFS file using a file reference
Option 2 is one I've never seen used, and I can't find any examples. I just saw it mentioned in this old article:
http://www.ibmsystemsmag.com/ibmi/developer/general/BLOBs,-CLOBs-and-RPG/?page=2
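Roughly, the file-reference approach (option 2) declares the host variable as a CLOB_FILE and points it at an IFS stream file; the database then reads or writes the LOB data from that file. This is an untested sketch under assumptions: the precompiler-generated subfield names (_name, _nl, _fo, _dl) and the SQFRD constant follow the usual SQLTYPE convention, and the IFS path is made up:
D myclob s sqltype(CLOB_FILE)
 /free
   // generated DS subfields: myclob_name (IFS path), myclob_nl (path length),
   // myclob_fo (file option), myclob_dl (data length)
   myclob_name = '/home/me/product.xml';
   myclob_nl   = %len(%trimr(myclob_name));
   myclob_fo   = SQFRD;   // read the LOB data from the IFS file on insert
   exec SQL
     INSERT INTO PRODUCT (PRODCODE, PRODDESC, LONGDESC)
       VALUES (124, 'Desc from IFS file', :myclob);
 /end-free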
This example shows how to write to a CLOB field in a DB2 database, with help from Charles's and Mr Murphy's feedback.
* ----------------------------------------------------------------------
* Create table with CLOB:
* CREATE TABLE MYLIB/PRODUCT
* (MYDEC DEC (5 ) NOT NULL WITH DEFAULT,
* MYCHAR CHAR (30 ) NOT NULL WITH DEFAULT,
* MYCLOB CLOB (65531) ALLOCATE(1000) NOT NULL WITH DEFAULT)
* ----------------------------------------------------------------------
D PRODCODE S 5i 0
D PRODDESC S 30a
D i S 10i 0
D wLongDesc s 65531a varying
D longdesc s sqltype(CLOB:65531)
D* Note that the variables longdesc_data and longdesc_len
D* get created automatically by the SQL pre-processor.
/free
eval wLongdesc = '123';
longdesc_data = wLongDesc;
longdesc_len = %len(%trim(wLongDesc));
exec SQL set option commit = *none;
exec SQL
INSERT INTO PRODUCT (MYDEC, MYCHAR, MYCLOB)
VALUES (123, 'Product Description',:longDesc);
if %subst(sqlstt:1:2)<>'00' ;
// an error occurred.
endif;
Eval *INLR = *on;
Return;
/end-free

How do I write a macro that scans a lookup table stored in a SAS data set?

I have a very short (<20 rows) data set that looks like:
Effective_Date Pct
-------------- ---
01JAN2000 50%
...
11FEB2014 55%
13JUL2014 65%
I'd like to write a macro which takes a date, Eval_Date, and returns the Pct that was effective on that date. To be clear, I know that this can be done with some kind of PROC SQL construction, but I want to write a function-style macro that can be used in the data step.
For example, %pct('12jul2014'd) should evaluate to 55%.
Assuming your source dataset is effpct and pct is numeric, formatted as percent., use it to create a format containing every day and the effective percent:
/* Merge without a by statement, using firstobs=2 to do a look-ahead join to
determine the 'effective to' date */
data pct_fmt ;
retain fmtname 'EFFPCT' type 'N' ;
merge effpct
effpct (firstobs=2 keep=effective_date rename=(effective_date=to_date)) ;
if missing(to_date) then to_date = date() ; /* Take last record up to current date */
do start = effective_date to (to_date - 1) ;
label = pct ;
output ;
end ;
run ;
/* 'Compile' the format */
proc format cntlin=pct_fmt ; run ;
/* Abstract put(var,format) into a function-style macro */
%MACRO PCT(DT) ;
put(&DT,EFFPCT.) ;
%MEND ;
/* Then use it in a datastep... */
data want ;
input date date9. ;
eff_pct = %PCT(date) ;
format eff_pct percent9. ;
datalines ;
01JAN2000
13FEB2014
20JUL2014
;
run ;
Or alternatively, use %SYSFUNC and putn to convert a date to a percent outside of a data step, e.g. in a title statement:
%MACRO PCT2(DT) ;
%SYSFUNC(putn(%SYSFUNC(putn(&DT,EFFPCT.)),percent.))
%MEND ;
title "The effective pct on 09JUL2013 was %PCT2('09jul2013'd)" ;

Oracle error ORA-01722 while updating DECIMAL value

I'm using ODP to update an Oracle 10g DB with no success updating decimal values.
Ex:
UPDATE usertable.fiche SET DT_MAJ = '20110627',var = 60.4 WHERE NB = '2143'
Result: 604 in the var column ('.' disappears)
UPDATE usertable.fiche SET DT_MAJ = '20110627',var = 60,4 WHERE NB = '2143'
Result: INVALID NUMBER
UPDATE usertable.fiche SET DT_MAJ = '20110627',var = '60,4' WHERE NB = '2143'
Result: INVALID NUMBER
I also tried to use TO_NUMBER function without any success.
Any idea on the correct format I should use?
Thanks.
You didn't give us much to go on (only the insert statements, not the casting of types or whatnot), but here is a test case that shows how to do it.
create table numTest(numA number(3) ,
numB number(10,8) ,
numC number(10,2) )
/
--test insert
insert into numTest(numA, numB, numC) values (123, 12.1241, 12.12)
/
select * from numTest
/
/*
NUMA NUMB NUMC
---------------------- ---------------------- ----------------------
123 12.1241 12.12
*/
--delete to start clean
rollback
/
/*by marking these table.col%type we can change the table type and not have to worry about changing these in the future!*/
create or replace procedure odpTestNumberInsert(
numA_in IN numTest.numA%type ,
numB_in IN numTest.numB%type ,
numC_in IN numTest.numC%type)
AS
BEGIN
insert into numTest(numA, numB, numC) values (numA_in, numB_in, numC_in) ;
END odpTestNumberInsert ;
/
begin
odpTestNumberInsert(numA_in => 10
,numB_in => 12.55678
,numC_in => 13.13);
odpTestNumberInsert(numA_in => 20
,numB_in => 30.667788
,numC_in => 40.55);
end ;
/
select *
from numTest
/
/*
NUMA NUMB NUMC
---------------------- ---------------------- ----------------------
10 12.55678 13.13
20 30.667788 40.55
*/
rollback
/
Okay, so we have created a table, put data in it (and removed it), and created a procedure and verified that it works (then rolled back the changes), and all looks good. So let's go to the .NET side (I'll assume C#):
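// Assumes con is an already-created and opened OracleConnection (ODP.NET).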
OracleCommand cmd = new OracleCommand("odpTestNumberInsert", con);
cmd.CommandType = CommandType.StoredProcedure;
cmd.BindByName = true;
OracleParameter oparam0 = cmd.Parameters.Add("numA_in", OracleDbType.Int64);
oparam0.Value = 5 ;
oparam0.Direction = ParameterDirection.Input;
decimal deciVal = (decimal)55.556677;
OracleParameter oparam1 = cmd.Parameters.Add("numB_in", OracleDbType.Decimal);
oparam1.Value = deciVal ;
oparam1.Direction = ParameterDirection.Input;
OracleParameter oparam2 = cmd.Parameters.Add("numC_in", OracleDbType.Decimal);
oparam2.Value = 55.66 ;
oparam2.Direction = ParameterDirection.Input;
cmd.ExecuteNonQuery ();
con.Close();
con.Dispose();
And then to finish things off:
select *
from numTest
/
NUMA NUMB NUMC
---------------------- ---------------------- ----------------------
5 55.556677 55.66
All of our data was inserted.
Without more code on your part, I would recommend that you verify that the correct parameter is being passed in and associated with the insert; the above proves the mechanism works.
You should not re-cast your variables via TO_NUMBER when you can type them correctly when creating the parameters.
I found the problem just after posting my question! I was not looking in the right place... The Oracle UPDATE was not the issue at all. The problem was in the Decimal.Parse method I was using to convert my input string (containing a comma as the decimal separator) into the decimal number (with a dot as the decimal separator) I wanted to write to the DB. The thing is that the system culture is not the same on my development computer as on the client computer, even though they both run in the same country. So the parse worked perfectly on my computer but dropped the decimal separator in the client production environment. I finally just put in place a replace of comma by dot and everything works now. Thanks again for your time.
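For reference, a more robust alternative to replacing the comma with a dot is to parse with an explicit culture, so the conversion no longer depends on the machine's default settings. A minimal sketch (the variable names and sample value are illustrative):
using System;
using System.Globalization;

class ParseDemo
{
    static void Main()
    {
        // The input string uses a comma as the decimal separator, e.g. "60,4".
        string input = "60,4";

        // Parse with a culture whose decimal separator is the comma,
        // so the result is 60.4m regardless of the machine's current culture.
        decimal value = decimal.Parse(input, CultureInfo.GetCultureInfo("fr-FR"));

        // The decimal can then be bound directly to an OracleParameter as above.
        Console.WriteLine(value.ToString(CultureInfo.InvariantCulture)); // 60.4
    }
}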