DSNUTILB using PRESORTED option - db2

I'm working on changing a DSNUTILB parameter member to allow for the input data now being pre-sorted, in order to improve efficiency.
However, I am experiencing syntax issues incorporating the PRESORTED option.
At the moment I have:
LOAD DATA PRELOAD LOAD ORDER PRESORTED LOG NO
and get this error message:
INVALID OPERAND 'PRELOAD' FOR KEYWORD 'LOAD'
(I have tried various permutations.)
What is the correct syntax?

I don't think that PRELOAD is a valid keyword. There is a syntax reference for the LOAD utility in the IBM documentation; I'm assuming here that you are using DB2 for z/OS (the reference I checked is for version 12).
You'll probably want to have a SYSIN that looks something like:
LOAD DATA INDDN SYSREC01
KEEPDICTIONARY
RESUME NO REPLACE
PRESORTED YES
ENFORCE CONSTRAINTS
LOG NO NOCOPYPEND
EBCDIC CCSID(0037)
INTO TABLE abc.xyz (
... table definition here...
)
You can have an UNLOAD utility create the SYSIN for you with something like:
UNLOAD TABLESPACE
EXECUTE NO
OPTIONS LOADOPT (
KEEPDICTIONARY
RESUME NO REPLACE
PRESORTED YES
ENFORCE CONSTRAINTS
LOG NO NOCOPYPEND
)
LOADINDDN YES LOCK NO QUIESCE NO
SELECT * FROM abc.xyz
FORMAT DSNTIAUL
LOADDDN SYSCTL01
OUTDDN SYSREC01
It will write the generated LOAD statement to the SYSCTL01 data set (the LOADDDN) and the unloaded data to SYSREC01.

Db2 for i: CPYF *NOCHK emulation

On the IBM i system there's a way to copy from a structured file to one without structure using CPYF with FMTOPT(*NOCHK).
How can this be done with SQL?
The answer may be "you can't", at least not if you are using DDL-defined tables. The problem is that *NOCHK just dumps data into the file as if it were a flat file. Files defined with CRTPF, whether they have DDS source or are program-described, don't care about bad data until read time, so they can contain bad data. In fact, you can even read bad data out of a file if you use a program-described definition for that file.
But, an SQL Table (one defined using DDL) cannot contain bad data. No matter how you write it, the database validates the data at write time. Even the *NOCHK option of the CPYF command cannot coerce bad data into an SQL table.
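For example, with a hypothetical DDL-defined table (names made up, just to illustrate the write-time check):
-- a DDL-defined table; the database enforces the column type
CREATE TABLE mylib.sqltbl (amt DECIMAL(7,2) NOT NULL);
-- this fails with a data-type error before anything is written,
-- so bad data never lands in the table
INSERT INTO mylib.sqltbl (amt) VALUES ('XYZ');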
There really isn't an easy way
Closest would be to just build a big character string using CONCAT...
insert into flatfile
select mycharfld1
concat cast(myvchar as char(20))
concat digits(zonedFld3)
from mytable
That works for fixed length, varchar (if cast to char), and zoned decimal...
Packed decimal would be problematic...
I've seen user-defined functions that can return the binary character string that makes up a packed decimal... but it's very ugly.
I question why you think you need to do this.
You can use the QSYS2.QCMDEXC stored procedure to execute OS commands.
Example:
call qsys2.qcmdexc ( 'CPYF FROMFILE(QTEMP/FILE1) TOFILE(QTEMP/FILE2) MBROPT(*replace) FMTOPT(*NOCHK)' )

Unable to export data to PostgreSQL from Oracle

I have to extract data from Oracle tables and copy them to PostgreSQL. I am able to map both the input and the output. When I run the connector component, the graphical view shows rows being fetched properly, but when I go to the table there is no data.
This one is for PostgreSQL to PostgreSQL.
[screenshot: TRACE_DEBUG result after running trace debug]
Are you trying to read and write to the same table in the input and the output? (This could be a problem.)
What kind of action are you using in the output: insert, update, or insert-or-update?
Did you check if there is a lock on your output table?
Depending on the settings of your database connection, you may need to turn on auto-commit or add an explicit commit component at the end of the flow.
How is the output component configured?
Is the operation type insert?
Is it doing a lookup?
Is the table name correct?
Did you check the error code global value for the component after it finishes?

How to assign csv field value to SQL query written inside table input step in Pentaho Spoon

I am pretty new to Pentaho, so my question might sound very novice.
I have written a transformation in which I am using a CSV file input step and a Table input step.
Steps I followed:
Initially, I created a parameter in the transformation properties. The parameter birthdate doesn't have any default value set.
I have used this parameter in the PostgreSQL query in the Table input step in the following manner:
select * from person where EXTRACT(YEAR FROM birthdate) > ${birthdate};
I am reading the CSV file using the CSV file input step. How do I assign the birthdate value present in my CSV file to the parameter I created in the transformation?
(OR)
Could you guide me through the process of assigning the CSV field value directly to the SQL query used in the Table input step, without the use of a parameter?
TL;DR: I recommend using a "Database join" step, as in my third suggestion below.
First idea - Using Table Input as originally asked
Well, you don't need any parameter for that, unless you are going to provide the value for the parameter when launching the transformation. If you need to read the value from a CSV, you can do that with this approach.
First, read your CSV and make sure your rows are ok.
After that, use a Select values step to keep only the columns to be used as parameters.
In the Table input step, use a placeholder (?) to mark where the data goes, and set the step to run once for each row it receives from the source step.
Just keep in mind that the order of the columns received by the Table input step (the columns coming out of the Select values step) is the order in which they are applied to the placeholders (?). This shouldn't be a problem for your question, which uses only one placeholder, but keep it in mind as you ramp up with Pentaho.
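For the query in the question, the Table input SQL would then look something like this (a sketch; the ? is filled from the incoming birthdate field, one execution per row, and as the answer further down notes, PostgreSQL may need an explicit cast such as ?::integer):
SELECT * FROM person WHERE EXTRACT(YEAR FROM birthdate) > ?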
Second idea, using a Database Lookup
This is another approach where you can't customize the query sent to the database, but you may get better performance because you can set an "Enable cache" flag. If you don't need to use a function in your WHERE clause, this approach is really recommended.
Third idea, using a Database Join
That is my recommended approach if you need a function in your WHERE clause. It looks a lot like the Table input approach, but you can skip the Select values step, choose exactly which columns to use (even repeating the same column several times), and enable an "outer join" flag that also returns the rows for which the query found no result.
ProTip: if the transformation runs too slowly, try using multiple copies of the step, and obviously make sure the table has the appropriate indexes in place.
Yes, there's a way of assigning the value directly without the use of a parameter. Do as follows.
Use a "Block this step until steps finish" step to halt the Table input step until the CSV file input step completes.
The following is how you configure each step.
Note:
The Postgres query should be select * from person where EXTRACT(YEAR FROM birthdate) > ?::integer
Check "Execute for each row" and "Replace variables in script" in the Table input step.
Select only the birthdate column in the CSV file input step.

How to optimize generic SQL to retrieve DDL information

I have a generic code that is used to retrieve DDL information from a Firebird database (FB2.1). It generates SQL code like
SELECT * FROM MyTable where 'c' <> 'c'
I cannot change this code. Actually, if that matters, it is inside Report Builder 10.
The fact is that some tables in my database are becoming a little too populated (>1M records), and that query is starting to take too long to execute.
If I try to execute
SELECT * FROM MyTable where SomeIndexedField = SomeImpossibleValue
it will obviously use that index and run very quickly.
Well, it wouldn't be that hard for the database to figure out that this is an impossible match and do some sort of optimization to avoid testing it against each row.
Is there any way to make my Firebird database optimize that search?
As the filter condition is a negative proposition (and also doesn't reference a column to search on, only a value compared to another value), Firebird needs to do a full table scan (without using any index) to confirm that there isn't any record that meets your criteria.
If you can't change the query, you need to wait for the upcoming 3.0 version, which will implement the BOOLEAN data type and therefore should start to evaluate "constant" fake comparisons in advance (maybe the client library will do this evaluation before sending the statement to the server?).

db2 SQLCODE=-243, SQLSTATE=36001 ERROR

I am using the DB2 driver in my code like
Class.forName("com.ibm.db2.jcc.DB2Driver");
and I am requesting a scroll-sensitive result set in my Java code. My SQL query looks like this: select distinct day, month, year from XXX. The table XXX is read-only for the user I am connecting with, so it is giving the following error:
com.ibm.db2.jcc.a.SqlException: DB2 SQL Error: SQLCODE=-243, SQLSTATE=36001, SQLERRMC=SQL_CURSH200C3, DRIVER=3.51.90
I know this is a problem with the result being read-only, but when I try to execute the same query in the DB2 Control Center it works.
Please help me out with this.
PubLib is your friend :-)
SQL0243N SENSITIVE cursor <cursor-name> cannot be defined for the specified SELECT statement.
Explanation:
Cursor <cursor-name> is defined as SENSITIVE, but the content of the SELECT statement requires DB2 to build a temporary result table of the cursor, and DB2 cannot guarantee that changes made outside this cursor will be visible. This situation occurs when the content of the query makes the result table read-only. For example, if the query includes a join, the result table is read-only. In these cases, the cursor must be defined as INSENSITIVE or ASENSITIVE.
The statement cannot be processed.
User response:
Either change the content of the query to yield a result table that is not read-only, or change the type of the cursor to INSENSITIVE or ASENSITIVE.
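In JDBC terms, changing the cursor type means asking for an insensitive scrollable ResultSet instead of a sensitive one. A minimal sketch (the connection URL and credentials are placeholders; the query is the one from the question):
import java.sql.*;
Connection con = DriverManager.getConnection(
        "jdbc:db2://host:50000/MYDB", "user", "password"); // placeholders
// TYPE_SCROLL_INSENSITIVE maps to an INSENSITIVE cursor, which is
// allowed even when the query makes the result table read-only
Statement stmt = con.createStatement(
        ResultSet.TYPE_SCROLL_INSENSITIVE,
        ResultSet.CONCUR_READ_ONLY);
ResultSet rs = stmt.executeQuery(
        "select distinct day, month, year from XXX");
while (rs.next()) {
    // process the row, e.g. rs.getInt("year")
}
rs.close();
stmt.close();
con.close();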
If you can't change the cursor type, look into the use of materialized query tables. These are like views, but they also provide temporary backing storage for the data, so the result isn't forced read-only by the query type.
Whether that will help in situations where you've forced the user to be read-only, I'm not entirely sure, but you may be able to have different permissions on the materialized data and the real data (unfortunately, I haven't done a lot of work with these, certainly none where permissions were locked down to read-only level).
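For what it's worth, a rough sketch of such a materialized query table for the query in question, assuming DB2 for LUW (the MQT name is made up; day, month, year and XXX come from the question):
-- deferred-refresh MQT over the problem query
CREATE TABLE day_month_year_mqt AS
  (SELECT DISTINCT day, month, year FROM XXX)
  DATA INITIALLY DEFERRED REFRESH DEFERRED;
-- populate / refresh it on demand
REFRESH TABLE day_month_year_mqt;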