IIB sometimes reads garbled text from an Oracle CLOB - oracle12c

I have an IIB v10 (fix pack 19) application which reads a CLOB field from an Oracle 12c database and stores it in a shared CHARACTER variable. I also write the variable's content to a log. It usually works perfectly, but sometimes, in some environments, the text in the variable differs from the text in the database (e.g., a single character might be different). What could be the reason for that?
It reproduces sporadically, in both multi-instance and single-instance environments.
Sample code (MY_TABLE has a CLOB field and MY)
-- Shared row, visible to all instances of the flow
DECLARE MY_CACHE SHARED ROW;
DECLARE mySelectStatement CHARACTER 'SELECT * FROM MY_TABLE';
-- Read the whole result set, including the CLOB column, into the shared cache
SET MY_CACHE.Item[] = PASSTHRU(mySelectStatement);

Related

Clob(length 1048576) DB2 to PostgreSQL best datatype?

We had a table with a Clob(1048576) column that stored search text to help with searching. When I transferred it from DB2 to Postgres in our migration, I found that it wasn't working as well. So I was going to try text or varchar, but I found that long text entries took much longer to insert into the table, to the point that my local WildFly window would time out when trying to run.
What is the equivalent datatype in Postgres that accepts text, to replace a Clob of length 1048576 in DB2? It might be that I was using the right datatype but didn't have the right corresponding size.
Use text. That is the only reasonable data type for long character strings.
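For example, a minimal sketch of the Postgres side, using hypothetical table and column names; in Postgres, text and varchar(n) are stored identically, and a length modifier only adds a check:
CREATE TABLE search_data (
    id   integer PRIMARY KEY,
    body text   -- replaces the DB2 Clob(1048576); no length limit, no extra storage cost
);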

DB2 DBCLOB data INSERT with Unicode data

The problem at hand is to insert data into a DB2 table which has a DBCLOB column. The table's encoding is Unicode. The subsystem is MIXED YES with the Japanese CCSID set (290, 930, 300). The application is bound with ENCODING CCSID.
I was successful in FETCHing the DBCLOB's data in Unicode, no problem there. But when I turn around and try to INSERT it back, the inserted data is interpreted as not being Unicode; DB2 seems to think it's EBCDIC DBCS/GRAPHIC, and the inserted row shows Unicode 0xFEFE. When I manually update the data being inserted to valid DBCS, the data inserts OK and shows the expected Unicode DBCS values.
To insert the data I am using a dynamically prepared INSERT statement with a placeholder for the DBCLOB column. The SQLVAR entry associated with the placeholder is a DBCLOB_LOCATOR with the CCSID set to 1200.
A DBCLOB locator is created by doing a SET dbclobloc = SUBSTR(dbclob, 1, length). The created locator is put into the SQLDA, and then the prepared INSERT is executed.
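For illustration, the sequence looks roughly like this in embedded SQL (host variable and statement names are hypothetical):
EXEC SQL SET :dbclobloc = SUBSTR(:dbclob, 1, :len);   -- static SQL: sets the locator from the fetched DBCLOB
EXEC SQL EXECUTE insstmt USING DESCRIPTOR :sqlda;     -- prepared INSERT, locator passed via the SQLDA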
It seems DB2 is ignoring the 1200 CCSID associated with the DBCLOB_LOCATOR SQLVAR. Attempts to put a CAST(? AS DBCLOB CCSID UNICODE) on the placeholder in the INSERT do not help because at that time DB2 seems to have made up its mind about the encoding of the data to be inserted.
I am stuck :( Any ideas?
Greg
I think I figured it out and it is not good: the SET statement for the DBCLOB_LOCATOR is static SQL and the DBRM is bound ENCODING EBCDIC. Hence DB2 has no choice but to assume the data is in the CCSID of the plan.
I also tried what the books suggest and used a SELECT ... FROM SYSIBM.SYSDUMMYU to set the DBCLOB_LOCATOR. This should have told DB2 that the data was coming in Unicode. But it failed again, with symptoms indicating it still assumed the DBCS EBCDIC CCSID.
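The suggested statement looks roughly like this (again with hypothetical host variable names):
EXEC SQL SELECT SUBSTR(:dbclob, 1, :len)
         INTO :dbclobloc
         FROM SYSIBM.SYSDUMMYU;   -- SYSDUMMYU is the Unicode-encoded dummy table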
Not good.

The type of column conflicts with the type of other columns specified in the UNPIVOT list

In SQL Server 2005, I built a trigger that contains a SQL statement that unpivots some data, somewhat similar to the following simple example: http://sqlfiddle.com/#!3/cdc1b/1/0. Let's say that the table the trigger is built on is "table1" and it's set to run after updates.
Within SSMS, whenever I update "table1", everything works fine. Unfortunately, whenever I update "table1" in a proprietary application (which I don't have the source code to), it fails with the message "The type of column conflicts with the type of other columns specified in the UNPIVOT list".
After doing a bit of searching I added COLLATE DATABASE_DEFAULT to my casts in the view, without any luck. It was a bit of a long shot, because the collations all matched whenever I queried INFORMATION_SCHEMA.COLUMNS.
I then changed the casts from VARCHAR to CHAR and it worked without issue. For obvious reasons, I'd rather use VARCHAR. What is different between an SSMS connection and the application's connection? I assume the application isn't using a connection property that SSMS uses.
PS: The database is a bit funky because it does not use NULLs and uses CHAR instead of VARCHAR.
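For reference, a minimal sketch of the kind of UNPIVOT involved, with hypothetical table and column names; every column in the UNPIVOT list must have exactly the same type, length, and collation:
SELECT id, field, value
FROM (
    SELECT id,
           CAST(col1 AS CHAR(10)) COLLATE DATABASE_DEFAULT AS c1,  -- CHAR worked
           CAST(col2 AS CHAR(10)) COLLATE DATABASE_DEFAULT AS c2   -- VARCHAR failed from the app
    FROM table1
) AS src
UNPIVOT (value FOR field IN (c1, c2)) AS u;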

Change the character varying length with ALTER statement?

I am trying to change length of a column from character varying(40) to character varying(100).
Following the method described in this question: Increasing the size of character varying type in postgres without data loss
ALTER TABLE info_table ALTER COLUMN docs TYPE character varying(100);
I tried this command, but it returns a syntax error:
ERROR: syntax error at or near "TYPE" at character 52
Is there any change needed in this command? We are using PostgreSQL version 7.4.30 (an upgrade to 9.2 is in progress :) ).
I tried the same command in a test DB that has already been upgraded to version 9.2, and it works fine there.
Changing the column type on the fly was not possible in the ancient version 7.4 (check the old manual). You had to add another column, update it with the (possibly transformed) values, then drop the old column and rename the new one, preferably in a single transaction, and deal with side effects on views or other depending objects ...
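A sketch of that workaround for the column from the question:
BEGIN;
ALTER TABLE info_table ADD COLUMN docs_new character varying(100);
UPDATE info_table SET docs_new = docs;           -- copy (or transform) the old values
ALTER TABLE info_table DROP COLUMN docs;
ALTER TABLE info_table RENAME COLUMN docs_new TO docs;
COMMIT;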
To avoid this kind of problem altogether, I suggest using plain text or varchar (without length modifier) for character data. Details in this related question.
Remove the word TYPE; that syntax wasn't recognized 10 years ago, but you should be fine without it.

How to change Oracle 10gr2 express edition's default character set

I installed Oracle 10gR2 Express Edition on my laptop.
When I import a .dmp file generated by Oracle 10gR2 Enterprise Edition, an error occurs.
The database server which generated the .dmp file runs with the GBK charset, but my Oracle Express server runs with UTF-8.
SQL> select userenv('language') from dual;
USERENV('LANGUAGE')
--------------------------------------------------------------------------------
SIMPLIFIED CHINESE_CHINA.AL32UTF8
How can I configure my own Oracle server to import the .dmp file?
edit ---------------------------------------------------
My own Oracle Express server:
SQL> select * from v$nls_parameters where parameter like '%CHARACTERSET';
PARAMETER
--------------------------------------------------------------------------------
VALUE
--------------------------------------------------------------------------------
NLS_CHARACTERSET
AL32UTF8
NLS_NCHAR_CHARACTERSET
AL16UTF16
The new character set requires up to 4 bytes per character, while the old one only required up to 2 bytes. So, due to the character set change, some character fields will require more space than before. Obviously, some of them have now hit the column length limit.
To resolve it, you'll have to increase the length of the affected columns, or change the length semantics so that lengths are interpreted in characters (and not in bytes, which is the default).
If your dump file contains both the schema definition and the data, you'll have to work in phases: first import the schema only, then increase the column lengths, and finally import the data.
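For example, to widen one affected column (table and column names are hypothetical):
ALTER TABLE example MODIFY (name VARCHAR2(100 CHAR));   -- length now counted in characters, not bytes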
I have no experience with changing the length semantics setting; I usually specify the unit explicitly. See the documentation on the NLS_LENGTH_SEMANTICS parameter for details. It affects how the number 100 in the following statement is interpreted:
CREATE TABLE example (
id NUMBER,
name VARCHAR(100)
);
Usually, it's better to be explicit and specify the unit directly:
CREATE TABLE example (
id NUMBER,
name VARCHAR(100 CHAR)
);
The dump file contains a whole schema, so altering the column lengths is not a good option for me.
Oracle Express Edition uses UTF-8 by default. After googling the web, I found a way to alter the database character set.
In my case:
UTF-8 --> GBK
I connected as user sys as sysdba in sqlplus, then executed the following commands:
shutdown immediate
startup mount
-- keep ordinary users out while the character set is changed
alter system enable restricted session;
-- stop background job and queue processing during the conversion
alter system set JOB_QUEUE_PROCESSES=0;
alter system set AQ_TM_PROCESSES=0;
alter database open;
-- INTERNAL_USE bypasses the usual superset checks; unsupported outside Oracle support guidance
alter database character set internal_use ZHS16GBK;
shutdown immediate
startup
I don't know what these commands did to my database, but it works.