How to change Oracle 10gr2 express edition's default character set - oracle10g

I installed Oracle 10gR2 Express Edition on my laptop.
When I import a .dmp file that was generated by Oracle 10gR2 Enterprise Edition, an error occurs.
The database server that generated the .dmp file runs with the GBK character set, but my Oracle Express server runs with UTF-8.
SQL> select userenv('language') from dual;
USERENV('LANGUAGE')
--------------------------------------------------------------------------------
SIMPLIFIED CHINESE_CHINA.AL32UTF8
How can I configure my own Oracle server so that it can import the .dmp file?
Edit ---------------------------------------------------
On my own Oracle Express server:
SQL> select * from v$nls_parameters where parameter like '%CHARACTERSET';
PARAMETER
--------------------------------------------------------------------------------
VALUE
--------------------------------------------------------------------------------
NLS_CHARACTERSET
AL32UTF8
NLS_NCHAR_CHARACTERSET
AL16UTF16

The new character set requires up to 4 bytes per character, while the old one required only up to 2 bytes. So, due to the character set change, some character fields need more space than before, and some of them have now hit the column length limit.
To resolve this, you'll have to increase the length of the affected columns or change the length semantics so that lengths are interpreted in characters (and not in bytes, which is the default).
If your dump file contains both the schema definition and the data, you'll have to work in phases: first import the schema only, then increase the column lengths, and finally import the data.
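As a rough sketch of that phased approach with the classic imp utility (user, password, schema and file names below are placeholders, not taken from your setup), you could first bring in the DDL only and then load the rows:
imp system/password file=export.dmp fromuser=SCOTT touser=SCOTT rows=n log=ddl_only.log
imp system/password file=export.dmp fromuser=SCOTT touser=SCOTT rows=y ignore=y log=data_only.log
Between the two runs you would widen the affected columns, for example with ALTER TABLE some_table MODIFY (some_column VARCHAR2(100 CHAR)), or adjust the length semantics as described below.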
I have little experience with changing the length semantics; I usually specify the unit explicitly. See the documentation on the NLS_LENGTH_SEMANTICS parameter for details. It affects how the number 100 in the following statement is interpreted:
CREATE TABLE example (
  id   NUMBER,
  name VARCHAR(100)
);
Usually, it's better to be explicit and specify the unit directly:
CREATE TABLE example (
  id   NUMBER,
  name VARCHAR(100 CHAR)
);

The dump file contains a whole schema, so altering column lengths is not a good option for me.
Oracle Express Edition uses UTF-8 by default. After googling, I found a way to alter the database character set.
In my case:
UTF-8 --> GBK
I connected as user SYS AS SYSDBA in SQL*Plus and then executed the following commands:
shutdown immediate
startup mount
alter system enable restricted session;
alter system set JOB_QUEUE_PROCESSES=0;
alter system set AQ_TM_PROCESSES=0;
alter database open;
alter database character set internal_use ZHS16GBK;
shutdown immediate
startup
I don't know exactly what these commands did to my database, but it works.
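One way to double-check the result is to query the database character set again, something like:
SQL> select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';
It should now report ZHS16GBK instead of AL32UTF8.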

Related

IIB sometimes reads garbled text from an Oracle CLOB

I have an IIB v10 (fix pack 19) application that reads a CLOB field from an Oracle (v12) database and stores it in a shared CHARACTER variable. I also write the variable content to a log. It usually works perfectly, but sometimes, in some environments, I see that the text from the variable differs from the database text (e.g., one character might be different). What can be the reason for that?
It seems to reproduce sporadically, in both multi-instance and single-instance environments.
Sample code (MY_TABLE has a CLOB field and MY)
DECLARE MY_CACHE SHARED ROW;
DECLARE mySelectStatement CHARACTER 'SELECT * FROM MY_TABLE';
SET MY_CACHE.Item[] = PASSTHRU(mySelectStatement);

PostgreSQL: varchar(1) and Umlaut

I have a VARCHAR(1) column in PostgreSQL.
Now I export data from a PostgreSQL 9.4 server with pg_dump
and import it into a PostgreSQL 9.5 server with psql.
When I import it, I get an error:
ERROR: value too long for type character varying(1) COPY XXX "Ö"
That means the table contains the value "Ö", which takes 2 bytes instead of 1.
Must I increase the column to VARCHAR(2)?
Is there another way to keep VARCHAR(1) and use a locale etc.?
Why could this data ever be stored there?
Thanks for your help!
Easy fix:
The encoding of the target database was wrong and had to be set to UTF8.
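In case it helps someone else, a sketch of recreating the target database with the right encoding before restoring the dump (the database name and locale are placeholders):
CREATE DATABASE mydb
  WITH ENCODING 'UTF8'
       TEMPLATE template0
       LC_COLLATE 'en_US.UTF-8'
       LC_CTYPE 'en_US.UTF-8';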

ORA-12899 - value too large for column when upgrading to Oracle 12C

My project is going through a tech upgrade so we are upgrading Oracle DB from 11g to 12c. SAP DataServices is upgraded to version 14.2.7.1156.
The tables in Oracle 12c default to VARCHAR (BYTE) when they should be VARCHAR (CHAR). I understand this is normal. So, I altered the session for each datastore by running
`ALTER session SET nls_length_semantics=CHAR;`
When I create a new table with VARCHAR(1), I am able to load Unicode characters such as Chinese characters (e.g., 东) into the new table from Oracle.
However, when I try to load the same Unicode character into the same table via SAP DS, it throws the error 'ORA-12899 - value too large for column'. My datastore settings are:
Locale
Language: default
Code Page: utf-8
Server code page: utf-8
Additional session parameters:
ALTER session SET nls_length_semantics=CHAR
I would really appreciate knowing what settings I need to change in my SAP BODS, since Oracle itself seems to be working fine.
I think you should consider modifying the table columns from VARCHAR2(x BYTE) to VARCHAR2(x CHAR) to allow Unicode (UTF-8) data and avoid ORA-12899.
create table test1 (name varchar2(100));
insert into test1 values ('east');
insert into test1 values ('东');
alter table test1 modify name varchar2(100 char);
-- You can check CHAR_USED for each column like this:
select column_name, data_type, char_used from user_tab_columns where table_name='TEST1';
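If many columns are affected, one possible way to generate the ALTER statements straight from the data dictionary (run in the owning schema; it only lists columns still using BYTE semantics) is:
select 'alter table ' || table_name || ' modify ' || column_name
       || ' varchar2(' || char_length || ' char);' as ddl
  from user_tab_columns
 where data_type = 'VARCHAR2'
   and char_used = 'B';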

DB2 DBCLOB data INSERT with Unicode data

The problem at hand is to insert data into a DB2 table which has a DBCLOB column. The table's encoding is Unicode. The subsystem is a MIXED YES with Japanese CCSID set of (290, 930, 300). The application is bound ENCODING CCSID.
I was successful in FETCHING the DBCLOB's data in Unicode, no problem there. But when I turn around and try to INSERT it back, the inserted data is interpreted as not being Unicode; it seems DB2 thinks it is EBCDIC DBCS/GRAPHIC, and the inserted row shows Unicode 0xFEFE. When I manually update the data being inserted to valid DBCS, the data inserts OK and shows the expected Unicode DBCS values.
To insert the data I am using a dynamically prepared INSERT statement with a placeholder for the DBCLOB column. The SQLVAR entry associated with the placeholder is a DBCLOB_LOCATOR with the CCSID set to 1200.
A DBCLOB locator is being created doing a SET dbclobloc = SUBSTR(dbclob, 1, length). The created locator is being put into SQLDA. Then the prepared INSERT is being executed.
It seems DB2 is ignoring the 1200 CCSID associated with the DBCLOB_LOCATOR SQLVAR. Attempts to put a CAST(? AS DBCLOB CCSID UNICODE) on the placeholder in the INSERT do not help because at that time DB2 seems to have made up its mind about the encoding of the data to be inserted.
I am stuck :( Any ideas?
Greg
I think I figured it out and it is not good: the SET statement for the DBCLOB_LOCATOR is static SQL and the DBRM is bound ENCODING EBCDIC. Hence DB2 has no choice but to assume the data is in the CCSID of the plan.
I also tried what the books suggest and used a SELECT ... FROM SYSIBM.SYSDUMMYU to set the DBCLOB_LOCATOR. This should have told DB2 that the data was coming in Unicode. But it failed again, with symptoms indicating it still assumed the DBCS EBCDIC CCSID.
Not good.

Table invisible in PostgreSQL - Undefined relation issue at different sessions

I have executed the following create statement using SQLWorkbench at my target postgresql database:
CREATE TABLE Config (
  id serial PRIMARY KEY,
  pub_ip_range_low varchar(100),
  pub_ip_range_high varchar(100)
);
Right after table creation I query the table content by typing 'select * from config;' and see that the table can be retrieved. Nevertheless, my Java program, which uses a JDBC type 4 driver, cannot access the table when I issue the same SELECT statement in it. An exception saying "Undefined relation" for the config table is thrown when the program tries to access it.
My questions are:
Why does SQL Workbench, where I had previously run the CREATE statement, recognize the table while my Java program cannot find it?
Where does the PostgreSQL DBMS put the tables I created? I see them neither in public nor in the information schema.
NOTE:
I checked the target Postgres database and cannot see the table Config anywhere, although SQL Workbench can query it. Then I opened another SQL Workbench instance and noticed that the table cannot be queried (i.e., it is not found) there. So my conclusion is that PostgreSQL puts the table I created in the first running SQL Workbench instance into some location that is bound to that session. Another SQL Workbench instance, or my Java program, is not bound to that session and so cannot query the previously created table config.
The only "bloody location" that is session-local in PostgreSQL is the schema pg_temp, in other words: temporary tables. But your CREATE command does not display the keyword TEMP[ORARY]. Of course, as long as the transaction is not commited, nobody sees anything outside the transaction.
It's more likely you are seeing a switcheroo of hosts / databases / ports / or the schema search_path. A mixup with the mixed-case table name is a hot candidate, too. If you don't double-quote "Config", the table ends up all lower case in the system, so: config. If you later double quote the name, it won't match. The manual has the details.
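A small illustration of that case folding (a throwaway table, just to show the idea):
CREATE TABLE Config (id int);   -- stored in the catalog as config
SELECT * FROM config;           -- works
SELECT * FROM CONFIG;           -- works too, folded to config
SELECT * FROM "Config";         -- ERROR: relation "Config" does not exist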
Maybe the CREATE failed because of an extra trailing comma?
CREATE TABLE config (
  id serial PRIMARY KEY,
  pub_ip_range_low varchar(100),
  pub_ip_range_high varchar(100) -- >> ,
);