ORA-12899 - value too large for column when upgrading to Oracle 12C - unicode

My project is going through a tech upgrade so we are upgrading Oracle DB from 11g to 12c. SAP DataServices is upgraded to version 14.2.7.1156.
The tables in Oracle 12c default to varchar (byte) when they should be varchar (char). I understand this is normal. So, I altered the session for each datastore, running
`ALTER session SET nls_length_semantics=CHAR;`
When I create a new table with varchar (1), I am able to load Unicode characters such as Chinese characters (e.g. 东) into the new table directly from Oracle.
However, when I try to load the same Unicode character into the same table via SAP DS, it throws the error 'ORA-12899 - value too large for column'. My datastore settings are:
Locale
Language: default
Code Page: utf-8
Server code page: utf-8
Additional session parameters:
ALTER session SET nls_length_semantics=CHAR
I would really appreciate knowing which settings I need to change in my SAP BODS, since Oracle itself seems to be working fine.

I think you should consider modifying the table columns from varchar2(x BYTE) to varchar2(x CHAR) to allow Unicode (UTF-8) data and avoid ORA-12899.
-- The column is created with the default (byte) length semantics
create table test1 (name varchar2(100));
insert into test1 values ('east');
insert into test1 values ('东');
-- Switch the column to character length semantics
alter table test1 modify name varchar2(100 char);
-- You can check 'char_used' (B = byte, C = char) for each column like -
select column_name, data_type, char_used from user_tab_columns where table_name = 'TEST1';
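If many columns are affected, the same dictionary view can be used to generate the ALTER statements. A minimal sketch, assuming you want to convert every byte-semantics VARCHAR2 column in the current schema (review the output before running it):
-- Generate ALTER TABLE ... MODIFY statements for all VARCHAR2 columns
-- that still use byte semantics (char_used = 'B')
SELECT 'ALTER TABLE ' || table_name || ' MODIFY ' || column_name ||
       ' VARCHAR2(' || char_length || ' CHAR);' AS ddl
FROM user_tab_columns
WHERE data_type = 'VARCHAR2'
AND char_used = 'B';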

Related

Create table with a column of data-type Date creates a column with data-type Timestamp

The following SQL Query:
CREATE TABLE "SomeTable" ("dateEnd" DATE)
Creates a table SomeTable with a column dateEnd. However, the database type is Timestamp, not Date. It used to work, but after reimporting a whole database dump, all the Date data types were replaced by Timestamp data types. Even if I create a very simple table, like the one above, the data type jumps to Timestamp. I am using DB2 Express-C version 11.1.0.
If your Db2 database was created in Oracle Compatibility mode, then DATE columns are implemented as TIMESTAMP(0) columns to match what Oracle does.
https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.apdv.porting.doc/doc/r0053667.html
https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.admin.config.doc/doc/r0054912.html
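One way to confirm this is to check the date_compat database configuration parameter, which reports whether the DATE-as-TIMESTAMP(0) behaviour is in effect. A minimal sketch, assuming the SYSIBMADM administrative views are available in your database:
-- date_compat = ON means DATE columns behave like TIMESTAMP(0)
SELECT name, value
FROM SYSIBMADM.DBCFG
WHERE name = 'date_compat';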
BTW, you may want to use either Db2 Developer-C or Db2 Developer Community Edition; those effectively replace the old Express-C edition.
https://www.ibm.com/uk-en/marketplace/ibm-db2-direct-and-developer-editions

Generating a UUID in Postgres for Insert statement?

My question is rather simple. I'm aware of the concept of a UUID and I want to generate one to refer to each 'item' from a 'store' in my DB with. Seems reasonable right?
The problem is the following line returns an error:
honeydb=# insert into items values(
uuid_generate_v4(), 54.321, 31, 'desc 1', 31.94);
ERROR: function uuid_generate_v4() does not exist
LINE 2: uuid_generate_v4(), 54.321, 31, 'desc 1', 31.94);
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
I've read the page at: http://www.postgresql.org/docs/current/static/uuid-ossp.html
I'm running Postgres 8.4 on Ubuntu 10.04 x64.
uuid-ossp is a contrib module, so it isn't loaded into the server by default. You must load it into your database to use it.
For modern PostgreSQL versions (9.1 and newer) that's easy:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
but for 9.0 and below you must instead run the SQL script that loads the extension; see the documentation for contrib modules in 8.4.
For Pg 9.1 and newer, read the current contrib docs and CREATE EXTENSION instead. These features do not exist in 9.0 or older versions, like your 8.4.
If you're using a packaged version of PostgreSQL you might need to install a separate package containing the contrib modules and extensions. Search your package manager database for 'postgres' and 'contrib'.
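For the 8.4 setup in the question, loading the contrib script from psql could look like the sketch below; the exact path is an assumption (it is typical for a Debian/Ubuntu postgresql-contrib package) and varies by platform:
-- run from psql, connected to the target database
\i /usr/share/postgresql/8.4/contrib/uuid-ossp.sql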
Without extensions (cheat)
If you need a valid v4 UUID:
-- builds an md5 hash from two random values, then overwrites the version nibble with '4'
-- and the variant nibble with a random value from 8 to b
SELECT uuid_in(overlay(overlay(md5(random()::text || ':' || random()::text) placing '4' from 13) placing to_hex(floor(random()*(11-8+1) + 8)::int)::text from 17)::cstring);
Thanks to @Denis Stafichuk, @Karsten and @autronix.
Or you can simply get a UUID-like value by doing this (if you don't care about the validity):
SELECT uuid_in(md5(random()::text || random()::text)::cstring);
output>> c2d29867-3d0b-d497-9191-18a9d8ee7830
(works at least in 8.4)
PostgreSQL 13 natively supports gen_random_uuid():
PostgreSQL includes one function to generate a UUID:
gen_random_uuid () → uuid
This function returns a version 4 (random) UUID. This is the most commonly used type of UUID and is appropriate for most applications.
db<>fiddle demo
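On PostgreSQL 13 or newer, the insert from the question can therefore be written without installing any extension (assuming the same items table as in the question):
INSERT INTO items VALUES (gen_random_uuid(), 54.321, 31, 'desc 1', 31.94);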
The answer by Craig Ringer is correct. Here's a little more info for Postgres 9.1 and later…
Is Extension Available?
You can only install an extension if it has already been built for your Postgres installation (your cluster in Postgres lingo). For example, I found the uuid-ossp extension included as part of the installer for Mac OS X kindly provided by EnterpriseDB.com. Any of a few dozen extensions may be available.
To see if the uuid-ossp extension is available in your Postgres cluster, run this SQL to query the pg_available_extensions system catalog:
SELECT * FROM pg_available_extensions;
Install Extension
To install that UUID-related extension, use the CREATE EXTENSION command as seen in this SQL:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
Beware: I found the QUOTATION MARK characters around the extension name to be required, despite documentation to the contrary.
The SQL standards committee or Postgres team chose an odd name for that command. To my mind, they should have chosen something like "INSTALL EXTENSION" or "USE EXTENSION".
Verify Installation
You can verify the extension was successfully installed in the desired database by running this SQL to query the pg_extension system catalog:
SELECT * FROM pg_extension;
UUID as default value
For more info, see the Question: Default value for UUID column in Postgres
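As a quick illustration (the table and column names below are only placeholders), a default can be attached to an existing uuid column like this, assuming the uuid-ossp extension is installed:
ALTER TABLE items ALTER COLUMN item_id SET DEFAULT uuid_generate_v4();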
The Old Way
The information above uses the new Extensions feature added to Postgres 9.1. In previous versions, we had to find and run a script in a .sql file. The Extensions feature was added to make installation easier, trading a bit more work for the creator of an extension for less work on the part of the user/consumer of the extension. See my blog post for more discussion.
Types of UUIDs
By the way, the code in the Question calls the function uuid_generate_v4(). This generates a type known as Version 4, where nearly all of the 128 bits are randomly generated. While this is fine for limited use on a smaller set of rows, if you want to virtually eliminate any possibility of collision, use another "version" of UUID.
For example, the original Version 1 combines the MAC address of the host computer with the current date-time and an arbitrary number; the chance of collisions is practically nil.
For more discussion, see my Answer on related Question.
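To see the difference, you can generate one of each side by side, assuming the uuid-ossp extension is installed:
-- version 1 (MAC address + timestamp based) next to version 4 (random)
SELECT uuid_generate_v1() AS v1, uuid_generate_v4() AS v4;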
pgcrypto Extension
As of Postgres 9.4, the pgcrypto module includes the gen_random_uuid() function. This function generates one of the random-number based Version 4 type of UUID.
Get contrib modules, if not already available.
sudo apt-get install postgresql-contrib-9.4
Use pgcrypto module.
CREATE EXTENSION "pgcrypto";
The gen_random_uuid() function should now be available.
Example usage.
INSERT INTO items VALUES( gen_random_uuid(), 54.321, 31, 'desc 1', 31.94 ) ;
Quote from Postgres doc on uuid-ossp module.
Note: If you only need randomly-generated (version 4) UUIDs, consider using the gen_random_uuid() function from the pgcrypto module instead.
Update from 2021:
There is no need for a fancy trick to auto-generate a UUID in the insert statement.
Just do one thing:
Set DEFAULT gen_random_uuid() as the default value of your uuid column.
That is all.
Say, you have a table like this:
CREATE TABLE table_name (
unique_id UUID DEFAULT gen_random_uuid(),
first_name VARCHAR NOT NULL,
last_name VARCHAR NOT NULL,
email VARCHAR NOT NULL,
phone VARCHAR,
PRIMARY KEY (unique_id)
);
Now you do not need to do anything to insert UUID values into the unique_id column, because you already defined a default value for it. You can simply focus on inserting into the other columns, and PostgreSQL takes care of your unique_id. Here is a sample insert statement:
INSERT INTO table_name (first_name, last_name, email, phone)
VALUES (
'Beki',
'Otaev',
'beki@bekhruz.com',
'123-456-123'
);
Notice there is no insert into unique_id, as it is already taken care of.
As for other extensions like uuid-ossp, you can bring them in if you are not satisfied with Postgres's standard gen_random_uuid() function. Most of the time, you should be fine without them.
ALTER TABLE table_name ALTER COLUMN id SET DEFAULT uuid_in((md5((random())::text))::cstring);
After reading @ZuzEL's answer, I used the above code as the default value of the column id and it's working fine.
The uuid-ossp module provides functions to generate universally unique identifiers (UUIDs).
uuid_generate_v1(): this function generates a version 1 UUID.
Add Extension
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
Verify Extension
SELECT * FROM pg_extension;
Run Query
INSERT INTO table_name(id, column1, column2 , column3, ...) VALUES
(uuid_generate_v1(), value1, value2, value3...);
Verify table data
SELECT * FROM table_name;
The module also provides name-based generators, for example a version 5 UUID:
SELECT uuid_generate_v5(uuid_ns_url(), 'test');

Database replication from SQL Server 2000 to PostgreSQL

We are SQL Server users and recently we have one database on PostgreSQL. For consistency purposes we are replicating a database on SQL Server 2000 to another database on SQL Server 2000, and now we also need to replicate it to the database on PostgreSQL. We were able to do that using ODBC and a Linked Server: we created an ODBC DSN for the PostgreSQL database and, using that DSN, created a Linked Server on SQL Server. We were able to replicate tables from the SQL Server database to that linked server, and hence to the PostgreSQL database, successfully. The issue is that during replication the data types bit, numeric(12,2) and decimal(12,2) are converted to character(1), character(40) and character(40) respectively. Is there any solution for how to retain those data types in the PostgreSQL database? I mean the bit should become boolean, and the numeric and decimal data types should remain as they are in the replicated PostgreSQL table. We are using PostgreSQL 9.x.
SQL Server table,
CREATE TABLE tmtbl
(
id int IDENTITY (1, 1) NOT NULL PRIMARY KEY,
Code varchar(15),
booleancol bit,
numericcol numeric(10, 2),
decimalcol decimal(10, 2)
)
after being replicated to PostgreSQL it becomes,
CREATE TABLE tmtbl
(
id integer,
"Code" character varying(15),
booleancol character(1),
numericcol character(40),
decimalcol character(40)
)
Thank you very much.
Please use:
the boolean type for true/false columns (there is no bit type in Postgres);
the NUMERIC type, which also exists in PostgreSQL (it is in the SQL standard), though I suggest you use a native PostgreSQL type, as it will work faster.
I recommend you create the target table on the PostgreSQL side manually, specifying the proper field types, as the ODBC + Linked Server combination is not doing its job properly.
You can always consult this part of the official documentation for existing data types.
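For the table in the question, a sketch of what a manually created PostgreSQL target table could look like, with the types chosen to match the SQL Server definition above:
CREATE TABLE tmtbl
(
  id integer NOT NULL PRIMARY KEY,
  "Code" character varying(15),
  booleancol boolean,
  numericcol numeric(10, 2),
  decimalcol numeric(10, 2)
);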
Have you heard of Foreign Data Wrappers?
http://wiki.postgresql.org/wiki/Foreign_data_wrappers

How to change Oracle 10gr2 express edition's default character set

I installed Oracle 10gR2 Express Edition on my laptop.
When I import a .dmp file which was generated by Oracle 10gR2 Enterprise Edition, an error occurs.
The database server that generated the .dmp file is running with the GBK character set, but my Oracle Express server is running with UTF-8.
SQL> select userenv('language') from dual;
USERENV('LANGUAGE')
--------------------------------------------------------------------------------
SIMPLIFIED CHINESE_CHINA.AL32UTF8
How can I configure my own Oracle server to import the .dmp file?
Edit:
My own Oracle Express server:
SQL> select * from v$nls_parameters where parameter like '%CHARACTERSET';
PARAMETER
--------------------------------------------------------------------------------
VALUE
--------------------------------------------------------------------------------
NLS_CHARACTERSET
AL32UTF8
NLS_NCHAR_CHARACTERSET
AL16UTF16
The new character set requires up to 4 bytes per character while the old one only required up to 2 bytes. So, due to the character set change, some character fields will require more space than before. Obviously, some of them have now hit the column length limit.
To resolve it, you'll have to increase the length of the affected columns or change the length semantics so the length is interpreted in characters (and not in bytes, which is the default).
If your dump file contains both the schema definition and the data, you'll have to work in phases: first import the schema only, then increase the column lengths, and finally import the data.
I have no experience with the length semantics; I usually specify it explicitly. See the documentation about the NLS_LENGTH_SEMANTICS parameter for more information. It affects how the number 100 in the following statement is interpreted:
CREATE TABLE example (
id NUMBER,
name VARCHAR(100)
);
Usually, it's better to be explicit and specify the unit directly:
CREATE TABLE example (
id NUMBER,
name VARCHAR(100 CHAR)
);
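If you go the length-semantics route and are creating the tables yourself (for example in SQL*Plus before importing the data), a sketch of setting it for the current session only; the table below is just an illustration:
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;
-- VARCHAR2 lengths declared in this session are now counted in characters, not bytes
CREATE TABLE example (
  id NUMBER,
  name VARCHAR2(100)
);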
The dump file contains a whole schema, so altering column lengths is not a good option for me.
The Oracle Express Edition uses UTF-8 by default. After googling the web, I found a way to alter the database character set.
In my case:
UTF-8 --> GBK
I connected as user sys as sysdba in sqlplus, then executed the following commands:
shutdown immediate
startup mount
alter system enable restricted session ;
alter system set JOB_QUEUE_PROCESSES=0;
alter system set AQ_TM_PROCESSES=0;
alter database open;
alter database character set internal_use ZHS16GBK ;
shutdown immediate
startup
I don't know what these commands did to my database, but it works.

Creating Multiple table in Oracle

I am using Oracle Express 10g and I'm entering the following text to create 2 tables on the SQL command line, but it is not working.
CREATE TABLE student (
matric_no VARCHAR2(8),
first_name VARCHAR2(20),
last_name VARCHAR2(20),
date_of_birth DATE
);
CREATE TABLE student1 (
matric_no VARCHAR2(8),
first_name VARCHAR2(20),
last_name VARCHAR2(20),
date_of_birth DATE
);
Can anyone see what I am doing wrong?
Thanks
By "command line" you probably mean the web application that comes with Oracle Express 10g. This application has several browser incompatibilities and is basically unable to execute several statements at once (also see Oracle 10g - invalid character on DB importing).
Either put your statements in a text file and upload them as a SQL script, or switch to a better tool such as SQL Developer (downloadable from the Oracle web site).
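If you do have access to a real SQL*Plus session rather than the web interface, a script file works there as well. A minimal sketch, assuming the two CREATE TABLE statements above are saved in a file named create_students.sql (the file name is just an example):
-- in SQL*Plus, run the script with the @ command
@create_students.sql
Since each statement in the file ends with a semicolon, SQL*Plus executes them one after the other.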
Are you sure you didn't type this out in Word?
Sometimes there are problems with "invisible" characters. For example, if you hit TAB in Word, it will be stored as a special character, which will then cause an error when you try running the statement in SQL*Plus.