UUID instead of GUID (Oracle to Postgres) - postgresql

Is there any chance of duplicate hexadecimal values while migrating from Oracle to Postgres (GUID to UUID)?
I have a table with a column that generates a 32-character hexadecimal value using the SYS_GUID function (a default Oracle sys object). Now I want to migrate from Oracle to Postgres, but GUID is not supported by Postgres; it has UUID instead. I just want to confirm whether there is any chance of duplicate values after migrating from Oracle to Postgres.

GUID is a synonym for UUID.
The UUIDs generated by Oracle can be stored without problems in a column defined as uuid in Postgres.
Postgres provides gen_random_uuid(), which generates a version 4 UUID.
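A minimal sketch of what that looks like on the Postgres side (the table and the sample value are illustrative; gen_random_uuid() is built in from PostgreSQL 13 and available through the pgcrypto extension before that):
CREATE TABLE t (
    id uuid DEFAULT gen_random_uuid() PRIMARY KEY
);
-- PostgreSQL accepts the 32-hex-digit form produced by Oracle's SYS_GUID,
-- with or without hyphens, so migrated values cast cleanly:
INSERT INTO t (id) VALUES ('25A8D5C11D6BF6ECE0632AC4560A5A08'::uuid);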

Related

Postgres not converting from a Java boolean to a numeric data column automatically?

I have an Oracle database that is currently being migrated to an AWS Aurora Postgres database. As part of this I have had to update some of the datatypes in our SQL files for creating tables, e.g. changing NUMBER to NUMERIC. It has been going well, but I am now coming across the following error when trying to insert data into a table using Spring Data/JPA:
ERROR: column "some_table_column" is of type numeric but expression is of type boolean
I can see that the issue is probably that the Java class defines this column like
@Column(name = "SOME_TABLE_COLUMN", nullable = false)
private boolean someTableColumn;
while the sql file defines it like
CREATE TABLE MY_TABLE
(
SOME_TABLE_COLUMN NUMERIC(1) DEFAULT 0 NOT NULL,
etc..
Don't ask why it has been defined like this; I didn't write this code, I've just inherited it. This has never been an issue with the Oracle database, and I'm only seeing the error now when running against Postgres. Does Oracle do some sort of implicit conversion behind the scenes that Postgres doesn't do? Is there a way I can resolve this without having to migrate the column to a boolean datatype?
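The behavior is easy to reproduce directly in PostgreSQL, which has no implicit cast between boolean and numeric (an illustrative snippet, not taken from the original post; the table name mirrors the example above):
CREATE TEMP TABLE my_table (some_table_column NUMERIC(1) DEFAULT 0 NOT NULL);
-- Fails: column "some_table_column" is of type numeric but expression is of type boolean
-- INSERT INTO my_table VALUES (true);
-- Works: an explicit boolean -> integer cast, which Postgres then coerces to numeric on assignment
INSERT INTO my_table VALUES (true::int);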

Strange query by oracle_fdw

I am using oracle_fdw 2.2.0devel, PostgreSQL 10.13, Oracle client 18.3.0.0.0
We have a foreign table in Postgres defined like this:
CREATE FOREIGN TABLE public.tickers
(
ticker_id INTEGER,
ticker VARCHAR
)
SERVER oracle
OPTIONS (table 'TICKERS', schema 'COMMENTARY', readonly 'true');
This is connecting to a 12c SE database. This works fine; however, I've noticed that the query actually run in Oracle looks like this:
SELECT
/*618157932326e692807010156f98ddac*/
r2."TICKER_ID",
r2."TICKER"
FROM "COMMENTARY"."TICKERS" r2
WHERE (upper(r2."TICKER") = upper(:p1))
Why would it automatically be adding the upper() call? This slows down the Oracle query and prevents index use, unless I create a function-based index (FBI) using upper.
I was wondering if there is some option I'm supposed to disable.
The only way that oracle_fdw will generate an Oracle query that uses the upper function is if the original PostgreSQL query already had upper in it.
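For instance, a PostgreSQL query along these lines (illustrative, not taken from the original post) would be pushed down to Oracle with the upper() calls intact:
SELECT ticker_id, ticker
FROM public.tickers
WHERE upper(ticker) = upper('AAPL');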

Migrating from SQL Server to Aurora PostgreSQL where encountering GUID, VARCHAR, UUID issues

I'm seeking some advice.
I've migrated a database from SQL Server to Aurora PostgreSQL using AWS DMS. In most of the tables in SQL Server, the primary keys are a uniqueidentifier (GUID). When migrated to Postgres these columns are converted to VARCHAR(36). This seems to be as expected, per the AWS DMS documentation.
In our .NET application we use Entity Framework 6, to which I have added a new DbContext that uses the Npgsql provider. Note that we are keeping the existing SQL Server EF6 providers; essentially, the application will use both SQL Server and PostgreSQL. This is all hooked up fine.
Where I run into issues is when my Postgres context fetches from the PostgreSQL database; it encounters a lot of errors like
Npgsql.PostgresException: 42883: operator does not exist: character varying = uuid
I understand the issue: the application using EF fetches by Id (a GUID), while the Postgres table has an Id of VARCHAR type.
My feeling is that the problem is not on the application or EF side; rather, the column on the table should be something like a UUID. I can do that post-migration by simply altering the column to the UUID type, but is this the way, and will it resolve my issues? I also feel this can't be a unique case; it seems like a common issue for anyone migrating a .NET app from SQL Server to PostgreSQL.
I look forward to hearing some of your ideas, comments, thoughts on this. Thanks in advance.
It seems that this migration procedure is not quite up to the task, as a GUID (which is Microsoft's confusing term for UUID) should be migrated to uuid. Not only would you save 21 bytes of storage space per row, but you also wouldn't have this problem.
It seems that your application is comparing a uuid value with one of the migrated varchars:
WHERE uniqueidentifier = UUID '87595807-3157-4a81-ac89-3e09e83c0c0a'
You have to add an explicit cast, like the error message says:
WHERE uniqueidentifier = CAST (UUID '87595807-3157-4a81-ac89-3e09e83c0c0a' AS text)
You would cast to text, not to varchar, because there is no equality operator for varchar. varchar is coerced to text when you compare it, because the storage for these types is identical.
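If you instead fix the schema, as the question proposes, the post-migration conversion is a single statement (a sketch; the table and column names are illustrative, and every stored value must be a well-formed UUID string for the cast to succeed):
ALTER TABLE my_table
    ALTER COLUMN id TYPE uuid
    USING id::uuid;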

ORA-12899 - value too large for column when upgrading to Oracle 12C

My project is going through a tech upgrade so we are upgrading Oracle DB from 11g to 12c. SAP DataServices is upgraded to version 14.2.7.1156.
The tables in Oracle 12c default to VARCHAR (BYTE) when they should be VARCHAR (CHAR). I understand this is normal. So, I altered the session for each datastore by running
`ALTER session SET nls_length_semantics=CHAR;`
When I create a new table with varchar(1), I am able to load Unicode characters such as Chinese characters (e.g. 东) into the new table from Oracle.
However, when I try to load the same Unicode character into the same table via SAP DS, it throws the error 'ORA-12899 - value too large for column'. My datastore settings are:
Locale
Language: default
Code Page: utf-8
Server code page: utf-8
Additional session parameters:
ALTER session SET nls_length_semantics=CHAR
I would really appreciate knowing which settings I need to change in SAP BODS, since Oracle itself seems to be working fine.
I think you should consider modifying the table column from varchar2(x BYTE) to varchar2(x CHAR) to allow Unicode (UTF-8) data and avoid ORA-12899.
create table test1 (name varchar2(100));           -- defaults to BYTE length semantics
insert into test1 values ('east');                 -- plain ASCII: one byte per character
insert into test1 values ('东');                   -- three bytes in a UTF-8 (AL32UTF8) database
alter table test1 modify name varchar2(100 char);  -- switch the column to CHAR semantics
-- You can check CHAR_USED ('B' or 'C') for each column like this:
select column_name, data_type, char_used from user_tab_columns where table_name = 'TEST1';

Database replication from SQL Server 2000 to PostgreSQL

We are SQL Server users, and recently we got one database on PostgreSQL. For consistency purposes we replicate a database on SQL Server 2000 to another database on SQL Server 2000, and now we also need to replicate it to the database on PostgreSQL. We were able to do that using ODBC and a Linked Server: we created an ODBC DSN for the PostgreSQL database and, using that DSN, created a Linked Server on SQL Server. We were able to replicate tables from the SQL Server database to that Linked Server, and hence to the PostgreSQL database, successfully.
The issue is that during replication the datatypes bit, numeric(12,2), and decimal(12,2) are converted to character(1), character(40), and character(40) respectively. Is there any solution for retaining those data types in the PostgreSQL database? I mean that bit should become boolean, and the numeric and decimal data types should remain as they are in the replicated PostgreSQL table. We are using PostgreSQL 9.x.
SQL Server table,
CREATE TABLE tmtbl
(
id int IDENTITY (1, 1) NOT NULL PRIMARY KEY,
Code varchar(15),
booleancol bit,
numericcol numeric(10, 2),
decimalcol decimal(10, 2)
)
after being replicated to PostgreSQL it becomes,
CREATE TABLE tmtbl
(
id integer,
"Code" character varying(15),
booleancol character(1),
numericcol character(40),
decimalcol character(40)
)
Thank you very much.
Please use:
the boolean type for true/false columns (PostgreSQL's bit type is a bit string, not a single-bit flag like SQL Server's bit);
the NUMERIC type, which also exists in PostgreSQL (per the SQL standard), although PostgreSQL's native floating-point types such as real are faster where exact decimal precision is not required.
I recommend creating the target table on the PostgreSQL side manually, specifying the proper field types, as the ODBC + Linked Server combination is not doing its job properly.
You can always consult this part of the official documentation for existing data types.
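As a sketch of what that manual DDL could look like, mirroring the SQL Server table above (adjust names and types to taste):
CREATE TABLE tmtbl
(
    id serial PRIMARY KEY,        -- stands in for SQL Server's IDENTITY
    "Code" varchar(15),
    booleancol boolean,           -- bit -> boolean
    numericcol numeric(10, 2),    -- precision and scale preserved
    decimalcol decimal(10, 2)     -- decimal is an alias for numeric in Postgres
);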
Have you heard of Foreign Data Wrappers?
http://wiki.postgresql.org/wiki/Foreign_data_wrappers