Postgres not converting from a Java boolean to a numeric column automatically? - postgresql

I have an Oracle database that is currently being migrated to an AWS Aurora Postgres database. As part of this I have had to update some of the datatypes in our SQL files for creating tables, e.g. changing NUMBER to NUMERIC. It has been going well, but I am now coming across the following error when trying to insert some data into a table using Spring Data/JPA:
ERROR: column "some_table_column" is of type numeric but expression is of type boolean
I can see that the issue is probably that the Java class defines this column like
@Column(name = "SOME_TABLE_COLUMN", nullable = false)
private boolean someTableColumn;
while the SQL file defines it like
CREATE TABLE MY_TABLE
(
SOME_TABLE_COLUMN NUMERIC(1) DEFAULT 0 NOT NULL,
etc..
Don't ask why it has been defined like this; I didn't write this code, I've just inherited it. But this has never been an issue with the Oracle database, and I'm only seeing the error now when running against Postgres. Does Oracle do some sort of implicit conversion behind the scenes that Postgres doesn't? Is there a way I can resolve this without having to migrate the column to a boolean datatype?
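Likely what changed is that Hibernate's Oracle dialect stores booleans as numbers, while its PostgreSQL dialect binds a real boolean, and Postgres has no built-in cast from boolean to numeric, so the INSERT fails. One possible workaround that leaves the column alone, sketched here and untested, is to create such a cast yourself; note that creating casts between built-in types requires superuser rights, and bool_to_numeric is a helper name invented for illustration:
-- Hypothetical helper: map true/false to 1/0 for the NUMERIC(1) column.
CREATE FUNCTION bool_to_numeric(b boolean) RETURNS numeric
    LANGUAGE sql IMMUTABLE
    AS $$ SELECT CASE WHEN b THEN 1 ELSE 0 END::numeric $$;
-- AS IMPLICIT lets inserts apply the conversion automatically, mimicking Oracle.
CREATE CAST (boolean AS numeric)
    WITH FUNCTION bool_to_numeric(boolean)
    AS IMPLICIT;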

Related

Ora2PG REPLACE_AS_BOOLEAN property and excluding one column from replacement

I'm using ora2pg to import an Oracle DB schema into a PostgreSQL DB schema. I configured everything correctly and I'm able to dump the Oracle DB into the PostgreSQL DB.
In the schema that I'm converting I have some number(1,0) columns that I need to convert to boolean in the PG schema.
At first I used this configuration
REPLACE_AS_BOOLEAN NUMBER:1
so every column with this type will be converted to boolean in the PG db.
The problem is that I have one column in the Oracle schema defined as number(1,0) that has to remain numeric and keep the same type in the PG schema, so it must not be converted to boolean.
So I changed the property to this:
REPLACE_AS_BOOLEAN TABLE1:COLUMN1 TABLE2:COLUMN2 TABLE3:COLUMN3
I have a lot of columns that have to be converted to boolean, so the definition of this property becomes very long.
Is there a way to define the REPLACE_AS_BOOLEAN property so that it replaces all columns of type number(1,0), but with an exception for one or some of them? As it is, I had to write the property with a list of all the table and column names.
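If ora2pg offers no exclusion syntax for REPLACE_AS_BOOLEAN, one workaround is to keep the broad REPLACE_AS_BOOLEAN NUMBER:1 setting and revert the exceptional column after the import; a sketch, with table1/column1 standing in for the real names:
-- Convert the one column that must stay numeric back from boolean.
ALTER TABLE table1
    ALTER COLUMN column1 TYPE numeric(1,0)
    USING CASE WHEN column1 THEN 1 ELSE 0 END;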

Migrating from SQL Server to Aurora PostgreSQL and encountering GUID, VARCHAR, UUID issues

I'm seeking some advice.
I've migrated a database from SQL Server to Aurora PostgreSQL using AWS DMS. In most of the tables in SQL Server, the primary keys are a uniqueidentifier (GUID). When migrated to Postgres these columns are converted to VARCHAR(36). This seems to be as expected, per the AWS DMS documentation.
In our .NET application we use Entity Framework 6, to which I have added a new DbContext that uses the Npgsql provider. Note that we are keeping the existing SQL Server EF6 providers; essentially, the application will use both SQL Server and PostgreSQL. This is all hooked up fine.
Where I run into issues is when my Postgres context fetches from the PostgreSQL database; it encounters a lot of errors like
Npgsql.PostgresException: 42883: operator does not exist: character varying = uuid
I understand the issue: the application using EF fetches by Id (GUID), while the Postgres table has an Id of VARCHAR type.
My feeling is that the problem is not on the application or EF side; rather, the column on the table should be something like a UUID. I can do that post-migration by simply altering the column to a UUID type, but is this the way to go, and will it resolve my issues? I also feel this can't be a unique case I'm dealing with; it seems like a common issue for anyone migrating a .NET app from SQL Server to PostgreSQL.
I look forward to hearing some of your ideas, comments, thoughts on this. Thanks in advance.
It seems that this migration procedure is not quite up to the task, as a GUID (which is Microsoft's confusing term for UUID) should be migrated to uuid. Not only would you save 21 bytes of storage space per row, but you also wouldn't have this problem.
It seems that your application is comparing a uuid value with one of the migrated varchars:
WHERE uniqueidentifier = UUID '87595807-3157-4a81-ac89-3e09e83c0c0a'
You have to add an explicit cast, like the error message says:
WHERE uniqueidentifier = CAST (UUID '87595807-3157-4a81-ac89-3e09e83c0c0a' AS text)
You would cast to text, not to varchar, because there is no equality operator for varchar. varchar is coerced to text when you compare it, because the storage for these types is identical.
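Alternatively, fix the schema side, as you suggest, and convert the migrated key columns to uuid after DMS finishes; a sketch with hypothetical table/column names (indexes and foreign keys referencing the column need the same change):
-- Convert the DMS-produced varchar(36) key into a native uuid.
ALTER TABLE my_table
    ALTER COLUMN id TYPE uuid USING id::uuid;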

Npgsql.PostgresException: Column cannot be cast automatically to type bytea

Using EF Core with PostgreSQL, I have an entity with a field of type byte that I later decided to change to byte[]. But when applying the generated migration, it threw the following exception:
Npgsql.PostgresException (0x80004005): 42804: column "Logo" cannot be cast automatically to type bytea
I have searched the internet for a solution, but all I found were similar problems with other datatypes, not byte arrays. Please help.
The error says exactly what is happening... In some cases PostgreSQL allows column type changes (e.g. int -> bigint), but in many cases where such a change is non-trivial or potentially destructive, it refuses to do so automatically. In this specific case, this happens because Npgsql maps your CLR byte field as a PostgreSQL smallint (a 2-byte type), since PostgreSQL lacks a 1-byte data type. So PostgreSQL refuses to cast from smallint to bytea, which makes sense.
However, you can still do the migration by writing the data conversion yourself, from smallint to bytea. To do so, edit the generated migration, find the ALTER COLUMN ... TYPE statement and add a USING clause. As the PostgreSQL docs say, this allows you to provide the new value for the column based on the existing column (or even other columns). Specifically, for converting an int (or smallint) to a bytea, use the following:
ALTER TABLE tab ALTER COLUMN col TYPE BYTEA USING set_byte(E'0', 0, col);
If your existing column happens to contain more than a single byte (should not be an issue for you), it should get truncated. Obviously test the data coming out of this carefully.
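A quick way to sanity-check the converted data afterwards is to read the stored byte back with get_byte (table and column names as in the statement above):
-- Read the single stored byte back as an integer for spot-checking.
SELECT col, get_byte(col, 0) AS byte_value
FROM tab
LIMIT 10;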

Database replication from SQL Server 2000 to PostgreSQL

We are SQL Server users, and recently we have one database on PostgreSQL. For consistency purposes we are replicating a database on SQL Server 2000 to another database on SQL Server 2000, and now we also need to replicate it to the database on PostgreSQL. We were able to do that using ODBC and a Linked Server: we created an ODBC DSN for the database on PostgreSQL and, using that DSN, created a Linked Server on SQL Server. We were able to replicate tables from the SQL Server database to that Linked Server, and hence to the PostgreSQL database, successfully.
The issue is that during replication the datatypes bit, numeric(12,2) and decimal(12,2) are converted to character(1), character(40) and character(40) respectively. Is there any solution for retaining those data types in the PostgreSQL database? I mean the bit should become boolean, and the numeric and decimal datatypes should remain as they are in the replicated PostgreSQL table. We are using PostgreSQL 9.x.
SQL Server table,
CREATE TABLE tmtbl
(
id int IDENTITY (1, 1) NOT NULL PRIMARY KEY,
Code varchar(15),
booleancol bit,
numericcol numeric(10, 2),
decimalcol decimal(10, 2)
)
after being replicated to PostgreSQL it becomes,
CREATE TABLE tmtbl
(
id integer,
"Code" character varying(15),
booleancol character(1),
numericcol character(40),
decimalcol character(40)
)
Thank you very much.
Please use:
the boolean type for true/false columns (SQL Server's bit has no direct equivalent; Postgres's bit is a bit-string type, not a boolean);
the NUMERIC type, which also exists in PostgreSQL (it is part of the SQL standard), though a native PostgreSQL type will generally perform better.
I recommend creating the target table on the PostgreSQL side manually, specifying the proper field types, as the ODBC + Linked Server combination is not doing its job properly.
You can always consult this part of the official documentation for existing data types.
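For the table in the question, a manually created target could look something like this sketch (serial standing in for SQL Server's IDENTITY; adjust constraints as needed):
-- Hand-written target table with proper PostgreSQL types.
CREATE TABLE tmtbl
(
  id serial PRIMARY KEY,        -- stands in for IDENTITY(1, 1)
  "Code" varchar(15),
  booleancol boolean,           -- was bit in SQL Server
  numericcol numeric(10, 2),
  decimalcol numeric(10, 2)     -- decimal and numeric are the same type in Postgres
);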
Have you heard of Foreign Data Wrappers?
http://wiki.postgresql.org/wiki/Foreign_data_wrappers

DB2 Character Datatype and JPA Not Working

I am working with a DB2 Universal Database that has lots of tables with columns of datatype CHARACTER. The length of these columns varies (greater than 1, e.g. 18). When I execute a native query using JPA, it selects only the first character of the column. It seems the CHARACTER datatype is mapped to Java's Character.
How can I get the full contents of the DB column? I cannot change the database, it being on the vendor side. Please note I need to do it both ways, i.e.:
Using JPQL (can the columnDefinition attribute work in this case?)
Using a native DB query (no POJO is used in this case and I have no control over datatypes)
I am using the Hibernate implementation of JPA provided by Spring.
If these columns are actually common in your database, you can customize the dialect used by Hibernate. See the comments on HHH-2304.
I was able to cast the column to VARCHAR to produce padded String results from createNativeQuery:
select VARCHAR(char_col) from schema.tablename where id=:id
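If the blank padding that fixed-length CHARACTER columns carry is unwanted too, the cast can be combined with RTRIM (same query shape, hypothetical names as above):
select RTRIM(VARCHAR(char_col)) from schema.tablename where id=:id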